The yield of deep sub-micron semiconductors is secured by process capability as well as yield-friendly design capability. Yield-friendly design can be supported by conventional Design for Manufacturability (DFM), which avoids already-known defective layouts during design. Previously known defects can be encoded as rules and avoided in design, but defects that may occur at new technology nodes are difficult to avoid in advance. Indiscriminate defect-avoidance design causes turnaround time (TAT) increases and Power/Performance/Area (PPA) overheads, which can ultimately lead to increased design cost and poor design competitiveness. The first step of this study is to predict potential risks, and to identify the major risk factors that may occur at new process nodes, using new DFM solutions developed with Machine Learning (ML) techniques. The second step is to secure early yield through avoidance design that prevents predicted defects, and through direct mask modification that improves defects. In this study, we present not only the new ML-based DFM solutions themselves, but also their effect in predicting and improving defects through application cases on real products.
Process and reliability risks have become critically important during mass production at advanced technology nodes even with Extreme Ultraviolet Lithography (EUV) illumination. In this work, we propose a design-for-manufacturability solution using a set of new rules to detect high risk design layout patterns. The proposed methods improve design margins while avoiding area overhead and complex design restrictions. In addition, the proposed method introduces an in-design pattern replacement with automatically generated fixing hints to improve all matched locations with identified patterns.
Continuous scaling of CMOS process technology to 7nm (and below) has introduced new constraints and challenges in determining Design-for-Yield (DFY) solutions. In this work, traditional solutions such as improvements in redundancy and in compensating target designs for low process window margins are extended to meet the additional constraints of complex 7nm design rules. Experiments conducted on 7nm industrial designs demonstrate that the proposed solution achieves 9.1%-41% redundant-via-rate improvements while ensuring all 7nm design rule constraints are met.
As typical litho hotspot detection runtimes continue to increase at sub-10nm technology nodes due to increasing design and process complexity, many DFM techniques are exploring new methods that can expedite some of their advanced verification processes. Improved simulation runtimes can be obtained by reducing the amount of data sent to simulation. By inserting a pattern matching operation, a system can be designed so that it only simulates in the vicinity of topologies that somewhat resemble hotspots while ignoring all other data. Pattern matching improves overall runtime significantly; however, pattern matching techniques require a library of accumulated known litho hotspots to reach an acceptable accuracy rate. In this paper, we present a fast and accurate litho hotspot detection methodology using specialized machine learning. We built a deep neural network trained on real hotspot candidates. Experimental results demonstrate machine learning's ability to predict hotspots, achieving greater than 90% detection accuracy and coverage (with a best achieved accuracy of 99.9%) while reducing overall runtime compared to full litho simulation.
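The pattern-matching pre-filter idea described above can be sketched as follows. This is a minimal, hypothetical illustration (toy 3x3 binary layout clips, a Hamming-distance match, and an invented threshold), not the paper's actual matcher:

```python
# Hypothetical sketch of a pattern-matching pre-filter: only clips whose
# topology roughly resembles a known hotspot are forwarded to (expensive)
# litho simulation. All names and the distance threshold are illustrative.

def hamming(a, b):
    """Bit-level difference between two equally sized binary clips."""
    return sum(x != y for x, y in zip(a, b))

def prefilter(clips, hotspot_library, max_distance=2):
    """Return only the clips that loosely match a known hotspot topology."""
    suspects = []
    for clip in clips:
        if any(hamming(clip, h) <= max_distance for h in hotspot_library):
            suspects.append(clip)
    return suspects

# Toy 3x3 clips flattened to 9-bit tuples: 1 = metal, 0 = space.
library = [(1, 1, 1, 0, 0, 0, 1, 1, 1)]   # known pinch-prone topology
design  = [(1, 1, 1, 0, 0, 0, 1, 1, 1),   # exact match -> simulate
           (1, 1, 1, 0, 1, 0, 1, 1, 1),   # near match  -> simulate
           (0, 0, 0, 0, 0, 0, 0, 0, 0)]   # empty       -> skip
to_simulate = prefilter(design, library)
```

Only the two clips resembling the library entry reach simulation; the rest of the data is ignored, which is where the runtime saving comes from.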
Achieving lithographic printability at advanced nodes (14nm and beyond) can impose significant restrictions on physical design, including large numbers of complex design rule checks (DRC) and compute-intensive detailed process model checking. Early identification of yield-limiting hotspots is essential for both foundries and designers to significantly improve process maturity. A real challenge is to scan the design space to identify hotspots and decide the proper course of action for each one. Building a scored pattern library of real hotspot candidates is therefore of great value to both parties: foundries look for the most-used patterns to optimize their technology for, and for patterns that should be forbidden, while designers look for patterns that are sensitive to their neighboring context, so that lithographic simulation can be run in context to decide whether they are hotspots.[1] In this paper we propose a framework to data-mine designs and obtain a set of representative patterns for each design; our aim is to sample the designs at locations that can be potentially yield limiting. Although we aim to keep the total number of patterns as small as possible to limit complexity, the designer is still free to generate layouts that result in several million patterns defining the whole design space. To handle the large number of patterns that represent the design's building-block constructs, we need to prioritize the patterns according to their importance. The proposed pattern classification methodology assigns each pattern a score according to the severity of the hotspots it causes, the probability of its presence in the design, and its likelihood of causing a hotspot. The paper also shows how the scoring scheme helps foundries optimize their master pattern libraries and prioritize their efforts at 14nm technology and beyond.
Moreover, the paper demonstrates how the hotspot scoring helps improve the runtime of lithographic simulation verification by identifying which patterns need to be optimized to correctly describe candidate hotspots, so that only potentially problematic patterns are simulated.
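The scoring scheme can be illustrated with a minimal sketch. The multiplicative weighting, pattern names, and numeric values below are invented examples for illustration; the paper's actual scoring formula is not given here:

```python
# Hedged sketch of the pattern scoring idea: each pattern's priority combines
# hotspot severity, how often the pattern occurs, and how likely it is to
# become a hotspot. The weighting is an invented example.

def pattern_score(severity, occurrence_prob, hotspot_likelihood):
    """Higher score = higher priority for the foundry/designer to address."""
    return severity * occurrence_prob * hotspot_likelihood

patterns = {
    "dense_tip_to_tip": pattern_score(severity=3, occurrence_prob=0.40,
                                      hotspot_likelihood=0.9),
    "isolated_jog":     pattern_score(severity=1, occurrence_prob=0.05,
                                      hotspot_likelihood=0.2),
}
# Rank patterns so the master library is optimized for the worst offenders.
ranked = sorted(patterns, key=patterns.get, reverse=True)
```

A frequent, severe, hotspot-prone pattern lands at the top of the list, which is the ordering both foundries and designers would act on first.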
As technology nodes scale beyond 20nm, design complexity increases and printability issues become more critical and harder for RET techniques to fix. It is now mandatory for designers to run lithography checks prior to tape-out and acceptance by the foundry. As lithography compliance has become a sign-off criterion, lithography hotspots are increasingly treated like DRC violations. In the case of a lithography hotspot, the layout edges that should be moved to fix it are not necessarily the edges directly touching it. As a result, providing the designer with suggested layout movements to fix the hotspot is becoming a necessity, and software solutions generating such hints should be accurate and fast. In this paper we present a methodology for providing hints to designers to fix litho-hotspots at 20nm and beyond.
KEYWORDS: Semiconducting wafers, Metals, Process modeling, Image processing, Design for manufacturing, Chemical mechanical planarization, Optical proximity correction, 3D modeling, Photovoltaics, Scanning electron microscopy
As a result, low-fidelity patterns due to process variations can be detected, and eventually corrected by designers, as early in the tape-out flow as right after design rule checking (DRC), a step no longer capable of fully accounting for process constraints on its own. This flow has proven to provide a more adequate level of accuracy when correlating systematic defects seen on wafer with those identified through LFD simulations. However, at 32nm and below, distorted patterns caused by process variation remain unavoidable, and given the current state of defect inspection metrology tools, these pattern failures are becoming more challenging to detect. In the framework of this paper, a methodology of advanced process window simulation with awareness of chip topology is presented. The method identifies the expected focal range that different areas within a design will encounter due to differences in topology.
As patterning for advanced processes becomes more challenging, designs must become more process-aware. The
conventional approach of running lithography simulation on designs to detect process hotspots is prohibitive in terms of
runtime for designers, and also requires the release of highly confidential process information. Therefore, a more
practical approach is required to make the In-Design process-aware methodology more affordable in terms of
maintenance, confidentiality, and runtime. In this study, a pattern-based approach is chosen for Process Hotspot Repair
(PHR) because it accurately captures the manufacturability challenges without releasing sensitive process information.
Moreover, the pattern-based approach is fast and well integrated in the design flow. Further, this type of approach is very
easy to maintain and extend. Once a new process weak pattern has been discovered (caused by Chemical Mechanical
Polishing (CMP), etch, lithography, and other process steps), the pattern library can be quickly and easily updated and
released to check and fix subsequent designs.
This paper presents the pattern matching flow and discusses its advantages. It explains how a pattern library is created
from the process weak patterns found on silicon wafers. The paper also discusses the PHR flow that fixes process
hotspots in a design, specifically through the use of pattern matching and routing repair.
KEYWORDS: Design for manufacturing, Silicon, Polishing, Optical proximity correction, Back end of line, Metals, Failure analysis, Yield improvement, Logic, Manufacturing
A set of design for manufacturing (DFM) techniques have been developed and applied to 45nm, 32nm and 28nm logic
process technologies. A novel methodology combined a number of potentially conflicting DFM techniques into a
comprehensive solution. These techniques work in three phases for design optimization and one phase for silicon
diagnostics. In the DFM prevention phase, foundation IP such as standard cells, IO, and memory and P&R tech file are
optimized. In the DFM solution phase, which happens during ECO step, auto fixing of process weak patterns and
advanced RC extraction are performed. In the DFM polishing phase, post-layout tuning is done to improve
manufacturability. DFM analysis enables prioritization of random and systematic failures. The DFM technique presented
in this paper has been silicon-proven with three successful tape-outs in Samsung 32nm processes; about 5%
improvement in yield was achieved without any notable side effects. Visual inspection of silicon also confirmed the
positive effect of the DFM techniques.
In today's semiconductor industry, prior to wafer fabrication, it has become a desirable practice to scan layout designs
for lithography-induced defects using advanced process window simulations in conjunction with corresponding
manufacturing checks. This methodology has been proven to provide the highest level of accuracy when correlating
systematic defects found on the wafer with those identified through simulation. To date, when directly applying this
methodology at the full chip level, there have been unfavorable expenses incurred that are associated with simulation,
currently overshadowing its primary benefit of accuracy: namely, long runtimes and the requirement for an
abundance of CPUs. Considering the aforementioned, the industry has begun to lean towards a more practical application
for hotspot identification that revolves around topological pattern recognition in an attempt to sidestep the simulation
runtime. This solution can be much less costly when weighing against the negative runtime overhead of simulation. The
apparent benefits of pattern matching are, however, counterbalanced with a fundamental concern regarding detection
accuracy; topological pattern identification can only detect polygonal configurations, or some derivative of a
configuration, which have been previously identified. It is evident that both systems have their strengths and their
weaknesses, and that one system's strength is the other's weakness, and vice-versa.
A novel hotspot detection methodology that utilizes pattern matching combined with lithographic simulation will be
introduced. This system will attempt to minimize the negative aspects of both pattern matching and simulation. The
proposed methodology has a high potential to decrease the amount of processing time spent during simulation, to relax
the high CPU count requirement, and to maximize pattern matching accuracy by incorporating a multi-staged pattern
matching flow prior to performing simulation on a reduced data set. Also brought forth will be an original methodology
for constructing the core pattern set, or candidate hotspot library, in conjunction with establishing hotspot and coldspot
pattern libraries. Lastly, it will be conveyed how this system can automatically improve its potential as more designs are
passed through it.
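The multi-stage flow described above, including its self-improving aspect, can be sketched in a few lines. This is a hypothetical illustration with invented names; it stands in for the multi-staged matcher and simulation stages, not the actual system:

```python
# Illustrative multi-stage flow: known hotspots are flagged without simulation,
# known coldspots are skipped, and only unknown patterns go to simulation.
# Simulation verdicts then grow both libraries, so the filter improves with
# every design passed through it.

def triage(patterns, hotspots, coldspots, simulate):
    flagged = []
    for p in patterns:
        if p in hotspots:                 # stage 1: match hotspot library
            flagged.append(p)
        elif p in coldspots:              # stage 2: known-safe, skip
            continue
        else:                             # stage 3: simulate the residue
            if simulate(p):
                hotspots.add(p)           # library learns a new hotspot...
                flagged.append(p)
            else:
                coldspots.add(p)          # ...or a new coldspot
    return flagged

hot, cold = {"A"}, {"B"}
sim_calls = []
def fake_sim(p):                          # stand-in for litho simulation
    sim_calls.append(p)
    return p == "C"

result = triage(["A", "B", "C", "D"], hot, cold, fake_sim)
```

Only the two unknown patterns are simulated, and both libraries grow from the verdicts, so the next design through the flow simulates even less.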
As integrated circuit technology advances and features shrink, the scale of critical dimension (CD) variations induced by
lithography effects becomes comparable with the critical dimension of the design itself. At the same time, each
technology node requires tighter margins for errors introduced in the lithography process. Optical and process models --
the black boxes that simulate the pattern transfer onto silicon -- are becoming more and more concerned with those
different process errors. As a consequence, an optical proximity correction (OPC) model consists mainly of two parts; a
physical part dealing with the physics of light and its behavior through the lithographical patterning process, and an
empirical part to account for any process errors that might be introduced between writing the mask and sampling
measurements of patterns on wafer. Understanding how such errors can affect a model's stability and predictability, and
taking such errors into consideration while building a model, could actually help convergence, stability, and
predictability of the model when it comes to design patterns other than those used during model calibration and
verification. This paper explores one method to quickly enhance model accuracy and stability.
In microelectronics manufacturing, photolithography is the art of transferring pattern shapes printed on a mask to silicon
wafers by the use of special imaging systems. These imaging systems stopped reducing exposure wavelength at 193nm.
However, the industry demand for tighter design shapes and smaller structures on wafer has not stopped. To overcome
some of the restrictions associated with the photographic process, new methods for Resolution Enhancement Techniques
(RET) are being constantly explored and applied. An essential step in any RET method is Optical Proximity Correction
(OPC). In this process the edges of the target desired shapes are manipulated to compensate for light diffraction effects
and result in shapes on wafer as close as possible to the desired shapes. Manipulation of the shapes is always restricted
by Mask Rules Checks (MRCs). The MRCs are the rules that assure that the pattern coming out of OPC can be printed
on the mask without any catastrophic faults. Essential as they are, MRCs also place constraints on the solutions explored
by the OPC algorithms.
In this paper, an automated algorithm has been implemented to overcome MRC limitations to RET by decomposing the
original layout at the places where regular RET hits the MRC during OPC. This algorithm has been applied to test cases
where simulation results showed much better printability than the normal conventional solutions. This solution has also
been tested and verified on silicon.
Foundry companies encounter the same or similar lithography-unfriendly patterns (hot-spots) again and again, both in
different designs within the same technology node and across different technology nodes. These patterns elude design rule
check (DRC), but are detected repeatedly in the OPC verification step. Since a model-based OPC tool applies OPC on a
whole-chip basis, individual hot-spot patterns are treated the same as the rest of the design patterns, regardless of their
severity.
We have developed a methodology to detect these frequently appearing hot-spots in pre-OPC as well as post-OPC
designs, separating them from the rest of the design and providing the opportunity to treat them differently early in the
OPC flow. The methodology utilizes a combination of rule-based and pattern-based detection algorithms. Some hot-spot
patterns can be detected using the rule-based algorithm, which offers the flexibility of detecting similar patterns within
pre-defined ranges. However, not all patterns can be detected (or defined) by rules. Thus, a pattern-based approach was
developed using a defect pattern library concept: GDS/OASIS-format hot-spot patterns are saved into a defect pattern
library, and a fast pattern matching algorithm detects hot-spot patterns in a design using the library as a template
database. Even though the pattern matching approach lacks the flexibility to detect similar patterns, it has the capability
to detect any pattern as long as a template exists. The pattern matching can be either an exact match or a fuzzy match.
The rule-based and pattern-based detection algorithms complement each other, offering both speed and flexibility in
hot-spot pattern detection in pre-OPC and post-OPC designs.
In this paper, we demonstrate the methodology in our OPC flow and the benefits of applying it in a production
environment for 90nm designs. After hot-spot pattern detection, examples of special treatment of selected hot-spot
patterns are shown.
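The exact-versus-fuzzy matching distinction can be illustrated with a minimal sketch. Here a pattern is reduced to a tuple of edge lengths and the tolerance is an invented example; a real matcher operates on full GDS/OASIS geometry:

```python
# Minimal sketch of exact vs. fuzzy pattern matching on hot-spot templates.
# A pattern is reduced to a tuple of edge lengths in nm; the tolerance value
# is an invented illustration.

def exact_match(candidate, template):
    return candidate == template

def fuzzy_match(candidate, template, tol=5):
    """Match if every edge length is within +/- tol nm of the template."""
    return len(candidate) == len(template) and all(
        abs(c - t) <= tol for c, t in zip(candidate, template)
    )

template  = (90, 120, 90, 120)     # a stored hot-spot pattern
on_design = (92, 118, 90, 121)     # slightly perturbed instance in a design
```

The perturbed instance escapes the exact matcher but is still caught by the fuzzy matcher, which is the flexibility trade-off the abstract describes.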
The methodology of lithography friendly design (LFD) has been widely adopted since it dramatically reduces the number
of design revision cycles as well as the number of learning cycles needed to reach acceptable yield. One example of LFD
is reducing the number of small jogs and notches in the original, pre-OPC layout. We call these OPC-unfriendly patterns,
since they create unnecessarily complicated OPC patterns. They usually meet the design rules, so DRC does not detect or
screen them out. They also cause many errors after OPC, because the OPC model treats them as just more small features
it must correct. This generates many false alarms at OPC verification and mask rule check.
The general approach to implementing LFD is to update the rule table or design rules by taking actual yield and failure
analysis data into consideration in the database handling flow. Another method is to use simulation to predict
lithography-unfriendly designs. Although both are good approaches as fundamental LFD solutions, it takes time to set up
rules accurate enough for reliable prediction. It would be better to have a simple solution with fast setup that improves
the major lithography-unfriendly designs, such as small jogs and notches.
In this paper, we propose a new type of LFD flow: the application of a modified DRC step within the LFD flow. This
modified DRC identifies OPC-unfriendly patterns and changes them to OPC-friendly ones, as well as fixing design rule
violations. It is a pre-OPC layout treatment that removes small jogs and notches: after finding them, the DRC software
removes the jogs and notches, so unnecessary OPC fragments are avoided. Using this jog-fill technique, we can
dramatically reduce the incidence of necking or bridging and improve contact coverage, which in turn enhances the final
yield and reliability of the circuit.
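The jog-fill cleanup can be illustrated on a coarse raster. This is only a toy model of the idea under a big simplifying assumption (a 0/1 grid instead of polygons); real flows remove jogs and notches geometrically via DRC software:

```python
# Simplified, grid-based illustration of jog/notch cleanup: on a 0/1 layout
# raster, a space cell with metal on three or more sides is a notch (fill it),
# and a metal cell with space on three or more sides is a jog (shave it).

def clean_jogs_and_notches(grid):
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]           # work on a copy of the raster
    for r in range(rows):
        for c in range(cols):
            nbrs = [grid[r + dr][c + dc]
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= r + dr < rows and 0 <= c + dc < cols]
            metal = sum(nbrs)
            if grid[r][c] == 0 and metal >= 3:
                out[r][c] = 1                # fill the notch
            elif grid[r][c] == 1 and len(nbrs) - metal >= 3:
                out[r][c] = 0                # shave the jog
    return out

layout = [
    [1, 1, 1, 1],
    [1, 0, 1, 1],   # single-cell notch at (1, 1)
    [1, 1, 1, 1],
    [0, 0, 1, 0],   # single-cell jog at (3, 2)
    [0, 0, 0, 0],
]
cleaned = clean_jogs_and_notches(layout)
```

After cleanup the notch is filled and the jog removed, so OPC no longer has to spend fragments on either feature.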
As resolution requirements move below 90nm, the hole pattern is one of the most challenging features to print in
the semiconductor manufacturing process. In particular, when hole patterns form dense arrays consisting of several
columns in a single row, serious distortions from the desired pattern can occur, such as oval hole shapes and bridging
between holes. This is due to the nature of diffraction, which generates interaction between light diffracted from
neighboring holes. The overlap margin reduction caused by oval hole shapes is very harmful in sub-90nm
photolithography processes, which have very narrow overlay margins, so to increase the overlap margin it is necessary
to solve this phenomenon. Optical Proximity Correction (OPC) has been used to overcome oval hole shapes. From the
results of OPC modeling and simulation, we could obtain an optimized mask bias for the holes; good experimental data
also helps this modeling and OPC process. From the OPC simulations and experimental data, a compatible rule-based
OPC process could be developed. In this paper, we suggest a method for improving oval hole shapes by using OPC
simulation and building a rule-based OPC process from experimental data.
For the 90nm node and beyond, a smaller Critical Dimension (CD) control budget is required, and ways to achieve good
CD uniformity are needed. Moreover, Optical Proximity Correction (OPC) for sub-90nm nodes demands more accurate
wafer CD data in order to improve the accuracy of the OPC model. Scanning Electron Microscopy (SEM) has been the
typical method for measuring CD up to the ArF process. However, SEM can seriously damage the sample: the
high-energy electron beam burns the weak chemical structure of ArF Photo Resist (PR) and shrinks it. In fact, about 5nm
of CD narrowing occurs when we measure CD using CD-SEM in an ArF photo process. Optical CD Metrology (OCD)
and Atomic Force Microscopy (AFM) have been considered as methods for measuring CD without damaging organic
materials. The OCD and AFM measurement systems also have the merits of speed, ease of use, and accurate data. For
model-based OPC, the model is generated using CD data from test patterns transferred onto the wafer. In this study we
discuss generating an accurate OPC model using OCD and AFM measurement systems.
KEYWORDS: Data modeling, Optical proximity correction, Data conversion, Process modeling, Critical dimension metrology, Reactive ion etching, Scanning electron microscopy, Photomasks, Etching, Image processing
OPC has become an indispensable tool in deep sub-wavelength lithography processes, enabling highly accurate CD
(Critical Dimension) control as design rules shrink. Rule-based OPC was widely accepted in the past, but the industry
has recently turned toward model-based OPC as pattern sizes decrease. Model-based correction was first applied to
the optical proximity phenomenon, because the image of a sub-wavelength pattern is distorted severely during optical
image transformation. In addition, tighter CD control can be achieved by compensating for process-induced errors from
etch and other processes as well as the optical image.
In this paper, we propose an advanced OPC method to obtain better accuracy on the final target for sub-90nm technology.
This advanced method converts measured CD data into the final CD target by using an equation. We compared the results
from the data-converting method suggested in this paper with those from post-litho (DI) and post-etch (FI) OPC models,
step by step. Finally, we confirmed that the advanced OPC method gives better accuracy than the conventional OPC
model.
KEYWORDS: Optical proximity correction, Bridges, Model-based design, Photomasks, Neodymium, Lithography, Data modeling, System on a chip, Logic devices, Process modeling
The conventional OPC fragmentation method operates under a set of simple guiding principles. All polygon edges are
divided into fragments of uniform, finite size. Within each fragment, the intensity profile (aerial image) and
edge-placement error (EPE) are calculated at a fixed location, and then the entire fragment is moved to correct
for the EPE at that location. In model-based OPC this is checked repeatedly against simulation: the correction depends
on simulation results, both in iterating until the movement of every fragment in the layout is reduced to zero and
in how all polygon edges are divided. This drastically increases data volume and the computation time required to
perform OPC. Therefore, a more powerful fragmentation mechanism is one of the major factors for the success of an
OPC process.
In this study, a new approach to fragmentation has been tested which reduces OPC correction error. First, we identify
the weak points of all patterns using slope, EPE, MEEF, and contrast. Second, we apply high-frequency fragmentation to
the weak points based on simulated contour images; the remaining regions are handled by the normal correction recipe.
This improves OPC correction accuracy for weak points through finer classification, and also makes it possible to reduce
OPC time for non-critical patterns by applying moderate fragmentation.
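The metric-driven fragmentation idea can be sketched as follows. The thresholds, fragment lengths, and classification rule are invented illustrations, not the paper's recipe:

```python
# Hedged sketch of metric-driven fragmentation: edges classified as "weak"
# (poor slope/contrast, large EPE, high MEEF) get fine fragments; everything
# else gets coarse ones. All numeric thresholds are invented examples.

def is_weak(slope, epe_nm, meef, contrast):
    return slope < 2.0 or abs(epe_nm) > 3.0 or meef > 3.0 or contrast < 0.4

def fragment(edge_len_nm, slope, epe_nm, meef, contrast):
    """Split one polygon edge into a list of fragment lengths (nm)."""
    step = 20 if is_weak(slope, epe_nm, meef, contrast) else 80
    n, rem = divmod(edge_len_nm, step)
    return [step] * n + ([rem] if rem else [])

# A weak edge gets ten 20nm fragments; a healthy edge gets three coarse ones.
weak_frags   = fragment(200, slope=1.5, epe_nm=4.2, meef=3.5, contrast=0.3)
normal_frags = fragment(200, slope=3.0, epe_nm=1.0, meef=1.5, contrast=0.7)
```

The weak edge gets many small, independently movable fragments for accuracy, while the non-critical edge keeps a coarse fragmentation that saves OPC runtime.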
OPC (Optical Proximity Correction) has become an indispensable tool in deep sub-wavelength lithography processes, enabling highly accurate CD (Critical Dimension) control as design rules shrink. Current model-based OPC combines an optical model and a process model to predict the lithography process, and an accurate OPC model can only be built from accurate empirical measurement data; the measurement data therefore affects the OPC model directly. The gate layer significantly affects device performance, so its CD specification is controlled tightly. Because the gate layer runs across both active and STI areas, the gate CD is affected by the different sub-layer stacks and step heights. This paper analyzes the effect of the sub-layer on the OPC model and shows the difference in EPE values between poly and gate models, using a constant threshold model, for patterns such as iso line, iso space, pitch, line end, and T-junction.
KEYWORDS: Optical proximity correction, Logic, Databases, System on a chip, Data modeling, Photomasks, Instrument modeling, Manufacturing, Electronics, Distortion
It is becoming difficult to achieve stable device functionality and yield due to the continuous reduction of layout dimensions. Lithographers must guarantee pattern fidelity throughout the entire range of nominal process variation and diverse layouts.
Even with a general OPC method using a single model and recipe, we expect to obtain good OPC results and ensure process margin across different devices at the sub-100nm technology node.
An OPC model predicts the distortion or behavior of a layout through simulation within the range of the measured data. If the layout is outside the range of the measured data, or if CD differences arise from topology issues, we cannot improve OPC accuracy with a single OPC model.
In addition, as design rules have decreased, it is extremely hard to obtain efficient OPC results with only a single OPC recipe; no single optimized recipe can cover all the various devices and layouts. Therefore, by applying optimized multiple OPC recipes to devices that contain various patterns, such as SoCs, we can improve OPC accuracy and reduce the turnaround time related to OPC operation and mask manufacturing at sub-100nm technology nodes.
KEYWORDS: Photomasks, Semiconducting wafers, Deep ultraviolet, Lithography, Laser systems engineering, Electron beam lithography, Critical dimension metrology, Scanners, Back end of line, Data modeling
The higher productivity of DUV laser mask lithography systems compared to 50-keV e-beam systems offers the benefit of lower mask cost for low-k1 lithographic processes. The major disadvantage of the laser mask writing system, however, is corner rounding of contact holes and line ends. In this paper, we study the effect of corner-rounded contact holes on wafer process margin, and present a mask CD specification for corner-rounded contact holes written by a DUV laser lithography system compared to a 50-keV writing tool. Contact hole rounding changes the contact hole area at the same mask CD, and also changes the MEEF (Mask Error Enhancement Factor) even when the contact hole area is compensated by adjusting the mask bias. If one changes from the EBM3500 mask writer to the Alta4300 mask writer for a 160nm contact hole using KrF and 6% HT-PSM, the mask bias has to be changed by 3.2nm to meet the same wafer process conditions. The MEEF of the ALTA4300 mask is 1.6% higher than that of the EBM3500 mask at the same effective target mask CD, and the mask CD specification for the ALTA4300 has to be set about 1.3-1.5% tighter to meet the same wafer process margin as the EBM3500 mask.
Optical Proximity Correction (OPC) often reaches its limitations, especially in low-k1 imaging, resulting in yield drop from bridging, pinching, and other process-window-sensitive issues. This happens more often when the original layout contains OPC-unfriendly patterns. With an OPC-unfriendly layout, the OPC model generates totally unexpected results such as narrow spaces, small jogs, and small serifs. These unexpected OPC results induce bridged patterns as well as narrow process margins, and lead directly to yield loss in the device.
Thus, it is critical to implement a Litho Friendly Design (LFD) flow together with simulation-based OPC verification. In this study, a new OPC approach has been tested that combines simulation-based analysis of OPC failures with reconstruction of OPC features, not only fixing bridging and pinching but also improving the process window. This has proven to reduce mask respins by 50% or more. It has also been tried as a complementary check in addition to conventional CD monitoring in pilot production.
Since an OPC engine builds its model by fitting the wafer-printed CDs of an OPC test mask to the simulated CDs of the test pattern layout, the target CD of an OPCed mask is not the design CD but the CD of the OPC test mask. The CD difference between the OPC test mask and the OPCed mask is therefore one of the most important error sources of OPC. We experimentally obtained the OPC CD error of several patterns, such as iso line, iso space, dense line, and line end, affected by the mask MTT (mean-to-target) difference between the two masks, on a 90nm logic pattern with an ArF attenuated mask designed with different MTT values. The error is compared to simulated data calculated from the MEEF (mask error enhancement factor) and EL (exposure latitude) of these patterns. The good agreement between the experimental and calculated OPC errors caused by mask MTT error means the OPC error can be predicted from the mask CD error. Using this calculation, we derived a mask CD window that meets the OPC specification for the 90nm ArF process.
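The first-order relation behind this kind of calculation can be sketched as follows. The numbers, the assumed 4x mask reduction, and the linear MEEF scaling are invented examples for illustration, not the paper's data:

```python
# Illustrative first-order calculation: the wafer CD error contributed by a
# mask mean-to-target (MTT) error scales with MEEF, so an OPC CD budget can
# be translated back into an allowed mask CD window. All values are invented.

def wafer_cd_error(mask_mtt_error_nm, meef, reduction=4.0):
    """Wafer-scale CD error from a mask-scale MTT error (4x reduction mask)."""
    return meef * mask_mtt_error_nm / reduction

def mask_cd_window(opc_spec_nm, meef, reduction=4.0):
    """Allowed +/- mask MTT error that keeps the wafer CD within the OPC spec."""
    return opc_spec_nm * reduction / meef

err = wafer_cd_error(mask_mtt_error_nm=8.0, meef=2.0)   # 4.0 nm on wafer
window = mask_cd_window(opc_spec_nm=3.0, meef=2.0)      # +/- 6.0 nm on mask
```

Higher-MEEF patterns shrink the allowed mask CD window, which is why pattern-dependent MEEF data is needed to set the mask specification.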
As design rules shrink, OPC accuracy has become a crucial factor in achieving stable device functionality and yield. Usually, the lithography and etching process conditions are the main parameters impacting OPC accuracy: OPC accuracy can change as a function of process conditions even when the same OPC model is used. We would normally expect the same OPC results between different devices in the same technology node if the same OPC model and process are used, but we observed different OPC results as a function of device as well as process conditions. We suspected this phenomenon resulted from pattern-density-induced global etch bias variation. We first show that the device dependency of OPC accuracy comes from this pattern-density-induced etch bias effect, and then set up a new OPC methodology to compensate for it.
We have been interested in the effect of residual solvent on lithographic performance. The concentration distribution of solvent molecules along the film depth and the amount of residual solvent depend on the solvent's physical properties: evaporation rate, boiling point, viscosity, and so on. Since a fast-evaporating solvent can form a dense, skin-like layer at the top of the resist film, a faster evaporation rate makes a thicker film, while a slower rate results in a thinner film. The amount of residual solvent also depends on the evaporation rate of the casting solvent, and was verified by the TGA method. It was found that the amount of residual solvent is a major parameter determining film thickness, stiffness of the resist pattern, acid diffusion length, and pattern profile shape.
The characteristics of the carbon nanotube (CNT) AFM tip were investigated as it is used to measure critical dimensions in high-aspect-ratio structures. The research demonstrates the limitations of the CNT probe in imaging steep or vertical sidewalls. Two kinds of samples, silicon dots and lines in ArF resist patterns, were profiled using a carbon nanotube tip in tapping-mode AFM. There is a large oscillation at steep sidewalls which cannot be controlled merely by changing scan variables, except by slowing the scan down to an impractical level. The interaction between the long, slim CNT probe and the vertical sidewall severely limits the usefulness of AFM as a CD metrology tool. To achieve high-resolution and high-aspect-ratio imaging simultaneously, a stiffer and/or modified probe under clever non-contact 2D feedback is needed.
Crown ether derivatives are composed of multi-ethyleneoxy units and have an electron-rich cavity that can accommodate a proton. We have broadly investigated the effect of the lone-pair electrons of the accumulated oxygens. First, we studied whether these crown compounds can control acid diffusion. Second, we synthesized monomers containing cyclic multi-ethyleneoxy units and studied their effect in polymers. Finally, we compared them with amines. The crown ether 18-crown-6 has a cavity suited to capturing a proton by hydrogen bonding, and indeed had enough basicity to control acid diffusion. These studies show that crown ether derivatives can replace amines as bases to restrain acid diffusion.