Design For Manufacturing (DFM) is becoming essential to ensure good yield for deep sub-micron technologies. As
design rules cannot anticipate all manufacturing marginalities resulting from problematic 2D patterns, these have to be
addressed at the design level through DFM tools.
To deploy a DFM strategy at back-end levels, STMicroelectronics has implemented a CAD solution for lithographic
hotspot search and repair. This allows the detection and correction, at the routing step, of hotspots derived from
lithographic simulation after OPC treatment.
The detection of hotspots is based on pattern matching, and the repair uses the local reroute capability already implemented in
Place and Route (PnR) tools. This solution is packaged in a Fast LFD Kit for the 28 nm technology and fully integrated in
PnR platforms. It offers a solution for designs routed with tools from multiple CAD vendors. To ensure a litho-friendly repair, the
flow integrates a step of local simulation of the rerouted zones.
This paper explains the hotspot identification, their detection through pattern matching, and their repair in the PnR platform.
Run time, efficiency rate, and timing and RC parasitic impacts are also analyzed.
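As a rough illustration of the pattern-matching step described above — not the kit's actual implementation, which operates on polygon geometry rather than rasters — the following sketch slides a known hotspot clip over a rasterized layout window and reports exact matches:

```python
import numpy as np

def find_hotspots(layout, pattern):
    """Exhaustive pattern matching: slide a known hotspot clip (a small
    binary raster) over a rasterized layout window and report every
    location where the geometry matches exactly."""
    lh, lw = layout.shape
    ph, pw = pattern.shape
    matches = []
    for y in range(lh - ph + 1):
        for x in range(lw - pw + 1):
            if np.array_equal(layout[y:y + ph, x:x + pw], pattern):
                matches.append((x, y))  # corner of the matching window
    return matches

# Toy example: a one-track wire jog stored as a hotspot clip.
layout = np.zeros((8, 8), dtype=np.uint8)
layout[2, 1:5] = 1          # horizontal wire segment
layout[3, 4:7] = 1          # jogged continuation one track up
hotspot_clip = np.array([[1, 1, 0],
                         [0, 1, 1]], dtype=np.uint8)
print(find_hotspots(layout, hotspot_clip))  # -> [(3, 2)]
```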
As OPC scripts become more and more complex for advanced technology nodes, the number of parameters
used to control the convergence increases drastically. This paper does not aim to determine what a "good
convergence criterion" is, but rather to review the efficiency of existing OPC solutions, in terms of accuracy
and parameter dependence, on simple design layouts. Three different OPC solutions, including a "standard
algorithm", a "local convergence OPC" and a more holistic OPC, are compared on a design containing lines and
line-ends. A cost function is used to determine the quality of the convergence for each type of structure. A map
of convergence (iteration vs OPC option) will be deduced for each structure.
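The abstract does not give the cost function's exact form; a minimal sketch of one plausible choice — a weighted sum of squared edge placement errors over a structure's evaluation sites, tracked per iteration — is:

```python
import numpy as np

def convergence_cost(epe_by_site, weights=None):
    """Weighted quadratic cost over the evaluation sites of one
    structure: cost = sum_i w_i * EPE_i^2.  Lower is better; a flat
    tail in cost vs. iteration indicates convergence."""
    epe = np.asarray(epe_by_site, dtype=float)
    w = np.ones_like(epe) if weights is None else np.asarray(weights, float)
    return float(np.sum(w * epe ** 2))

# EPE (nm) at 4 sites of a line-end, recorded over 5 OPC iterations.
epe_history = [[8.0, -6.5, 7.2, -5.0],
               [4.1, -3.0, 3.8, -2.6],
               [2.0, -1.4, 1.9, -1.1],
               [1.1, -0.6, 1.0, -0.5],
               [1.0, -0.6, 0.9, -0.5]]
costs = [convergence_cost(it) for it in epe_history]
print(costs)  # one cost per iteration -> one row of the convergence map
```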
Source Mask Optimization (SMO) is an advanced resolution
enhancement technique with the goal of extending optical lithography
lifetime by enabling low-k1 imaging [1,2]. For that purpose, an appropriate
source and mask duo can be optimized for a given design.
SMO can yield freeform sources that can be realized to a good accuracy
with optical systems such as FlexRay [3]. However, it has been
shown that even the smallest modification of the source can impact the
wafer image or the process [4]. Therefore, the pupil has to be qualified in
order to measure the impact of any source deformation [5].
In this study we introduce a new way to qualify the difference
between sources, based on a Zernike polynomial decomposition [6]. Such
a method has several applications, from quantifying scanner-to-scanner
pupil differences to comparing the source variation depending on
the SMO settings. The straightforward Zernike polynomial decomposition
allows us to identify classic optical issues such as coma or lens
aberration.
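A minimal sketch of such a decomposition, assuming the sources are given as pupil-intensity maps on a common grid and using a hand-coded low-order Zernike basis (the paper's exact term ordering and normalization are not specified here):

```python
import numpy as np

# Low-order Zernike terms (unnormalized), in polar pupil coordinates.
ZERNIKES = {
    "piston":   lambda r, t: np.ones_like(r),
    "tilt_x":   lambda r, t: r * np.cos(t),
    "tilt_y":   lambda r, t: r * np.sin(t),
    "defocus":  lambda r, t: 2 * r**2 - 1,
    "astig_0":  lambda r, t: r**2 * np.cos(2 * t),
    "astig_45": lambda r, t: r**2 * np.sin(2 * t),
    "coma_x":   lambda r, t: (3 * r**3 - 2 * r) * np.cos(t),
    "coma_y":   lambda r, t: (3 * r**3 - 2 * r) * np.sin(t),
}

def zernike_fit(source_a, source_b):
    """Least-squares Zernike coefficients of the difference between two
    pupil-intensity maps sampled on the same NxN grid over the unit disk."""
    n = source_a.shape[0]
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r, t = np.hypot(x, y), np.arctan2(y, x)
    inside = r <= 1.0
    basis = np.column_stack([f(r, t)[inside] for f in ZERNIKES.values()])
    diff = (source_a - source_b)[inside]
    coeffs, *_ = np.linalg.lstsq(basis, diff, rcond=None)
    return dict(zip(ZERNIKES.keys(), coeffs))

# Toy check: an annular source vs. the same annulus shifted in x.
n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
src = ((np.hypot(x, y) > 0.6) & (np.hypot(x, y) < 0.9)).astype(float)
shifted = ((np.hypot(x - 0.05, y) > 0.6) & (np.hypot(x - 0.05, y) < 0.9)).astype(float)
# An x-shift is odd in x, so it lands mainly on the tilt_x / coma_x terms.
print(zernike_fit(shifted, src))
```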
To print sub-22nm node features, current lithography technology faces tool limitations. One
possible solution to overcome these problems is the double patterning technique (DPT). The
principle of DPT is pitch splitting, where two adjacent features must be
assigned opposite masks (colors) corresponding to different exposures if their pitch is less than a
predefined minimum coloring pitch. However, there are certain design orientations in which pattern features
separated by more than the minimum coloring pitch cannot be imaged with either of the two exposures.
In these directions, the contrast and the process window are degraded because the constructive
interference between diffraction orders in the pupil plane is not sufficient. The 22nm and 16nm nodes
require the use of very coherent sources that will be generated using SMO (source mask co-optimization).
Such pixelated sources, while helpful in improving the contrast for selected
configurations, can degrade it for configurations that were not accounted for during the
SMO process. Therefore, we analyze the diffraction order interactions in the pupil plane in order to
detect these limited orientations in the design, and thus propose a new double patterning decomposition
algorithm to enlarge the process window and the contrast of each mask.
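The coloring step described above amounts to 2-coloring a "too close" conflict graph; a minimal sketch (BFS bipartition with odd-cycle conflict reporting, on point-like features — illustrative only, not the proposed algorithm itself):

```python
from collections import deque
from itertools import combinations
from math import hypot

def color_for_double_patterning(features, min_coloring_pitch):
    """Assign each feature to mask 0 or mask 1 so that any two features
    closer than the minimum coloring pitch get opposite masks.  Returns
    (colors, conflicts); a conflict is an odd cycle in the 'too close'
    graph, i.e. a spot that redesign (or a third mask) must fix."""
    n = len(features)
    adj = [[] for _ in range(n)]
    for i, j in combinations(range(n), 2):
        (xi, yi), (xj, yj) = features[i], features[j]
        if hypot(xi - xj, yi - yj) < min_coloring_pitch:
            adj[i].append(j)
            adj[j].append(i)
    colors, conflicts = [None] * n, []
    for start in range(n):
        if colors[start] is not None:
            continue
        colors[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if colors[v] is None:
                    colors[v] = 1 - colors[u]
                    queue.append(v)
                elif colors[v] == colors[u] and u < v:
                    conflicts.append((u, v))
    return colors, conflicts

# Three contacts in a triangle, all at sub-minimum pitch: an odd cycle,
# so one pair is flagged as an unresolvable coloring conflict.
print(color_for_double_patterning([(0, 0), (80, 0), (40, 70)], 100))
# -> ([0, 1, 1], [(1, 2)])
```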
The 22-nm technology node presents a real breakthrough compared to previous nodes, in that state-of-the-art
scanners will be limited to a numerical aperture of 1.35. Thus we cannot "simply" apply a shrink factor from
the previous node, and trade-offs have to be found between design rules, process integration and RET solutions
in order to maintain the 50% density gain imposed by Moore's law. One of the most challenging parts of
enabling the node is the ability to pattern back-end hole and metal layers with sufficient process window. It is
clearly established that the early process for these layers will be performed by double patterning coupled
with advanced OPC solutions.
In this paper we propose a cross-comparison between possible double patterning solutions, Pitch Splitting (PS)
and Sidewall Image Transfer (SIT), and their implications for design rules and CD uniformity. Advanced OPC
solutions such as model-based SRAF and Source Mask Optimization will also be investigated in order to ensure
good process control.
This work is part of the Solid's JDP between ST, ASML and Brion in the framework of Nano2012, sponsored
by the French government.
Source Mask Optimization (SMO) is an advanced RET with the goal of extending optical lithography lifetime by enabling low-k1 imaging [1,2]. Most of the literature concerning SMO has so far focused on PV (process variation) band, MEEF and PW (process window) aspects to judge the performance of the optimization, as in traditional OPC [3]. In analogy to the MEEF impact for low-k1 imaging, we investigate the source error impact, as SMO sources can have rather complicated forms depending on the degrees of freedom allowed during optimization.
For this study we use the Tachyon SMO tool on a 22nm metal design test case. Freeform and parametric source solutions are obtained using MEEF and PW requirements as the main criteria. For each type of source, a source perturbation is introduced to study the impact on lithography performance. Based on the findings, we draw conclusions on the choice between a freeform and a parametric source and on the importance of source error in the optimization process.
In double patterning technology (DPT), two adjacent features must be assigned opposite colors,
corresponding to different exposures, if their pitch is less than a predefined minimum coloring pitch.
However, there are certain design orientations in which pattern features separated by more than the minimum
coloring pitch cannot be imaged with either of the two exposures. In such cases, no aerial
image is formed because, in these directions, there is no constructive interference between diffraction
orders in the pupil plane. The 22nm and 16nm nodes require the use of pixelated sources that will be
generated using SMO (source mask co-optimization). Such pixelated sources, while helpful in
improving the contrast for selected configurations, can lead to degraded contrast for configurations
that were not considered during the SMO process. Therefore, we analyze the diffraction order
interactions in the pupil plane in order to detect limited orientations in the design, and thus propose a
decomposition to overcome the problem.
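A minimal sketch of the pupil-plane analysis, assuming point-like source pixels and a 1D grating approximation for each orientation: two-beam imaging requires the 0th order (at the source point) and a ±1st order, offset by λ/(p·NA) in normalized pupil units along the grating direction, to both land inside the unit pupil:

```python
import numpy as np

def orientation_is_imageable(pitch_nm, theta_deg, source_points,
                             wavelength_nm=193.0, na=1.35):
    """Check whether at least one source point lets the 0th and one 1st
    diffraction order of a grating (pitch, orientation) interfere:
    both orders must land inside the unit pupil (sigma <= 1)."""
    shift = wavelength_nm / (pitch_nm * na)       # 1st-order offset, pupil units
    d = np.array([np.cos(np.radians(theta_deg)),  # grating frequency direction
                  np.sin(np.radians(theta_deg))])
    for s in np.asarray(source_points, dtype=float):
        if np.linalg.norm(s) > 1.0:
            continue                               # not a valid source point
        for sign in (+1, -1):
            if np.linalg.norm(s + sign * shift * d) <= 1.0:
                return True                        # two-beam interference exists
    return False

# A cross-quadrupole-like pixelated source: poles on the x and y axes.
poles = [(0.8, 0.0), (-0.8, 0.0), (0.0, 0.8), (0.0, -0.8)]
for theta in (0, 45, 90):
    print(theta, orientation_is_imageable(90.0, theta, poles))
# -> 0 True, 45 False, 90 True: the 45-degree orientation is "limited"
```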
At the 32 nm node and beyond, one of the most critical processes is hole patterning, because the Depth of Focus (DOF)
rapidly becomes limited. The use of Sub-Resolution Assist Features (SRAF) thus becomes mandatory to keep the DOF
at a sufficient level through pitch.
SRAF are generally generated using rule-based OPC, with additional cleaning steps to avoid the risk of SRAF printing or
conflicts with the main feature. One of the key challenges of such a technique is the ability to place SRAF around random
hole features. The rule-based approach cannot treat all configurations, resulting in non-optimal SRAF placement for
certain main features. On the other hand, Inverse Lithography has shown the ability to generate SRAF at the (theoretically)
ideal size and position [1], and the interest of this technique has been proven experimentally [2,3]. Nevertheless, this kind of
technique is not yet ready for the maskshop, due to MRC limitations caused by the pixelated SRAF output and the long
mask writing time due to the shot count [4].
In this paper we compare the two approaches on random 2D features. We will see that Inverse
Lithography maintains a sufficient DOF on 2D feature configurations where the rule-based approach appears to be limited.
Simulated and experimental results will be presented comparing rule-based, ideal and MRC-constrained SRAF in terms of
DOF and runtime performance for hole patterning.
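For illustration, a toy 1D rule table of the kind used in rule-based SRAF insertion (all dimensions invented, not a production rule deck):

```python
def place_srafs(space_nm):
    """Toy 1D rule table for scatter-bar insertion between two main
    features separated by space_nm.  Returns (offset_from_left_edge,
    width) pairs for each assist feature; numbers are illustrative."""
    SRAF_WIDTH = 40
    MIN_MAIN_CLEARANCE = 100   # keep-away distance from main features
    if space_nm < 2 * MIN_MAIN_CLEARANCE + SRAF_WIDTH:
        return []                                          # too tight: no SRAF
    if space_nm < 2 * (MIN_MAIN_CLEARANCE + SRAF_WIDTH) + 120:
        return [((space_nm - SRAF_WIDTH) / 2, SRAF_WIDTH)]  # one centered bar
    third = space_nm / 3                                   # two bars at 1/3, 2/3
    return [(third - SRAF_WIDTH / 2, SRAF_WIDTH),
            (2 * third - SRAF_WIDTH / 2, SRAF_WIDTH)]

for space in (200, 320, 700):
    print(space, place_srafs(space))
```

The limitation discussed above is visible even in this toy: the table only understands a 1D space between two neighbors, so random 2D hole configurations fall between its rules, which is exactly where the inverse-lithography approach is claimed to do better.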
Double patterning (DP) is one of the main options to print devices with half pitch less than 45nm. The basis of DP is to
decompose a design into two masks. In this work we focus on the decomposition of the contact pattern layer. Contacts
with pitch less than a split pitch are assigned to opposite masks corresponding to different exposures. However, there
exist contact pattern configurations in which features cannot be assigned to opposite masks. Such contacts are flagged
as color conflicts. With the help of design for manufacturing (DFM), contact conflicts can be reduced through
redesign. However, even state-of-the-art DFM redesign solutions are limited by area constraints and introduce
delays into the design flow. In this paper, we propose an optical method for treating contact conflicts. We study the
impact of the split on imaging by comparing inverse lithography technology (ILT), optical proximity correction (OPC)
and source mask co-optimization (SMO) techniques. The ability of these methods to solve some split contact conflicts
in double patterning is presented.
Optical Proximity Correction (OPC) is used in lithography to increase the achievable resolution and pattern transfer
fidelity for IC manufacturing. Nowadays, immersion lithography scanners are reaching the limits of optical resolution,
leading to more and more constraints on OPC models in terms of simulation reliability. The detection of outliers coming
from SEM measurements is key in OPC [1]. Indeed, the model reliability depends in large part on the accuracy and
reliability of those measurements, as they belong to the set of data used to calibrate the model. Many approaches have been
developed for outlier detection by studying the data and their residual errors, using linear or nonlinear regression and
standard deviation as a metric [8].
In this paper, we will present a statistical approach for the detection of outlier measurements. This approach consists of
scanning Critical Dimension (CD) measurements by process condition using a statistical method based on fuzzy C-means
clustering and the use of a covariance-based distance for checking aberrant values cluster by cluster. We propose to use
the Mahalanobis distance [2] in order to improve the discrimination of outliers when quantifying the similarity within
each cluster of the data set.
This fuzzy classification method was applied to the SEM CD data collected for the Active layer of a 65 nm half-pitch
technology. The measurements were acquired through a process window of 25 (dose, defocus) conditions. We were able
to automatically detect 15 potential outliers in a data distribution as large as 1500 different CD measurements. We will
discuss these results as well as the advantages and drawbacks of this technique for automatic outlier detection in
large data distribution cleaning.
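A compact sketch of the method — plain fuzzy C-means followed by a per-cluster Mahalanobis screen — on synthetic CD data (the paper's actual feature set, cluster count and threshold are not specified here):

```python
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means: returns cluster centers and the fuzzy
    membership matrix U of shape (n_samples, c)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
    return centers, U

def mahalanobis_outliers(X, centers, U, thresh=3.0):
    """Assign each point to its highest-membership cluster, then flag
    points whose Mahalanobis distance to that cluster's center exceeds
    the threshold."""
    labels = U.argmax(axis=1)
    flags = np.zeros(len(X), dtype=bool)
    for k in range(len(centers)):
        members = np.where(labels == k)[0]
        if len(members) < 3:
            continue
        cov_inv = np.linalg.pinv(np.cov(X[members].T))
        delta = X[members] - centers[k]
        d2 = np.einsum('ij,jk,ik->i', delta, cov_inv, delta)
        flags[members] = np.sqrt(d2) > thresh
    return flags

# Synthetic CD measurements vs (dose, defocus) with one injected outlier.
rng = np.random.default_rng(1)
X = rng.normal([60.0, 0.0], [1.5, 0.03], size=(500, 2))
X[42] = [75.0, 0.0]                              # aberrant CD value
centers, U = fuzzy_cmeans(X, c=3)
print(np.where(mahalanobis_outliers(X, centers, U))[0])  # 42 should be flagged
```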
In advanced technology nodes, due to accuracy and computing time constraints, OPC has shifted from discrete simulation
to pixel-based simulation. The simulation is grid-based, and interpolation occurs between grid points. Even if the
sampling satisfies the Nyquist criterion, interpolation can cause variations for the same polygon placed at different
locations in the layout. Any variation is rounded during OPC treatment, because of the discrete numbers used in the OPC
output file. The end result is inconsistency in the post-OPC layout, where the same input polygon will give different outputs
depending on its position and orientation relative to the grid. This can have a major impact on CD control, in structures
like SRAM for example, where mismatch between gates can cause major issues.
There are some workarounds to minimize this effect, but most of them are post-treatment fixes. In this paper, we try to
identify and solve the root cause of the problem. We study the relationship between the pixel size and the
consistency of post-OPC results. The pixel size is often set based on optical parameters, but it might be possible to
optimize it around this value to avoid inconsistency. One could argue that such an optimization would depend strongly on
the design and not be possible for a real layout. However, as the range of pitches used in a design tends to decrease,
thanks to fixed-pitch layouts, we may optimize the pixel size for a full layout.
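The effect is easy to reproduce in one dimension: sample a smooth "aerial image" of the same line on a pixel grid at different sub-pixel offsets, recover the CD from linearly interpolated threshold crossings, and the recovered CD drifts with the placement (toy model, illustrative numbers):

```python
import numpy as np
from math import erf

def extracted_cd(pixel_nm, offset_nm, cd_nm=100.0, blur_nm=30.0, thr=0.3):
    """Sample a smooth line image on a pixel grid shifted by offset_nm,
    then recover the CD from linearly interpolated threshold crossings."""
    x = np.arange(-400.0, 400.0, pixel_nm) + offset_nm
    img = np.array([0.5 * (erf((cd_nm / 2 - xi) / blur_nm) +
                           erf((cd_nm / 2 + xi) / blur_nm)) for xi in x])
    above = img >= thr
    idx = np.where(above[:-1] != above[1:])[0]

    def cross(i):  # linear interpolation of the threshold crossing
        return x[i] + (thr - img[i]) * (x[i + 1] - x[i]) / (img[i + 1] - img[i])

    return cross(idx[-1]) - cross(idx[0])

for off in (0.0, 2.5, 5.0):  # the same polygon at different grid offsets
    print(off, round(extracted_cd(pixel_nm=10.0, offset_nm=off), 3))
# The extracted CD varies with the sub-pixel offset: grid-placement
# dependence of post-OPC results, before any output rounding is applied.
```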
In the continuous battle to improve critical dimension (CD) uniformity, especially for 45-nanometer (nm) logic
advanced products, one important recent advance is the ability to accurately predict the mask CD uniformity
contribution to the overall global wafer CD error budget. In most wafer process simulation models, mask error
contribution is embedded in the optical and/or resist models. We have separated the mask effects, however, by
creating a short-range mask process model (MPM) for each unique mask process and a long-range CD
uniformity mask bias map (MBM) for each individual mask. By establishing a mask bias map, we are able to
incorporate the mask CD uniformity signature into our modelling simulations and measure the effects on global
wafer CD uniformity and hotspots. We also have examined several ways of proving the efficiency of this
approach, including the analysis of OPC hot spot signatures with and without the mask bias map (see Figure 1)
and by comparing the precision of the model contour prediction to wafer SEM images. In this paper we will
show the different steps of mask bias map generation and use for advanced 45nm logic node layers, along with
the current results of this new dynamic application to improve hot spot verification through Brion Technologies'
model-based mask verification loop.
Leveraging silicon validation, a model-based variability analysis has been implemented to detect sensitivity to systematic
variations in standard cell libraries, reducing performance spread at the cell and chip
level. First, a simulation methodology to predict changes in circuit characteristics due to systematic lithography and etch
effects is described and validated in silicon. This methodology relies on these two foundations: 1) A physical shape
model predicts contours from drawn layout; 2) An electrical device model, which captures narrow width effects,
accurately reproduces drive currents of transistors based on silicon contours. The electrical model, combined with
accurate lithographic contour simulation, is used to account for systematic variations due to optical proximity effects and
to update an existing circuit netlist to give accurate delay and leakage calculations.
After a thorough validation, the contour-based simulation is used at the cell level to analyze and reduce the sensitivity of
standard cells to their layout context. Using a random context generation, the contour-based simulation is applied to each
cell of the library across multiple contexts and litho process conditions, identifying systematic shape variations due to
proximity effects and process variations and determining their impact on cell delay.
This methodology is used in the flow of cell library design to identify cells with high sensitivity to proximity effects and
consequently, large variation in delay and leakage. The contour-based circuit netlist can also be used to perform accurate
contour-based cell characterization and provide more silicon-accurate timing in the chip-design flow. A cell-variability
index (CVI) can also be derived from the cell-level analysis to provide valuable information to chip-level design
optimization tools to reduce overall variability and performance spread of integrated circuits at 65nm and below.
At the 45 and 32 nm nodes, one of the most critical layers is the Contact layer. Due to the use of hyper-NA imaging, the
depth of focus becomes very limited.
Moreover, OPC is rapidly constrained by the increase in pattern density: the limited surface in the dark-field
region of a Contact layer mask forces the edge movements to stop very quickly.
SRAF (Sub-Resolution Assist Features) have been widely used for DOF enhancement of line/space layers
for many technology nodes. Recently, SRAF generated using inverse lithography have shown interesting DOF
improvement [1]. However, the advantage of the ideal mask generated by inverse lithography is lost when switching to a
manufacturable mask with Manhattan structures. For rule-based SRAF as well as Manhattan SRAF generated
from inverse lithography, it is important to know their behavior in terms of size and placement.
In this article we study the placement of scatter-trench assist features for the contact layer. For this we have
performed process window simulations with different SRAF sizes and distances to the main feature after OPC. These
results allow us to establish trends for the size and placement of the SRAF.
We have also examined the advantage of using 8 surrounding SRAF (4 in the vertical/horizontal directions and 4 at
45°) versus 4 surrounding SRAF. Based on these studies, we have seen no real gain in increasing the
complexity by adding SRAF.
KEYWORDS: Photomasks, Optical proximity correction, 3D modeling, Semiconducting wafers, Diffraction, Scattering, Near field, Lithographic illumination, Systems modeling, Near field optics
The perpetual shrinking of critical dimensions in semiconductor devices is driving the need for increased resolution in optical lithography. Increasing NA to gain resolution also increases Optical Proximity Correction (OPC) model complexity: some optical effects which had been completely neglected in OPC modeling become important. Over the past few years, off-axis illumination has been widely used to improve the imaging process. OPC models which utilize such illumination still use the thin film mask approximation (Kirchhoff approach) during optical model generation, which assumes normal incidence. However, simulating a three-dimensional mask near-field under off-axis illumination requires OPC models to introduce oblique incidence. In addition, the use of higher-NA systems introduces high-obliquity field components that can no longer be treated as normally incident waves. The introduction of oblique incidence requires other effects to be considered, such as corner rounding of mask features, that are seldom taken into account in OPC modeling. In this paper, the effects of oblique incidence and corner rounding of mask features on resist contours of 2D structures (i.e. line-ends and corners) are studied. Rigorous electromagnetic simulations are performed to investigate the scattering properties of various lithographic 32nm node mask structures. Simulations are conducted using a three-dimensional phase-shift mask topology and off-axis illumination at high NA. Aerial images are calculated and compared with those obtained from classical normal-incidence illumination. The benefits of using oblique incidence to improve hot-spot prediction will be discussed.
One of the most critical points for accurate OPC is to have accurate models that properly simulate the full process from
the mask fractured data to the etched remaining structures on the wafer. In advanced technology nodes, the CD error
budget becomes so tight that it is becoming critical to improve modeling accuracy. Current technology models used for
OPC generation and verification are mostly composed of an optical model, a resist model and sometimes an etch model.
The mask contribution is nominally accounted for in the optical and resist portions of these models. Mask processing
has become ever more complex throughout the years so properly modeling this portion of the process has the potential
to improve the overall modeling accuracy. Also, measuring and tracking individual mask parameters such as CD bias
can potentially improve wafer yields by detecting hotspots caused by individual mask characteristics. In this paper, we
will show results of a new approach that incorporates mask process modeling. We will also show results of testing a
new dynamic mask bias application used during OPC verification.
Patterning isolated trenches for bright field layers such as the active layer has always been difficult for lithographers.
This patterning is even more challenging for advanced technologies such as the 45-nm node where most of the process
optimization is done for minimum pitch dense lines.
Similar to the use of scattering bars to assist isolated line structures, we can use inverse Sub-Resolution Assist Features
(SRAF) to assist the patterning of isolated trench structures.
Full characterization studies on the C45 Active layer demonstrate the benefits and potential issues of this technique:
- screening of inverse SRAF parameters (size, distance to main feature) using optical simulation;
- verification of the simulation predictions, ensuring sufficient improvement in depth of focus and exposure latitude with silicon process window analysis;
- definition of the inverse SRAF OPC generation script parameters and their validation, with accurate on-silicon measurement characterization of specific test patterns;
- maskshop manufacturability through CD measurements and inspection capability.
Finally, initial silicon results from a 45nm mask are given with suggestions for additional optimization of inverse SRAF
for trenches.
Several qualification stages are required for new maskshop tools; the first step is done by the maskshop internally. Taking
a new writer as an example, the maskshop will review the basic factory and site acceptance tests, including CD
uniformity, CD linearity, local CD errors and registration errors. The second step is to have dedicated OPC (Optical
Proximity Correction) structures from the wafer fab. These dedicated OPC structures will be measured by the
maskshop to get a reticle CD metrology trend line.
With this trend line, we can:
- ensure the stability at reticle level of the maskshop processes
- put in place a matching procedure to guarantee the same OPC signature at reticle level in case of any
internal maskshop process change or new maskshop evaluation. Changes that require qualification could
be process changes for capacity reasons, like introducing a new writer or a new manufacturing line, or for
capability reasons, like the introduction of a new process (a new developer tool, for example).
Most advanced levels will have dedicated OPC structures. Also dedicated maskshop processes will be monitored with
these specific OPC structures.
In this paper, we will follow in detail the different reticle CD measurements of dedicated OPC structures for the three
advanced logic levels of the 65nm node: poly level, contact level and metal level. The related maskshop's processes are
- for poly: eaPSM 193nm with a nega CAR (Chemically Amplified Resist) process for Clear Field L/S
(Lines & Space) reticles
- for contact: eaPSM 193nm with a posi CAR process for Dark Field Holes reticles
- for metal1: eaPSM 193nm with a posi CAR process for Dark Field L/S reticles.
For all these structures, CD linearity, CD through pitch, length effects, and pattern density effects will be monitored.
To average the metrology errors, the structures are placed twice on the reticle.
The first part of this paper will describe the different OPC structures. These OPC structures are close to the DRM
(Design Rule Manual) of the dedicated levels to be monitored.
The second part of the paper will describe the matching procedure to ensure the same OPC signature at reticle level.
We will give an example of an internal maskshop matching exercise, which could be needed when switching from
an already qualified 50 keV tool to a new 50 keV tool.
The second example is the same matching exercise of our 65nm OPC structures, but with two different maskshops.
The last part of the paper will show first results on dedicated OPC structures for the 45nm node.
C045 node (65nm half-pitch) technology processes are driving the development of immersion lithography techniques and infrastructure, and the C032 node (45nm half-pitch) is following in its tracks. As semiconductor development enters the arena of low-leakage, high-performance devices using immersion lithography, the 45nm hp technology adds more pressure from decreasing pitches and feature sizes using the most cost-effective method available. The Crolles2 Alliance is in the first phases of the push for very low-k1 193nm lithography for our technology development. Many resolution enhancement techniques are being explored for the low-k1 realm, including the implementation of these techniques and more aggressive integrations to support the device parameters.
However, the early development of the 45nm hp node, along with the need for better focus and dose control algorithms and the imaging of pitches required for the packing density, will present significant challenges to photolithography even when considering super hyper-NA immersion lithography. Reflectivity variations, thin-film interference through the complex film stacks, and increased sensitivity to feature size are posing challenges for maintaining good and consistent features.
This paper discusses an analysis and early results covering the beginning of 45nm hp development with >1 NA immersion lithography. Specifically, parameters such as illumination and enhancement techniques, processing capability, application of OPC at very low k1, process integration, mask effects, and defectivity are discussed.
Resolution Enhancement Techniques (RET) are inherently design-dependent technologies. To be successful, the RET strategy needs to be adapted to the type of circuit desired. For SOC (system on chip), the three main patterning constraints come from:
- Static RAM with very aggressive design rules, especially at active, poly and contact
- Transistor variability control at the chip level
- Random layouts
The development of regular layouts, within the framework of DFM, enables the use of more aggressive RET, pushing the k1 factor below what is achievable with existing RET techniques under the current wavelength and NA limitations. Besides that, it is shown that the primary appeal of regular design usage comes from the significant decrease in transistor variability. In 45nm technology, a more than 80% variability reduction for the width and length of the transistor at best conditions, and a more than 50% variability reduction through the process window, have been demonstrated. In addition, line-end control in the SRAM bitcell becomes a key challenge for the 32nm node. Taking all these constraints into account, we present the existing best patterning strategy for the active and poly levels of the 32nm node:
- Dipole with polarization and regular layout for the active level
- Dipole with polarization, regular layout and double patterning to cut the line-ends for the poly level.
These choices have been made based on the printing performance of a 0.17 µm² SRAM bitcell and a 32nm flip-flop with an NA 1.2 immersion scanner.
As semiconductor technology moves toward and beyond the 65 nm lithography node, the importance of Optical
Proximity Correction (OPC) models grows due to the lithographer's need to ensure high fidelity in the
mask-to-silicon transfer. This, in turn, causes OPC model complexity to increase as NA increases and minimum
feature size on the mask decreases. Subtle effects that were once considered insignificant can no longer be ignored.
Depending on the imaging system, three dimensional mask effects need to be included in OPC modeling. These
effects can be used to improve model accuracy and to better predict the final process window. In this paper,
the effects of 3D mask topology on process window are studied using several 45 nm node mask structure types.
Simulations are conducted with and without a polarized illumination source. The benefits of using an advanced model algorithm that accounts for 3D mask effects will be discussed. To quantify the potential impact of this methodology relative to current best-known practices, all results are compared to those obtained from a model using a conventional thin-film mask.
The quality of a model-based OPC correction depends strongly on how the model is calibrated in order to generate a resist image as close to the desired shapes as possible. As the k1 process factor decreases and design complexity increases, correction accuracy and model stability become more important. A model can be considered stable when its response to a small variation in one or several parameters is small. In order to quantify this, the small-variation method has been tested on a variable-threshold model initially optimized for the 65nm node, using measurements done with a test pattern mask. This method consists of introducing small variations into one input model parameter and analyzing the induced effects on the simulated edge placement error (EPE). In this paper, we study the impact of small changes in the optical and resist parameters (focus settings, inner and outer partial coherence factors, NA, resist thickness) on the model stability. We then quantify the sensitivity of the model to each parameter shift. We also study the effects of modeling parameters (kernel count, model fitness, optical diameter) on the resulting simulated EPE. This kind of study allows us to detect coverage or process window problems. The process and modeling parameters have been modified one by one. The ranges of variation correspond to those observed during a typical experiment. Then the difference in simulated EPE between the reference model and the modified one has been calculated. Simulations show that the loss in model accuracy is essentially caused by changes in focus, outer sigma and NA, and by lower values of optical diameter and kernel count. Model results agree well with a production layout.
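The perturb-one-parameter bookkeeping itself is simple; a toy stand-in (an erf edge cut at a resist threshold, solved by bisection — not the actual variable-threshold model) illustrates the procedure:

```python
from math import erf

def printed_edge(blur_nm=30.0, threshold=0.4):
    """Toy threshold model: the printed edge sits where a blurred mask
    edge at x=0 crosses the resist threshold (found by bisection)."""
    f = lambda x: 0.5 * (1 + erf(-x / blur_nm)) - threshold
    lo, hi = -200.0, 200.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Small-variation method: perturb one parameter, record the EPE shift.
nominal = {"blur_nm": 30.0, "threshold": 0.4}
base = printed_edge(**nominal)
for name, delta in (("blur_nm", 0.5), ("threshold", 0.005)):
    perturbed = dict(nominal, **{name: nominal[name] + delta})
    shift = printed_edge(**perturbed) - base
    print(f"{name} +{delta}: EPE shift = {shift:+.3f} nm")
```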
Ensuring robust patterning after OPC is becoming more and more difficult due to the continuous reduction of layout dimensions and the diminishing process windows associated with each successive lithographic generation. Lithographers must guarantee high imaging fidelity throughout the entire range of normal process variations. To verify the printability of a design across the process window, compact optical models similar to those used for standard OPC are employed. These models are calibrated from experimental data measured at the limits of the process window and are then applied to the design to predict potential printing failures. This approach has been widely used for dry lithography. With the emergence of immersion lithography in IC production, the predictability of this approach has to be validated on the new lithographic process. In this paper, a comparison between the dry lithography process model and the immersion lithography process model is presented for Poly layer patterning at the 65 nm node. Examples of specific failure predictions obtained separately with the two processes are compared with experimental results. A comparison in terms of process performance will also be part of this study.
In the last 2 years, the semiconductor industry has recognized the critical importance of verification for optical proximity correction (OPC) and reticle/resolution enhancement technology (RET). Consequently, RET verification usage has increased and improved dramatically. These changes are due to the arrival of new verification tools, new companies, new requirements and new awareness by product groups about the necessity of RET verification. Currently, as the 65nm device generation comes into full production and the 45nm generation starts full development, companies now have the tools and experience (i.e., long lists of previous errors to avoid) needed to perform a detailed analysis of what is required for 45nm and 65nm RET verification. In previous work [1] we performed a theoretical analysis of OPC & RET verification requirements for the 65nm and 45nm device generations and drew conclusions for the ideal verification strategy. In this paper, we extend the previous work to include actual observed verification issues and experimental results. We analyze the historical experimental issues with regard to cause, impact and optimum verification detection strategy. The results of this experimental analysis are compared to the theoretical results, with differences and agreement noted. Finally, we use theoretical and experimental results to propose an optimized RET verification strategy to meet the user requirements of 45nm development and the differing requirements of 65nm volume production.
KEYWORDS: Critical dimension metrology, Lithography, Logic, Scanners, Semiconductors, Transistors, Design for manufacturing, Group IV semiconductors, Optical proximity correction, Process control
The continued downscaling of feature sizes and pitches with each new process generation increases the challenges of obtaining sufficient process control. As the dimensions approach the limits of lithographic capabilities, new solutions for improving printability are required. Including the design in the optimization process significantly improves printability. The use of litho-driven designs becomes increasingly important towards the 45 nm node. The litho-driven design is applied to the active, gate, contact and metal layers. It has been shown previously that the impact on the chip area is negligible. Simulations have indicated a significant improvement in controlling the critical dimensions of the gate layer. In this paper, we present our first results of an experimental validation of litho-driven designs printed on an immersion scanner. In our design we use a fixed-pitch approach that allows matching the illumination conditions to those used for the memory structures. The impact on the chip area and on the CD control will be discussed. The resulting improvement in CD control is demonstrated experimentally by comparing the results of litho-driven and standard designs. A comparison with simulations will be presented.
Ensuring robust patterning after OPC is becoming more and more difficult due to the continuous reduction of layout
dimensions and diminishing process windows associated with each successive lithographic generation. Lithographers must
guarantee high imaging fidelity throughout the entire range of normal process variations. As a result, post-OPC verification
methods have become indispensable tools for avoiding pattern printing issues. The majority of these methods are primarily
based on lithographic simulations of pattern printing behaviour across dose and focus variations. The models used for these
simulations are compact optical models combined with a single resist model. Even though very predictive resist models exist,
they often have a large number of parameters to fit and suffer from long computing times.
Simplified resist models are thus needed to improve run times during simulation.
The objective of this study is to test the predictability of such resist models across the process window. Two
different resist models will be considered in this study. The first resist model is a pure variable threshold resist model. The
second resist modelling approach is a simplified physical model which uses Gaussian convolutions and a constant threshold
to model resist printing behaviour. The study concentrates on poly layer patterning for the 65 nm node. Examples of specific
simulations obtained with the two different techniques are compared against experimental results.
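A minimal sketch of the second model type — Gaussian convolution of the aerial image followed by a constant threshold — on a toy 1D image (all parameters illustrative):

```python
import numpy as np

def resist_contours(aerial, pixel_nm, diffusion_nm, threshold):
    """Simplified physical resist model: convolve the aerial image with
    a Gaussian (acid diffusion) and cut at a constant threshold.
    Returns the threshold-crossing positions in nm."""
    radius = int(4 * diffusion_nm / pixel_nm)
    xk = np.arange(-radius, radius + 1) * pixel_nm
    kernel = np.exp(-0.5 * (xk / diffusion_nm) ** 2)
    kernel /= kernel.sum()
    img = np.convolve(aerial, kernel, mode="same")
    above = img >= threshold
    idx = np.where(above[:-1] != above[1:])[0]
    # linear interpolation of each threshold crossing
    return [(i + (threshold - img[i]) / (img[i + 1] - img[i])) * pixel_nm
            for i in idx]

# Toy aerial image of a 100nm-wide feature on a 2nm grid.
x = np.arange(0, 400, 2.0)
aerial = np.clip(1 - np.abs(x - 200) / 80, 0, None)   # triangular peak
print(resist_contours(aerial, 2.0, diffusion_nm=20.0, threshold=0.5))
```

The appeal of this form is visible in the sketch: two parameters (diffusion length, threshold) instead of the many parameters of a fully calibrated resist model, at the cost of physical fidelity.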
Despite the complexity of AAPSM patterning using the complementary PSM approach with respect to OPC correction, mask making, fab logistics, etc., the technique still remains a valuable solution for special products where a low-CD-dispersion printing process is required. For current and next-generation process technologies (90-65nm ground rules), the most common alternating mask solution, single trench etch with or without undercut, becomes more difficult to manufacture. Especially challenging is the aspect-ratio control of quartz-etched trenches as a function of density, in order to assure the correct phase angle and sidewall for dense and isolated structures over all phase-shifted geometries. In order to solve this problem, a modified mask architecture is proposed, called the Transparent Etch Stop Layer (TESL) phase shift mask. In TESL, a transparent (etch stop) layer is deposited on the quartz substrate, followed by the deposition of a quartz layer whose thickness corresponds to the required phase angle for the wavelength used. On top, a chromium layer is deposited. The patterning of this mask is quite similar to the single trench variant. The difference is that an overetch can now be applied for the phase definition, thanks to the high etch selectivity of quartz to the etch stop material. The result of this approach should be better control of the phase depth and sidewall angle for dense and isolated structures. In this paper we will discuss the results of printing tests performed using TESL masks, especially with respect to the litho process window, and we will compare these with the single trench undercut approach. Simulation results are presented with respect to shifter sidewall profile and TESL thickness in order to optimize image imbalance. Throughout the study we will correlate simulations and measurements with the after-MBOPC CD values for the shifter structures. These results will allow us to determine whether the TESL AAPSM approach can be a more effective alternative to the single trench undercut approach.
Specifications for CD control in current technology nodes have become very tight, especially for the gate level. Therefore all systematic errors in the patterning process should be corrected. For a long time, CD variations induced by any change in the local periodicity have been successfully addressed through model- and/or rule-based corrections. However, while long-range effects (stray light, etch, mask writing process, ...) are often monitored, they are seldom taken into account in OPC flows.
For the purpose of our study, a test mask has been designed to measure these effects, separating the contributions of three different process steps (mask writing, exposure and etch). The resulting induced CD errors for several patterns are compared to the allowed error budget. Then, a methodology usable in standard OPC flows is proposed to calculate the required correction for any feature in any layout. The accuracy of the method will be demonstrated through experimental results.
The 65nm and 45nm device generations will be used to manufacture large designs using complex patterning processes in combination with exotic model-based or rule-based RET scenarios. The lithography for these generations will operate in the low-k1 regime, resulting in small process windows and tight overlay requirements. Therefore, the potential for having yield-limiting errors due to RET-process-design interactions is significantly higher than with the 130nm generation.
Additionally, the high cost of reticles and the large number of process layers make it quite important to catch these costly errors.
Optical Rule Checking (ORC) is an effective way to predict failure on wafer shapes. Used in addition to Optical Proximity Correction, it can help to reduce failures affecting yield in manufacturing. Thus, due to the inter-layer complexity of processes and RET, the necessity to check accurately particular areas which could generate costly errors is growing:
Here are some examples: 1) Low metal-contact or metal-via overlaps, 2) Small poly extension past active area, 3) Low overlap between poly and contact layers, and 4) Dual exposure techniques for single layer patterning.
The main difficulty in current implementation of multiple layer RET verification is the trade off between accuracy vs. runtime vs. fault coverage.
In this paper we will demonstrate how, based on this trade-off, we can enhance final printed results by accurately targeting the most likely failure mechanisms in multiple-layer process checks in a production environment (90nm node product layout). We will show how ORC in a multiple-layer check helps detect faults and overlay-sensitive areas so as to secure process weakness areas.
We will compare several software packages in which such a methodology is applied and attempt to propose a post-OPC verification strategy to obtain a more robust manufacturing process.
Scattered light in optical lithography, also known as flare, has been shown to cause potentially significant linewidth variation at low-k1 values. The interaction radius of this effect can extend essentially from zero to the full range of a product die and beyond. Because of this large interaction radius, the correction of the effect can be very computation-intensive. In this paper, we will present the results of our work to characterize the flare effect for 65nm and 90nm poly processes, model that flare effect as a summation of Gaussian convolution kernels, and correct it within a hierarchical model-based OPC engine. Novel methods for model-based correction of the flare effect, which preserve much of the design hierarchy, are discussed. The same technique has demonstrated the ability to correct for long-range loading effects encountered during the manufacture of reticles.
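A sketch of the flare model described — a PSF built as a weighted sum of wide Gaussians, applied to a pattern-density map by FFT convolution (amplitudes and widths invented; note the FFT implies periodic boundaries):

```python
import numpy as np

def flare_map(density, pixel_um, kernels):
    """Flare PSF modeled as a weighted sum of wide Gaussians; the flare
    intensity map is the pattern density convolved with that PSF (via
    FFT).  `kernels` is a list of (amplitude, sigma_um) pairs."""
    n = density.shape[0]
    f = np.fft.fftfreq(n, d=pixel_um)
    fx, fy = np.meshgrid(f, f)
    f2 = fx ** 2 + fy ** 2
    # Fourier transform of a unit-integral Gaussian of width sigma:
    psf_ft = sum(a * np.exp(-2 * (np.pi * s) ** 2 * f2) for a, s in kernels)
    return np.real(np.fft.ifft2(np.fft.fft2(density) * psf_ft))

# 256x256 clear-field density map with a dark block in the middle.
density = np.ones((256, 256))
density[96:160, 96:160] = 0.0
flare = flare_map(density, pixel_um=1.0,
                  kernels=[(0.02, 10.0), (0.01, 40.0)])
# Flare leaks into the dark block from the surrounding clear field:
print(float(flare[128, 128]), float(flare[0, 0]))
```

The long-range kernels are why a naive spatial convolution is so expensive: each output pixel integrates contributions over the whole die, which the FFT (or a hierarchical scheme, as in the paper) sidesteps.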
For the 90nm node and below, the Depth of Focus (DOF) becomes more and more critical. To increase the DOF, lithographers have introduced resolution enhancement techniques (RET) such as sub-resolution assist features (SRAF), which are today largely used by the semiconductor industry for the 120nm, 90nm and 65nm technologies. Bruce Smith [1] showed that the DOF improvement from adding scatter bars depends on the position of the iso-focal intensity threshold compared to the critical dimension (CD) intensity threshold. When these two points coincide, the DOF is maximal. This paper shows the theoretical link between the iso-focal point and the evolution of the DOF. It will be shown that the link between these two parameters can be described by a simple equation. The theoretical expression gives a good estimation of the DOF evolution. The theoretical evolution of the iso-focal point is obtained from the expressions of the intensity. We will see that its variation is basically a function of the transmission and of the interfering diffraction orders. The expressions giving the evolution of the iso-focal point follow the trends obtained by conventional lithography simulation. We have studied the theoretical evolution of the iso-focal point for the mask types used by the semiconductor industry, such as binary, alternating and attenuated phase shift masks. We will also see how this evolution of the iso-focal point impacts the depth of focus, and that the DOF can be improved by an adjustment of the iso-focal point.
KEYWORDS: Data modeling, Optical proximity correction, Calibration, Printing, Optical lithography, Scanning electron microscopy, Process modeling, 3D modeling, Lithography, Photomasks
It is becoming more and more difficult to ensure robust patterning after OPC due to the continuous reduction of layout dimensions and the diminishing process windows associated with each successive lithographic generation. Lithographers must guarantee high imaging fidelity throughout the entire range of normal process variations. The techniques of Mask Rule Checking (MRC) and Optical Rule Checking (ORC) have become mandatory tools for ensuring that OPC delivers robust patterning. However, the first method relies on geometrical checks and the second is based on a model built at best process conditions, so these techniques cannot address all potential printing errors throughout the process window (PW). To address this issue, a technique known as Critical Failure ORC (CFORC) was introduced that uses optical parameters from aerial image simulations. In CFORC, a numerical model is used to correlate these optical parameters with experimental data taken throughout the process window to predict printing errors. This method has proven its efficiency for detecting potential printing issues through the entire process window [1]. However, this analytical method is based on optical parameters extracted via an optical model built at a single process condition. It is reasonable to expect that a verification method involving optical models built from several points throughout the PW would provide more accurate predictions of printing errors for complex features. To verify this approach, compact optical models similar to those used for standard OPC were built and calibrated with experimental data measured at the PW limits. These models are then applied to various test patterns to predict potential printing errors. In this paper, a comparison between the two approaches is presented for poly layer patterning at the 65 nm node. Examples of specific failure predictions obtained separately with the two techniques are compared with experimental results. The details of implementing the two techniques on full product layouts are also included in this study.
As lithography and other patterning processes become more complex and more non-linear with each generation, the task of defining physical design rules necessarily increases in complexity as well. The goal of the physical design rules is to define the boundary between the physical layout structures which will yield well and those which will not. This is essentially a rule-based pre-silicon guarantee of layout correctness. However, the rapid increase in design rule complexity has created logistical problems for both the design and process functions. Therefore, similar to the semiconductor industry's transition from rule-based to model-based optical proximity correction (OPC) due to increased patterning complexity, opportunities for improving physical design restrictions by implementing model-based physical design methods are evident. In this paper we analyze the possible need for, and applications of, model-based physical design restrictions (MBPDR). We first analyze the traditional design rule evolution, development and usage methodologies of semiconductor manufacturers. Next we discuss examples of specific design rule challenges requiring new solution methods in the patterning regime of low-k1 lithography and highly complex RET. We then evaluate possible working strategies for MBPDR in the process development and product design flows, including examples of recent model-based pre-silicon verification techniques. Finally, we summarize with a proposed flow and key considerations for MBPDR implementation.
As lithography continues to increase in difficulty with low k1 factors, and ever-tighter process margins, model-based optical proximity correction (OPC) is being used for the majority of patterning layers. As a result, the engineering effort consumed by the development and calibration of OPC models is continuing to increase at an alarming rate. One of the major focal points of this effort is the increasing emphasis on improving the accuracy of the model-based OPC corrections. One of the major contributors to final OPC accuracy is the quality of the resist model. As a result of these trends, the number of sample points used to calibrate OPC models is increasing rapidly from generation to generation. However, this increase is largely due to an antiquated approach to the construction of these calibration sets, focusing on structure variations. In this study, a new approach to the calibration of a resist model will be proposed based upon the location of calibration structures within the actual resist space over which the resist model is expected to be predictive.
In the context of 65nm logic technology where gate CD control budget requirements are below 5nm, it is mandatory to properly quantify the impact of the 2D effects on the electrical behavior of the transistor [1,2]. This study uses the following sequence to estimate the impact on transistor performance:
1) A lithographic simulation is performed after OPC (Optical Proximity Correction) of active and poly using a calibrated model at best conditions. Some extrapolation of this model can also be used to assess marginalities due to process window (focus, dose, mask errors, and overlay). In our case study, we mainly checked the poly to active misalignment effects.
2) The electrical behavior of the transistor (Ion, Ioff, Vt) is calculated with a SPICE-derived model, using the simulated image of the gate as an input. In most cases, Ion analysis, rather than Vt or leakage, gives sufficient information for patterning optimization. We have demonstrated the benefit of this approach with two different examples:
- Design rule trade-offs: we estimated the impact, with and without misalignment, of critical rules like poly corner to active distance, active corner to poly distance, or minimum space between a small transistor and a big transistor.
- Standard cell library debugging: we applied this methodology to the one hundred most critical transistors of our standard cell libraries and calculated the Ion behavior with and without misalignment between active and poly. We compared two scanner illumination modes and two OPC versions based on the behavior of these one hundred transistors. We were able to see the benefits of one illumination, as well as the improvement in OPC maturity.
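A toy version of step 2's contour-to-current calculation — slicing the simulated gate contour along the device width and summing per-slice currents with a first-order 1/L scaling (arbitrary units; the paper uses a SPICE-derived model instead):

```python
def ion_from_contour(gate_cd_by_slice, slice_width_nm):
    """Toy non-rectangular-transistor current: slice the gate contour
    along the device width and sum slice currents, each scaling as
    slice_width / gate_length (arbitrary units)."""
    return sum(slice_width_nm / cd for cd in gate_cd_by_slice)

# Simulated gate CD (nm) along the width, in 10nm slices.
aligned    = [42, 40, 40, 40, 40, 40, 40, 42]
misaligned = [46, 43, 40, 39, 39, 40, 44, 50]   # poly-to-active overlay shift
i_ref = ion_from_contour(aligned, 10)
i_mis = ion_from_contour(misaligned, 10)
print(f"Ion change under misalignment: {100 * (i_mis / i_ref - 1):+.1f}%")
```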
For the past several technology nodes, switching from spin-on organic Bottom Anti-Reflective Coatings (BARCs) to CVD organic BARCs has been proposed as the optimal solution for critical photolithography processes. However, spin-on BARC film stacks still have widespread adoption for a variety of reasons. Despite the continuous improvement in lithographic techniques, the current challenges for 65nm (half-pitch) process integration demand that critical photo processes sacrifice significant pattern collapse margin to maintain high aspect ratios. In the meantime, pressure on CD control has also continued to increase. As a result of these trends, the choice and optimization of hard mask and anti-reflective solutions are a critical area of process development.
This paper presents an update on the trade-offs between spin-on organic BARCs and CVD organic integrations when applied to 65nm gate patterning constraints. The proposed carbon-containing CVD stack has shown great advantages in terms of reflectivity control and pattern collapse margin, leading to an overall improved lithographic process window. On the other hand, satisfactory critical dimensions without organic BARC were seen when studying parameters such as line width roughness (LWR), profiles and rework impact. These statements have also been assessed with some promising etch and electrical results.
This paper details a study undertaken to revisit defect specifications and maskshop metrology calibration for a mature lithographic process. A programmed array was created containing darkfield and brightfield feature types at various pitches with appropriate OPC sizing. Defects were systematically added to the layout with differing sizes and spacing from the main feature. After exposure with production illumination settings, resist image data was collected and used to determine critical defect sizes. These results are correlated with typical maskshop metrology methods such as AIMS, AVI Photomask Defect Metrology Software (PDMS), and CDSEM. In some cases, it is shown that AIMS data correlates poorly with both defect size and spacing from the feature edge when using illumination settings nominally matched to the exposure tool. Finally, for the particular processes reviewed in this study, the results indicate that the initial reticle defect specifications are often too aggressive for the finalized production lithographic process.
In this paper, we present a new technique (Critical Failure ORC, or CF-ORC) to check the robustness of the structures created by OPC through the process window. The full methodology is explained and tested on a full chip at the 90-nm node. Improvements compared to standard ORC/MRC techniques will be presented on complex geometries. Finally, examples of concrete failure predictions are given and compared to experimental results.
The mask error enhancement factor (MEEF) is a commonly used metric in lithography. This parameter gives a good indication of the impact of intra-mask CD variation on the wafer. Unfortunately, MEEF cannot anticipate the CD variation on the wafer induced by Mask Mean-To-Target variation (MMT). Currently, MMT error is compensated by adjusting the exposure dose. This paper presents the concept of MEV (MEEF Energy-latitude Variation), defined, in a similar way to the MEEF concept, by the equation δCD_wafer = MEV × δMMT after dose compensation. A simple expression for the MEV will be presented, which shows that the MEV factor is proportional to the variation of the product EL × MEEF through the population. Using the 65nm logic gate level, the MEV is experimentally shown to be non-zero, and roughly half the MEEF factor, which is of course non-negligible in the sub-100nm regime. Based on aerial image simulation, pure optical effects are responsible for about 40% of the MEV, which gives a slight predominance to the resist part. Finally, the possibility of reducing the MEV factor by compensating for MMT variation not only by dose but also by a change of illumination settings is discussed. This will give the basis for an Advanced Process Control (APC) algorithm for future generations.
To follow the accelerating ITRS roadmap, microprocessor and DRAM manufacturers are on their way to introducing the alternating phase shift mask (APSM) to print the gate level on sub-130-nm devices. This comes at very high mask cost, long cycle times, and poor guarantees of obtaining defect-free masks. Nakao et al. have proposed a new resolution enhancement technique (RET). They have shown that sub-0.1-µm features can be printed with good process latitudes using a double binary mask printing technique. This solution is very interesting, but is applicable to isolated structures only. To overcome this limitation, we have developed an extension of this technique called complementary double exposure (CODE). It combines Nakao's technique with the use of assist features that are removed during a second subsequent exposure. This new method enables us to print isolated as well as dense features on advanced devices using two binary masks. We describe all the steps required to develop the CODE application. The layout rule generation and the impact of the second mask on the process latitude have been studied. Experimental verification has been done using 193-nm 0.63 and 0.75 numerical aperture (NA) scanners. The improvement brought by quadrupole or annular illumination combined with CODE has also been evaluated. Finally, the results of the CODE technique, applied to a portion of a real circuit using all the developed rules, are shown.
In a previous paper, we proposed the CODE (Complementary Double Exposure) technique, a new manufacturable Resolution Enhancement Technique (RET) using two binary masks. We demonstrated the printability of 80nm dense (300nm pitch), semi-dense and isolated lines using the CODE technique and showed good printing results using a 0.63NA ArF scanner. In a more recent article we described all the steps required to develop the CODE application: the binary decomposition and the solutions developed to compensate adequately for line-end shortening. That study was based on aerial image simulations only. In this paper, we give experimental results for printing complex two-dimensional structures for the high-performance version of a 90nm ground rule, 240nm minimal pitch process, using the CODE technique. Results on depth of focus (DOF), energy latitude (EL) and mask error enhancement factor (MEEF) through pitch, and on end-cap correction, will be discussed for quadrupole and annular illumination using a 193nm 0.70NA exposure tool. The CODE technique, not only because of its lower cost but also because of its performance, could be a good alternative to the alternating PSM technique, with fewer design penalties and a better mask making cycle time.
In order to address some specific issues related to gate level printing of the 0.09µm logic process, the following mask and illumination solutions have been evaluated: annular and Quasar illumination using a binary mask with assist features, and the CODE (Complementary Double Exposure) technique. Two different linewidths have been targeted after lithography: 100nm and 80nm, for low-power and high-speed applications respectively. The different solutions have been compared on their printing performance through pitch for Energy Latitude, Depth of Focus and Mask Error Enhancement Factor. The assist bar printability and the line-end control were also determined. For printing the 100nm target, all tested options can be used, with a preference for Quasar illumination for the gain in Depth of Focus and MEEF. For the 80nm target however, only the CODE technique with Quasar illumination gives sufficiently good results for the critical litho parameters.
A capable process fulfills many requirements on e.g. depth of focus, exposure latitude, and mask error factor, which makes a full optimization complicated. Traditionally only a few parameters are included in the optimization routine, such as the focus-dose process window, while other parameters like the (NA, σ) illumination conditions are fixed at a specified value. In this paper we present an analytical model describing the effect of variations in dose, focus and mask CD. We optimize the overall CD distribution, both the target value and the CD variation, taking the statistical variations of focus, dose and mask linewidth into account. The improved CD control is quantified using the well-known process capability index (Cpk). The results are compared to traditional optimization schemes and to brute-force Monte Carlo simulations. Process latitudes can be better optimized while calculating the OPC curve; this is achieved by tuning the mask corrections to the process variations and simultaneously optimizing the global mask bias. Furthermore, the optimization method enables a trade-off between mask error and process control. Simulated aerial image data is used to determine the optimum mask bias and illumination condition for different levels of process variation, including mask CD variation. The effect of optimizing the global mask bias is calculated. Finally, the results are compared to experimental data for a number of illumination settings.
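To make the Cpk criterion concrete, the brute-force Monte Carlo sketch below computes it for an assumed CD model that is linear in dose and mask CD and quadratic in focus. The sensitivities, statistical inputs and spec limits are illustrative placeholders, not the paper's values.

```python
# Monte Carlo sketch of the Cpk figure of merit. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
focus = rng.normal(0.0, 0.05, N)       # um, assumed focus distribution
dose  = rng.normal(1.0, 0.01, N)       # normalised dose, 1% sigma
mask  = rng.normal(0.0, 2.0, N)        # mask CD error at 1x, nm

TARGET, USL, LSL = 100.0, 110.0, 90.0              # nm spec window (assumed)
MEEF, DOSE_SENS, FOCUS_SENS = 1.6, -80.0, -400.0   # assumed sensitivities

# Placeholder CD model: linear in mask error and dose, quadratic in focus.
cd = TARGET + MEEF * mask + DOSE_SENS * (dose - 1.0) + FOCUS_SENS * focus**2

mu, sigma = cd.mean(), cd.std()
cpk = min(USL - mu, mu - LSL) / (3 * sigma)        # process capability index
print(f"mean CD = {mu:.1f} nm, sigma = {sigma:.2f} nm, Cpk = {cpk:.2f}")
```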
In a recent paper, we proposed a new manufacturable Reticle Enhancement Technique (RET) using two binary masks, called CODE (Complementary Double Exposure). We demonstrated the printability of 80nm dense (300nm pitch), semi-dense and isolated lines using this technique and showed good performance using an ArF 0.63NA scanner. To be able to use the CODE RET in production, we must be able to handle complex two-dimensional structures as well. In this paper we study representative two-dimensional complex structures of a circuit in order to get a complete overview of this technique. We analyze the impact of the asymmetrical apertures and the impact of the overlap of the 2nd mask with the 1st mask. We show that asymmetrical apertures impact the linewidth of the non-critical lines. We also show that the 2nd mask does not only protect the already exposed part: it also contributes strongly to the printability of the complex structures by correcting the defects of the 1st exposure. Finally, we show the results of the CODE technique applied to a portion of a real circuit using all the developed rules.
To follow the accelerating ITRS roadmap, microprocessor and DRAM manufacturers have introduced the Alternating Phase Shift Mask (Alt.PSM) resolution enhancement technique (RET) in order to be able to print the gate level on sub-130nm devices. This comes at very high mask cost, with long cycle times and poor guarantees of obtaining defect-free masks. S. Nakao has proposed a new RET, showing that sub-0.1µm features could be printed with good process latitudes using a double binary mask printing technique. This solution is very interesting, but is applicable to isolated structures only. To overcome this limitation, we have developed an extension of this technique called CODE. It combines Nakao's technique and the use of assist features that are removed during a second subsequent exposure. This new solution enables us to print isolated as well as dense features on advanced devices using two binary masks. This paper describes all the steps required to develop the CODE application: (1) determination of the optimal optical settings, (2) determination of the optimal assist feature size and placement, (3) layout rules generation, (4) application of the layout rules to a complex layout, using the Mentor Graphics Calibre environment, (5) experimental verification using a 193nm 0.63NA scanner.
The insertion point for the first scattering bar is a key parameter in the development of a process using assist features, because this semi-dense feature determines the overall depth of focus of the process. A study of the parameters that influence the choice of this insertion point has been performed using a 0.63 NA 193 nm scanner for a 100 nm CD target after litho. The impact of the scattering bar on Depth of Focus, Energy Latitude, Mask Error Enhancement Factor and printability, as well as the effect of scattering bar linewidth variation on the main feature, described by a parameter called AFMEEF, will be discussed in this paper. The optimal insertion point for the first scattering bar strongly depends on the lithographic process and on the mask parameters. A model is proposed to determine the optimal insertion point as a function of the dose and focus budget, the minimal allowed scattering bar width, and the mask CD dispersion for both scattering bars and main features.
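For concreteness, the two sensitivities named above can be written as central finite differences. The sketch below assumes a hypothetical simulator `print_cd(main_w, sb_w)` returning the main-feature wafer CD (nm) for given main-feature and scattering bar mask widths at 1x; reading AFMEEF as the main-feature CD sensitivity to the scattering bar width follows the abstract's description and is our interpretation, not a published definition.

```python
# Sketch (assumed definitions): MEEF and AFMEEF by central finite differences
# around the nominal mask dimensions, both expressed at wafer scale (1x).

def meef(print_cd, main_w, sb_w, d=0.5):
    """MEEF = d(CD_wafer) / d(CD_mask, main feature)."""
    return (print_cd(main_w + d, sb_w) - print_cd(main_w - d, sb_w)) / (2 * d)

def afmeef(print_cd, main_w, sb_w, d=0.5):
    """AFMEEF (our reading): sensitivity of the main-feature wafer CD
    to the scattering bar mask width."""
    return (print_cd(main_w, sb_w + d) - print_cd(main_w, sb_w - d)) / (2 * d)
```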
To process 0.13 µm designs and below, a new data processing flow has been implemented at STMicroelectronics Crolles, based on the Mentor Graphics suite. To deal more easily with model-based corrections and additional verifications on critical layers, a separation of the design database into critical and non-critical layers has been introduced. The resist model and the correction parameters are developed iteratively. File sizes and data processing time are the main issues in mask data preparation. The impact on mask manufacturing is also illustrated in this paper.
The first cost-effective solution to achieve a 100 nm gate with a 300 nm pitch for ASIC manufacturing is to validate a 193 nm technology using binary masks and weak OPC. This allows us to obtain zero-defect masks with a relatively short cycle time. In order to determine and minimize the CD dispersion resulting from the mask making process for ArF lithography, the following sources of error have been studied: (1) Mask CD dispersion: the effect of CD dispersion was analyzed for different mask making processes (combinations of raster optical, raster e-beam and shaped-beam writers with dry and wet etch). Shaped beam in combination with dry etch showed the best results in this study; a CD dispersion at 1x of 3 nm is observed. (2) MEEF: the MEEF was determined using different methods and found to be 1.6 for a 300 nm pitch at 193 nm with NA = 0.63 and σ = 0.8. This value can be further improved by using quadrupole illumination or a higher NA. (3) Linearity and proximity effects on mask: the shaped-beam process shows better linearity and fewer proximity effects compared to a raster tool based process. Without OPC correction, this difference is very important. The choice of the writing tool is less important with respect to proximity and linearity effects when using a model-based OPC approach, since the effects are more or less systematic and can be compensated for. (4) Effect of quartz transmission at 193 nm: the transmission variation at 193 nm of standard 248 nm quartz blanks is around three times higher than at 248 nm. This leads to a 3 nm CD variation, which is not negligible considering the 20 nm budget; a new type of blank is required. To achieve a 100 nm gate printing capability for low-volume ASIC production, a good understanding and control of all the steps in the mask process are needed. Furthermore, even if all these steps are well controlled, the total mask CD budget today is still larger than the budget indicated by the ITRS roadmap: 35% versus 30%.
Flare is known to be responsible for contrast loss and process latitude reduction. Another undesirable effect is flare variation, which induces linewidth variation. For a stepper, this is mainly an intra-field effect: the main contribution to flare variation comes from the across-field flare variation (AFFV), while the contribution of the across-wafer flare variation (AWFV) is comparatively weak. Using a scanner, the AFFV and the mean flare for an isolated field have been reduced by a factor of two. Unfortunately, the stray light variation across the wafer has increased, and the AWFV and mean flare with adjacent fields have not dropped significantly. In this paper, the averaged flare, AFFV and AWFV are compared on a 248 nm stepper ASM/300, a scanner ASM/500 and a 193 nm scanner ASM/900. The influence of different parameters such as field size, bottom anti-reflective coating, adjacent fields and exposure at the edge of the wafer is analyzed on the mean flare value, AFFV and AWFV. An improvement of the averaged flare for an isolated field and of the AFFV has been observed for the scanner. However, the flare impact needs to be carefully considered, because the AWFV and the mean flare with adjacent fields are still not negligible. The flare value also seems to drop significantly with the wavelength change, but more experiments need to be done on this non-mature technology.
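The mechanism by which a flare variation turns into a linewidth variation can be illustrated with a simple threshold-resist sketch, in which flare is modelled as a uniform stray-light background added to the aerial image. All numbers below are placeholders, not measured values.

```python
# Threshold-resist sketch: flare as a uniform DC background on the image,
# I' = (1 - f) * I + f, with the printed CD read at a fixed threshold.
import numpy as np
from scipy.special import erf

def cd_with_flare(flare, cd_nom=180.0, blur=60.0, threshold=0.35):
    """Printed width (nm) of an isolated bright space under stray light."""
    x = np.linspace(-500.0, 500.0, 20001)                  # position grid, nm
    # Gaussian-blurred top-hat aerial image of a space of width cd_nom
    image = 0.5 * (erf((x + cd_nom / 2) / blur) - erf((x - cd_nom / 2) / blur))
    image = (1.0 - flare) * image + flare                  # add flare background
    above = x[image > threshold]
    return above.max() - above.min()                       # threshold-crossing width

for f in (0.00, 0.02, 0.04):
    print(f"flare {f:.0%}: printed CD = {cd_with_flare(f):.1f} nm")
```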
Laurent Pain, Yorick Trouiller, Alexandra Barberet, O. Guirimand, Gilles Fanget, N. Martin, Yves Quere, M. Nier, Emile Lajoinie, Didier Louis, Michel Heitzmann, P. Scheiblin, A. Toffoli
193 nm lithography is today the emerging solution for the development and the production of future integrated circuits based on sub-150 nm design rules. However, the characterization and the evaluation of these tools require significant effort due to the behavior of 193 nm resists during SEM observation. This paper presents the process flow chart that allows the evaluation of an ASM-L 5500/900 193 nm scanner by electrical measurement, and the stack used for this study. After the validation of this flow chart, this work gives an overview of the ASM-L 5500/900 performance.
The goal of this paper is to understand the optical phenomena at the dielectric levels (contact, local interconnect, via and damascene line levels). The purpose is also to quantify the impact of dielectric and resist thickness variations on the CD range, with and without Bottom Anti-Reflective Coating (BARC). First we show how all dielectric levels can be reduced to the stack metal/oxide/BARC/resist, and what the contributions to the resist and dielectric thickness ranges are for each level. Then a simple model is developed to understand CD variation in this stack: by extending to the dielectric levels the Perot-Fabry model developed by Brunner for the gate level, we obtain a simple relation between the CD variation and all parameters (metal and oxide thickness, resist thickness, BARC absorbance). Experimentally, CD variations for the damascene line level of a 0.18 µm technology have been measured as a function of oxide and resist thickness, and confirm this model. UV5 resist, AR2 BARC from Shipley and Top ARC from JSR have been used for these experiments.
The goal of this paper is to understand the optical phenomena at the dielectric levels. The purpose is also to quantify the impact of dielectric and resist thickness variations on the CD range, with and without Bottom Anti-Reflective Coating (BARC). First we show how all dielectric levels can be reduced to the stack metal/oxide/BARC/resist, and what the contributions to the resist and dielectric thickness ranges are for each level. Then a simple model is developed to understand CD variation in this stack: by extending to the dielectric levels the Perot-Fabry model developed by Brunner for the gate level, we obtain a simple relation between the CD variation and all parameters. Experimentally, CD variation for the damascene line level of a 0.18 µm technology has been measured as a function of oxide and resist thickness, and confirms this model. UV5 resist, AR2 BARC from Shipley and Top ARC from JSR have been used for these experiments. The main conclusions are: (1) If the resist thickness is controlled, a standard BARC process as used for the gate level is, depending on the dielectric deposition and CMP processes, adequate to remove the influence of oxide thickness variation, provided the optimized resist thickness is used. (2) If both the resist thickness and the dielectric thickness are uncontrolled, a more absorbent BARC is required.
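The swing-curve reasoning of the two abstracts above can be summarised with Brunner's classic swing-ratio formula, S = 4·sqrt(R1·R2)·exp(−αD): the CD swing amplitude scales with the square root of the reflectivity R2 seen at the bottom of the resist, and in the dielectric case R2 itself oscillates with the oxide thickness. The sketch below is a minimal illustration with placeholder numbers, not the paper's extended model.

```python
# Brunner swing-ratio sketch with placeholder values (not the paper's model).
import numpy as np

def swing_ratio(r1, r2, alpha, resist_thickness):
    """Relative swing amplitude S = 4*sqrt(R1*R2)*exp(-alpha*D) (Brunner)."""
    return 4.0 * np.sqrt(r1 * r2) * np.exp(-alpha * resist_thickness)

def r2_through_barc(r_substrate, k, barc_thickness, wavelength=248.0):
    """Effective bottom reflectivity damped by an absorbing BARC."""
    one_pass = np.exp(-4.0 * np.pi * k * barc_thickness / wavelength)
    return r_substrate * one_pass**2        # double pass through the BARC

# Example: k = 0.7 BARC of 120 nm (1200 angstroms) under 800 nm of resist.
print(swing_ratio(0.04, r2_through_barc(0.5, 0.7, 120.0), 5e-4, 800.0))
```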
BARC technology, originally developed for the gate level, now has to be applied to the interconnection levels. The requirements for dielectric interconnection levels differ from those of the gate level: for the gate level, the ARC has to minimize the reflectivity at the resist/substrate interface because of notching and resist swing curve effects, whereas for interconnections the ARC has to minimize the reflectivity variation at the resist/substrate interface due to the swing curve effect in the dielectric layer. For interconnections, the ARC must be as absorbent as possible at the stepper exposure wavelength, and two ways are foreseen: an ARC layer with a high k value at 248 nm, or an ARC layer with a high thickness. For a given maximum reflectivity variation, a couple (k, minimum thickness) can be found. Experiments give, for a reflectivity variation below 5%, the following couples: (k = 0.7, 1200 Å thickness) and (k = 1.1, 850 Å). In this paper we describe different applications of SiOxNy for interconnection levels: via, contact and damascene line levels. Improvements in CD dispersion are observed depending on the SiOxNy thickness. To conclude, SiOxNy ARC can be used for interconnection levels, and its performance depends on the (k, thickness) couple.
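The (k, thickness) couples quoted above can be explored with a standard normal-incidence transfer-matrix (characteristic matrix) computation of the reflectivity seen from the resist into the BARC/oxide/metal stack; sweeping the oxide thickness gives the reflectivity variation to be kept below 5%. The refractive indices below are illustrative textbook-style values at 248 nm, not measured ones.

```python
# Normal-incidence characteristic-matrix reflectivity sketch (illustrative).
import numpy as np

WL = 248.0  # exposure wavelength, nm

def reflectivity(n_layers, d_layers, n_inc):
    """|r|^2 from medium n_inc through finite layers n_layers[:-1]
    (thicknesses d_layers) onto the substrate n_layers[-1]."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers[:-1], d_layers):
        delta = 2.0 * np.pi * n * d / WL
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    ns = n_layers[-1]
    b = M[0, 0] + M[0, 1] * ns          # characteristic-matrix outputs
    c = M[1, 0] + M[1, 1] * ns
    return abs((n_inc * b - c) / (n_inc * b + c)) ** 2

n_resist, n_oxide, n_metal = 1.76, 1.51, 0.2 + 3.0j   # assumed indices at 248 nm
for k in (0.7, 1.1):
    n_barc = 1.8 + 1j * k                             # BARC index, swept k
    for t_barc in (85.0, 120.0):                      # 850 and 1200 angstroms
        rs = [reflectivity([n_barc, n_oxide, n_metal], [t_barc, t_ox], n_resist)
              for t_ox in np.linspace(400.0, 700.0, 61)]   # oxide sweep, nm
        print(f"k={k}, BARC={t_barc:.0f} nm: R variation = {max(rs) - min(rs):.3%}")
```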
The phase shift mask (PSM) is a key emerging technology expected to extend 248 nm lithography. In this paper, we describe the lithographic performance of Shipley UV5 photoresist on SiOxNy Bottom Anti-Reflective Coating (BARC), using an alternating PSM and an ASM/90 Deep-UV stepper. Results on 0.18 µm design rules are presented: lithographic performance, a comparison between PSM and binary mask, sub-0.18 µm performance [1] and the ultimate resolution of this technology are reported. To conclude, we demonstrate the feasibility of 0.18 µm lithography with an alternating mask and a KrF stepper, and show that all the necessary tools are available today to achieve such goals.