Joint optimization (JO) of source and mask together is known to produce better SMO solutions than sequential
optimization of the source and the mask. However, large scale JO problems are very difficult to solve because the global
impact of the source variables causes an enormous number of mask variables to be coupled together. This work presents
innovations that minimize this runtime bottleneck. The proposed SMO parallelization algorithm allows separate mask
regions to be processed efficiently across multiple CPUs in a high performance computing (HPC) environment, despite
the fact that a truly joint optimization is being carried out with source variables that interact across the entire mask.
Building on this engine, a progressive deletion (PD) method was developed that can directly compute "binding
constructs" for the optimization, i.e. our method can essentially determine the particular feature content which limits the
process window attainable by the optimum source. This method minimizes the uncertainty, inherent to the heuristic metrics used by different clustering/ranking methods, in seeking an overall optimum source.
An objective benchmarking of the effectiveness of different pattern sampling methods was performed during post-optimization analysis. The PD method serves as a gold standard against which to develop optimum pattern clustering/ranking
algorithms. With this work, it is shown that it is not necessary to exhaustively optimize the entire mask together with the
source in order to identify these binding clips. If the number of clips to be optimized exceeds the practical limit of the
parallel SMO engine, one can start with a pattern-selection step to achieve high clip-count compression before SMO.
With this LSSO capability one can address the challenging problem of layout-specific design, or improve the technology
source as cell layouts and sample layouts replace lithography test structures in the development cycle.
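As a rough illustration of the progressive-deletion idea, the following Python sketch shows how a PD loop might isolate binding clips; optimize_source and process_window are hypothetical stand-ins for the parallel SMO engine, not its actual interface.

def progressive_deletion(clips, optimize_source, process_window, tol=1e-6):
    """Greedy PD sketch: drop clips whose removal leaves the optimal
    process window unchanged; what remains are the binding clips."""
    # Reference window from optimizing over the full clip set.
    pw_all = process_window(optimize_source(clips), clips)
    binding = list(clips)
    for clip in list(clips):
        trial = [c for c in binding if c is not clip]
        source = optimize_source(trial)
        # Removing a non-binding clip cannot enlarge the attainable window.
        if process_window(source, trial) <= pw_all + tol:
            binding = trial  # clip was not binding; discard it
    return binding

In this sketch the attainable window can only grow when a constraint is removed, so a clip whose removal leaves the window unchanged cannot be among the features that limit the optimum source.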
Source-mask optimization (SMO) in optical lithography has in recent years been the subject of increased
exploration as an enabler of 22/20nm and beyond technology nodes [1-6]. It has been shown that intensive
optimization of the fundamental degrees of freedom in the optical system allows for the creation of non-intuitive
solutions in both the source and mask, which yields improved lithographic performance. This paper
will demonstrate the value of SMO software in resolution enhancement techniques (RETs). Major benefits
of SMO include improved through-pitch performance, the possibility of avoiding double exposure, and
superior performance on two-dimensional (2D) features. The benefits from optimizing the source alone, the mask alone, and both source and mask together will be demonstrated. Furthermore, we
leverage the benefits from intensively optimized masks to solve large array problems in memory use models
(MUMs). Mask synthesis and data prep flows were developed to incorporate the usage of SMO, including
both RETs and MUMs, in several critical layers during 22/20nm technology node development.
Experimental assessment will be presented to demonstrate the benefits achieved by using SMO during
22/20nm node development.
In recent years the potential of Source-Mask Optimization (SMO) as an enabling technology for 22nm-and-beyond lithography
has been explored and documented in the literature [1-5]. It has been shown that intensive optimization of the fundamental
degrees of freedom in the optical system allows for the creation of non-intuitive solutions in both the mask and the
source, which leads to improved lithographic performance. These efforts have driven the need for improved controllability
in illumination [5-7] and have pushed the required optimization performance of mask design [8, 9]. This paper will present recent
experimental evidence of the performance advantage gained by intensive optimization, and enabling technologies like pixelated
illumination. Controllable pixelated illumination opens up new regimes in the control of proximity effects [1, 6, 7], and we
will show corresponding examples of improved through-pitch performance in 22nm Resolution Enhancement Technique
(RET). Simulation results will back up the experimental results and detail the ability of SMO to drive exposure-count reduction,
as well as a reduction in process variation due to critical factors such as Line Edge Roughness (LER), Mask Error
Enhancement Factor (MEEF), and the Electromagnetic Field (EMF) effect. The benefits of running intensive optimization
with both source and mask variables jointly have been previously discussed [1-3]. This paper will build on these results by
demonstrating large-scale jointly-optimized source/mask solutions and their impact on design-rule enumerated designs.
Near-field interference lithography is a promising variant of multiple patterning in semiconductor device fabrication
that can potentially extend lithographic resolution beyond the current materials-based restrictions on the
Rayleigh resolution of projection systems. With H2O as the immersion medium, non-evanescent propagation
and optical design margins limit the achievable pitch to approximately 0.53λ/n_H2O = 0.37λ. Non-evanescent images
are constrained only by the comparatively large resist indices (typically ~1.7) to a pitch resolution of 0.5λ/n_resist
(typically 0.29λ). Near-field patterning can potentially exploit evanescent waves and thus achieve higher spatial
resolutions. Customized near-field images can be achieved through the modulation of an incoming wavefront
by what is essentially an in-situ hologram that has been formed in an upper layer during an initial patterned
exposure. Contrast Enhancement Layer (CEL) techniques and Talbot near-field interferometry can be considered
special cases of this approach.
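For concreteness, the quoted limits can be checked numerically in Python for 193nm immersion lithography, taking n_H2O ≈ 1.44 at 193nm and the typical resist index of 1.7 cited above.

# Numeric check of the pitch limits quoted above (193 nm immersion).
wavelength = 193.0            # nm
n_h2o, n_resist = 1.44, 1.7   # water index at 193 nm; typical resist index

pitch_water = 0.53 * wavelength / n_h2o      # ~0.37*lambda
pitch_resist = 0.50 * wavelength / n_resist  # ~0.29*lambda

print(f"water-limited pitch:  {pitch_water:.0f} nm")   # ~71 nm
print(f"resist-limited pitch: {pitch_resist:.0f} nm")  # ~57 nm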
Since the technique relies on near-field interference effects to produce the required pattern on the resist, the
shape of the grating and the design of the film stack play a significant role in the outcome. As a result, it is
necessary to resort to full diffraction computations to properly simulate and optimize this process.
The next logical advance for this technology is to systematically design the hologram and the incident wavefront
which is generated from a reduction mask. This task is naturally posed as an optimization problem, where
the goal is to find the set of geometric and incident wavefront parameters that yields the closest fit to a desired
pattern in the resist. As the pattern becomes more complex, the number of design parameters grows, and the
computational problem becomes intractable (particularly in three-dimensions) without the use of advanced numerical
techniques. To treat this problem effectively, specialized numerical methods have been developed. First,
gradient-based optimization techniques are used to accelerate convergence to an optimal design. To compute derivatives with respect to the design parameters, an adjoint-based method was developed. Using the adjoint technique, only two
electromagnetic problems need to be solved per iteration to evaluate the cost function and all the components
of the gradient vector, independent of the number of parameters in the design.
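The adjoint bookkeeping can be sketched in Python as follows; solve_forward, solve_adjoint, cost_fn, and the per-parameter operator derivatives dA_dp are hypothetical placeholders for a real electromagnetic solver, but the structure shows why the full gradient costs only one extra solve.

import numpy as np

def adjoint_step(params, solve_forward, solve_adjoint, cost_fn, dA_dp):
    """One iteration: two EM solves give the cost and the full gradient."""
    field = solve_forward(params)        # EM solve #1: forward problem
    cost, dJ_dfield = cost_fn(field)     # misfit to the desired resist image
    adj = solve_adjoint(dJ_dfield)       # EM solve #2: adjoint problem
    # Each gradient component is an inner product of the forward and
    # adjoint fields with that parameter's operator sensitivity dA/dp.
    grad = np.array([np.real(np.vdot(adj, dA(params) @ field))
                     for dA in dA_dp])
    return cost, grad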
We demonstrate experimentally for the first time the feasibility of applying SMO technology using pixelated illumination. Wafer images of SRAM contact holes were obtained to confirm the feasibility of using SMO for 22nm node lithography. There are still challenges in other areas of SMO integration, such as mask build, mask inspection and repair, process modeling, full-chip design issues, and pixelated illumination, the last of which is the emphasis of this paper. In this first attempt we successfully designed a manufacturable pixelated source and had it fabricated and installed in an exposure tool. The printing result is satisfactory, although there are still some deviations of the wafer image from the simulation prediction. Further experiments and more detailed modeling of the impact of errors in source design and manufacturing will follow. We believe that tightening all kinds of specifications and optimizing all procedures will make pixelated illumination a viable technology for 22nm and beyond.
Traditional OPC is essentially an iterated feedback process, in which the position of each target edge is corrected by
adjusting a controlling mask edge. However, true optimization adjusts the mask variables collectively, and in so-called
SMO approaches (for Source Mask Optimization) the source variables are adjusted as well. Optimized masks often have
high edge density if synthesis methods are used in an effort to obtain a more global solution, and the correspondence
between individual mask edges and printed target edges becomes less clear-cut than in traditionally OPC'd masks.
Restrictions on phase shift and MEEF tend to reduce this departure from traditional solutions, but they trade off the
theoretical performance advantage in dose and focus latitude that phase shift provides for a reduced sensitivity to thick
mask topography and to manufacturing error. Mask variables couple across long distances only in the indirect sense of
stitched connection across chains of neighbor-to-neighbor interactions, but source variables interact directly across entire
masks. Source+mask optimization of large areas therefore involves long-range communication across the parts of the
calculation, though the number of source variables involved is small. Tradeoffs between source structure, pattern
diversity, and design regularity are illustrated, taking into account the limited (but unknown) number of binding features
in a large layout. SMO's exploitation of complex source designs is shown to provide superior solutions to those obtained
by mask optimization alone.
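The contrast with iterated-feedback OPC can be made concrete with a minimal Python sketch; simulate_edges is a hypothetical stand-in for the lithography model, and real OPC engines are of course far more elaborate.

def opc_feedback(mask_edges, targets, simulate_edges, gain=0.5, iters=20):
    """One controlling mask edge per target edge, corrected independently."""
    for _ in range(iters):
        printed = simulate_edges(mask_edges)
        # Move each mask edge against the sign of its edge-placement error.
        mask_edges = [m - gain * (p - t)
                      for m, p, t in zip(mask_edges, printed, targets)]
    return mask_edges

True SMO, by contrast, updates all mask and source variables against a joint objective rather than one edge at a time.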
Moreover, in development work the ability to adjust the source opens up new options in process engineering, and these will become particularly valuable when future exposure tools provide greater flexibility in programmable source control. Such capabilities can be explored in a preliminary way by using programmed multi-scans to compose optimized compound sources with e.g. multiple poles or annular elements.
In this paper, we will outline the approach for optimizing the illumination conditions to print three-dimensional images
in resist stacks of varying sensitivity in a single exposure. The algorithmic approach for achieving both an optimal common window and an optimal weakest window is presented. Results are presented which demonstrate the ability of the technique to create three-dimensional structures, and the performance of the common- and weakest-window formulations is compared. Additionally, physical restrictions limit the types of patterns that can be printed with a single exposure in this manner, so the capabilities of the technique are also explored.
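The two formulations can be contrasted with a short Python sketch over a discretized dose-focus grid; window_mask is a hypothetical function returning a boolean in-spec region per pattern.

import numpy as np

def common_window(source, patterns, window_mask):
    # Common window: every pattern must print in the SAME dose/focus
    # region, so the usable region is the intersection of all masks.
    region = np.logical_and.reduce([window_mask(source, p) for p in patterns])
    return region.sum()   # grid-cell count ~ overlapped window size

def weakest_window(source, patterns, window_mask):
    # Weakest window: size of the smallest individual pattern window;
    # the regions are scored separately and need not overlap.
    return min(window_mask(source, p).sum() for p in patterns)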
Source optimization in optical lithography has been the subject of increased exploration in recent years [1-4], resulting in
the development of multiple techniques including global optimization of process window [4]. The performance
advantages of source optimization have been demonstrated through theory, simulation, and experiment. This paper will
emphasize global optimization of sources over multiple patterns, e.g. co-optimization of critical SRAM cells and the
critical pitches of random logic, and implement global source optimization into current resolution enhancement
techniques (RETs). The effect of considering multiple patterns on the optimal source is investigated. We demonstrate that a source optimized for a limited set of patterns works for a large layout clip. Through theoretical analysis and
simulations, we explain that only critical patterns and/or critical combinations of patterns determine the final optimal
source; for example, those patterns that contain constraints which are active in the solution. Furthermore, we illustrate,
through theory and simulation, that pixelated sources have better performance than generic sources and that in general it
is impossible for generic sources to construct a truly optimal solution. Sensitivity, tool matching, and lens heating issues
for pixelated sources are also discussed in this paper. Finally, we use an RET example with wafer data to demonstrate the
benefits of global source optimization.
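The active-constraint observation can be illustrated with a toy linear program in SciPy (the matrices below are made up for illustration; rows of A play the role of pattern constraints on source pixel weights x): dropping every constraint with positive slack leaves the optimum unchanged.

import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -1.0])            # maximize x0 + x1
A = np.array([[1.0, 2.0],             # two constraints active at the optimum
              [2.0, 1.0],
              [1.0, 10.0]])           # a loose constraint, never active
b = np.array([4.0, 4.0, 30.0])

full = linprog(c, A_ub=A, b_ub=b, bounds=(0, None))
active = (b - A @ full.x) < 1e-8      # keep only the binding rows
reduced = linprog(c, A_ub=A[active], b_ub=b[active], bounds=(0, None))
assert np.allclose(full.x, reduced.x)  # same optimum without inactive rows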
There is a surprising lack of clarity about the exact quantity that a lithographic source map should specify. Under the
plausible interpretation that input source maps should tabulate radiance, one will find with standard imaging codes that
simulated wafer plane source intensities appear to violate the brightness theorem. The apparent deviation (a cosine
factor in the illumination pupil) represents one of many obliquity/inclination factors involved in propagation through the
imaging system whose interpretation in the literature is often somewhat obscure, but which have become numerically
significant in today's hyper-NA OPC applications. We show that the seeming brightness distortion in the illumination
pupil arises because the customary direction-cosine gridding of this aperture yields non-uniform solid-angle subtense in
the source pixels. Once the appropriate solid angle factor is included, each entry in the source map becomes
proportional to the total |E|^2 that the associated pixel produces on the mask. This quantitative definition of lithographic
source distributions is consistent with the plane-wave spectrum approach adopted by litho simulators, in that these
simulators essentially propagate |E|^2 along the interfering diffraction orders from the mask input to the resist film. It
can be shown using either the rigorous Franz formulation of vector diffraction theory, or an angular spectrum approach,
that such an |E|^2 plane-wave weighting will provide the standard inclination factor if the source elements are incoherent
and the mask model is accurate. This inclination factor is usually derived from a classical Rayleigh-Sommerfeld
diffraction integral, and we show that the nominally discrepant inclination factors used by the various diffraction
integrals of this class can all be made to yield the same result as the Franz formula when rigorous mask simulation is
employed, and further that these cosine factors have a simple geometrical interpretation. On this basis one can then
obtain for the lens as a whole the standard mask-to-wafer obliquity factor used by litho simulators. This obliquity factor
is shown to express the brightness invariance theorem, making the simulator's output consistent with the brightness
theorem if the source map tabulates the product of radiance and pixel solid angle, as our source definition specifies. We
show by experiment that dose-to-clear data can be modeled more accurately when the correct obliquity factor is used.
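The solid-angle factor in question is easy to reproduce numerically: on a uniform direction-cosine grid the solid angle per pixel is dΩ = dσx dσy / cosθ, so off-axis pixels subtend more solid angle than the axial one. A short Python check:

import numpy as np

ds = 0.01                                     # direction-cosine pixel width
sx, sy = np.meshgrid(np.arange(-0.95, 0.96, ds),
                     np.arange(-0.95, 0.96, ds))
inside = sx**2 + sy**2 < 1.0
cos_theta = np.sqrt(np.clip(1.0 - sx**2 - sy**2, 0.0, None))
# Solid angle subtended by each pixel; grows toward the pupil edge.
d_omega = np.where(inside, ds * ds / np.maximum(cos_theta, 1e-12), 0.0)
# A brightness-consistent source map tabulates radiance * d_omega per pixel.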
Near-field imaging through plasmonic 'superlensing' layers can offer advantages of improved working distance (i.e.
introducing the equivalent of a focal length) and control over image intensity compared to simple near-field imaging. In
a photolithographic environment at ultra-violet (UV) wavelengths the imaging performance of single- and multi-layer
silver plasmonic superlenses has been studied both experimentally and via computer simulations. Super-resolution
imaging has been demonstrated experimentally, with the sub-100 nm resolution currently being limited by issues of
roughness in the silver layers and the ability to deposit high-quality silver-dielectric multilayers. The simulation studies
have shown that super-resolved imaging should be possible using surprisingly thick silver layers (>100 nm), at the cost of much-reduced image intensity, though this has yet to be shown experimentally. The use of multilayer
plasmonic superlenses also introduces richness to the imaging behaviour, with very high transmission possible for certain
spatial frequency components in the image. This has been widely touted as a means of improving image resolution, but the complexity of the spatial-frequency transfer functions of these systems means that this is not universally true for all
classes of objects. Examples of imaging situations are given where multi-layer superlenses are actually detrimental to
the image quality, such as the case of closely-separated dark-line objects on an otherwise bright background.
We provide an expanded description of the global algorithm for mask optimization introduced in our earlier papers, and discuss auxiliary optimizations that can be carried out in the problem constraints and film stack. Mask optimization tends inherently to be a problem with non-convex quadratic constraints, but for small problems we can mitigate this difficulty by exploiting specialized knowledge that applies in the lithography context. If exposure latitude is approximated as maximization of edge slope between image regions whose intensities must print with opposite polarity, we show that the solution space can be approximately divided into regions that contain at most one local minimum. Though the survey of parameter space to identify these regions requires an exhaustive grid search, this search can be accelerated using heuristics, and is not the rate-limiting step at SRAM scale or below. We recover a degree of generality by using a less simplified objective function when we actually assess the local minima. The quasi-binary specialization of lithographic targets is further exploited by searching only in the subspace formed by the dominant joint eigenvectors for dark region intensity and bright region intensity, typically reducing problem dimensionality to less than half that of the full set of frequency-domain variables (i.e. collected diffraction orders). Contrast in this subspace across the bright/dark edge will approximately reflect exposure latitude when we apply the standard fixed edge-placement constraints of lithography. However, during an exploratory stage of optimization we can define preliminary tolerances which more explicitly reflect constraints on devices, e.g. as is done with compactor codes for design migration. Our algorithm can handle vector imaging in a general way, but for the special case of unpolarized illumination and a lens having radial symmetry (but arbitrary source shape) we show that the bilinear function which describes vector interference within the film stack can be expressed in terms of three generic radial functions, enabling rapid numerical evaluation of the Hopkins kernel. By inspection these functions show that one can in principle recover classical scalar-like imaging even at high NA by exposing a very thin layer spaced above a reflective substack. The reflected image largely restores destructive interference in TM polarized fringes, if proper phasing is achieved. With an ideal reflector, the first-order azimuthal contrast loss term vanishes in all TCC components, and complete equivalence to scalar imaging is obtained in classical two-beam imaging.
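As a sketch of the dimensionality reduction described above (in NumPy, with A_bright and A_dark standing in for the Hermitian bilinear operators that give total bright- and dark-region intensity as a quadratic form in the mask diffraction orders m):

import numpy as np

def joint_subspace(A_bright, A_dark, k_bright, k_dark):
    """Search basis spanned by the dominant eigenvectors of each operator."""
    _, vb = np.linalg.eigh(A_bright)    # eigenvalues in ascending order
    _, vd = np.linalg.eigh(A_dark)
    basis = np.hstack([vb[:, -k_bright:], vd[:, -k_dark:]])
    q, _ = np.linalg.qr(basis)          # orthonormalize the combined set
    return q                            # optimize m = q @ y, dim(y) << dim(m)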
A performance enhancement to planar lens lithography (PLL) through the use of i-line narrowband exposures has been investigated. Experimental results show that for a 50nm silver layer the image fidelity of narrowband exposures outperforms that of broadband exposures. This is due to the removal of off-plasmonic-resonance wavelengths, which cause unwanted background exposure and a loss of image fidelity. Dense gratings have been resolved down to 145nm periods, as well as line-pairs down to separation distances of 117nm. These results surpass the diffraction limits that restrict the resolution of traditional optical systems.