Layout-pattern-based approaches for physical design analysis and verification have become mainstream in recent years and are enabling many new applications. Prior work introduced the ability to collect all patterns from multiple layouts into a catalog as well as to use machine learning techniques to score and filter patterns to identify which ones are critical. In this paper, data mined from a library of scored patterns from established designs is applied to the analysis of diagnosis results from a new design to improve defect root cause analysis (RCA).
The flow for this approach is as follows: patterns interacting with nets reported in diagnosis callouts are selected as patterns of interest (POIs) from the catalog of all patterns. Next, features of interest (FOIs) are extracted from all POIs to build a dataframe. Finally, volume diagnosis results identifying nets with likely open or short defects are added to the dataframe. RCA is performed using the dataframe to identify likely root cause(s) for failures and suggest refined failure locations for targeted inspection, physical failure analysis, or other electrical failure analysis.
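The dataframe-building steps described above can be sketched as follows. This is a minimal illustration only: the column names, the toy catalog and callout data, and the use of feature density as a ranking key are all assumptions for the example, not the authors' implementation.

```python
import pandas as pd

# Hypothetical pattern catalog: each pattern and a net it interacts with,
# plus one example feature of interest (FOI) extracted per pattern.
catalog = pd.DataFrame({
    "pattern_id": ["P1", "P2", "P3"],
    "net": ["net_a", "net_b", "net_c"],
    "foi_density": [0.82, 0.41, 0.55],
})

# Volume-diagnosis callouts: nets with suspected open/short defects.
callouts = pd.DataFrame({
    "net": ["net_a", "net_c"],
    "defect_type": ["open", "short"],
})

# Patterns interacting with called-out nets become patterns of interest (POIs);
# the join attaches the diagnosis result to each POI's extracted features.
pois = catalog.merge(callouts, on="net", how="inner")

# RCA step (simplified): rank candidate root causes by a risk-correlated feature.
ranked = pois.sort_values("foi_density", ascending=False)
print(ranked[["pattern_id", "net", "defect_type"]].to_string(index=False))
```

In a real flow the ranking would come from the machine-learned pattern scores rather than a single feature, but the join between the pattern catalog and the diagnosis callouts is the core of the dataframe construction.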
The approach described above is applied to products in high-volume manufacturing using a leading-edge technology node. Silicon validation results will be included for example applications.
Building on previous work for cataloging unique topological patterns in an integrated circuit physical design, a new process is defined in which a risk-scoring methodology ranks patterns by manufacturing risk. Patterns with high risk are then mapped to functionally equivalent patterns with lower risk, and the higher-risk patterns are replaced in the design with their lower-risk equivalents. The pattern selection and replacement is fully automated and suitable for full-chip designs. Results from 14nm product designs show that the approach can identify and replace risky patterns, with a quantifiable positive impact on the risk score distribution after replacement.
Via failure has always been a significant yield detractor, caused by both random and systematic defects. Introducing redundant vias or via bars into the design can alleviate the problem significantly [1] and has therefore become a standard DFM procedure [2]. Applying rule-based via bar insertion to convert millions of via squares to via bar rectangles, in all places where enough room can be predicted, is an efficient methodology to maximize the redundancy rate. However, inserting via bars can result in lithography hotspots. A Pattern Manufacturability (PATMAN) model is proposed to maximize the Redundant Via Insertion (RVI) rate in reasonable runtime, while ensuring lithography-friendly insertion based on the DFM learning accumulated during the yield ramp.
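A rule-based "enough room" check of the kind mentioned above can be sketched as a simple geometric predicate. All dimensions, rule values, and the simplified spacing test below are illustrative assumptions, not the paper's actual design rules.

```python
def via_bar_fits(via_x, via_y, via_size, metal_bbox, bar_extension, spacing, neighbors):
    """Rule-based check: can a square via at (via_x, via_y) be extended into a
    horizontal via bar?  Dimensions in nm; rule values are illustrative.
    neighbors is a list of (x, y) origins of nearby vias."""
    # Candidate bar: extend the via by `bar_extension` on each side along x.
    bar = (via_x - bar_extension, via_y,
           via_x + via_size + bar_extension, via_y + via_size)
    mx0, my0, mx1, my1 = metal_bbox
    # Rule 1: the bar must stay enclosed by the landing metal.
    if bar[0] < mx0 or bar[2] > mx1 or bar[1] < my0 or bar[3] > my1:
        return False
    # Rule 2 (simplified axis-aligned test): minimum spacing to neighbors.
    for nx, ny in neighbors:
        if (abs(nx - via_x) < via_size + 2 * bar_extension + spacing
                and abs(ny - via_y) < via_size + spacing):
            return False
    return True

# A 40nm via on a wide metal with one distant neighbor: the bar fits.
print(via_bar_fits(100, 100, 40, (0, 80, 400, 160), 20, 40, [(300, 100)]))  # True
```

A production RVI flow would add lithography-awareness on top of such geometric rules, which is exactly the gap the PATMAN model targets.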
Topological pattern-based methods for analyzing IC physical design complexity and scoring resulting patterns to
identify risky patterns have emerged as powerful tools for identifying important trends and comparing different designs.
In this paper, previous work is extended to include analysis of layouts designed for the 7nm technology generation. A
comparison of pattern complexity trends with respect to previous generations is made. In addition to identifying
topological patterns that are unique to a particular design, novel techniques are proposed for scoring those patterns based
on potential yield risk factors to find patterns that pose the highest risk.
In this paper, we introduce a fast and reasonably accurate methodology to determine patterning difficulty, based on the fundamentals of optical image processing: the frequency content of design shapes is analyzed through a computational patterning transfer function. In addition, with the help of a Monte Carlo random pattern generator, we use this flow to identify a set of difficult patterns that can be used to evaluate a design's ease of manufacturability via a scoring methodology, as well as to help with the optimization phases of post-tapeout flows. This flow offers the combined merits of scoring-based criteria and model-based approaches for early designs. The value of this approach is that it provides designers with early prediction of potential problems even before rigorous model-based DFM kits are developed. Moreover, the flow establishes a bi-directional platform for interaction between the design and manufacturing communities based on geometrical patterns.
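The core frequency-content idea can be illustrated with a toy score: transform a binary layout clip to the frequency domain and measure how much spectral energy lies beyond the optical cutoff NA/&lambda;, which the lens cannot transfer. The score definition, parameter values, and test gratings below are assumptions for illustration, not the paper's transfer-function model.

```python
import numpy as np

def patterning_difficulty(clip, na=1.35, wavelength=193.0, pixel_nm=5.0):
    """Toy frequency-domain difficulty score for a binary layout clip.
    Returns the fraction of spectral energy above the cutoff NA/lambda
    (an illustrative metric, not the paper's transfer function)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(clip))) ** 2
    n = clip.shape[0]
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_nm))  # cycles / nm
    fx, fy = np.meshgrid(freqs, freqs)
    cutoff = na / wavelength
    high = np.hypot(fx, fy) > cutoff
    return spectrum[high].sum() / spectrum.sum()

# A coarse (resolvable) grating should score lower than a fine one.
n = 128
x = np.arange(n)
coarse = ((x // 16) % 2).astype(float)[None, :].repeat(n, axis=0)  # 160nm pitch
fine = ((x // 2) % 2).astype(float)[None, :].repeat(n, axis=0)     # 20nm pitch
print(patterning_difficulty(coarse) < patterning_difficulty(fine))  # True
```

Scoring each clip of a Monte Carlo pattern set with such a metric is one way to rank generated patterns by difficulty before any calibrated DFM model exists.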
We continue to study the feasibility of using Directed Self-Assembly (DSA) to extend optical lithography for High Volume Manufacturing (HVM). We built test masks based on the mask dataprep flow proposed in our prior year's publication [1]. Experimental data on circuit-relevant fin and via patterns based on 193nm graphoepitaxial DSA are demonstrated on 300mm wafers. With this computational lithography (CL) flow we further investigate the basic requirements for full-field-capable DSA lithography. The first issue concerns DSA-specific defects, which can be either random defects due to material properties or systematic DSA defects mainly induced by three-dimensional variations of the guiding patterns (GP); we focus on studying the latter. The second issue is the availability of fast DSA models that meet the full-chip capability requirements of the different CL components. We further developed model formulations that constitute the whole spectrum of models in the DSA CL flow. In addition to the Molecular Dynamics/Monte Carlo (MD/MC) model and the compact models discussed before [2], we implement a 2D phenomenological phase-field model by solving a Cahn-Hilliard-type equation, providing a model that is more predictive than the compact model but much faster than the physics-based MC model. However, simplifying the model may sacrifice prediction accuracy, especially in the z direction, so a critical question emerged: can a 2D model be useful for full field? Using 2D and 3D simulations on a few typical constructs, we illustrate that a combination of the 2D model with pre-characterized 3D litho metrics may approximate the predictions of 3D models well enough to satisfy the full-chip runtime requirement. Finally, we conclude with the special attention that must be paid when implementing a 193nm-based lithography process using DSA.
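A minimal 2D Cahn-Hilliard phase-field integrator of the kind referenced above can be sketched with an explicit finite-difference scheme on a periodic grid. The discretization, parameter values, and initial condition are illustrative; the authors' full-chip model is far more elaborate.

```python
import numpy as np

def cahn_hilliard_step(phi, dt=1e-3, dx=1.0, eps2=0.5, M=1.0):
    """One explicit Euler step of the 2D Cahn-Hilliard equation
        d(phi)/dt = M * lap(mu),   mu = phi**3 - phi - eps2 * lap(phi),
    with periodic boundaries (illustrative discretization)."""
    def lap(f):
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2
    mu = phi**3 - phi - eps2 * lap(phi)
    return phi + dt * M * lap(mu)

# Evolve a small random blend; the dynamics conserve mass (mean of phi).
rng = np.random.default_rng(0)
phi = 0.01 * rng.standard_normal((64, 64))
m0 = phi.mean()
for _ in range(200):
    phi = cahn_hilliard_step(phi)
print(abs(phi.mean() - m0) < 1e-10)  # True: mean composition is conserved
```

The appeal for full-chip use is visible even in this sketch: each step is a handful of array stencils, orders of magnitude cheaper than a particle-based MD/MC simulation of the same domain.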
As the demand for taking Source Mask Optimization (SMO) technology to the full-chip level increases, developing a flow that overcomes the limitations hindering this technology from reaching production is a priority for litho engineers.
The aim of this work is to discuss the advantages of a comprehensive novel SMO flow that outperforms conventional techniques in high-capacity simulation, resist modeling, and the production of a final manufacturable mask. We show results that indicate the importance of adding a large number of patterns to the SMO exploration space and of taking resist effects into account during optimization, and we describe how this flow incorporates the final mask as a production solution.
The high capacity of this flow increases the number of patterns and their area by a factor of 10 compared to other SMO techniques. The average process variability band is improved by up to 30% compared to traditional lithography flows.
Source-mask optimization (SMO) in optical lithography has in recent years been the subject of increased
exploration as an enabler of 22/20nm and beyond technology nodes [1-6]. It has been shown that intensive
optimization of the fundamental degrees of freedom in the optical system allows for the creation of non-intuitive
solutions in both the source and mask, which yields improved lithographic performance. This paper
will demonstrate the value of SMO software in resolution enhancement techniques (RETs). Major benefits
of SMO include improved through-pitch performance, the possibility of avoiding double exposure, and
superior performance on two dimensional (2D) features. The benefits from only optimized source, only
optimized mask, and both source and mask optimized together will be demonstrated. Furthermore, we
leverage the benefits from intensively optimized masks to solve large array problems in memory use models
(MUMs). Mask synthesis and data prep flows were developed to incorporate the usage of SMO, including
both RETs and MUMs, in several critical layers during 22/20nm technology node development.
Experimental assessment will be presented to demonstrate the benefits achieved by using SMO during
22/20nm node development.
In recent years the potential of Source-Mask Optimization (SMO) as an enabling technology for 22nm-and-beyond lithography has been explored and documented in the literature [1-5]. It has been shown that intensive optimization of the fundamental degrees of freedom in the optical system allows for the creation of non-intuitive solutions in both the mask and the source, which leads to improved lithographic performance. These efforts have driven the need for improved controllability in illumination [5-7] and have pushed the required optimization performance of mask design [8, 9]. This paper will present recent experimental evidence of the performance advantage gained by intensive optimization and enabling technologies such as pixelated illumination. Controllable pixelated illumination opens up new regimes in the control of proximity effects [1, 6, 7], and we will show corresponding examples of improved through-pitch performance in 22nm Resolution Enhancement Technique (RET). Simulation results will back up the experimental results and detail the ability of SMO to drive exposure-count reduction, as well as a reduction in process variation due to critical factors such as Line Edge Roughness (LER), Mask Error Enhancement Factor (MEEF), and the Electromagnetic Field (EMF) effect. The benefits of running intensive optimization with both source and mask variables jointly have been previously discussed [1-3]. This paper will build on these results by demonstrating large-scale jointly-optimized source/mask solutions and their impact on design-rule enumerated designs.
As the IC industry moves toward the 32nm technology node and below, it becomes important to study the impact of process window variations on yield. PVBands (process variation bands) express the effect of process parameter variations such as dose, focus, and mask size. However, PVBand width and area ratio alone are insufficient as quantitative measures of PVBand performance, since they do not take into consideration how far the contours are from the target.
In this paper, a novel mathematical formulation is developed to better judge the PVBands performance. It expresses the
PVBand width and symmetry with respect to the target through a single score. This score can be used in OPC (Optical
Proximity Correction) iterations instead of working with the nominal EPE (Edge Placement Error). Not only does this
approach provide a better measure of the PVBands performance through the value of the score, but it also presents a
straightforward method for PWOPC optimization by using the PV Score directly in the iterations.
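A single-number score of the kind described, penalizing both band width and asymmetry about the target, might look like the following. The exact combination and weights are an assumption for illustration, not the paper's mathematical formulation.

```python
def pv_score(inner_epe, outer_epe, width_weight=1.0, asym_weight=1.0):
    """Illustrative single-number PVBand score at one measurement site.
    inner_epe / outer_epe: signed distances (nm) from the target edge to the
    innermost and outermost contours across process conditions.  Penalizes
    both band width and asymmetry about the target (weights are assumed)."""
    width = outer_epe - inner_epe           # PVBand width
    asymmetry = abs(outer_epe + inner_epe)  # 0 when the band is centered on target
    return width_weight * width + asym_weight * asymmetry

# A band centered on the target scores better (lower) than an equally
# wide band lying entirely on one side of it.
print(pv_score(-3, 3))  # 6.0
print(pv_score(0, 6))   # 12.0
```

Because the score is a smooth function of edge placement, it can drive OPC iterations directly in place of the nominal EPE, which is the optimization strategy the abstract describes.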
We demonstrate experimentally for the first time the feasibility of applying SMO technology using pixelated illumination. Wafer images of SRAM contact holes were obtained to confirm the feasibility of using SMO for 22nm node lithography. Challenges remain in other areas of SMO integration, such as mask build, mask inspection and repair, process modeling, full-chip design issues, and pixelated illumination, which is the emphasis of this paper. In this first attempt we successfully designed a manufacturable pixelated source and had it fabricated and installed in an exposure tool. The printing result is satisfactory, although some deviations of the wafer image from the simulation prediction remain. Further experiments and more detailed modeling of the impact of errors in source design and manufacturing will follow. We believe that tightening all specifications and optimizing all procedures will make pixelated illumination a viable technology for the 22nm node and beyond.
Publisher's Note: The author listing for this paper has been updated to include Carsten Russ. The PDF has been updated to reflect this change.
Device extraction, and its quality, is of increasing concern in the integrated circuit design flow. As circuits become more complicated, with concomitant reductions in geometry, the design engineer faces an ever-growing demand for accurate device extraction. For technology nodes of 65nm and below, extracting device geometry from the polygons drawn in the design layout may not be sufficient to describe the actual electrical behavior of these devices; therefore, contours from lithographic simulations need to be considered for more accurate results.
Process window variations have a considerable effect on the shape of the device wafer contour. Even with an accurate method to extract device parameters from wafer contours, one still needs to know which lithographic condition to simulate. Several questions arise: are contours representing the best lithography conditions sufficient? Do process variations also need to be considered? How are they included in the extraction algorithm?
In this paper we first present the method of extracting devices from layout coupled with lithographic simulations. Afterwards, a complete flow for circuit timing/power analysis using lithographic contours is described. Comparisons between timing results from the conventional LVS method and the litho-aware method show the importance of considering litho contours.
Manufacturers at cutting-edge technology nodes are always researching how to increase yield while still using silicon wafer area optimally, so that these technologies appeal more to designers. Many problems arise with such requirements; the most important is the failure of plain geometric layout checks to capture yield-limiting features in designs. If these features are recognized at an early stage of design, a great deal of effort can be saved at the fabrication end. A new trend in verification is to couple geometric checks with lithography simulations in the design space.
A lithography process has critical parameters that control the quality of its output. Unfortunately, some of these parameters cannot be kept constant during the exposure process, and their variability should be taken into consideration during lithography simulation: the simulations are performed multiple times with these parameters set to the different values they can take during the actual process. This significantly affects verification runtime.
In this paper the authors present a methodology to carefully select only the needed values for the varying lithography parameters, capturing the process variations while improving runtime through fewer simulations. The selected values depend on the desired variation for each parameter considered in the simulations. The method is implemented as a tool for qualifying different design techniques.
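One simple instance of such condition selection is to simulate only the nominal point plus the extreme corners of each parameter range, rather than a full grid. The parameter names and ranges below are illustrative assumptions, and real selection would be guided by the desired variation coverage described in the paper.

```python
from itertools import product

def select_conditions(param_ranges):
    """Pick a reduced set of lithography simulation conditions: the nominal
    point plus the extreme corners of each parameter range, instead of a
    full grid.  param_ranges maps parameter name -> (low, high)."""
    names = list(param_ranges)
    nominal = tuple((lo + hi) / 2 for lo, hi in param_ranges.values())
    corners = set(product(*[(lo, hi) for lo, hi in param_ranges.values()]))
    return names, [nominal] + sorted(corners)

names, conds = select_conditions({"dose_pct": (-3.0, 3.0), "focus_nm": (-50.0, 50.0)})
# 1 nominal + 4 corners = 5 simulations instead of, say, a 5x5 = 25-point grid.
print(len(conds))  # 5
```

Corner-plus-nominal sampling is the coarsest useful choice; the methodology in the paper refines which intermediate values are worth simulating for each parameter.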