As memory device design nodes shrink, OPC model accuracy becomes ever more critical from development through manufacturing. To improve model accuracy, more and more physical effects are analyzed and terms for those effects are added, but it is impossible to capture every physical effect completely. In this study, a deep neural network is employed to improve model accuracy, with regularization achieved through a physical guidance model. To address the overfitting issue, a high volume of contour-based edge placement (EP) gauges (>10K) is generated using a fast e-beam tool (eP5) and metrology processing software (MXP) without increasing turnaround time. The new approach is shown to improve model accuracy by >47% compared with the traditional approach on >1.4K verification gauges.
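The physics-guided regularization described above can be sketched as a combined loss: a data term on measured EP gauges plus a penalty that pulls the network toward the physical guidance model's prediction. This is a minimal illustrative sketch; the function names, array shapes, and weighting factor are assumptions, not details from the paper.

```python
import numpy as np

def guided_loss(pred_ep, measured_ep, physical_ep, lam=0.1):
    """Hypothetical training loss for a physics-guided OPC model:
    mean-squared error against measured EP gauges, plus a
    regularization term tying predictions to a physical guidance
    model. `lam` (the regularization weight) is an illustrative choice."""
    data_term = np.mean((pred_ep - measured_ep) ** 2)
    guide_term = np.mean((pred_ep - physical_ep) ** 2)
    return data_term + lam * guide_term
```

The guidance term discourages the network from fitting measurement noise in ways that contradict known physics, which is one common way to regularize a model trained on a large gauge set.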
Proc. SPIE 9427, Design-Process-Technology Co-optimization for Manufacturability IX
KEYWORDS: Semiconductors, Visualization, Manufacturing, Data processing, Photomasks, Integrated circuits, Optical proximity correction, Back end of line, Front end of line, Design for manufacturability
Delivering mask-ready OPC-corrected data to the mask shop on time is critical for a foundry to meet its cycle-time commitment for a new product. With current OPC compute resource sharing technology, different job scheduling algorithms are possible, such as priority-based resource allocation and fair-share resource allocation. To maximize computer cluster efficiency, minimize data processing cost, and deliver data on schedule, the trade-offs of each scheduling algorithm need to be understood. Using actual production jobs, each scheduling algorithm will be tested in a production tape-out environment. Each scheduling algorithm will be judged on its ability to deliver data on schedule, and the trade-offs associated with each method will be analyzed. It is now possible to introduce advanced scheduling algorithms to the OPC data processing environment that meet the goals of on-time delivery of mask-ready OPC data while maximizing efficiency and reducing cost.
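The two scheduling policies named above can be contrasted with a toy allocator: priority-based allocation serves the highest-priority job's full CPU request first, while fair-share splits the cluster evenly. This is a minimal sketch to illustrate the trade-off; it is not the CalCM API, and the job tuples and function names are invented for illustration.

```python
def priority_allocate(jobs, total_cpus):
    """Toy priority-based allocation: each job is a (name, priority,
    cpu_request) tuple; higher-priority jobs get their full request
    until the cluster runs out of CPUs."""
    alloc = {}
    remaining = total_cpus
    for name, priority, request in sorted(jobs, key=lambda j: -j[1]):
        alloc[name] = min(request, remaining)
        remaining -= alloc[name]
    return alloc

def fair_share_allocate(jobs, total_cpus):
    """Toy fair-share allocation: CPUs split evenly across jobs,
    capped by each job's own request."""
    share = total_cpus // len(jobs)
    return {name: min(request, share) for name, _, request in jobs}
```

On a 100-CPU cluster with two 80-CPU requests, priority allocation finishes the urgent job sooner at the cost of starving the other, while fair-share trades some individual turnaround time for predictability, which is exactly the tension the experiments above are designed to quantify.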
In today’s semiconductor industry, both pure-play and integrated device manufacturer (IDM) foundries are constantly and rigorously competing for market share. The acknowledged benefit for customers who partner with these foundries includes a reduced cost-of-ownership, along with the underwritten agreement of meeting or exceeding an aggressive time-to-market schedule. Because the Semiconductor Manufacturing International Corporation (SMIC) is one of the world-wide forerunners in the foundry industry, one of its primary concerns is ensuring continual improvement in its fab’s turnaround time (TAT), especially given that newer technology nodes and their associated processes are increasing in complexity, and consequently, in their time-to-process. In assessing current runtime data trends at the 65nm and 40nm technology nodes, it was hypothesized that hardware and software utilization improvements could accomplish a reduced overall TAT. By running an experiment using the Mentor Graphics Calibre® Cluster Manager (CalCM) software, SMIC was able to demonstrate just over a 30% aggregate TAT improvement in conjunction with a greater than 90% average utilization of all hardware resources. This paper describes the experimental setup and procedures that produced the reported results.
In this paper, we study the problem of placement-level layout optimization for designs built from cells with unidirectional
self-aligned double patterning (SADP) metal-1 interconnect. Our goal is to minimize the number of potential
bridging hotspots in design layouts using predictive, machine learning-based models and applying incremental
placement adjustments. In the first part of the paper, we explain how to build layout pattern classification models using
machine learning methods. Our support vector machine (SVM)-based model predicts a given layout clip as either robust
or non-robust. In the second part of the paper, we apply the predictive models to placement-level optimization. Our
algorithm identifies and eliminates potential hotspots in standard cell-based layouts by modifying local cell positions.
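The classification step described above can be sketched with a linear SVM over simple layout-clip features. The feature choice (per-quadrant pattern density), the ±1 labeling (+1 = non-robust), and the training hyperparameters are all illustrative assumptions; the paper's actual kernel and feature set are not reproduced here.

```python
import numpy as np

def clip_features(clip):
    """Hypothetical features for a binary layout clip: pattern density
    in each quadrant (the paper's real feature set is not specified here)."""
    h, w = clip.shape
    quads = [clip[:h//2, :w//2], clip[:h//2, w//2:],
             clip[h//2:, :w//2], clip[h//2:, w//2:]]
    return np.array([q.mean() for q in quads])

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=1000):
    """Minimal linear SVM trained by subgradient descent on the hinge
    loss; a dependency-free stand-in for a full SVM solver. Labels
    in `y` must be +1 (non-robust) or -1 (robust)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1.0                      # margin violators
        if mask.any():
            w -= lr * (lam * w - (y[mask][:, None] * X[mask]).mean(axis=0))
            b -= lr * (-y[mask].mean())
        else:
            w -= lr * lam * w                     # only shrink weights
    return w, b

def predict(w, b, clip):
    """Classify a clip: +1 = non-robust (hotspot-prone), -1 = robust."""
    return 1 if clip_features(clip) @ w + b > 0 else -1
```

A trained model of this kind is cheap to evaluate per clip, which is what makes it usable inside an incremental placement loop.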
A persistent problem in verification flows is eliminating waivers, defined as patterns known to be safe on silicon
even though they are flagged by the verification recipes. The difficulty of the problem stems from the complexity of these
patterns: describing them in a standard verification language becomes very tedious and can deliver
unexpected results. In addition, these patterns are dynamic in nature, so updating all production verification recipes to
waive these non-critical patterns becomes increasingly time-consuming.
In this work, we present a new method to eliminate waivers directly after the verification recipes have been executed,
in which a new rule file is generated automatically based on the type of errors under investigation. The core of the
method uses pattern matching to compare the errors generated by verification runs against a library of waiver patterns.
This flow eliminates the need to edit any production recipe, and no complicated coding is required. Finally, this
flow is compatible with most technology nodes.
In today's semiconductor industry, prior to wafer fabrication, it has become a desirable practice to scan layout designs
for lithography-induced defects using advanced process window simulations in conjunction with corresponding
manufacturing checks. This methodology has been proven to provide the highest level of accuracy when correlating
systematic defects found on the wafer with those identified through simulation. To date, directly applying this
methodology at the full-chip level has incurred unfavorable simulation expenses that currently
overshadow its primary benefit of accuracy, namely long runtimes and the requirement for an
abundance of CPUs. Considering the aforementioned, the industry has begun to lean toward a more practical approach
to hotspot identification that revolves around topological pattern recognition in an attempt to sidestep the simulation
runtime. This solution can be much less costly when weighed against the negative runtime overhead of simulation. The
apparent benefits of pattern matching are, however, counterbalanced with a fundamental concern regarding detection
accuracy; topological pattern identification can only detect polygonal configurations, or some derivative of a
configuration, which have been previously identified. It is evident that both systems have their strengths and their
weaknesses, and that one system's strength is the other's weakness, and vice-versa.
A novel hotspot detection methodology that utilizes pattern matching combined with lithographic simulation will be
introduced. This system will attempt to minimize the negative aspects of both pattern matching and simulation. The
proposed methodology has a high potential to decrease the amount of processing time spent during simulation, to relax
the high CPU-count requirement, and to maximize pattern matching accuracy by incorporating a multi-staged pattern
matching flow prior to performing simulation on a reduced data set. Also brought forth will be an original methodology
for constructing the core pattern set, or candidate hotspot library, in conjunction with establishing hotspot and coldspot
pattern libraries. Lastly, it will be conveyed how this system can automatically improve its potential as more designs are
passed through it.
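The hybrid flow described above can be sketched as a staged filter: clips already in the hotspot library are flagged immediately, clips in the coldspot library are skipped, and only unknown clips reach simulation, with each simulation result fed back into the libraries so the system improves over time. The function signature and clip keys are illustrative assumptions.

```python
def staged_hotspot_check(clips, hotspot_lib, coldspot_lib, simulate):
    """Sketch of the staged hotspot flow. `clips` maps a pattern key to
    its layout data; `hotspot_lib` and `coldspot_lib` are mutable sets
    of known keys; `simulate` is the expensive lithographic check, run
    only on the reduced set of unknown clips."""
    flagged = []
    for key, clip in clips.items():
        if key in hotspot_lib:
            flagged.append(key)            # known hotspot: no simulation
        elif key in coldspot_lib:
            continue                       # known safe: no simulation
        elif simulate(clip):               # unknown: simulate once
            hotspot_lib.add(key)           # learn the new hotspot
            flagged.append(key)
        else:
            coldspot_lib.add(key)          # learn the new coldspot
    return flagged
```

As more designs pass through, the unknown fraction shrinks, so simulation cost falls while pattern-matching coverage grows, which is the self-improvement property claimed in the abstract.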
Litho hotspot repair hints require a specification of how layout edges should be modified. Identifying how to move layout
edges that do not directly touch the hotspot region is challenging to encode in a rule set. We propose an approach using
models called Partition Response Surface Models (pRSM) to estimate the contour changes due to design layout
modifications. In this paper we present the details of a litho hotspot repair hint engine that uses the pRSM models to
compute the amount of shape change needed to resolve a litho hotspot, and that can accept constraints from both design
considerations and design rule considerations.
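The abstract does not give the pRSM formulation, so as a loose stand-in the idea can be sketched with a linear response surface: fit a sensitivity matrix J mapping edge moves to contour changes from perturbation samples, then invert it to find the edge moves needed for a target contour correction. All names and the linearity assumption are illustrative.

```python
import numpy as np

def fit_response_surface(edge_moves, contour_deltas):
    """Fit a linear response surface  deltas ~= moves @ J  by least
    squares, from sampled edge perturbations (rows of `edge_moves`)
    and the resulting contour changes (rows of `contour_deltas`)."""
    J, *_ = np.linalg.lstsq(edge_moves, contour_deltas, rcond=None)
    return J

def required_moves(J, target_delta):
    """Minimum-norm edge moves predicted to achieve a desired contour
    change, via the pseudoinverse of the fitted sensitivity matrix."""
    return target_delta @ np.linalg.pinv(J)
```

A repair hint engine of this shape can also fold design and design-rule constraints into the solve, e.g. by clamping or re-projecting the returned moves, though that step is omitted from this sketch.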
Advanced process technologies suffer well-known yield loss due to the degradation of pattern fidelity. The
compensation for this problem comes from advanced resolution enhancement techniques (RET) and optical proximity
correction (OPC). By design, the creation of RET/OPC recipes and the calibration of process models are done very early
in the process development cycle, with data built not from real designs, since they are not yet available, but from
test structures that represent different sizes, distances, and topologies. The process of improving the RET/OPC recipes
and models is long and tedious, yet it is usually a key contributor to a quick production ramp-up, and it is very coverage-limited
by design. The authors present a proposed system that, by design, is dynamic, and allows the RET/OPC production
system to reach maturity faster through a detailed collection of hotspots identified at the design stage. The goal is to
reduce the time required to obtain mature production RET/OPC recipes and models.
A few years ago, model-based layout verification was used primarily with mask data preparation as a safety net to predict and avoid limited printability performance prior to mask fabrication. If certain layout locations would transfer poorly onto the wafer, the mask data was intercepted, preventing yield loss associated with "mask issues."
Such mask-related issues come primarily from three sources: Mask manufacture bias, OPC limitations and intrinsic layout configurations. While mask manufacture bias and OPC limitations can be addressed during the final stages of mask synthesis and manufacture, layout configurations that exhibit poor lithographic performance for a given process cannot be modified without considering the electrical effect such new topologies will induce in the modified layout.
In principle, marginally performing layouts can be removed from the design by adequately interpreting geometric design rules. Unfortunately, while such rules are strictly defined for 1D, they are not as well-defined for arbitrary 2D configurations. For that reason, several approaches to transferring sufficient process information to the layout synthesis tools to prevent the presence of layout configurations incompatible with the production process have been attempted. However, when the production process is not fully developed, using these approaches can potentially limit the portability of the layout.
In this paper, we describe and evaluate different approaches to defining reasonable layout verification targets by exploring various methods to reduce verification time, maintain accuracy and improve layout portability. First, to reduce verification time, we implement a method to quickly scan the layout for large variations without the need to run the actual OPC recipe. This paper describes the characteristics of a model that defines a pseudo-OPC process. Next, because the pseudo-OPC process cannot be mapped exactly to the real OPC process, there are accuracy limitations when using only the pseudo-OPC process. To overcome these limitations, the verification system follows an incremental approach, in which those regions previously selected are evaluated with the full mask synthesis recipe to reduce the number of falsely detected errors. Finally, to investigate the issue of portability, we evaluate how different errors evolve with maturing process and OPC recipe conditions for different layout patterns.
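The incremental approach described above can be sketched as a two-stage filter: a cheap pseudo-OPC scan flags candidate regions, and only those candidates are re-evaluated with the full mask synthesis recipe, discarding false detections. The function names and callback signatures are illustrative assumptions.

```python
def incremental_verify(regions, fast_scan, full_check):
    """Two-stage verification sketch. `fast_scan` is the quick
    pseudo-OPC model (cheap, over-detects); `full_check` is the full
    mask synthesis recipe (accurate, expensive) run only on the
    regions the fast scan flagged."""
    candidates = [r for r in regions if fast_scan(r)]
    return [r for r in candidates if full_check(r)]
```

The runtime win comes from how rarely `full_check` runs: on a mature process most regions are rejected by the fast scan, so the expensive recipe touches only a small fraction of the layout.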
As reticle features continue to decrease in size, a more precise and accurate method of quantifying line end effects on binary photomasks becomes necessary. A new methodology for measuring and evaluating line ends was developed. By performing multiple step-wise measurements across a single line end feature using a fixed-width region of interest, a simulated representation of the line end profile could be generated. A high-order polynomial fit was then applied to the resultant data set and a minimum line end value was extrapolated. This methodology reduced the measurement error directly caused by region-of-interest (ROI) placement and sizing while, at the same time, improving the accuracy and precision of the measurement. The generated line end profiles may be further used for modeling, simulation, or characterization.
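The fit-and-extrapolate step can be sketched with a standard polynomial fit over the step-wise ROI measurements, evaluating the fitted profile on a fine grid to locate its minimum. The polynomial order and grid density here are illustrative choices, not values from the paper.

```python
import numpy as np

def line_end_minimum(positions, widths, order=4):
    """Fit a high-order polynomial to step-wise ROI width measurements
    taken across a line end, then return the minimum of the fitted
    profile evaluated on a fine grid between the first and last
    measurement positions."""
    coeffs = np.polyfit(positions, widths, order)
    fine = np.linspace(positions[0], positions[-1], 1001)
    return np.polyval(coeffs, fine).min()
```

Because the minimum comes from the fitted continuous profile rather than any single ROI measurement, small errors in ROI placement average out, which is the precision benefit the methodology claims.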
High-speed production of semiconductor devices demands in-line wafer metrology on a minimum number of sample points. If this data does not represent the average chip feature size, then an in-line monitor may indicate that a wafer is right on target; however, at end-of-line testing, the electrical parameters, incorporating all features within the chip, may be found shifted away from target. This paper presents a solution that increases wafer critical dimension targeting efficiency while notably relaxing the traditional mean-to-target reticle specification. By embedding ART (Average Representative Targeting) structures into the reticle scribe, Reticle Engineering at LSI Logic leverages the ability to adjust wafer exposure dose to compensate for off-target reticle CDs. The novel targeting structure described in this paper assures average wafer CDs within 2.5 nm of target while effectively doubling the acceptable range for a standard mean-to-target reticle specification.
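The dose-compensation idea can be sketched with a first-order model: an off-target reticle CD shifts the wafer CD roughly by the mask error enhancement factor (MEEF) times the offset, and dividing that shift by the wafer CD-versus-dose sensitivity gives the dose correction. All parameter names, units, and the linearity assumption are illustrative; the paper's actual compensation procedure is not reproduced here.

```python
def dose_correction(reticle_cd_offset, meef, dose_sensitivity, base_dose):
    """Toy exposure-dose compensation for an off-target reticle CD.
    `reticle_cd_offset`: measured reticle CD error at wafer scale (nm);
    `meef`: mask error enhancement factor (unitless);
    `dose_sensitivity`: wafer CD change per unit dose (nm per mJ/cm^2,
    typically negative for a line printed with positive resist);
    `base_dose`: nominal exposure dose (mJ/cm^2)."""
    wafer_cd_shift = meef * reticle_cd_offset
    return base_dose - wafer_cd_shift / dose_sensitivity
```

For example, a +2 nm reticle offset with MEEF 1.5 and a sensitivity of -2 nm per mJ/cm^2 calls for raising a 30 mJ/cm^2 dose to 31.5 mJ/cm^2, which is the kind of adjustment that lets the mean-to-target reticle specification be relaxed.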