KEYWORDS: Calibration, Data modeling, Process modeling, Autoregressive models, Lithography, Photomasks, Statistical modeling, Monte Carlo methods, Photoresist processing, Optical proximity correction
A two-stage approach is introduced to improve the accuracy of compact patterning models used in large-scale computational lithography. Each of the two stages uses a separate empirically calibrated regression model whose accuracy at predicting printed feature dimensions has been proven in the usual standalone (single stage) mode. For the first model stage, we choose an established regularized regression model of the kind that accounts for resist non-idealities by suitably modifying the pre-thresholded exposing dose pattern, with the model basis functions taking the form of modified convolutions of adjustable kernels with the optical image. A different class of regression model is used in the second stage, namely a model that accounts for resist non-idealities by making a pattern-dependent local adjustment in the develop threshold, with the model basis functions being characteristic traits of the image trace along feature cutlines. However, rather than applying this second model in the usual mode where it adjusts the develop threshold applied to the exposing optical image, we use it to adjust a threshold that is applied to the improved effective dose distribution provided by the first-stage model. The effectiveness of the proposed method is verified by modeling pattern transfer of critical layers in 14- and 22-nm complementary metal–oxide–semiconductor (CMOS) technology. In our experience, little accuracy improvement is gained by expanding the complexity of standard single-stage models beyond the level of empirically proven model forms. However, even in a basic implementation, inclusion of a second stage of modeling will in itself reduce RMS error by ∼45 % in our 14-nm example. Moreover, accuracy improvement is further boosted to ∼55 % by adopting a minimax strategy in which the model is conservatively regularized according to the worst-case outcome in cross-validation tests but is calibrated according to the best-case outcome.
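The abstract describes the two-stage flow and the minimax regularization strategy only at a high level; the Python sketch below is a minimal, hypothetical illustration of how the pieces could fit together. All names (stage1_effective_dose, minimax_calibrate, the ridge regressor, the choice of basis handling) are assumptions made for illustration and are not the authors' model forms.

```python
# A minimal, hypothetical sketch of the two-stage flow and the minimax regularization idea;
# every function and variable name is an illustrative assumption, not the authors' implementation.
import numpy as np
from numpy.fft import fft2, ifft2
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold


def stage1_effective_dose(optical_image, kernels, weights):
    """Stage 1: effective dose as a weighted sum of kernel convolutions of the optical image."""
    dose = np.zeros_like(optical_image, dtype=float)
    img_ft = fft2(optical_image)
    for w, k in zip(weights, kernels):
        dose += w * np.real(ifft2(img_ft * fft2(k, s=optical_image.shape)))
    return dose


def stage2_local_threshold(cutline_traits, beta, base_threshold):
    """Stage 2: develop threshold locally adjusted from image-trace traits along each cutline,
    applied to the stage-1 effective dose rather than to the raw optical image."""
    return base_threshold + cutline_traits @ beta


def minimax_calibrate(X, y, alphas, n_splits=5):
    """Choose the regularization strength by the worst-case cross-validation fold, then
    calibrate on the best-case fold (one plausible reading of the strategy in the abstract)."""
    folds = list(KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X))
    best_alpha, best_worst, best_fold = None, np.inf, None
    for alpha in alphas:
        errs = [np.sqrt(np.mean((Ridge(alpha=alpha).fit(X[tr], y[tr]).predict(X[te]) - y[te]) ** 2))
                for tr, te in folds]
        if max(errs) < best_worst:  # conservative: judge each alpha by its worst fold
            best_alpha, best_worst = alpha, max(errs)
            best_fold = folds[int(np.argmin(errs))]
    return Ridge(alpha=best_alpha).fit(X[best_fold[0]], y[best_fold[0]])
```

In use, the printed-contour prediction would come from thresholding the stage-1 effective dose against the stage-2 locally adjusted threshold at each cutline; the design choice of interest is that the second model corrects the already improved dose distribution instead of the raw optical image.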
Process models have been used for a number of years to perform proximity corrections to designs placed on lithography masks. For these models to be useful, they must provide an adequate representation of the process while also allowing the corrections themselves to be performed in a reasonable computational time. In what is becoming standard optical proximity correction (OPC), the models combine a largely physical optical model with a largely empirical resist model. Normally, wafer data are collected and fit to a model form found through experience to be suitable. Certain process variables, such as exposure dose and defocus, are considered carefully during calibration, while other variables, such as film thickness and optical parameter variations, are often not considered. As the semiconductor industry continues to march toward smaller and smaller dimensions, with correspondingly smaller tolerance for error, we must consider the importance of those process variations. In the present work we describe the results of simulation experiments performed to examine the importance of many of the process variables that are often regarded as fixed. We show examples of the relative importance of the different variables.
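As a concrete illustration of the kind of sensitivity study described, the sketch below perturbs several nominally fixed process variables one at a time and records the resulting change in critical dimension (CD). The simulate_cd function, the variable list, and all coefficients are toy stand-ins assumed for illustration; they are not the simulator or values used in the study.

```python
# Hypothetical one-at-a-time sensitivity sketch: perturb "fixed" process variables and
# record the CD shift per unit change. simulate_cd() is a toy stand-in for a real simulator.
import numpy as np

nominal = {"dose": 30.0, "defocus_nm": 0.0, "resist_thickness_nm": 100.0, "na": 1.35}
perturbation = {"dose": 0.3, "defocus_nm": 20.0, "resist_thickness_nm": 2.0, "na": 0.005}


def simulate_cd(p):
    # Toy analytical stand-in for a full lithography simulation; coefficients are illustrative only.
    return (40.0
            - 0.8 * (p["dose"] - 30.0)
            + 0.002 * p["defocus_nm"] ** 2
            - 0.01 * (p["resist_thickness_nm"] - 100.0)
            - 50.0 * (p["na"] - 1.35))


def sensitivities():
    cd0 = simulate_cd(nominal)
    result = {}
    for name, delta in perturbation.items():
        p = dict(nominal)
        p[name] += delta
        result[name] = (simulate_cd(p) - cd0) / delta  # CD change per unit of the variable
    return result


print(sensitivities())
```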
Depth of focus (DOF) has always been a major concern in lithography because processes are optimized for specific feature dimensions and pitches. Insertion of sub-resolution assist features (SRAFs), or scatter bars, is a common technique used to equalize DOF across the variety of geometries used in a design. SRAFs can be inserted into the layout by a variety of means, ranging from simple rule tables to full-fledged layout simulations. Various tools available from electronic design automation (EDA) vendors can place SRAFs based on elaborate simulations of the design layout, but a tool that can determine the rule table itself is not available. Each resolution enhancement technique (RET) engineer has his or her own methodology for extracting rules based on simulation of a large layout with design and SRAF rule variations. Significant computational resources are required to carry out these extensive simulations, which lengthens the time required to formulate the rule table and restricts the range of variation that can be considered. In this paper, we discuss an efficient method that overcomes this problem by searching the design and SRAF rule domain to obtain a comprehensive set of SRAF rules, thereby producing a better rule set while using significantly fewer computational resources.
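To make the rule-domain search concrete, the sketch below scans a small grid of SRAF width and placement candidates for each design pitch and keeps the best-scoring candidate as one row of the rule table. The score_dof proxy, the parameter names, and all numbers are assumptions made for illustration; the actual method described above relies on simulation-backed metrics rather than this toy score.

```python
# Hypothetical rule-table search sketch: for each design pitch, scan SRAF width/offset
# candidates and keep the one with the best (toy) DOF proxy score as a rule-table entry.
import itertools


def score_dof(pitch, sraf_width, sraf_offset):
    # Toy proxy only: favor SRAFs roughly centered in the space with a width near 20% of it.
    space = pitch / 2.0
    return -(abs(sraf_offset - space / 2.0) + abs(sraf_width - 0.2 * space))


def build_rule_table(pitches, widths, offsets):
    table = {}
    for pitch in pitches:
        best_width, best_offset = max(itertools.product(widths, offsets),
                                      key=lambda wo: score_dof(pitch, *wo))
        table[pitch] = {"sraf_width": best_width, "sraf_offset": best_offset}
    return table


rules = build_rule_table(pitches=[90, 120, 160], widths=[10, 15, 20], offsets=[25, 35, 45])
print(rules)
```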