KEYWORDS: Point spread functions, Scattering, Scanning electron microscopy, Tolerancing, Critical dimension metrology, Calibration, Electron beam lithography, Electron beams
The result of electron beam lithography is influenced by many effects: forward and backward scattering, secondary electron generation, re-scattering of electrons, chemical diffusion in the resist material, the wafer stack, etc. To achieve high resolution, all these effects should be taken into account. Commonly, the electron energy distribution in the exposed material is described by the Point Spread Function (PSF). This is a simple approach that captures a large portion of these phenomena with few parameters. The PSF is a Gaussian or a sum of Gaussians whose parameters are determined experimentally by a calibration procedure. Each resist material with its corresponding stack is characterised by its own PSF; in the double-Gaussian case, by the parameters α, β and η. In the current work the PSF parameters were systematically varied to study their influence on the dose assignment and the resulting pattern. This gives a broader understanding of the correction mechanism based on the PSF. Furthermore, the resulting shape of the structure is influenced not only by the PSF parameters and dose assignment, but by the fracturing type as well. All these effects were studied using experimental and simulation approaches.
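For reference, a minimal sketch of the double-Gaussian PSF with parameters α, β and η described above, using the common normalization convention (the exact convention used in the paper is not stated, so this is an assumption); the numeric values are illustrative placeholders, not calibrated parameters:

```python
import numpy as np

def psf_double_gaussian(r, alpha, beta, eta):
    """Double-Gaussian PSF, normalized so its 2-D integral over the plane is 1.

    alpha : forward-scattering range (short range), same units as r
    beta  : backscattering range (long range)
    eta   : ratio of backscattered to forward-scattered energy
    """
    forward = np.exp(-r**2 / alpha**2) / alpha**2
    backward = eta * np.exp(-r**2 / beta**2) / beta**2
    return (forward + backward) / (np.pi * (1.0 + eta))

# Illustrative values only (roughly the order of magnitude for a 50 keV tool):
r = np.linspace(0.0, 20000.0, 2001)                     # radius, nm
energy = psf_double_gaussian(r, alpha=30.0, beta=9000.0, eta=0.6)
```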
Designs for photonic devices on silicon rely on non-Manhattan features such as curves and a wide variety of angles. Reticle Enhancement Techniques (RET) commonly used for CMOS manufacturing are now applied to curvilinear data patterns for the same reason: enhancing pattern fidelity. Common techniques for curvilinear data processing include Manhattanization, jog removal, and jog alignment. We propose a novel method of describing curvilinear shapes in terms of curves reconstructed between control points. Such a representation of curvilinear shapes brings many benefits in terms of pattern description (improved fidelity, file compaction), correction and verification. For example, it allows smooth displacements during the design correction procedure for process effects. Unlike conventional correction, which biases each fragment, curve-based biasing moves only the control points; the corrected shape is then reconstructed by connecting the control points in their new positions with new curves. This method results in faster computation, because there are fewer locations at which to adjust geometry, easier convergence, and intrinsic continuity between edges. It also affords a significant reduction of the design file size. Besides processing curvilinear pattern data, verification is also required after any modification of the original pattern. Mask Rule Checks (MRC) are considered a standard step in any design data preparation flow, but conventional MRC algorithms are conceived for Manhattan designs, and as such they often report numerous false errors, or even miss errors, when applied to photonics or ILT (Inverse Lithography Technology) designs. In addition, MRC for photonic layouts requires much more than basic width and space checking. We developed a verification technology compliant with curvilinear layouts. The new MRC technique is also based on the curve representation of the original design, comparing the curves directly instead of the straight fragments. It yields only one error flag per curve instead of the multiple errors seen in fragment-by-fragment MRC.
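The abstract does not specify the curve family used between control points; the sketch below assumes periodic cubic splines and shows the core idea of curve-based biasing: displace only the control points along their outward normals, then reconstruct the shape through the moved points:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def bias_control_points(pts, bias):
    """Displace closed-curve control points along their outward normals.

    pts  : (N, 2) array of control points, ordered counter-clockwise
    bias : signed displacement (positive moves the contour outward)
    """
    tangent = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)
    tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)
    normal = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)  # outward for CCW
    return pts + bias * normal

def reconstruct(pts, samples=256):
    """Rebuild a smooth closed curve through the (possibly biased) points."""
    closed = np.vstack([pts, pts[:1]])            # repeat first point to close
    t = np.linspace(0.0, 1.0, len(closed))
    return CubicSpline(t, closed, bc_type='periodic')(np.linspace(0.0, 1.0, samples))

# Example: bias a 500 nm ring outward by 5 nm using only 16 control points.
theta = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
ring = 500.0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
corrected = reconstruct(bias_control_points(ring, bias=5.0))
```

Note how the file-size and speed benefits claimed above follow directly: only 16 points are stored and moved, while the dense contour is regenerated on demand.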
Massively parallel mask-less electron beam lithography (MP-EBL) offers large intrinsic flexibility at a low cost of ownership in comparison to conventional optical lithography tools. This attractive direct-write technique needs a dedicated data preparation flow to correct both electronic and resist processes. Moreover, data prep has to be completed in a short enough time to preserve the flexibility advantage of MP-EBL. While MP-EBL tools have currently entered an advanced stage of development, this paper focuses on the data preparation side of the work, specifically for the MAPPER Lithography FLX-1200 tool [1]-[4], using the ASELTA Nanographics Inscale software. The complete flow, as well as the methodology used to achieve a full-field layout data preparation within an acceptable cycle time, will be presented. The layout used for the data prep evaluation was a 28 nm technology node Metal1 chip with a field size of 26×33 mm², compatible with typical stepper/scanner field sizes and wafer stepping plans. Proximity Effect Correction (PEC) was applied to the entire field, which was then exported as a single file in MAPPER Lithography’s machine format, containing fractured shapes and dose assignments. The Soft Edge beam-to-beam stitching method was employed in the specific overlap regions defined by the machine format as well. In addition to PEC, verification of the correction was included as part of the overall data preparation cycle time. This verification step was executed on the machine file format to ensure pattern fidelity and accuracy as late in the flow as possible. Verification over the full chip, involving billions of evaluation points, is performed both at nominal conditions and at process window corners in order to ensure proper exposure and process latitude. The complete MP-EBL data preparation flow was demonstrated for a 28 nm node Metal1 layout in 37 hours. The final verification step shows that the Edge Placement Error (EPE) is kept below 2.25 nm over an exposure dose variation of 8%.
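The abstract does not detail the Soft Edge weighting profile, which belongs to the machine format; the sketch below assumes a simple linear dose ramp across the beam-to-beam overlap region, a common way to implement soft stitching:

```python
import numpy as np

def soft_edge_weights(x, overlap_start, overlap_end):
    """Dose weights for two adjacent beams across their overlap region.

    Inside the overlap the left beam ramps down while the right one ramps up;
    the weights always sum to 1, so the total deposited dose is preserved
    while abrupt beam-to-beam stitching errors are averaged out.
    """
    w_right = np.clip((x - overlap_start) / (overlap_end - overlap_start), 0.0, 1.0)
    return 1.0 - w_right, w_right

# Example: an overlap region between x = 2.0 um and x = 2.2 um.
x = np.linspace(1.8, 2.4, 7)
w_left, w_right = soft_edge_weights(x, overlap_start=2.0, overlap_end=2.2)
```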
A sensitivity analysis (SA) algorithm was developed and tested to understand how different test pattern sets influence the calibration of a point spread function (PSF) model with complementary approaches. Variance-based SA is the method of choice: it attributes the variance of a model's output to the variance of each model input and their correlated factors.1 The objective of this development is to increase the accuracy of the PSF model resolved by the complementary technique through the optimization of test pattern sets. Inscale® from Aselta Nanographics is used to prepare the various pattern sets and to check the consequences of the development. Fraunhofer IPMS-CNT exposed the prepared data and inspected the results to visualize the link between the sensitivities of the PSF parameters and the test patterns. First, the SA can assess the influence of test pattern sets on the determination of PSF parameters, i.e., which PSF parameter is affected by the use of a certain pattern. Second, throughout the evaluation, the SA enhances the precision of the PSF through the optimization of test patterns. Finally, the developed algorithm is able to appraise which ranges of proximity effect correction are crucial for which portions of a real application pattern in electron beam exposure.
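For concreteness, a minimal sketch of a variance-based (Sobol) first-order index estimator of the kind referenced above, using the standard Saltelli sampling scheme; the toy model here merely stands in for the mapping from PSF parameters (scaled to the unit cube) to a measured quantity such as CD:

```python
import numpy as np

def first_order_sobol(model, n_inputs, n_samples=10000, seed=0):
    """First-order Sobol indices S_i = V_i / V(Y) via the Saltelli scheme.

    model : vectorized function f(X) for X of shape (n, n_inputs), inputs in [0, 1)
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_inputs))
    B = rng.random((n_samples, n_inputs))
    fA, fB = model(A), model(B)
    variance = np.var(np.concatenate([fA, fB]))
    indices = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                    # resample input i only
        indices[i] = np.mean(fB * (model(ABi) - fA)) / variance
    return indices

# Toy stand-in for "CD as a function of (alpha, beta, eta) scaled to [0, 1)":
toy_cd = lambda X: 10.0 * X[:, 0] + 2.0 * X[:, 1] ** 2 + 0.5 * X[:, 2]
print(first_order_sobol(toy_cd, n_inputs=3))
```

An input with a near-zero index barely moves the output, which is how a test pattern set that fails to constrain a given PSF parameter reveals itself.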
With more and more photonic data present in e-beam lithography, efficient and accurate data fracturing is required to meet acceptable manufacturing cycle times. Large photonic-based layouts now create high-shot-count patterns for VSB-based tools. Multiple angles, sweeping curves, and non-orthogonal data create a challenge for today's e-beam tools, which are most efficient on Manhattan-style data. This paper describes techniques developed and used for creating fractured data for VSB-based pattern generators.

Proximity Effect Correction is also applied during the fracture process, taking into account the variable shot sizes to be applied for accuracy and design style. Choosing different fracture routines for pattern data on the fly allows for fast and efficient processing. Interpreting the data (its size, angle, and complexity) is essential for processing curvilinear data. Fracturing complex angled data into "efficient" shot counts is no longer practical, as shot creation now requires knowledge of the actual data content, as seen in photonic-based pattern data.

Simulation and physical printing results validate the implementations for accuracy and write times compared to traditional VSB writing strategies on photonic data. A geometry tolerance is used as part of the fracturing algorithm to control edge placement accuracy and to tune to different e-beam processing parameters.
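As one illustration of tolerance-driven fracturing (not necessarily the algorithm of the paper), a curve can be subdivided until each chord deviates from the curve by less than the geometry tolerance; the resulting breakpoints then bound the shots:

```python
import math
import numpy as np

def flatten_curve(curve, t0, t1, tol):
    """Chord breakpoints on curve(t) over [t0, t1] with sagitta below tol."""
    p0 = np.asarray(curve(t0))
    p1 = np.asarray(curve(t1))
    pm = np.asarray(curve(0.5 * (t0 + t1)))
    chord, d = p1 - p0, pm - p0
    # Distance from the curve midpoint to the chord (sagitta estimate).
    sagitta = abs(chord[0] * d[1] - chord[1] * d[0]) / (np.linalg.norm(chord) + 1e-12)
    if sagitta <= tol:
        return [t0, t1]
    mid = 0.5 * (t0 + t1)
    return flatten_curve(curve, t0, mid, tol)[:-1] + flatten_curve(curve, mid, t1, tol)

# Quarter of a 5 um circle: tightening the tolerance increases the shot count.
arc = lambda t: (5000.0 * math.cos(t), 5000.0 * math.sin(t))
for tol in (0.5, 1.0, 2.0):
    print(tol, len(flatten_curve(arc, 0.0, math.pi / 2.0, tol)) - 1)
```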
A new correction technique has been developed not only to reduce corner rounding, but also to restrain the build-up of shot count that increases exposure time in electron beam (e-beam) lithography. The developed corner rounding correction is shown to be accurate through simulation of several different types of correction in the data preparation software, Inscale® from Aselta Nanographics, and through comparison with exposure images. The developed technique also helps suppress the accumulation of shot count. Furthermore, it shows the general limit of corner rounding correction in a conventional variable shaped beam exposure tool with a current resist process. First, we demonstrate the new method for correcting corner rounding, which avoids inflating the exposure shot count, and hence the writing time. Second, this study reveals the current bounds of corner rounding correction, especially for lithography employing a shaped beam tool. Finally, we propose criteria of data preparation for corner rounding in e-beam lithography, specifically for the upcoming 18 nm technology node and practical applications.
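For context on why shot count is the concern here: the textbook way to fight corner rounding is to add serif shots at convex corners, at least one extra shot per corner, which is exactly the build-up the technique above is designed to restrain. A minimal sketch of such a conventional serif generator (hypothetical names and values, not the method of the paper):

```python
def corner_serif(corner, outward, size, dose=1.2):
    """One serif shot at a convex corner (hypothetical parameters).

    corner  : (x, y) of the corner vertex
    outward : unit vector along the corner bisector, pointing out of the shape
    size    : serif edge length, small compared to the beam blur
    Returns an axis-aligned (x0, y0, x1, y1) rectangle and its relative dose.
    Every corrected corner costs at least one extra shot, which is the
    shot-count build-up the technique in the paper is designed to avoid.
    """
    cx = corner[0] + 0.5 * size * outward[0]
    cy = corner[1] + 0.5 * size * outward[1]
    return (cx - 0.5 * size, cy - 0.5 * size,
            cx + 0.5 * size, cy + 0.5 * size), dose
```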
Proximity effects in electron beam lithography impact feature dimensions, pattern fidelity and uniformity. These effects are addressed using a mathematical model representing the radial exposure intensity distribution induced by a point electron source, commonly known as the Point Spread Function (PSF). PSF models are usually employed for predicting and compensating for effects up to 15 μm. It is well known that there are also some process-related phenomena impacting pattern uniformity that have a longer range, namely CMP effects, fogging, etc.

Performing proximity effect corrections can result in lengthy run times as file sizes and pattern densities continue to increase exponentially with each technology node. Running corrections for extreme-long-range phenomena becomes prohibitive in both computation and file size. Nevertheless, since the extreme long range may reach up to several millimeters, and new technology nodes require a high level of precision, a strategy for predicting and compensating for these phenomena is crucial.

In this paper a set of test patterns is presented in order to verify and calibrate the so-called extreme-long-range effects in electron beam lithography. Moreover, a strategy to compensate for extreme-long-range effects based on the pattern density is presented. Since the evaluation is based on a density map instead of the actual patterns, the computational effort remains feasible.
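A minimal sketch of such a density-based compensation, assuming a single Gaussian kernel for the extreme-long-range effect and a simple reciprocal dose rule; the calibrated model and correction formula of the paper may well differ:

```python
import numpy as np
from scipy.signal import fftconvolve

def long_range_dose_correction(density, pixel_um, sigma_um, nu):
    """Multiplicative dose map compensating a long-range background.

    density  : 2-D pattern density map (values in 0..1) on a coarse grid
    pixel_um : grid pixel size; a coarse grid (e.g. 10 um) keeps this cheap
    sigma_um : range of the assumed Gaussian kernel (may be millimeters)
    nu       : strength of the effect relative to the direct exposure
    """
    half = int(4 * sigma_um / pixel_um)
    ax = np.arange(-half, half + 1) * pixel_um
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / sigma_um**2)
    kernel /= kernel.sum()
    background = fftconvolve(density, kernel, mode='same')
    return 1.0 / (1.0 + nu * background)    # lower dose where background is high
```

Working on the density map rather than the shapes is what keeps the computation feasible: a full field reduces to a few thousand pixels per side regardless of how many polygons it contains.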
The proposed method may be performed off-line (in contrast to the machine's standard in-line correction). The advantage of off-line compensation lies in enabling both dose and/or geometry modulation. This strategy also has the advantage of being completely decoupled from the e-beam writer's other internal corrections (like Fogging Effect Correction, FEC).
VSB mask writers, which create patterns using a combination of rectangles and 45-degree triangles, are ill-suited to non-Manhattan geometries. This issue is particularly acute for layouts that contain a large fraction of curvilinear "off-angle" patterns, such as photonic or DRAM designs. Unable to faithfully reproduce the "off-angle" structures, traditional VSB mask writers approximate the desired design using abutted rectangular shots or small shapes to smooth out line edge roughness. Fidelity to the original pattern comes at a cost of increased shot count and reduced throughput.

Aselta has developed a novel fracture algorithm to dramatically reduce the shot count of such designs. Using a traditional VSB pattern generator, the new algorithm provides a significant shot count reduction. When combined with a modified JBX-3200MV VSB tool, the shot count is reduced while maintaining the same level of fidelity. The data preparation software can also trade off a more accurate level of fidelity against an even greater shot count reduction.

The paper first describes the basic principles of the fracturing algorithm and the e-beam writer hardware configuration, then demonstrates the advantage of the method on a variety of patterns.
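To make the fidelity/shot-count trade-off concrete, consider the abutted-rectangle approximation mentioned above applied to a circular arc: the edge error of a chord of width w on a circle of radius R is its sagitta, roughly w²/(8R), so the shot count scales as 1/√tolerance. A small worked example (values illustrative):

```python
import numpy as np

def shots_for_arc(radius_nm, arc_len_nm, tol_nm):
    """Abutted-rectangle shot count needed to stay within an edge tolerance."""
    w = np.sqrt(8.0 * radius_nm * tol_nm)   # widest chord whose sagitta <= tol
    return int(np.ceil(arc_len_nm / w))

# Quarter of a 5 um radius circle (arc length ~7854 nm):
for tol in (0.5, 1.0, 2.0):
    print(tol, shots_for_arc(5000.0, 7854.0, tol))   # 56, 40, 28 shots
```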
Proximity effects in electron beam lithography impact feature dimensions, pattern fidelity and uniformity. Electron scattering effects are commonly addressed using a mathematical model representing the radial exposure intensity distribution induced by a point electron source, commonly known as the Point Spread Function (PSF). PSF models are usually employed for correcting "short-range" and "long-range" backscattering effects up to 10 μm to 15 μm. It is well known that there are also some process-related phenomena impacting pattern uniformity that have a wider range (fogging, chemical mechanical polishing (CMP) effects, etc.), extending up to a few millimeters or more. There are a number of commercial strategies for mitigating such long-range effects based on data density. However, these traditional corrections are usually performed within a single chip on a reticle field and ignore the presence of adjacent fields, neglecting their influence.

Full-field reticles can contain several different designs or arrayed chips in a multitude of layout placements. A reticle-level jobdeck, which places each design at specific sites independently of the others, can be used to account for the density of each pattern and its impact on its neighbors, even when they are several millimeters away from the offending data. Therefore, a full-field density analysis accounting for scribe frames and all neighboring patterns is required to reach fidelity control requirements, such as critical dimension (CD) and line-end shortening (LES), over the full plate.

This paper describes a technique to compensate for long-range effects across chip boundaries, over the full reticle exposure field. The extreme-long-range effects are represented with a model that is calibrated to the characteristics of the user's process. Data correction can be based on dose and geometry modulation. Uniform pattern dimensional control matching the user's specific long-range process variability can be achieved with the techniques described in this paper.
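A minimal sketch of the full-field density bookkeeping described above: chip density maps are placed on a reticle-wide grid according to the jobdeck, so that a long-range kernel applied to this map sees across chip boundaries (grid size and placement conventions are assumptions):

```python
import numpy as np

def reticle_density_map(jobdeck, reticle_mm=(26.0, 33.0), pixel_mm=0.05):
    """Assemble a full-reticle density map from jobdeck placements.

    jobdeck : list of (chip_density, (x_mm, y_mm)) pairs; each chip density
              map is a 2-D array on the same pixel grid as the reticle map.
    A long-range kernel applied to the returned map sees across chip
    boundaries, unlike per-chip corrections that ignore neighboring fields.
    Placements are assumed to fit inside the reticle (no edge clipping here).
    """
    ny = int(round(reticle_mm[1] / pixel_mm))
    nx = int(round(reticle_mm[0] / pixel_mm))
    full = np.zeros((ny, nx))
    for chip, (x_mm, y_mm) in jobdeck:
        i, j = int(round(y_mm / pixel_mm)), int(round(x_mm / pixel_mm))
        full[i:i + chip.shape[0], j:j + chip.shape[1]] = chip
    return full
```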
The new generations of photomasks bring more and more challenges to the mask manufacturer. Maskshops face two conflicting requirements, namely improving pattern fidelity and reducing, or at least maintaining, acceptable writing time. These requirements are becoming ever more challenging as pattern sizes continuously shrink and data volumes continuously grow.

Although classical dose-modulation Proximity Effect Correction is able to provide sufficient process control for mainstream products, a growing body of published and wafer data shows that the mask process is becoming a non-negligible contributor to 28 nm technology yield. We show in this paper that a novel approach to mask proximity effect correction is able to meet the dual challenge of the new generation of masks.

Unlike the classical approach, the technique presented in this paper is based on a concurrent optimization of the dose and geometry of the fractured shots. Adding one more parameter provides the best possible compromise between accuracy and writing time, since energy latitude can be taken into account as well. This solution is implemented in the Inscale software package from Aselta Nanographics.

We have assessed the capability of this technology on several levels of a 28 nm technology. On this set, the writing time was reduced by up to 25% without sacrificing accuracy, which at the same time was improved significantly compared to the existing process. The experiments presented in the paper confirm that a versatile proximity effect correction strategy combining dose and geometry modulation helps users trade off between resolution/accuracy and e-beam write time.
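The abstract does not disclose Inscale's optimizer; the sketch below only illustrates the idea of concurrent dose and geometry optimization on a toy 1-D threshold-resist model: among the (dose, bias) pairs that keep the printed edge on target, pick the one that maximizes the energy slope at the edge, i.e., the energy latitude (all names and values are illustrative):

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import brentq

SIGMA, THRESHOLD, TARGET = 25.0, 0.5, 50.0   # blur (nm), resist threshold, half-width (nm)

def profile(x, half_width, dose):
    """Deposited energy at x for a long line of given half-width (1-D Gaussian blur)."""
    return 0.5 * dose * (erf((half_width - x) / SIGMA) + erf((half_width + x) / SIGMA))

def printed_edge(half_width, dose):
    """Where the developed resist edge lands (profile crosses the threshold)."""
    return brentq(lambda x: profile(x, half_width, dose) - THRESHOLD,
                  0.0, half_width + 6.0 * SIGMA)

best = None
for dose in np.linspace(0.8, 1.6, 41):          # dose modulation
    for bias in np.linspace(-10.0, 10.0, 41):   # geometry modulation (edge bias, nm)
        edge = printed_edge(TARGET + bias, dose)
        if abs(edge - TARGET) > 0.25:           # edge placement spec (nm)
            continue
        eps = 0.5                               # finite-difference step (nm)
        slope = abs(profile(edge + eps, TARGET + bias, dose)
                    - profile(edge - eps, TARGET + bias, dose)) / (2.0 * eps)
        if best is None or slope > best[0]:     # steeper slope = more energy latitude
            best = (slope, dose, bias)
print(best)
```

The point of the extra degree of freedom is visible here: several (dose, bias) pairs hit the same edge position, and the optimizer is free to prefer the one with the best latitude or, in a writing-time-aware cost function, the fewest shots.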
KEYWORDS: Optical alignment, Semiconducting wafers, Chemical mechanical planarization, Copper, Scanning probe microscopy, Front end of line, Back end of line, Overlay metrology, Metals, Silicon
In this paper, methods for stacking ASML scribe lane alignment marks (SPM) and improving the mark performance at initial copper metal levels are discussed. The new mark designs and the theoretical reasons for mark design and/or integration change are presented. In previous joint publications between ASML and Freescale Semiconductor [1], improved overlay performance and alignment robustness for Back End Of Line (BEOL) layers by the application of stacked scribe lane marks (SPM) was presented. In this paper, further improvements are demonstrated through the use of optimized Versatile Scribe Lane Mark design (VSPM). With the application of stacked optimized VSPM-marks, the alignment signal strength of marks in the copper metal layer is increased compared to stacked SPM marks. The gains in signal strength stability, which is typical for stacked marks, as well as significantly reduced scribe lane usage, are also maintained. Through the placement of specially designed orthogonal scatter-bars in selected layers under the VSPM-marks, the alignment performance of initial inlaid metal layers is improved as well. The integration of these marks has been evaluated for the 90 nm and 65 nm technology nodes as part of a joint development program between the Crolles2 Alliance and ASML. A measured overlay improvement of ~10-15% was obtained by a strategy change from floating copper marks to stacked optimized VSPM marks.
In recent years mask data preparation (MDP) has been complicated by a number of factors, including the introduction of resolution enhancement technologies such as optical proximity correction (OPC) and phase shift masks. These complications not only have led to significant increases in file sizes and computer runtimes, but they have also created an urgent need for data management tools -- MDP automation. Current practices rely on point solutions to specific problems, such as OPC; use outdated, proprietary, non-standard, informal or inefficient data formats; and just barely manage portions of the data flow via low-level scripting. Without automation, MDP requires human intervention, which leads to longer cycle times and more errors. Without adequate data interchange formats, automation cannot succeed. This paper examines MDP processes and data formats, and suggests opportunities for improvement. Within the context of existing data formats, we examine the effect of inadequate (e.g., proprietary) data formats on MDP flow. We also examine the closest thing to an open, formal, standard data format--GDSII--and suggest improvements and even a replacement based on the extensible markup language (XML).
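The paper stops short of fixing a schema for an XML-based GDSII successor; purely as an illustration of what a self-describing, standards-based layout interchange could look like, here is a toy example built with Python's standard library (all element and attribute names are invented):

```python
import xml.etree.ElementTree as ET

# Invented element/attribute names; a real schema would need far more
# (hierarchy, arrays, curvilinear primitives, properties, versioning, ...).
layout = ET.Element('layout', units='nm')
cell = ET.SubElement(layout, 'cell', name='INV_X1')
ET.SubElement(cell, 'rect', layer='metal1', x='0', y='0', w='200', h='1000')
ET.SubElement(layout, 'placement', cell='INV_X1', x='1000', y='0', rot='0')
print(ET.tostring(layout, encoding='unicode'))
```

Unlike GDSII's opaque binary records, such a format is human-readable, validatable against a schema, and extensible without breaking existing readers, which is the kind of improvement the paper argues for.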