Designs output by OPC (Optical Proximity Correction) tools contain a large number of jog edges. Jogs are small edges introduced by OPC tools to segment an input design edge so that the individual segments can move independently. Such segmentation is important for achieving correct, uniform results across the critical dimensions of a feature. Traditionally, Mask Process Correction (MPC) tools, which work on OPC output, choose not to move these jog edges (a.k.a. jog freeze). The main reason is that the jog edges are so small that moving them does not significantly improve mask quality. However, for newer design nodes, increasing OPC complexity results in primary segments comparable in size to jog edges. Freezing the jogs may then no longer be viable, as it can mean that a significant portion of design edges are frozen. In this paper, we propose methods for moving jog edges and examine the impact on overall mask quality. Shot count of the mask data post-fracture is an important Quality of Results (QoR) metric for Vector Shaped Beam (VSB) mask writer tools. One of the main advantages of the flexibility to move jog edges is an improved mask data shot count. This paper will discuss the shot count improvement method within the MPC tool and show its impact on the other quality metrics.
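As a minimal illustration of why movable jogs help shot count (a hypothetical sketch; the function names, the edge-position representation and the 1 nm tolerance are our assumptions, not the MPC tool's algorithm), aligning nearly collinear jog edges within a small tolerance lets fracture merge adjacent rectangles into fewer VSB shots:

    # Hypothetical sketch: snap small jog offsets to a common value so that
    # adjacent fracture rectangles can merge, reducing VSB shot count.
    # The representation and the tolerance value are illustrative assumptions.

    def snap_jogs(edge_positions, tol):
        """Cluster nearly collinear edge positions; return snapped values."""
        snapped = []
        for x in sorted(edge_positions):
            if snapped and abs(x - snapped[-1]) <= tol:
                snapped.append(snapped[-1])   # merge into previous cluster
            else:
                snapped.append(x)
        return snapped

    def shot_count(edge_positions):
        """Assume each distinct edge position starts a new rectangle (shot)."""
        return len(set(edge_positions))

    edges = [100.0, 100.4, 100.5, 112.0, 112.3]   # jogged edge positions, nm
    print(shot_count(edges), "->", shot_count(snap_jogs(edges, tol=1.0)))  # 5 -> 2

Here five distinct edge positions collapse to two, so a fracture keyed on distinct edge positions emits fewer rectangles; a real MPC flow would additionally have to respect the CD and mask-rule constraints discussed above.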
Data volume and average data preparation time continue to trend upward with newer technology nodes. In the past decade, with file sizes measured in terabytes and network bandwidth requirements exceeding 40GB/s, mask synthesis operations have expanded their cluster capacity to thousands and even tens of thousands of CPU cores. Efficient, scalable and flexible management of this expensive, high performance, distributed computing system is required in every stage of geometry processing - from layout polishing through Optical Proximity Correction (OPC), Mask Process Correction (MPC) and Mask Data Preparation (MDP) - to consistently meet tapeout cycle time goals. The MDP step, being the final stage in the entire flow, has to write all of the pattern data into one or more disk files. This extremely I/O intensive section remains a significant portion of the processing time and creates a major challenge for the software from a scalability perspective. It is important to have a comprehensive solution that displays high scalability for large jobs and low overhead for small jobs, which is the ideal behavior in a typical production environment. In this paper we will discuss methods to address the former requirement, emphasizing the efficient use of high performance distributed file systems while minimizing the less scalable disk I/O operations. We will also discuss dynamic resource management and efficient job scheduling to address the latter requirement. Finally, we will demonstrate the use of a cluster management system to create a comprehensive data processing environment suitable to support large scale data processing requirements.
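As a minimal sketch of the write-stage idea (assumed structure for illustration only, not the actual MDP implementation), workers can stream independent chunk files onto the distributed file system in parallel so that only a cheap concatenation remains serialized:

    # Sketch: parallel chunk writes on a distributed file system, followed by
    # a small serialized concatenation step. Names and paths are illustrative.
    import os
    import tempfile
    from concurrent.futures import ProcessPoolExecutor

    def write_chunk(args):
        idx, records = args
        path = os.path.join(tempfile.gettempdir(), f"mdp_chunk_{idx}.dat")
        with open(path, "wb") as f:           # independent file: no shared lock
            f.write(b"".join(records))
        return path

    if __name__ == "__main__":
        chunks = [(i, [bytes([i])] * 4) for i in range(8)]
        with ProcessPoolExecutor() as pool:   # scalable parallel write phase
            paths = list(pool.map(write_chunk, chunks))
        with open(os.path.join(tempfile.gettempdir(), "mdp_out.dat"), "wb") as out:
            for p in paths:                   # cheap serialized final step
                with open(p, "rb") as f:
                    out.write(f.read())

The design choice is the one argued above: the expensive, data-proportional I/O happens in parallel, and the only serialized work is proportional to the number of chunks.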
This study quantifies the impact of systematic mask errors on OPC model accuracy and proposes a methodology to reconcile the largest errors via calibration to the mask error signature in wafer data. First, we examine, through simulation, the impact of uncertainties in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to be accurately represented in the model are cataloged. CD bias values are based on state of the art mask manufacturing data while other parameter values are estimated, highlighting the need for improved metrology and communication between mask and OPC model experts. It is shown that the wafer simulations are highly dependent upon the 1D/2D representation of the mask, in addition to the mask sidewall for 3D mask models. In addition, this paper demonstrates substantial accuracy improvements in the 3D mask model using physical perturbations of the input mask geometry when using Domain Decomposition Method (DDM) techniques. Results from four test cases demonstrate that small, direct modifications of the input mask stack slope and edge location can result in a model calibration and verification accuracy benefit of up to 30%. We highlight the benefits of a more accurate description of the 3D EMF near field with crosstalk in model calibration, and its impact as a function of mask dimensions. The result is a useful technique to align DDM mask model accuracy with physical mask dimensions and scattering via model calibration.
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. Many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total critical dimension (CD) control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine via simulation the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to be accurately represented in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, and other parameter variations are estimated, highlighting the need for improved metrology and communication between mask and optical proximity correction model experts. The simulations are done by ignoring the wafer photoresist model and show the sensitivity of predictions to various model inputs associated with the mask. It is shown that the wafer simulations are very dependent upon the one-dimensional/two-dimensional representation of the mask, and for three-dimensional, the mask sidewall angle is a very sensitive factor influencing simulated wafer CD results.
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine, with a simulation sensitivity study, the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to be accurately represented in the model are cataloged. CD bias values are based on state of the art mask manufacturing data and other parameter variations are estimated, highlighting the need for improved metrology and awareness.
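A standard way to formalize such a sensitivity study (our notation, not quoted from the papers) is a central-difference sensitivity per mask parameter $p$, with independent input errors combined in quadrature:

$$S_p \approx \frac{CD_{\mathrm{wafer}}(p+\Delta p)-CD_{\mathrm{wafer}}(p-\Delta p)}{2\,\Delta p}, \qquad \Delta CD_{\mathrm{wafer}} \approx \sqrt{\sum_p \left(S_p\,\Delta p\right)^2}.$$

Parameters with large $S_p\,\Delta p$ products - such as the mask sidewall angle in the 3D case - dominate the budget and must be represented most accurately.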
It is anticipated that throughout the process development phase for the introduction of EUV lithography, defect-free substrates will not be available; even at the manufacturing stage, non-repairable defects may still be present. We
investigate EDA-based approaches for defect avoidance, such as reticle floor planning, shifting the entire reticle field
(pattern shift), pattern shift in addition to layout classification (smart shift), and defect repair in the data prior to mask
write. This investigation is followed by an assessment of the complexity and impact on the mask manufacturing process
of the various approaches. We then explore the results of experiments run using a software solution developed on the
Calibre platform for EUV defect avoidance on various mask blanks, analyzing its effectiveness and performance.
The extension of 193nm exposure wavelength to smaller nodes continues the trend of increased data complexity and subsequently longer mask writing times. In particular, inverse lithography methods create complex mask shapes. We introduce a variety of techniques to mitigate the impact - data simplification post-optical proximity correction (OPC), L-shots, multi-resolution writing (MRW) and optimization-based fracture - and assess their potential for shot count reduction. All of these techniques require changes to the mask making work flow at some level - the data preparation and verification flow, the mask writing equipment, the mask inspection and the mask qualification in the wafer manufacturing line. The paper will discuss these factors and conduct a benefit-effort assessment for deployment. Some of the techniques do not reproduce the originally targeted mask shape. The impact of these deviations will be studied at wafer level with simulations of the exposure process and quantified with respect to the exposure process window. Based on the results of the assessment, a deployment strategy will be discussed.
The extension of 193nm exposure wavelength to smaller nodes continues the trend of increased data complexity and
subsequently longer mask writing times. We review the post-tapeout data preparation steps, how they influence shot count as the main driver of mask writing time, and techniques to reduce that impact. The paper discusses the application
of resolution enhancements and layout simplification techniques; the fracture step and optimization methods; mask
writing and novel ideas for shot count reduction.
The paper will describe and compare the following techniques: optimized fracture, pre-fracture jog alignment, generalization of shot definition (L-shot), multi-resolution writing, optimization-based fracture, and optimized OPC output.
The comparison of shot count reduction techniques will consider the impact of changes to the current state of the art
using the following criteria: computational effort, CD control on the mask, mask rule compliance for manufacturing and
inspection, and the software and hardware changes required to achieve the mask write time reduction. The paper will
introduce the concepts and present some data preparation results based on process correction and fracturing tools.
The increasing complexity of RET solutions with each new process node has increased the shot count of advanced
photomasks. In particular, the introduction of inverse lithography masks represents a significant increase in mask
complexity. Although shot count reduction can be achieved through careful management of the upstream OPC
strategy and improvement of fracture algorithms, it is also important to consider more dramatic departures from
traditional fracture techniques. Optimization-based fracture allows overlapping shots to be placed in a manner that realizes the mask intent while achieving significant savings in shot count relative to traditional fracture-based methods. We investigate the application of optimization-based fracture to reduce the shot count of inverse lithography masks, provide an assessment of the potential shot count savings, and assess its impact on lithography process window performance.
The OASIS working group, first convened in 2001, published the new format in March 2004; it was ratified as an official SEMI standard in September 2005. A follow-on initiative expanded the new standard
to cover the needs of the mask manufacturing equipment sector with a derived standard called
OASIS.MASK (P44) that was released in November 2005 and updated in May 2008. While there are many
potential benefits from this improved format over the incumbent GDSII and MEBES standards, the main
driver for the development of the OASIS format was the looming data volume explosion from the onward
march of processing and design technology. With a demonstrated benefit of roughly 10x over the GDSII
format, it was expected that the new OASIS format would be embraced quickly by the semiconductor
industry. In reality, the adoption process took significantly longer and is still in progress. The paper
analyzes the data volume and adoption trends by manufacturing step - e.g. design, post-tapeout flow and mask manufacturing. Survey results on the adoption status are shared, and the technical, economic, and environmental factors influencing adoption are discussed.
With each new process technology node, chip designs increase in complexity and size, leading to a steady
increase in data volumes. As a result, mask data prep flows require more computing resources to maintain
the desired turn-around time (TAT) at a low cost. The effect is aggravated by the fact that a mask house
operates a variety of equipment for mask writing, inspection and metrology - all of which, until now,
require specific data formatting. An industry initiative sponsored by SEMI® has established new public
formats - OASIS® (P39) for general layouts and OASIS.MASK (P44) for mask manufacturing equipment -
that allow for the smallest possible representation of data for various applications. This paper will review a mask data preparation process for mask inspection based on the OASIS formats, in which OASIS.MASK files are read directly into the inspection tool in real time. An implementation based on standard parallelized computer hardware will be described and shown to deliver the throughputs required for the 45nm and 32nm technology nodes. An inspection test case will also be reviewed.
KEYWORDS: Data storage, Clocks, Data processing, Photomasks, Testing and analysis, Data modeling, Iterative methods, Visualization, Distributed computing, Information technology
Data volume is increasing exponentially in mask data preparation (MDP) flows for sub-45nm technologies, while time to market constrains the acceptable total turnaround time. As a reasonable response, more computing resources are purchased to address these two issues. How to effectively use these resources, including the latest CPUs, high-speed networking, and the fastest data storage devices, is becoming an urgent problem. A detailed study is conducted to find an optimal solution to this problem; in particular, we research how CPU speed, network bandwidth, and the I/O speed of data storage devices affect the total turnaround time (TAT) in a mask data preparation flow. For a given High Performance Computing (HPC) budget and MDP flow TAT constraints, methodologies to optimize HPC resources are proposed.
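To first order, the trade-off can be captured by an Amdahl-style model (an illustrative formulation we assume here, not the paper's):

$$TAT(N) \;\approx\; \frac{T_{\mathrm{compute}}}{N} \;+\; \frac{V_{\mathrm{data}}}{B_{\mathrm{network}}} \;+\; \frac{V_{\mathrm{out}}}{B_{\mathrm{storage}}} \;+\; T_{\mathrm{serial}},$$

where $N$ is the CPU count, $V$ the data volumes and $B$ the respective bandwidths. For a fixed budget, spending is best allocated so that no single term dominates, since adding CPUs cannot reduce the network, storage or serial terms.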
The extension of optical lithography at 193nm wavelength to the 32nm node and beyond drives advanced resolution
enhancement techniques that impose even tighter tolerance requirements on wafer lithography and etch as well as on
mask manufacturing. The presence of residual errors in photomasks and the limitations of capturing those in process
models for the wafer lithography have triggered development work for separately describing and correcting mask
manufacturing effects. Long range effects - uniformity and pattern loading driven - and short range effects - proximity
and linearity - contribute to the observed signatures. The dominating source of the short range errors is the etch process
and hence it was captured with a variable etch bias model in the past [1]. The paper will discuss limitations and possible
extensions to the approach for improved accuracy. The insertion of mask process correction into a post tapeout flow
imposes strict requirements for runtime and data integrity. The paper describes a comprehensive approach for mask
process correction including calibration and model building, model verification, mask data correction and mask data
verification. Experimental data on runtime performance is presented.
Flow scenarios as well as other applications of mask process correction for gaining operational efficiency in both tapeout
and mask manufacturing are discussed.
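A common functional form for such a variable etch bias model (a generic density-driven formulation, not necessarily the exact model of [1]) expresses the per-edge bias through local pattern density sampled at several interaction ranges:

$$b(x) \;=\; f\!\left((K_{r_1}\otimes\rho)(x),\,\dots,\,(K_{r_n}\otimes\rho)(x)\right),$$

where $\rho$ is the layout density, the $K_{r_i}$ are smoothing kernels whose radii $r_i$ span short-range proximity to long-range loading, and $f$ is calibrated against measured mask CD errors. The extensions discussed above can then be read as enriching the inputs of $f$ beyond pure density terms.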
With each new process technology node chip designs increase in complexity and size, and mask data prep flows require more compute resources to maintain the desired turn-around time (TAT) at a low cost. Securing highly scalable processing for each element of the flow - geometry processing, resolution enhancements and optical proximity correction, verification and fracture - has been the focal point so far. The utilization of different flow elements depends on the operation, the data hierarchy and the device type. This paper introduces a dynamic, utilization-driven compute resource control system applied to a large scale parallel computation environment. The paper will analyze the performance metrics TAT and throughput for a production system and discuss trade-offs of different parallelization approaches in data processing regarding interaction with dynamic resource control. The study focuses on 65nm and 45nm designs.
As tolerance requirements for the lithography process continue to shrink with each new technology node, the
contributions of all process sequence steps to the critical dimension error budgets are being closely examined, including
wafer exposure, resist processing, pattern etch, as well as the photomask process employed during the wafer exposure.
Along with efforts to improve the mask manufacturing processes, the elimination of residual mask errors via pattern
correction has gained renewed attention. The portfolio of correction tools for mask process effects is derived from well
established techniques commonly used in optical proximity correction and in electron beam proximity effect
compensation. The process component that is not well captured in the correction methods deployed in mask
manufacturing today is etch. A mask process model to describe the process behavior and to capture the physical effects
leading to deviation of the critical dimension from the target value represents the key component of model-based
correction and verification. This paper presents the flow for generating mask process models that describe both short-range and long-range mask process effects, including proximity loading effects from etching, pattern density loading
effects, and across-mask process non-uniformity. The flow is illustrated with measurement data from real test masks.
Application of models for both mask process correction and verification is discussed.
As tolerance requirements for the lithography process continue to shrink, the complexity of the optical proximity
correction is growing. Smaller correction grids, smaller fragment lengths and the introduction of pixel-based simulation
lead to highly fragmented data fueling the trend of larger file sizes as well as increasing the writing times of the vector
shaped beam systems commonly used for making advanced photomasks. This paper will introduce an approach of layout modification that simplifies the data, considering both fracturing and mask writing constraints, in order to make it more suitable for these processes. The trade-offs between these simplifications and OPC accuracy will be investigated. A data processing methodology that preserves the OPC accuracy and the modifications all the way to mask manufacturing will also be described. This study focuses on 65nm and 45nm designs.
In order to fully exploit the design knowledge during the operation of mask manufacturing equipment, as well as to
enable the efficient feedback of manufacturing information upstream into the design chain, close communication links
between the data processing domain and the machine are necessary.
With shrinking design rules and the modeling technology required to drive simulations and corrections, the amount and variety of measurements is steadily growing. This requires a flexible and automated setup of parameters and location information and their communication with the machine.
The paper will describe a programming interface based on the Tcl/Tk language that contains a set of frequently recurring functions for data extraction and search, site characterization, site filtering, and coordinate transfer. It
enables the free programming of the links, adapting to the flow and the machine needs. The interface lowers the effort
to connect to new tools with specific measurement capabilities, and it reduces the setup and measurement time. The
interface is capable of handling all common mask writer formats and their jobdecks, as well as OASIS and GDSII data.
The application of this interface is demonstrated for the Carl Zeiss AIMS™ system.
Optical proximity correction (OPC) is widely used in wafer lithography to produce a printed image that best matches the
design intent while optimizing CD control. OPC software applies corrections to the mask pattern data, but in general it
does not compensate for the mask writer and mask process characteristics. The Sigma7500-II deep-UV laser mask writer
projects the image of a programmable spatial light modulator (SLM) using partially coherent optics similar to wafer
steppers, and the optical proximity effects of the mask writer are in principle correctable with established OPC methods.
To enhance mask patterning, an embedded OPC function, LinearityEqualizer™, has been developed for the Sigma7500-II that is transparent to the user and does not degrade mask throughput. It employs a Calibre™ rule-based OPC engine from Mentor Graphics, selected for the computational speed necessary for mask run-time execution. A multi-node cluster computer applies optimized table-based CD corrections to polygonized pattern data that is then fractured into an internal writer format for subsequent data processing. This embedded proximity correction flattens the linearity behavior for all linewidths and pitches, which is intended to improve the CD uniformity on production photomasks. Printing results show that the CD linearity error is reduced to below 5 nm for linewidths down to 200 nm, both for clear and
dark and for isolated and dense features, and that sub-resolution assist features (SRAF) are reliably printed down to 120
nm. This reduction of proximity effects for main mask features and the extension of the practical resolution for SRAFs
expands the application space of DUV laser mask writing.
Optical proximity correction (OPC) is widely used in wafer lithography to produce a printed image that best matches the
design intent while optimizing CD control. OPC software applies corrections to the mask pattern data, but in general it
does not directly compensate for the mask writer and mask process characteristics. The Sigma7500 deep-ultraviolet
(DUV) mask writer projects the image of a programmable spatial light modulator (SLM) onto the mask using partially
coherent optics similar to wafer steppers, and the residual optical proximity effects of the mask writer are in principle
correctable with established OPC methods.
To enhance mask patterning, an embedded OPC function called LinearityEqualizer™ has been developed for the Sigma7500 that is transparent to the user and does not degrade mask throughput. It employs the Mentor Graphics
Calibre OPC engine, selected for the computational speed necessary for mask run-time execution. A multi-node cluster
computer applies optimized table-based CD corrections to polygonized pattern data, which is then refractured into a
standard writer format for subsequent data processing. This short-range proximity correction works in conjunction with
ProcessEqualizer™, a previously developed print-time function that reduces long-range process-related CD errors. OPC
flattens the linearity behavior for all linewidths and pitches, which should improve the total CD uniformity on
production photomasks. Along with better resolution of assist features, this further extends the application space of DUV
mask writing. Testing shows up to a 4x reduction in the range of systematic CD deviations for a broad array of feature
sizes and pitches, and dark assist features are reliably printed down to 120 nm at mask scale.
Data preparation for photomask manufacturing is characterized by computational complexity that grows faster than computer processing capability evolves. Parallel processing generally addresses this problem and is an accepted
mechanism for preparing mask data. One judges a parallel software implementation by total time, stability and
predictability of computation. We apply several fundamental techniques to dramatically improve these metrics for a
parallel, distributed MDP system. This permits the rapid, predictable computation of the largest mask layouts on
conventional computing clusters.
The continuous drive of the semiconductor industry towards smaller feature sizes requires mask manufacturers to achieve ever tighter tolerances for the most critical dimensions on the mask. CD uniformity requires particularly tight control. Equipment manufacturers and process engineers target their development to support these requirements. But as numerous publications indicate, more sophisticated data correction methods are still employed to compensate for shortcomings in equipment and process or to account for the boundary conditions in some layouts that contribute to process deviations. Among the corrected effects are proximity and linearity effects, fogging and etch effects, and pattern fidelity. Different designs vary by pattern size distribution as well as by pattern density distribution. As the implementation of corrections for optical proximity effects in wafer lithography has shown, breaking up the original polygons in the design layout for selective and environment-aware correction yields increased data volumes and can have an impact on the quality of the mask writing data.
The paper investigates the effect of various correction algorithms specifically deployed for mask process effects on top of wafer process related corrections. The impact of MPC flows such as rule-based linearity and proximity correction and density-based long range effect correction on the metrics for data preparation and mask making is analyzed. Experimental data on file size, shot count and data quality indicators including small figure counts are presented for different correction approaches and a variety of correction parameters.
The diversification of mask making equipment in modern mask manufacturing has led to a large variety of different mask writing and inspection formats. Dispositioning the equipment and managing the data flow has turned into a challenging task. The data volumes of individual files used in the manufacture of modern integrated circuits have become unmanageable using established data format specifications. Several trends explain this: size, content and complexity of the designs are growing; the application of RET increases the vertex counts; complex data preparation flows post tape-out result in a large number of intermediate representations of the data. In addition, assembly steps are introduced prior to mask making for leveling critical parameters. Despite the continuous effort to improve the performance of the individual tools that handle the data, it has become apparent that enhancements to the entire flow are necessary to gain efficiency. One concept suggested is the unification of the mask data representation: establishing a common format that can be accepted by all tools. This facilitates a streamlining of data prep flows to eliminate processing overhead and repeated execution of similar functions. OASIS, the new stream format developed under the sponsorship of SEMI, has the necessary features to fulfill the role of a common format in mask manufacturing. The paper describes the implementation of OASIS as a common intermediate format in the mask data preparation flow as well as its usage with additional restrictions as a common Variable-Shaped-Beam mask writer format. The benefits are illustrated with experimental results. Different implementation scenarios are discussed.
Illumination optimization has always been an important part of the process characterization and setup for new technology nodes. As we move to the 130nm node and beyond, this phase becomes even more critical due to the limited amount of available process window and the application of advanced model based optical proximity corrections (OPC). Illumination optimization has some obvious benefits in that it maximizes process latitude and therefore makes a process more robust to dose and focus variations that naturally occur during the manufacturing process. Mitigating the effect of process excursions results in fewer reworks, faster cycle times and ultimately higher yield. Although these are the typical benefits associated with illumination optimization, there are also other potential benefits from an OPC modeling and mask data preparation (MDP) perspective as well. This paper will look into the not-so-obvious effects illumination optimization has on OPC and MDP. A fundamental process model built with suboptimal optical settings is compared against a model based on the optimal optical conditions. The optimal optical conditions will be determined based on simulations of the process window for several structures in a design using a metric of maximum common depth of focus (DOF) for a given minimum exposure latitude (EL). The amount of OPC correction will be quantified for both models and a comparison of OPC aggressiveness will be made. OPC runtimes will also be compared, as well as output file size, amount of fragmentation, and the shot count required in the mask making process. In conclusion, a summary is provided highlighting where OPC and MDP can benefit from proper illumination optimization.
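The selection metric described above can be stated compactly (our notation): over candidate illumination settings $\theta$, choose

$$\theta^{*} \;=\; \arg\max_{\theta}\; \mathrm{DOF}_{\mathrm{common}}(\theta) \quad \text{subject to} \quad \mathrm{EL}_i(\theta) \ge \mathrm{EL}_{\min}\ \ \forall i,$$

where the common depth of focus is taken over the overlapping process windows of the evaluated structures $i$.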
The drive of the semiconductor industry towards smaller and smaller feature sizes requires more sophisticated correction methods to guarantee the final tolerances for the etched features in both wafer manufacturing and mask making. The wavelength gap in lithography and process effects as well as dependencies on the design content have led to the tremendous variety of resolution enhancement techniques and process correction approaches that are currently applied to a design on its path to manufacturing. As the 65nm node becomes production ready and the 45nm node shifts into the focus of development, effects like flare in wafer exposure, fogging in e-beam mask exposure and others that could previously be ignored are becoming significant, so that their correction prior to manufacturing is required. That means additional correction steps are necessary to complete the data preparation. These put a larger burden on the data processing path and raise concerns over data volume and processing time limitations. Hierarchical processing methods have proven very effective in the past to keep data volumes and processing time in control.
The paper explores the design trends and the potential of hierarchical processing under the new circumstances. Extended data flows with a variety of correction steps are investigated. Experimental results that demonstrate the benefit of hierarchical methods in conjunction with parallel processing methods like multithreading and distributed processing are provided. The benefit of introducing more effective data formats like OASIS in these flows will be illustrated.
The era of week-long turn-around times (TAT) and half-terabyte databases is at hand as seen by the initial 90 nm production nodes. A quadrupling of TAT and database volumes for the subsequent nodes is considered a conservative estimate of the expected growth by most mask data preparation (MDP) groups, so how will fabs and mask manufacturers address this data explosion with a minimal impact to cost? The solution is a multi-tiered approach of hardware and software. By shifting from costly Unix servers to cheaper Linux clusters, MDP departments can add hundreds to thousands of CPUs at a fraction of the cost. This hardware change will require the corresponding shift from multithreaded (MT) to distributed-processing tools or even a heterogeneous configuration of both. Can the EDA market develop the distributed-processing tools to support the era of data explosion? This paper will review the progression and performance (run time and scalability) of the distributed-processing MDP tools (DRC, OPC, fracture) along with the impact on hierarchy preservation. It will consider the advantages of heterogeneous processing over homogeneous. In addition, it will provide insight into potential non-scalable overhead components that could eventually exist in a distributed configuration. Lastly, it will demonstrate the cost of ownership aspect of the Unix and Linux platforms with respect to targeting TAT.
Mask making yield is seriously affected by non-repairable mask defects. Up to now, there has been only one size specification for critical defects, which has to be applied to any defect found. Recently, some mask inspection tools have begun to offer the capability to inspect different features on one mask with different sensitivity. Boolean operations can be used to segregate mask features into more and less critical. In this paper we show that the MEEF (Mask Error Enhancement Factor), which captures from the mask-to-wafer pattern transfer how strongly mask errors actually print, is an objective and relatively easily determined parameter for assessing the printability of mask defects. While performing OPC, a model-based OPC tool is aware of the MEEF and can also provide the additional information handling needed to supply the mask maker with a set of data layers of different defect printability for one mask layer.
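For reference, the MEEF used here follows the standard definition

$$\mathrm{MEEF} \;=\; \frac{\Delta CD_{\mathrm{wafer}}}{\Delta CD_{\mathrm{mask}}/M},$$

with $M$ the exposure tool reduction ratio (typically 4). At $M = 4$, a 4 nm mask CD error prints as a 1 nm wafer error where MEEF = 1 but as a 3 nm error where MEEF = 3, which is why high-MEEF regions warrant tighter inspection sensitivity than low-MEEF regions.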
The data volumes of individual files used in the manufacture of modern integrated circuits have become unmanageable using current data format specifications. A number of factors contribute to the problem: size, content and complexity of the designs are growing; the application of RET increases the vertex counts; complex data preparation flows post tape-out result in a large number of intermediate representations of the data and assembly steps are introduced for leveling critical parameters. Based on the choices for the mask making equipment the final result of the flow - the mask writer data - varies. While there is a continuous effort to improve the individual performance of the tools that handle the data, it has become apparent that enhancements to the entire flow are necessary to gain efficiency. Two ways are explored in the present study - the elimination of processing overhead and repeated execution of similar functions, and the simplification of the data flow by reducing the number of formats involved. OASIS, the new stream format developed under the sponsorship of SEMI, has the necessary features to fulfill this role. The paper will describe the concept of OASIS as a common intermediate format in the mask data preparation flow and illustrate the benefits with experimental results. A concept for a common mask writer format based on OASIS will be proposed. It considers format dependencies of the mask writing performance for different types of mask writing equipment. Different implementation scenarios are discussed.
An agile mask data preparation (MDP) approach is proposed to cut the re-fracture cycle time incurred by mask writer dispatching policy changes. Shorter re-fracture cycle time increases the flexibility of mask writer dispatching; as a result, mask writer capacity can be utilized optimally. Preliminary results demonstrate promising benefits in MDP cycle time reduction and writer dispatching flexibility improvement. The agile MDP can save up to 40% of re-fracture cycle time. OASIS (Open Artwork System Interchange Standard) was proposed to address the GDSII file size explosion problem. However, OASIS has yet to gain wide acceptance in the mask industry. The authors envision OASIS adoption by the mask industry as a three-phase process and identify key issues of each phase from the mask manufacturer's perspective. As a long-term MDP flow reengineering project, an agile MDP and writer dispatching approach based on OASIS is proposed. The paper describes the results of an extensive evaluation of OASIS performance compared to that of GDSII, covering both original GDSII and post-OPC GDSII files. For eighty percent of the original GDSII files, the file size is more than ten times that of the OASIS counterpart.
The data volumes of individual files used in the manufacture of modern integrated circuits have become unmanageable using existing data format specifications. The ITRS roadmap indicates that single layer MEBES files in 2004 exceed the 200 GB threshold, worst case. OASIS, the new stream format developed under the sponsorship of SEMI, was approved in the industry-wide voting in June 2003. The new format, which on average reduces file size by an order of magnitude, enables streamlined data flows and provides increased efficiency in data exchange. The work to implement the new format into software tools is in progress. This paper gives an overview of the new format, reports results on data volume reduction, and summarizes the status and benefits the new format can deliver. A data flow relying on OASIS as the input and transfer format is discussed.
As the industry is targeting the sub-100nm nodes the pressure on the data path and tapeout flow is growing. Design complexity and increased deployment of resolution enhancement techniques (RET) result in rapidly growing file sizes, which impacts the data preparation time, especially for variable-shaped beam mask writing machines. Properties of the incoming layout - hierarchy, grid and aggressiveness of the RET - are the main factors. Tuning the OPC without compromising the lithographic performance while achieving the shortest processing time is the target. The study investigates the impact of OPC setup with a focus on using the functional intent of the design to influence the aggressiveness of the correction. The correlation to the performance parameters for the mask data preparation and mask writing, like run-time, file size, shot count and small figures, will be reported.
The ever-increasing complexity of integrated circuits and their enabling process technology has accelerated the increase in data volume of post-RET data which is input to the photomask manufacturing industry.
OASIS - the new stream format that has been developed by a working group under the sponsorship of the SEMI Data Path Task Force - enables the representation of IC layout data in a much more compact form than GDSII and facilitates the incorporation of hierarchical data into the mask-making infrastructure. OASIS achieves on average a >10x reduction in file size compared to GDSII files and structures the data in a way which allows a straightforward translation from a hierarchical format to the required flat mask perspective. Owing to the efficiency in representing the data, OASIS files are smaller than commonly used flat exchange formats like MEBES, thus enabling an efficient hierarchical data flow from both the processing and the file handling perspective.
The implementation of OASIS into post-tapeout data flows will be discussed and experimental results on OASIS-based data preparation flows will be shown.
The continuous integration trend in design and broad deployment of resolution enhancement techniques (RET) have a tremendous impact on circuit file size and pattern complexity. Increasing design cycle time has drawn attention to the data manipulation steps that follow the physical layout of the design. The contributions to the total turn-around time for a design are twofold: the time to get the data ready for the hand-off to the mask writer is growing, but also the time it takes to write the mask is heavily influenced by the size and complexity of the data. In order to reduce the time that is required for the application of RET and the export of the data to mask writer formats, massively parallel processing approaches have been described. This paper presents such computing algorithms for the hierarchical implementation of RET and mask data preparation (MDP). We focus on the parallel and flexible deployment of a new hybrid multithreaded and distributed processing scheme, called MTFlex, in homogeneous and heterogeneous computer networks. We describe the new methodology and discuss corresponding hardware and software configurations. The application of this MTFlex computing scheme to different tasks in post-tapeout data preparation is shown in examples.
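Conceptually (a sketch of the hybrid idea only; MTFlex itself is proprietary and none of its API is shown here), such a scheme nests shared-memory multithreading inside distributed worker processes:

    # Sketch of a hybrid multithreaded + distributed scheme: a process pool
    # spans cores/machines, and each worker multithreads over its cell batch.
    # All names are illustrative; this is not the MTFlex implementation.
    from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

    def correct_cell(cell):
        return cell + "+OPC"                  # placeholder for per-cell RET work

    def worker(cell_batch):
        with ThreadPoolExecutor(max_workers=4) as tp:   # shared-memory level
            return list(tp.map(correct_cell, cell_batch))

    if __name__ == "__main__":
        batches = [["A", "B"], ["C", "D"], ["E", "F"]]  # hierarchy partitions
        with ProcessPoolExecutor(max_workers=3) as pp:  # distributed level
            results = [r for batch in pp.map(worker, batches) for r in batch]
        print(results)

The appeal of the hybrid layout in heterogeneous networks is that the thread count can track the cores of each machine while the process count tracks the machines.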
As the industry is targeting the sub-100nm nodes, the pressure on the data path and tapeout flow is growing. Design complexity and increased deployment of resolution enhancement techniques (RET) result in rapidly growing file sizes, which turns what used to be the relatively simple task of mask data preparation into a real bottleneck. Previous work indicated that the properties of the incoming layout - hierarchy, grid, and aggressiveness of the RET solution - have a major impact on the performance of fracturing and mask writing. The tuning of OPC can reduce its impact on subsequent fracturing. This study investigates the impact of OPC setup - various fragmentation parameters like fragment length, density and interaction radius, and various line-end correction schemes - on the performance parameters for the mask data preparation and mask writing. Parameters like run-time, file size, shot count and small figures will be reported in dependence on the input layout preparation history. Approaches to optimize the integrated RET and MDP flow while maintaining manufacturability and the required CD control are introduced and evaluated.
Mask manufacturing for the 100 and 65nm nodes is accompanied by an increasing deployment of VSB mask writing machines. The continuous integration trend in design and broad deployment of RET have a tremendous impact on file size and pattern complexity. The impact on the total turn-around time for a design is twofold: the time to get the data ready for the hand-off to the mask writer is growing, but also the time it actually takes to write the mask is heavily influenced by the size and complexity of the data. Different parameters are measures of how the flow and the particular tooling impact both portions. The efficiency of the data conversion flow conducted by a software tool can be measured by the output file size, the scalability of the computing during parallel processing on multiple processors and the total CPU time for the transformation. The mask writing of a particular data set is affected by the file size and the shot count. The latter is the total number of shots required to expose all patterns on the mask. The shot count can be estimated based on the figure count by type and their dimensions. The results of the fracturing have an impact on the mask quality - in particular the grid size and the number and locations of small figures.
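A toy version of such a figure-count-based shot estimate follows (illustrative assumptions throughout: the 2000 nm maximum shot edge and the one-shot trapezoid rule are not writer specifications):

    # Illustrative shot-count estimate from figure counts by type and size.
    import math

    MAX_SHOT = 2000.0   # assumed maximum VSB shot edge length, in nm

    def shots_for_rect(w, h, max_shot=MAX_SHOT):
        """A w x h rectangle needs ceil(w/max) * ceil(h/max) shots."""
        return math.ceil(w / max_shot) * math.ceil(h / max_shot)

    def estimate_shots(figures):
        """figures: iterable of (kind, w, h); trapezoids counted as one shot."""
        total = 0
        for kind, w, h in figures:
            total += shots_for_rect(w, h) if kind == "rect" else 1
        return total

    print(estimate_shots([("rect", 5000, 120), ("rect", 800, 800), ("trap", 300, 200)]))
    # 3 + 1 + 1 = 5 shots

Such an estimate makes the dependence on figure dimensions explicit: the long 5000 nm rectangle alone costs three shots, which is why highly fragmented data drives writing time.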
The data volumes of individual files used in the manufacture of modern integrated circuits have become unmanageable using existing data format specifications. The ITRS roadmap indicates that single layer MEBES files in 2002 reached the 50 GB range, worst case. Under the sponsorship of SEMI, a working group was formed to create a new format for use in describing integrated circuit layouts in a more efficient and extendible manner. This paper is a report on the status and potential benefits the new format can deliver.
As the industry enters the development of the 65nm node the pressure on the data path and tapeout flow is growing. Design complexity and increased deployment of RET result in rapidly growing file sizes, which turned the commodity of mask data preparation into a real bottleneck. Mask manufacturing starting with the 130nm nodes is accompanied by an increasing deployment of variable shaped beam (VSB) mask writing machines. This transition requires the adaptation of the established data preparation path to these circumstances. Historically, data has been presented mostly in MEBES or similar intermediate formats to the mask houses. Reformatting these data is a redundant operation, which in addition is not very efficient given the constraints of the intermediate formats. An alternate data preparation flow accommodating the larger files and re-gaining flexibility for TAT and throughput management downstream is suggested. This flow utilizes the hierarchical GDS format as the exchange format in the mask data preparation. The introduction of a hierarchical exchange format enables the transfer of a number of necessary data preparation steps into the hierarchical domain. The paper illustrates the benefit of hierarchical processing based on GDS files with experimental data on file size reduction and TAT improvement for direct format conversions vs. re-fracturing as well as other processing steps. In contrast to raster scan mask making equipment, in a variable shaped beam mask writing machine the writing time and the ability to meet tight mask specifications are affected by data preparation. Most critical are the control of the total shot count, file size and the efficient suppression of small figures. The paper will discuss these performance parameters and illustrate the desired practices.
Progressing integration and system-on-chip approaches increase the complexity of advanced designs. Data preparation, mask and wafer manufacturing have to cope with these designs while achieving high throughput and tight specifications. One of the biggest variables in a production mask processing flow is the actual design being produced. Layout variability can invalidate process settings by introducing conditions outside of the range the process is calibrated for. Characterization of how parameters such as density distributions, CD distributions, minimum, and maximum CD impact yield will no doubt remain proprietary. However, the ability to characterize a layout by these geometric parameters as well as lithographic parameters is a common need. Gathering this knowledge prior to processing can contribute significantly to the efficiency of applying process recipes once the correlation has been made. The capabilities of a statistical layout analysis are demonstrated and practical applications in mask data preparation and manufacturing are discussed.
As design rules shrink aggressively while the wavelength reduction in the exposure equipment cannot keep up, extensive usage of resolution enhancement techniques (RET) has complicated the generation and handling of mask writing data. Consequently, file size growth and computing times for mask data preparation rise beyond feasibility. In order to address these issues, an integrated flow has been developed. It starts out with the gds-file delivered by the backend of design and combines optical proximity correction, design rule and mask process rule verification, and all other necessary steps for mask data preparation into a single flow. The benefits of this strategy are time savings in data processing and handling, the elimination of intermediate files, and the elimination of data format interface issues. Since the new flow takes full advantage of the design hierarchy, file sizes shrink considerably and the whole data preparation infrastructure can be simplified. The paper will describe the transition to the new flow and quantify the benefits.
KEYWORDS: Photomasks, Manufacturing, Data processing, Resolution enhancement technologies, Lithography, Process control, Data conversion, Data storage, Optical proximity correction, Data communications
As the industry enters the development of the 65nm node the pressure on the data path and tapeout flow is growing. Design complexity and increased deployment of resolution enhancement techniques (RET) result in rapidly growing file sizes, which turns what used to be the relatively simple task of mask data preparation into a real bottleneck. This discussion introduces the data preparation scheme in the mask house and analyzes its evolution. Mask data preparation (MDP) has evolved from a flow that only needed to support a single mask lithography tool data format (MEBES) with minimal data alteration steps to one which requires the support of many mask lithography tool data formats and at the same time requires significant data alteration to support the increased precision necessary for today's advanced masks. However, the MDP flow developed around the MEBES format and its derivatives still exists. The design community has migrated towards the use of hierarchical data formats and processes to control file size and processing time. MDP, which from a file size and process complexity point of view is beginning to look more and more like the advanced RET operations performed on the data prior to mask manufacturing, is still standardized on a flat data format that is poorly optimized for a growing number of mask lithography tools. Based on examples it will be shown how this complicates the data handling further.
An alternate data preparation flow accommodating the larger files and re-gaining flexibility for turnaround time (TAT) and throughput management is suggested. This flow utilizes the hierarchical GDS-II format as the exchange format for mask data preparation. It complements the existing flow for the most complex designs. The introduction of a hierarchical exchange format enables the transfer of a number of necessary data preparation steps into the hierarchical domain. Data processing strategies are discussed. The paper illustrates the benefit of hierarchical processing based on GDS-II files with experimental data on file size reduction and TAT improvement for direct format conversions vs. re-fracturing as well as other processing steps. The implications for the established data preparation approaches and potential alternatives for the communication between the mask manufacturer and the customer will be discussed. The potential for further enhancements by converting to a hierarchical format that has a more efficient data representation than the commonly used GDS-II format will be discussed and illustrated.
Critical features of a product layout, like isolated structures and complicated two-dimensional situations including line ends, often have a smaller process window compared to regular highly nested features. It has been observed that the application of optical proximity corrections (OPC) can create yet more aggressive layout situations. Although corrected layouts meet the target contour under optimal exposure conditions, the process window of these structures under non-optimal conditions is thereby potentially reduced. This increases the risk of shorts and opens in the resist images of the designs under non-optimal exposure conditions. The requirement from a lithographer's point of view is to conduct a correction that considers the process window aspect besides the desired target contour. The present study investigates a concept of using the over-dose and under-dose responses of the simulated image of an exposed structure to optimize the correction value. The simulations describing the lithographic imaging process are based on an enhanced variable threshold model (VTRE). The placement error of the simulated edge of a structure is usually corrected for the nominal dose and focus settings. In the new concept the effective edge placement error is defined as the average of the edge placement error for the over-dose condition and the edge placement error for the under-dose condition. If a specific layout has a very non-symmetric response to over-/under-exposure for the evaluated condition, it is prone to a certain failure mechanism (open or short). Hence calculating the average of the edge placement errors will shift the effective correction towards a layout with a larger process window. The paper evaluates this concept for 100 nm ground rules and 193 nm lithography conditions. Examples of corrected layouts are presented together with experimental data. The limitations of the approach are discussed.
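In the notation of the text, the optimization replaces the nominal edge placement error with

$$EPE_{\mathrm{eff}} \;=\; \tfrac{1}{2}\left(EPE_{+\Delta\mathrm{dose}} + EPE_{-\Delta\mathrm{dose}}\right),$$

so a layout whose over-dose and under-dose responses are asymmetric is pulled toward the side with more margin, trading a small placement error at nominal conditions for a larger process window.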
In this work the effects of sub-resolution assist features (SRAF) on the process window and the target CD control are investigated for the 100nm node gate level. Using 2-dimensional lithographic simulations, the process windows of critical isolated and dense structures are determined and the overlapping process window with a second, dense feature is computed. This is achieved using a novel scheme of simulations over a wide range of line widths for a large pitch range. This approach allows us to explore systematically and simultaneously the impact of line width bias, number of placed assists, spacing to the main feature, spacing between the assist features, and assist feature width over a large parameter space. The overlapping process window is optimized following two different strategies: the first strategy places the assists considering only the space between two features, independent of their width, while reaching the target values for the different feature widths using line biases; a sketch of this rule-driven placement follows below. The second strategy under investigation defines the assist feature parameters based on both the line width of the target feature and the space from the target feature to the neighboring feature. For both approaches the implications for target CD control are discussed.
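Such a space-keyed rule can be pictured as a simple lookup table (a hedged sketch; the breakpoints, assist counts and widths below are invented for illustration and are not the paper's values):

    # Sketch of strategy 1: assist placement keyed only on the space between
    # features, independent of line width. All numbers are illustrative.
    SRAF_RULES = [      # (min_space_nm, n_assists, assist_width_nm)
        (  0, 0,  0),   # too tight: no room for assists
        (300, 1, 40),
        (600, 2, 40),
    ]

    def assists_for_space(space_nm):
        """Return (count, width) of assists for a given space to the neighbor."""
        count, width = 0, 0
        for min_space, n, w in SRAF_RULES:
            if space_nm >= min_space:
                count, width = n, w
        return count, width

    print(assists_for_space(450))   # -> (1, 40)

Strategy 2 would extend the key to (line width, space) pairs, at the cost of a larger table to characterize.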
Resolution enhancement techniques and higher NA exposure are employed to meet the lithography requirements imposed by aggressive shrinks to chip feature sizes. For certain critical levels, like storage and isolation patterning of DRAM devices, the capability to exactly reproduce the mask layout is limited. Severe corner rounding and line image shortening can occur. Such phenomena can be significant contributors to side effects like current leakage, inadequate retention time, stress, and perhaps yield loss. Our development work has shown that the use of serif and hammerhead structures can improve printing resolution. Moreover, better process latitude and CD control can be achieved. This paper gives an overview of these innovative techniques. It includes the consideration of different design layouts based on simulations, as well as mask making limitations, e.g. mask inspection capability. The benefits of these techniques are discussed and illustrated with detailed lithographic performance data and SEM pictures.
Bonding of silicon wafers is a method that is widely used in microsystem technology. To quantify the quality of the bond, only IR interference techniques, which are restricted to vertical void sizes ≤ 250 nm, have been applied up to now. The new idea is to use acoustic microscopy for the examination of these bonds. In order to evaluate the possibilities and limitations of such a method, we worked out a preparation technique to investigate bonded silicon wafers with defined etched structures. Etched wafers were bonded to non-etched wafers, chips of 10 × 15 mm² size were sawed out of the wafer pair, followed by grinding and polishing these structures under small angles of 34 arc minutes to 3 degrees. With working frequencies of 200 MHz and 400 MHz we obtained good results with structures that have a height of 50 nm and a horizontal size of some micrometers. It was possible to image structures that were covered with a silicon layer 70 micrometers thick. Additionally, wafer pairs with metallic interlayers were investigated. The results are compared with images taken with an IR transmission optical microscope.