Advances in chip manufacturing processes continue to increase the number of vertices that Optical Proximity Correction (OPC) and Mask Data Preparation (MDP) must handle. To manage processing time, OPC at sub-nanometer precision now requires tens of thousands of CPU cores, and a slight mishap can force an entire job to be re-run. The increased complexity of the computing environment, along with the length of time a job spends in it, makes a job more susceptible to failures from various sources, while the re-run penalty also grows with design complexity. Checkpointing is a technique that saves the state of a system as it continuously transitions from one state to the next; the purpose of checkpointing a job is to make it possible to restart an interrupted job from the saved state at a later time. With checkpointing, if a running job is terminated before its normal completion, whether due to a hardware failure or a human intervention to free resources for an urgent job, it can resume from the checkpoint closest to the point of termination and continue to completion. Checkpointing applications include 1) a resume flow, where a terminated job is resumed from a point close to the termination, 2) a repair flow, which fixes OPC hotspot areas without reprocessing the entire chip, and 3) cloud usage, where job migration may be necessary. In this paper, we study the runtime and storage impact of checkpointing and demonstrate virtually no runtime impact and a very small increase in filer storage. Furthermore, we show how checkpointing can significantly reduce runtime in OPC hotspot repair applications.
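The save-and-resume mechanism described above can be illustrated with a minimal sketch. This is not the paper's implementation; the file name, tile-based work loop, and pickle serialization are all illustrative assumptions. The key ideas are periodic saves and an atomic rename, so an interruption at any point leaves either the old or the new checkpoint intact:

```python
import os
import pickle

CHECKPOINT = "opc_job.ckpt"  # hypothetical checkpoint file name

def load_checkpoint():
    """Resume from the last saved state, if one exists."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"next_tile": 0, "results": []}  # fresh start

def save_checkpoint(state):
    """Persist state atomically so a crash cannot leave a corrupt file."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)  # atomic rename on POSIX and Windows

def run_job(num_tiles, ckpt_interval=10):
    """Process tiles, checkpointing every `ckpt_interval` tiles.
    If interrupted, a re-run resumes at `next_tile` instead of tile 0."""
    state = load_checkpoint()
    for tile in range(state["next_tile"], num_tiles):
        state["results"].append(tile * tile)  # stand-in for per-tile OPC work
        state["next_tile"] = tile + 1
        if state["next_tile"] % ckpt_interval == 0:
            save_checkpoint(state)
    save_checkpoint(state)
    return state["results"]
```

A job killed mid-run and restarted with the same call repeats at most `ckpt_interval - 1` tiles of work, which is the "resume flow" described above.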
As the computational requirements for post-tapeout (PTO) flows increase at the 7nm and below technology nodes, the scalability of the computational tools must increase in order to reduce the turn-around time (TAT) of the flows. Exploiting design hierarchy has been one proven method of partitioning the data for PTO processing. However, as the data moves through the PTO flow, its effective hierarchy is reduced; this reduction is necessary to achieve the desired accuracy. In addition, the sequential nature of the PTO flow is inherently non-scalable. To address these limitations, we propose a quasi-hierarchical solution that combines multiple levels of parallelism to increase the scalability of the entire PTO flow. In this paper, we describe the system and present experimental results demonstrating the runtime reduction achieved through scalable processing with thousands of computational cores.
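The idea of combining multiple levels of parallelism can be sketched as follows. This is purely illustrative, not the paper's system: the cell/tile structure and the doubling "work" are stand-ins, and a production flow would fan both levels out across machines rather than local threads:

```python
from concurrent.futures import ThreadPoolExecutor

def process_tile(tile):
    # stand-in for fine-grained, per-tile geometry work
    return tile * 2

def process_cell(cell_tiles):
    # inner level: tiles within one hierarchy cell
    # (in a real flow this level would also be distributed)
    return [process_tile(t) for t in cell_tiles]

def run_flow(cells, workers=4):
    # outer level: hierarchy cells processed in parallel
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_cell, cells))
```

The outer level exploits whatever hierarchy survives, while the inner level keeps cores busy even when the effective hierarchy has flattened.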
The Mask Data Correctness Check (MDCC) is a reticle-level, multi-layer DRC-like check evolved from mask rule
check (MRC). The MDCC uses extended job deck (EJB) to achieve mask composition and to perform a detailed check
for positioning and integrity of each component of the reticle. Different design patterns on the mask will be mapped to
different layers. Therefore, users may be able to review the whole reticle and check the interactions between different
designs before the final mask pattern file is available. However, many types of MDCC check results, such as errors from
overlapping patterns, have very large, complex-shaped highlighted areas covering the boundary of the design.
Users have to load the result OASIS file in a layout viewer, overlay it on the original database that was assembled in the
MDCC process, and then search for the details of the check results. We introduce a quick result-reviewing method based on
an HTML report generated by Calibre® RVE. In the report-generation process, we analyze and extract the essential
part of the result OASIS file into a result database (RDB) file using standard verification rule format (SVRF) commands.
Calibre® RVE automatically loads the assembled reticle pattern and generates screenshots of these check results. The entire
process is triggered automatically as soon as the MDCC process finishes. Users simply open the HTML report to
get the information they need: for example, the check summary, captured images of results, and their coordinates.
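The final report-rendering step can be sketched in a few lines. This is a hedged illustration only: the record fields (`check`, `count`, `x`, `y`) are hypothetical, and in the actual flow the results come from a Calibre RDB file and the screenshots from Calibre® RVE, neither of which is modeled here:

```python
import html

def generate_report(check_results):
    """Render an HTML summary table of check results.

    `check_results` is a hypothetical list of dicts such as
    {"check": "OVERLAP", "count": 3, "x": 10, "y": 20};
    the real flow extracts these records from an RDB file."""
    rows = "".join(
        "<tr><td>{}</td><td>{}</td><td>({}, {})</td></tr>".format(
            html.escape(r["check"]), r["count"], r["x"], r["y"])
        for r in check_results
    )
    return ("<html><body><h1>MDCC Check Summary</h1>"
            "<table><tr><th>Check</th><th>Errors</th><th>Location</th></tr>"
            + rows + "</table></body></html>")
```

Because the report is static HTML, reviewers need only a browser, not a layout viewer loaded with the full assembled reticle.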
The mask composition checking flow is an evolution of the traditional mask rule check (MRC). In order to differentiate
the flow from MRC, we call it Mask Data Correctness Check (MDCC). The mask house performs MRC only to identify
process limitations, including those of writing, etching, and metrology. Many potential errors can still occur
when the frame, main circuit, and dummies are combined to form a whole reticle. The MDCC flow combines the design rule
check (DRC) and MRC concepts to adapt to the complex patterns in today’s wafer production technologies. Although
photomask data has unique characteristics, the MRC tool in Calibre® MDP can easily achieve mask composition by using
the Extended MEBES job deck (EJB) format. In EJB format, we can customize the combination of any input layers
in an IC design layout format, such as OASIS. Calibre MDP provides section-based processing for many standard verification
rule format (SVRF) commands that support DRC-like checks on mask data. Integrating DRC-like checking with
EJB for layer composition, we actually perform reticle-level DRC, which is the essence of MDCC. The flow also provides
an early review environment before the photomask pattern files are available. Furthermore, runtime is one of the most
important metrics we consider when incorporating MDCC into our production flow. When MDCC is included
in the tape-out flow, the runtime impact is very limited; Calibre, with its multi-threaded processing and good scalability, is
the key to achieving acceptable runtime. In this paper, we present runtime data from real cases at the 28nm and 14nm technology
nodes, and demonstrate the practicality of deploying MDCC in mass production.
With each new process technology node, chip designs increase in complexity and size, and mask data prep flows require
more compute resources to maintain the desired turnaround time (TAT). In addition to maintaining TAT, mask data
prep centers are trying to lower costs. Securing highly scalable processing for each element of the flow - geometry
processing, resolution enhancement and optical process correction, verification, and fracture - has so far been the focal point
of efforts to lower TAT. Compute utilization for the different flow elements depends on the
operation, the data hierarchy, and the device type. In this paper we introduce a dynamic, utilization-driven
compute-resource control system applied to a large-scale parallel computation environment. The paper explains the
performance challenges in optimizing a mask data prep flow for TAT and cost while designing a compute-resource
control system and its framework. In addition, the paper analyzes the TAT and throughput performance metrics of a production
system and discusses the trade-offs of different parallelization approaches to data processing in interaction with dynamic
resource control. The study focuses on the 65nm and 45nm process nodes.
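The core of a utilization-driven controller can be sketched as a simple proportional reallocator. This is an illustrative stand-in, not the production system described above: the job names, the 0.0-1.0 utilization scale, and the per-job minimum are all assumptions:

```python
def rebalance(jobs, total_cores, min_cores=1):
    """Reassign a fixed core budget proportionally to each job's
    measured utilization.

    `jobs` maps a job name to the utilization (0.0-1.0) of its current
    cores; jobs that keep their cores busy get more, idle jobs shrink
    toward `min_cores`."""
    demand = {j: max(u, 0.0) for j, u in jobs.items()}
    total = sum(demand.values()) or 1.0
    alloc = {j: max(min_cores, int(total_cores * d / total))
             for j, d in demand.items()}
    # trim the largest allocation if rounding overshot the budget
    while sum(alloc.values()) > total_cores:
        busiest = max(alloc, key=alloc.get)
        alloc[busiest] -= 1
    return alloc
```

A real controller would run this kind of decision periodically against live utilization metrics, with damping to avoid thrashing cores between jobs.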
As tolerance requirements for the lithography process continue to shrink, the complexity of the optical proximity
correction is growing. Smaller correction grids, smaller fragment lengths and the introduction of pixel-based simulation
lead to highly fragmented data fueling the trend of larger file sizes as well as increasing the writing times of the vector
shaped beam systems commonly used for making advanced photomasks. This paper will introduce a layout-modification
approach that simplifies the data, considering both fracturing and mask-writing constraints, to make it
more suitable for these processes. The trade-offs between these simplifications and OPC accuracy will be investigated.
A data processing methodology that preserves the OPC accuracy and the modifications all the way to mask
manufacturing will also be described. This study focuses on 65nm and 45nm designs.
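One elementary form of the data simplification discussed above is removing redundant vertices from highly fragmented polygons. The sketch below is a generic illustration under assumed inputs (integer-grid vertex tuples), not the paper's fracture- and writer-aware method; it drops vertices whose neighbors are collinear within a tolerance, which directly reduces fragment count and thus file size and shot count:

```python
def simplify_polygon(points, tol=0):
    """Drop vertices that are collinear (within `tol`) with their two
    neighbors. `points` is a closed polygon given as (x, y) tuples;
    an illustrative stand-in for constraint-aware layout simplification."""
    if len(points) < 3:
        return points
    out = []
    n = len(points)
    for i in range(n):
        px, py = points[i - 1]          # previous vertex (wraps around)
        cx, cy = points[i]              # candidate vertex
        nx, ny = points[(i + 1) % n]    # next vertex (wraps around)
        # twice the signed triangle area; zero means the three are collinear
        cross = (cx - px) * (ny - py) - (cy - py) * (nx - px)
        if abs(cross) > tol:
            out.append((cx, cy))
    return out
```

Raising `tol` trades vertex count against geometric fidelity, which is exactly the simplification-versus-OPC-accuracy trade-off the paper investigates.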