Overlay control continues to be a critical aspect of successful semiconductor lithography processing, with overlay control systems becoming increasingly elaborate to meet the requirements of advanced semiconductor nodes. Sampling optimization is especially important, encompassing the number of overlay measurements to perform on each wafer, the number of wafers to measure per lot, and where exactly to measure on each wafer. The conventional sampling optimization methodology is to collect dense data for a short period and use this data to optimize the locations to measure on the wafer. In recent years, rule-based sampling was introduced to relax this data requirement and improve the time to result. However, in both scenarios, a single sample plan is generated in offline optimization and then used in high volume manufacturing (HVM) without change, hence the name “static sampling”. In this paper, we introduce a “dynamic sampling” approach, in which multiple rule-based sample plans are generated that complement each other by measuring different locations on the wafer, while meeting spatial and population balancing criteria. These sample plans can then be used in an alternating manner on a per-wafer basis (wafer-by-wafer dynamic sampling) or a per-lot basis (lot-by-lot dynamic sampling) in HVM. We first demonstrate the risks and the inherent trade-offs associated with static sampling by using overlay budget breakdown and best/worst case advanced process control (APC) simulations. We then characterize the overlay control improvement potential of dynamic sampling schemes through APC simulations using multiple metrics: on-product overlay, rework overlay, and monitoring accuracy. Finally, we perform an on-product overlay versus throughput cost-function analysis and determine which dynamic sampling scheme is most useful under which throughput conditions.
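The alternating use of complementary sample plans described above can be illustrated with a minimal sketch. The plan names, site coordinates, and the simple round-robin selector below are all hypothetical illustrations, not the paper's actual implementation:

```python
from itertools import cycle

# Hypothetical illustration of wafer-by-wafer dynamic sampling:
# two complementary rule-based sample plans, each a list of (x, y)
# measurement sites in mm from wafer center, are applied in an
# alternating manner as wafers arrive.
PLAN_A = [(0, 0), (100, 0), (-100, 0), (0, 100), (0, -100)]
PLAN_B = [(70, 70), (-70, 70), (70, -70), (-70, -70), (0, 0)]

def assign_plans(num_wafers, plans):
    """Return the sample plan used for each wafer in a lot, round-robin."""
    selector = cycle(plans)
    return [next(selector) for _ in range(num_wafers)]

lot_plans = assign_plans(4, [PLAN_A, PLAN_B])
# Wafers 0 and 2 measure PLAN_A sites; wafers 1 and 3 measure PLAN_B
# sites, so consecutive wafers together cover the union of both plans.
```

Lot-by-lot dynamic sampling would apply the same selector per lot instead of per wafer; the spatial and population balancing criteria mentioned in the abstract would constrain how the plans themselves are generated.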
In advanced technology nodes, the focus window becomes tighter to achieve smaller CD features while maintaining or improving product yields. Over the past decades, focus spot monitoring (FSM) has been a critical topic in high-volume manufacturing, not only for minimizing the contamination impact on focus performance but also for scanner productivity concerns if wafer table cleaning needs to be executed. Although the exposure tools offer a dedicated FSM option combined with automatic wafer table cleaning, users must carefully design the threshold and monitored area for different products and layers to prevent false positive alarms that impact scanner productivity. In some cases, a small focus spot threshold can cause more false positive alarms at the wafer edge area due to the edge roll-off effect on the wafer table and steep wafer topography, which makes it difficult to detect small focus spots caused by contamination. In our study, we compare the classic FSM provided by the exposure tools to a newly developed automated FSM mechanism. Several mathematical steps and approaches are implemented in our new type of FSM to reduce false positive focus spot alarms. For comparison, we evaluated the performance of the classic and new FSM methods on different layers exhibiting special topography, edge roll-off effects, or strong intra-field signatures. Finally, a new robust and user-friendly FSM method has been demonstrated, proving that even with a tight threshold, false positive alarms, especially around the wafer edge area, can be fully eliminated.
In leading edge lithography, overlay is usually controlled by feedback based on measurements of overlay targets located between the dies. These measurements are done directly after developing the wafer. However, it is well known that the measurement on the overlay marks does not always represent the actual device overlay correctly. This can be due to several factors, including mask writing errors, target-to-device differences, and non-litho processing effects, for instance from the etch process.1
To verify these differences, overlay measurements are regularly done after the final etch process. These post-etch measurements can be performed using the same overlay targets as the post-litho overlay measurement, or using other targets. Alternatively, they can be in-device measurements using electron beam measurement tools (for instance, CD-SEM). The difference between the standard post-litho measurement and the post-etch measurement is then calculated; the result is known as the litho-etch overlay bias.
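The bias calculation described above is a per-site difference. The following is a minimal sketch under assumed data structures (site identifiers mapped to x/y overlay values in nm); the function name and units are illustrative, not from the paper:

```python
# Hypothetical sketch of the litho-etch overlay bias calculation:
# for each measurement site, the bias is the post-etch overlay minus
# the post-litho overlay, computed separately for x and y.
def litho_etch_bias(post_litho, post_etch):
    """post_litho / post_etch: dicts mapping site id -> (dx, dy) in nm."""
    return {
        site: (post_etch[site][0] - post_litho[site][0],
               post_etch[site][1] - post_litho[site][1])
        for site in post_litho
        if site in post_etch  # keep only sites measured at both steps
    }

post_litho = {"s1": (1.2, -0.5), "s2": (0.3, 0.8)}
post_etch  = {"s1": (2.0, -0.1), "s2": (0.1, 1.0)}
bias = litho_etch_bias(post_litho, post_etch)
# bias["s1"] is approximately (0.8, 0.4)
```

In practice the bias map would typically be decomposed into correctable model terms before being used, for instance in the post-etch R2R feedback discussed next.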
This study focuses on the feasibility of post-etch overlay measurement run-to-run (R2R) feedback instead of post-lithography R2R feedback correction. It is known that the post-litho processes have strong non-linear influences on the in-device overlay signature and, hence, on the final overlay budget. A post-etch based R2R correction is able to mitigate such influences.2
This paper addresses several questions and challenges related to post-etch overlay measurement with respect to R2R feedback control. The behavior of the overlay targets in the scribe line is compared to the overlay behavior of device structures. The influence of different measurement methodologies (optical image-based overlay vs. electron microscope overlay measurement) was evaluated, with standard scribe-line overlay targets also measured by electron microscope. In addition, the influence of the intra-field location of the targets on device-to-target shifts was evaluated.