Will stochastics be the ultimate limiter for nanopatterning?
Chris A. Mack
26 March 2019
Abstract
Background: Moore’s Law has to date governed the economics of lithography-driven scaling in semiconductor manufacturing, where lithography tools provide higher resolution and smaller addressable pixels while maintaining high throughput and lower cost per pixel. On the other hand, Tennant’s Law describes how lithographic throughput decreases dramatically as resolution is improved for a wide range of lithographic approaches.
Aim: Why is there a disconnect between the optical lithography that has enabled decades of Moore’s Law behavior and the many other lithographic techniques that seem to follow Tennant’s Law?
Approach: The answer lies with the concept of stochastic-limited lithography. By developing very simple scaling relationships, a physical explanation for Tennant’s Law can be provided. By applying this explanation to optical lithography, its past can be explained and its potential for future success examined.
Results: While optical lithography has not been stochastic-limited in the past (thus allowing it to avoid the fate of Tennant’s Law), in the future it will surely become stochastic-limited.
Conclusions: The answer to the title question “Will stochastics be the ultimate limiter for nanopatterning?” is clearly yes whenever throughput plays an important role in economic viability.

1. Introduction: Moore’s Law versus Tennant’s Law

It’s hard to make things really small. It’s even harder to make things small cheaply. Despite this, the semiconductor industry excels at making things small, and making them cheaply. In fact, a key feature of Moore’s Law over the years has been that making things small results in them being cheaper to make. Consider a 1 Gb DRAM chip, which sells today for under US$0.50. The cost of lithography for making that chip is well under 50% of the total cost of making the chip, and about 25 lithography layers are required to build it. Thus, the cost of printing a billion features (each of which is under 100 nm in width) is under one penny.

To arrive at the point where printing a billion well-controlled nanoscopic features costs less than 1¢ required over 50 years of continuous improvement in semiconductor mass production, and in particular in lithography. It also required extremely high-volume manufacturing, where billions of dollars invested in a fab are spread over billions of chips made. At a finer level of detail, the industry uses lithography tools that cost over US$10M to purchase in order to print a billion features for under a penny. This is possible because of throughput: these $10M to $100M lithography tools can print one lithography layer on a 300 mm diameter wafer containing several hundred chips in about 15 seconds. Throughput is a critical element of cost-effectiveness in semiconductor manufacturing.
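As a rough sanity check, the cost-per-feature arithmetic above can be written out explicitly. The following is a minimal sketch using the round numbers quoted in the text (a US$0.50 chip price, lithography at most half the chip cost, 25 layers, a billion features per layer); all values are illustrative, not measured data.

```python
# Minimal sketch of the cost-per-feature arithmetic described in the text.
# All inputs are illustrative round numbers from the paragraphs above.

chip_price = 0.50          # US$, selling price of a 1 Gb DRAM chip
litho_fraction = 0.50      # lithography is well under half of the chip cost
litho_layers = 25          # approximate number of lithography layers
features_per_layer = 1e9   # ~1 billion printed features per layer

litho_cost_per_layer = chip_price * litho_fraction / litho_layers
cost_per_feature = litho_cost_per_layer / features_per_layer

print(f"lithography cost per layer: ${litho_cost_per_layer:.3f}")  # ~$0.01
print(f"cost per printed feature:   ${cost_per_feature:.1e}")      # ~1e-11
```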

But throughput is not the only key to cost-effectiveness. In fact, resolution is also a critical component of the cost equation, since better resolution enables a chip to occupy less area on a wafer, which enables more chips to fit on a wafer, which lowers the cost per chip given a set cost per wafer. One can define resolution in a semiconductor manufacturing sense as the feature size that allows the fab to make the most money. Designing a chip using features larger than the “resolution” means that fewer chips are placed on a wafer, increasing the cost per chip. But designing a chip using features smaller than the “resolution” creates yield loss that means fewer working chips are made per wafer, also increasing the cost per chip. In fact, this principle was Gordon Moore’s great insight in his 1965 paper explaining why semiconductors were about to revolutionize the electronics industry (see Figure 1) [1].

Figure 1. This figure from Moore’s original 1965 paper shows how increasing transistor density over time lowers the cost per transistor, the reason why Moore’s Law works. At one moment in time, increasing the number of components per integrated circuit lowers the cost per component due to increased component density, but only until further increases in density cause yield loss. The “sweet spot” of minimum cost improves with time due to technology improvements. Figure from Ref. 1.


Throughput and resolution combine to define the lithography cost per working device, though of course many other factors come into play as well. As lithography tools for semiconductor manufacturing have increased in price, from $0.5M per tool in 1980 to $50M per tool in 2015, their throughput has also increased by a factor of 100, so that the cost per unit area printed has stayed roughly the same. During this same time period, the resolution of these tools improved by about a factor of 40. Thus, the cost of printing one square resolution element (a “pixel”) has been reduced by a factor of more than 1,000 over that 35-year period, leading the way for Moore’s Law.
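The factor of more than 1,000 follows directly from these numbers: if cost per unit area stays roughly constant while linear resolution improves 40X, the number of pixels per unit area grows as the square of that gain. A minimal sketch of this arithmetic, using the approximate figures above:

```python
# Sketch of the cost-per-pixel scaling claim: constant cost per printed area
# plus a 40X linear resolution gain implies 40^2 = 1600X more pixels per
# unit area, and thus a ~1600X (more than 1000X) drop in cost per pixel.

resolution_gain = 40                        # linear improvement, 1980 to 2015
pixels_per_area_gain = resolution_gain**2   # pixel count scales as 1/R^2

print(f"cost per pixel reduced by ~{pixels_per_area_gain}X")  # 1600X
```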

But lithography-driven advances in Moore’s Law have slowed. The resolving ability of a state-of-the-art 193 nm immersion lithography tool has remained about constant (at 40 nm half-pitch) for the last 10 years, so that improvements in manufacturing resolution come at the cost of multiple patterning. The switch to extreme ultraviolet (EUV) lithography is far behind schedule, and the high cost of EUV tools (greater than US$100M) and their low throughput mean that one EUV lithography exposure will cost about the same as 3 to 4 immersion lithography exposures.

While the Moore’s Law trends described very briefly above have dominated the lithography landscape for semiconductor manufacturing, another interesting lithography trend was observed by Donald Tennant in the mid-1990s. Now dubbed Tennant’s Law, his survey of a wide range of lithography technologies showed that the areal throughput of a lithography tool was proportional to the resolution of that tool to the fifth power [2]. In other words, even small improvements in resolution are accompanied by large reductions in throughput. According to Tennant’s Law, improving resolution by a factor of 2 should slow down the throughput of lithography by a factor of 32.

Figure 2. Tennant’s Law as originally published in 1999, showing that the areal throughput of lithography is proportional to resolution to the fifth power. Figure from Ref. 2.


Optical lithography for semiconductor manufacturing is in apparent violation of Tennant’s Law, as first pointed out by Tim Brunner [3]. While other lithography technologies improve the throughput of printing a minimum resolution pixel very slowly over time (if at all), optical lithography has been able to significantly improve the pixel throughput of its tools (Figure 3), pushing optical lithography further away from the general trend followed by other lithography approaches. While no doubt a large part of this dramatic throughput improvement comes from the billions of dollars invested by the semiconductor industry in its lithography technologies, I believe there are other factors at work. In this paper I will try to answer these questions: Why does Tennant’s Law show resolution to the power of five, and why do most lithography technologies follow this trend? Why has optical lithography been pushing further away from the Tennant’s Law trend, and will it continue to do so? The answer, we shall see, lies in the stochastic limitations of lithography.

Figure 3. Brunner’s Corollary to Tennant’s Law shows that optical lithography has moved steadily away from the general Tennant’s Law trend. Figure from Ref. 3.


2. Stochastic-limited Lithography as an Explanation for Tennant’s Law

Tennant’s Law states that the areal throughput of a lithography technology ($A_t$, in nm²/s) is related to the resolution of that lithography ($R$, in nm) according to

$$A_t = k_T R^5$$

where $k_T$ is the Tennant’s Law constant, equal to about 4.3 nm⁻³s⁻¹ circa 1995. While $k_T$ is expected to increase over time as technology improves, Tennant’s Law posits that all state-of-the-art lithography technologies follow approximately this resolution/throughput trade-off trend. The first obvious question to ask is, why the power of 5?

First note that in Figure 2 all of the techniques except optical lithography are direct write, where pixels of size $R^2$ are written one at a time. One would expect the writing time of a direct-write technology to be proportional to the number of pixels that must be written. Since the number of pixels per unit area equals $1/R^2$, the pixel throughput of a lithography tool (pixels written per second), according to Tennant’s Law, becomes

$$\text{pixel throughput} = \frac{A_t}{R^2} = k_T R^3$$

Thus, two of the five powers are easily explained. Now Tennant’s Law states that the pixel throughput of a lithography tool is proportional to the resolution cubed. Since $R^3$ is a volume (in fact, a minimum printable volume, sometimes called a voxel), is there an explanation as to why the pixel throughput is proportional to the voxel size?
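Before turning to that explanation, the two scaling relations so far can be made concrete. This is a minimal sketch encoding both equations above; the circa-1995 constant is the approximate value quoted in the text, and the numbers are illustrative only.

```python
# Tennant's Law: A_t = k_T * R^5 (areal throughput, nm^2/s), and the
# corresponding pixel throughput A_t / R^2 = k_T * R^3 (pixels/s).

K_TENNANT = 4.3  # nm^-3 s^-1, approximate circa-1995 value from the text

def areal_throughput(r_nm: float, k_t: float = K_TENNANT) -> float:
    """Areal throughput in nm^2/s for resolution R in nm."""
    return k_t * r_nm**5

def pixel_throughput(r_nm: float, k_t: float = K_TENNANT) -> float:
    """Pixels written per second: A_t / R^2 = k_T * R^3."""
    return areal_throughput(r_nm, k_t) / r_nm**2

# Halving R cuts areal throughput by 2^5 = 32X and pixel throughput by 2^3 = 8X:
print(areal_throughput(100.0) / areal_throughput(50.0))  # 32.0
print(pixel_throughput(100.0) / pixel_throughput(50.0))  # 8.0
```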

The explanation comes from the assumption that a state-of-the-art high-resolution lithography technology is fundamentally stochastic-limited. Think of the voxel as the smallest volume element that can be written by the lithography technology. The goal is then to “write” this voxel independent of whether its neighboring voxels have been written. In other words, lithography can be described as a writing technique where each voxel of material can be reliably written with a 0 or 1. The writing often takes the form of removing the material of the voxel or allowing it to remain (the volume of photoresist is developed away or not, for example), but other physical changes can be used as well.

For a given voxel, writing means causing a certain number of chemical or physical events to occur within that voxel. Let N1 be the minimum number of statistically independent events taking place in the voxel to cause it to be reliably written as a 1 (turned on). For example, a spot of electrons incident on a photoresist material will cause a certain number of chemical changes within a voxel (chain scissioning events, for example, that lead to the dissolution of the voxel in developer). Further, let N0 be the maximum number of these same events taking place in a voxel so that it will reliably not become a 1 (not develop away in developer, for example). Since writing one voxel inherently causes some number of chemical/physical events to occur in neighboring voxels intended to stay unwritten, the ratio N1/N0 helps determine the resolution of the writing technique.

For high-resolution lithography, the size of one voxel will be quite small and the number of events N1 will likewise be small. As a result, the actual number of events taking place in one voxel will vary stochastically even if the mean number of events is fixed at N1. Under reasonable assumptions these counting statistics are often Poisson, so that the variance in the number of events in one voxel equals the mean. Thus, for a fixed writing dose, the actual number of writing-based events taking place in one voxel will vary stochastically. Though the exact statistical distribution is not important to this argument, the assumption is that the relative variation in the number of events taking place in one voxel increases as N1 decreases.

The above qualifier “reliably” is quite important, since N1 must be large enough that each written voxel will turn on some given percentage of the time, while N0 must be small enough to prevent that voxel from turning on some given percentage of the time. Since the actual number of chemical/physical events taking place in a voxel is a stochastic variable, we worry that occasionally a voxel intended to be written as a 1 will actually have too few chemical/physical events to turn it on. We would typically define reliability based on the fraction of voxels that fail to write properly (one per million, for example, or one per billion, etc.). If a lithography technology is stochastic-limited, then N1 is determined by the stochastic variability of the number of chemical/physical events per voxel and the desired reliability.

Consider as an example a writing mechanism where the number of writing events in one voxel follows a Poisson distribution with mean and variance N1. If N1 is greater than about 20, a Poisson distribution is well approximated by a Gaussian distribution. Let’s suppose that a voxel fails to be written whenever the actual number of writing events in that voxel falls below kN1 (k might be 0.5, for example). The needed reliability (say, a failure rate of one part per billion, ppb) dictates that the area of the lower tail of the Gaussian distribution below kN1 must equal this failure rate. A 1 ppb failure rate corresponds to a threshold 6σ below the mean, but in general the reliability requirement will correspond to mσ below the mean. Thus,

$$kN_1 = N_1 - m\sqrt{N_1}$$

or

$$N_1 = \left(\frac{m}{1-k}\right)^2$$

If m = 6 and k = 0.5, the minimum value for N1 is 144.
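This closed-form result is simple to encode. Here is a minimal sketch of the relation above, under the Gaussian approximation assumed in the text; the m and k values are the example values from the paragraph above.

```python
# Minimum mean event count N1 such that a failure threshold at k*N1 sits
# m standard deviations (sigma = sqrt(N1), Poisson statistics) below the
# mean:  k*N1 = N1 - m*sqrt(N1)   =>   N1 = (m / (1 - k))**2

def minimum_events(m_sigma: float, k_threshold: float) -> float:
    """Minimum N1 for reliability margin m_sigma and failure threshold k."""
    return (m_sigma / (1.0 - k_threshold)) ** 2

print(minimum_events(6.0, 0.5))  # 144.0, matching the example in the text
```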

The writing reliability requirement determines N1. This in turn determines the dose needed per pixel to generate an average of N1 events per voxel (also called the sensitivity of the writing approach). Dose is the intensity of the writing information (current in a beam of electrons, for example) times the time spent writing one pixel. Pixel throughput, the reciprocal of the time spent writing one pixel, is thus proportional to the writing intensity and inversely proportional to the required dose (sensitivity). In a stochastic-limited lithography technology, the sensitivity is limited by N1, and thus improvements in sensitivity beyond this limit are not possible. Only increases in writing intensity allow increases in throughput.

Consider how pixel throughput must scale with resolution for a stochastic-limited writing approach. Since the writing is stochastic-limited, N1 is fixed by the counting statistics and, importantly, is independent of the voxel size (and thus the resolution R). If R is decreased while N1 remains fixed, the writing time per voxel must increase in proportion to $1/R^3$ to maintain an average of N1 events in that volume, assuming the writing intensity is held constant. Thus, pixel throughput must scale as $R^3$ (i.e., Tennant’s Law) for any stochastic-limited lithography technique at a given writing intensity. Under the assumption that the highest available writing intensity is already in use (since higher throughput has always been desirable), only improvements in technology can push to higher writing intensity and thus higher throughput. In other words, the Tennant’s Law constant $k_T$ is proportional to the writing intensity.

The same argument presented above, though in less general terms, was first provided by Gregg Gallatin to show the trade-offs between resolution, stochastic-induced line-edge roughness, and resist sensitivity for optical lithography (the so-called RLS trade-off) [4].

To summarize, Tennant’s Law is a natural consequence of stochastic-limited lithography, though the value of $k_T$ need not be the same for every lithographic technology. Further, $k_T$ is proportional to the writing intensity. Thus, if resolution is to be improved in a stochastic-limited regime, either throughput will suffer or the writing intensity must be increased to compensate. To keep the pixel throughput constant as resolution is improved by a factor of 2, the writing intensity (say, the current in a direct-write e-beam tool) must increase by a factor of 8. If the writing intensity cannot be improved (for example, due to space-charge effects in an e-beam system), then throughput will be reduced by a factor of 8. This unpleasant consequence of stochastic limitations is just as true for a multibeam (parallel exposure) lithography tool as for a single-beam (serial exposure) lithography tool.
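The factor-of-8 penalty quoted above follows directly from the $R^3$ dose scaling. A minimal sketch, assuming N1 is fixed by the counting statistics:

```python
# In the stochastic-limited regime the dose per voxel scales as 1/R^3, so at
# fixed writing intensity the pixel throughput scales as R^3. Holding
# throughput constant instead requires intensity to grow as (1/shrink)^3.

def intensity_factor(shrink: float) -> float:
    """Writing-intensity increase needed for constant pixel throughput,
    given a linear resolution shrink factor (< 1)."""
    return (1.0 / shrink) ** 3

print(intensity_factor(0.5))  # 8.0: a 2X resolution gain needs 8X intensity
```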

3. Impact of Tennant’s Law on Optical Lithography

There are two reasons why optical lithography has not followed Tennant’s Law. First, optical lithography is not a direct-write technique: a large area (the field size of the optical projection tool) is printed at once. For 30 years the field size (in pixels) has grown quite dramatically, due predominantly to reduced pixel size for a fixed field size. Today, an immersion scanner with numerical aperture (NA) = 1.35 and a wavelength of 193 nm can print 40 nm features over a field size of 8 mm × 26 mm, though the minimum addressable pixel is about 56 nm × 56 nm. This means that about 10¹¹ pixels can be written in parallel, providing a huge throughput advantage over direct-write techniques. Note, however, that the number of pixels per exposure field has stayed constant in optical lithography for the last 10 years.
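The 10¹¹ figure can be reproduced from the field and pixel dimensions just quoted; a quick sketch:

```python
# Pixels exposed in parallel per field: field area / minimum pixel area.
# Dimensions are those quoted in the text for an NA = 1.35 immersion scanner.

field_mm = (8.0, 26.0)   # exposure field, mm
pixel_nm = 56.0          # minimum addressable pixel edge, nm

nm_per_mm = 1e6
field_area_nm2 = field_mm[0] * field_mm[1] * nm_per_mm**2
pixels_per_field = field_area_nm2 / pixel_nm**2

print(f"{pixels_per_field:.1e} pixels per field")  # ~6.6e10, i.e. ~10^11
```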

The second reason optical lithography has not followed Tennant’s Law is that, until recently, optical lithography has not been stochastic-limited. Thus, reductions in R did not require an increase in dose to provide the same mean number of events per voxel. This is now changing, especially as the industry prepares to adopt EUV lithography. There is little doubt that high EUV tool cost and low EUV light source power have forced EUV lithography into a stochastic-limited regime [5]. In this regime, throughput will scale as $R^3$ if the light source intensity stays fixed. To keep throughput the same as resolution is improved, the light source intensity must increase. For example, shrinking feature sizes by 0.7X (a typical technology node shrink in the past) will require the light source intensity to increase 3X. A more modest node shrink of 0.8X (more likely in today’s world of limited lithography expectations) will still require a 2X light source intensity increase to keep the throughput the same.

Consider, for example, current plans to increase the NA of EUV lithography tools from 0.33 to 0.55. This increase in resolving capability could enable a 0.6X reduction in feature size. But since EUV lithography is stochastic-limited, this resolution improvement must be accompanied by a 4.6X increase in source intensity to keep the same throughput. Additionally, current high-NA designs require the field size to be cut in half (though partially compensated by higher optical transmission of the projection system), so that constant throughput will require an even greater increase in source power.
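The source-intensity numbers in the last two paragraphs all follow from the same $R^3$ rule. Here is a minimal sketch, with the halved high-NA field included as a simple multiplier; treating a half field as requiring twice the exposures per wafer is my simplifying assumption, ignoring stepping overhead and the offsetting transmission gain mentioned above.

```python
# Required EUV source-intensity increase for constant throughput under the
# R^3 scaling argument, optionally accounting for a smaller exposure field.

def source_intensity_factor(shrink: float, field_factor: float = 1.0) -> float:
    """Intensity increase for a linear feature shrink (< 1); field_factor < 1
    means proportionally more exposures per wafer at constant throughput."""
    return (1.0 / shrink) ** 3 / field_factor

print(round(source_intensity_factor(0.7), 1))       # ~2.9X, a traditional node
print(round(source_intensity_factor(0.8), 1))       # ~2.0X, a more modest node
print(round(source_intensity_factor(0.6), 1))       # ~4.6X, the high-NA shrink
print(round(source_intensity_factor(0.6, 0.5), 1))  # ~9.3X with a halved field
```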

After many years defying Tennant’s Law, optical lithography will now face the same ugly trade-off of throughput versus resolution that has bedeviled so many other lithography techniques.

4. Conclusions

Moore’s Law is a complex economic trend that results in lower cost per electronic component as a greater number of electronic components are integrated into a single device. Throughout most of the 50+ years since Gordon Moore described this trend, lithographic scaling has been the dominant driver of lower cost per component [6]. Put in simple terms, lithography’s improved resolution has allowed lower cost per printed pixel by keeping throughput high as resolution is improved.

Consider as a counterexample the use of a stochastic-limited lithography technology that follows Tennant’s Law. Even assuming a robust parallel-processing scheme where the number of pixels written at once grows as $1/R^2$, throughput will be reduced by 8X for every 2X improvement in resolution unless the writing intensity grows to compensate. Given the historical scaling pace of a 2X improvement in resolution every 6 years, constant throughput would require a doubling of the writing intensity every 2 years, a daunting task.
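The two-year doubling figure follows from simple compounding; a sketch of that arithmetic, assuming the 2X-per-6-years cadence stated above:

```python
import math

# 2X resolution per 6 years with R^3 intensity scaling => 8X intensity per
# 6 years, i.e., the writing intensity must double every 6/log2(8) = 2 years.

resolution_gain = 2.0   # linear resolution improvement per period
period_years = 6.0      # assumed historical cadence

intensity_gain = resolution_gain ** 3
doubling_time = period_years / math.log2(intensity_gain)

print(f"{intensity_gain:.0f}X intensity needed per {period_years:.0f} years")
print(f"equivalent doubling time: {doubling_time:.0f} years")  # 2 years
```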

In other words, Moore’s Law has so far depended on the use of a lithography technology that is not stochastic-limited. As those days come to an end, Tennant’s Law will compete with Moore’s Law over economic dominance in lithography-driven scaling. The answer to the title question “Will stochastics be the ultimate limiter for nanopatterning?” is clearly yes whenever throughput plays an important role in economic viability.

References

[1] G. E. Moore, “Cramming More Components onto Integrated Circuits,” Electronics 38(8), 114-117 (1965).

[2] D. M. Tennant, “Limits of Conventional Lithography,” in Nanotechnology, Springer, New York, 164 (1999).

[3] T. A. Brunner, “Why optical lithography will live forever,” Journal of Vacuum Science and Technology B 21(6), 2632 (2003).

[4] G. M. Gallatin, “Resist blur and line edge roughness,” Proc. SPIE 5754, Optical Microlithography XVIII, 38 (2005).

[5] C. A. Mack, “Reducing Roughness in Extreme Ultraviolet Lithography,” J. Micro/Nanolith. MEMS MOEMS 17(4), 041006 (2018).

[6] C. A. Mack, “Fifty Years of Moore’s Law,” IEEE Transactions on Semiconductor Manufacturing 24(2), 202-207 (2011).
© 2019 Society of Photo-Optical Instrumentation Engineers (SPIE).
Chris A. Mack "Will stochastics be the ultimate limiter for nanopatterning?", Proc. SPIE 10958, Novel Patterning Technologies for Semiconductors, MEMS/NEMS, and MOEMS 2019, 1095803 (26 March 2019); https://doi.org/10.1117/12.2517598
KEYWORDS: Lithography, Optical lithography, Stochastic processes, Image resolution, Extreme ultraviolet lithography, Reliability, Nanostructures
