The purpose of this paper is to investigate the impact of an advanced immersion lithography process on the development of pixel-level polarization optics for CMOS image sensors. In the first part of this paper, we use the Bloch formalism to define regimes that depend on the number of propagative Bloch modes within the structure. The presented analysis gives estimates of the feature sizes required to operate in the NIR and visible ranges. The second part of this paper presents the optical characterization of silicon lamellar gratings made on 300 mm wafers using advanced immersion lithography. Characterization results are discussed with respect to optical simulations, and the reconstructed grating profile is compared to the patterning features estimated in the first part.
In recent years, we have seen the development of integrated plenoptic sensors, where multiple pixels are placed under one microlens. They are mainly used in cameras and smartphones to drive the autofocus of the main lens, and they often take the form of dual-pixels with two rectangular sub-pixels. We study the evolution of the dual-pixel, the so-called quad-pixel sensor with 2x2 square sub-pixels under the microlens. As it is a simple light-field capturing device, we investigate the computational photography abilities of such a sensor. We first present our work on pixel-level simulations, then we present a model of image formation taking into account the diffraction by the microlens. Finally, we present new ways to process quad-pixel images based on deep learning.
CMOS imaging has experienced significant development in the last decades. At the center of this progress lies the pixel, composed of a light-sensitive area (photodiode) coupled to a network of transistors. As pixel sizes shrink, the light-sensitive area gets smaller and requires light-focusing assistance. To address this issue, microlenses are added to the top of the pixel stack. The microlenses are made of a polymer resist transparent at the wavelengths of interest. Creating such structures is not straightforward and requires complex process steps, especially when arrays of multiple shapes and sizes are needed. The grayscale approach appears as a promising alternative, since this unconventional lithography method can produce variable shapes and sizes in a single lithography step. Mask data preparation is the most critical step for grayscale lithography. A widespread strategy is to experimentally establish the relationship between a given dose (corresponding to a specific chromium density on the mask) and the remaining resist thickness after development. This relationship, also known as the contrast curve, is used as a transfer function to compute a suitable mask for the given resist. Our approach is to create a simplified grayscale model able to predict the resist response under any given mask and illumination condition. Using the classic contrast curve approach, we have designed a mask composed of sub-5μm patterns and evaluated the resist profile prediction of the contrast curve approach compared to our grayscale model on various patterns, including microlenses, pyramids and bowl shapes. Results show that the contrast curve approach is no longer appropriate when dimensions shrink below 5μm.
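As a concrete illustration of the contrast-curve transfer function described above, the sketch below inverts a dose-versus-thickness curve to compute a local dose map for a target lens profile. All numbers (curve samples, lens radius and height) are hypothetical placeholders, not the paper's data:

```python
import numpy as np

# Hypothetical contrast-curve samples for a positive-tone resist:
# exposure dose (mJ/cm^2) versus remaining thickness (um) after development.
dose_samples = np.array([20.0, 40.0, 60.0, 80.0, 100.0, 120.0])
thickness_samples = np.array([2.0, 1.7, 1.2, 0.6, 0.2, 0.0])

def dose_for_thickness(target_um):
    """Invert the contrast curve: target thickness -> required dose.
    Thickness decreases monotonically with dose for a positive-tone
    resist, so we interpolate on the reversed (increasing) arrays."""
    return np.interp(target_um, thickness_samples[::-1], dose_samples[::-1])

# Target microlens profile (spherical-cap approximation) on a 1-D cut.
x = np.linspace(-2.5, 2.5, 51)           # lateral position, um
radius, height = 5.0, 1.8                # illustrative lens parameters, um
profile = np.clip(height - x**2 / (2 * radius), 0.0, None)

# Transfer function: local dose map used to build the grayscale mask.
dose_map = dose_for_thickness(profile)
```

The thick lens center maps to a low dose (little resist removed) and the thin edges to a high dose, which is exactly the role the contrast curve plays as a transfer function.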
Unlike photographic image sensors with an infrared cutoff filter, low-light image sensors gather light over the visible and near-infrared (VIS-NIR) spectrum to improve sensitivity. However, removing the infrared cutoff filter makes color rendering challenging. In addition, no color chart with calibrated infrared content is available to compute the color correction matrix (CCM) of such sensors. In this paper, we propose a method to build a synthetic color chart (SCC) to overcome this limitation. The choice of chart patches is based on a smart selection of spectra from open-access and our own VIS-NIR hyperspectral image databases. For that purpose, we introduce a fourth dimension, cir, to the CIE-L*a*b* space to quantify the infrared content of each spectrum. Then we uniformly sample this L*a*b*cir space, leading to 1498 spectra constituting our synthetic color chart. This new chart is used to derive a 3x4 color correction matrix for the commercial RGB-White sensor (Teledyne-E2V EV76C664) using a classical linear least-squares minimization. We show an improvement of signal-to-noise ratio (SNR) and color accuracy at low light levels compared to a standard CCM derived using the Macbeth color chart.
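The 3x4 CCM fit mentioned above is a standard linear least-squares problem: find the matrix mapping the four raw channels (R, G, B, White) to three target tristimulus values. A minimal sketch with synthetic placeholder data (the chart spectra, sensor responses and targets are illustrative, not the paper's):

```python
import numpy as np

# Synthetic stand-in data: N patches of a chart.
# raw: N x 4 sensor responses (R, G, B, White channels),
# ref: N x 3 target tristimulus values (e.g. linear sRGB).
rng = np.random.default_rng(0)
M_true = rng.normal(size=(3, 4))             # stand-in "ground truth" map
raw = rng.uniform(0.0, 1.0, size=(1498, 4))
ref = raw @ M_true.T                         # synthetic targets

# Linear least squares: minimize ||raw @ M - ref||^2 over M (4 x 3).
M, *_ = np.linalg.lstsq(raw, ref, rcond=None)
ccm = M.T                                    # the 3 x 4 color correction matrix

corrected = raw @ ccm.T                      # apply the CCM to raw responses
```

With noiseless synthetic data the fit recovers the generating matrix exactly; on real sensor data the residual reflects metamerism and noise in the measured patches.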
The development of small pixels for high-resolution image sensors implies many challenges. A high level of performance
must be guaranteed while the overall size, and with it the degrees of freedom in design
and process, is reduced. One key parameter of this constant improvement is the knowledge and the control of the crosstalk
between pixels. In this paper, we present an advance in crosstalk characterization method based on the design of
specific color patterns and the measurement of quantum efficiency. In the first part, we describe the color patterns
designed to isolate one pixel or to simulate un-patterned colored pixels. These patterns have been implemented
on test-chip and characterized. The second part deals with the characterization setup for quantum efficiency.
Indeed, the use of spectral measurements allows us to discriminate pixels based on the color filter placed on
top of them and to probe the crosstalk as a function of the depth in silicon, thanks to the photon absorption
length variation with the wavelength. In the last part, results are presented showing the impact of color filter
patterning, i.e. pixels in a Bayer pattern versus un-patterned pixels. The crosstalk directions and amplitudes
are also analyzed in relation to pixel layout.
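The depth-probing argument above relies on the strong wavelength dependence of the photon absorption length in silicon. A minimal Beer-Lambert sketch; the absorption coefficients are rounded order-of-magnitude values for illustration only:

```python
import numpy as np

# Approximate absorption coefficients of crystalline silicon (1/um),
# rounded from published room-temperature data; illustrative only.
alpha_per_um = {450: 2.5, 550: 0.7, 650: 0.25, 850: 0.05}

def absorbed_fraction(wavelength_nm, depth_um):
    """Beer-Lambert fraction of photons absorbed within the first
    depth_um of silicon at the given wavelength."""
    a = alpha_per_um[wavelength_nm]
    return 1.0 - np.exp(-a * depth_um)
```

Blue light (450 nm) is almost entirely absorbed within the first micron and thus probes near-surface crosstalk, while NIR photons (850 nm) generate carriers deep in the silicon, which is what lets a spectral QE measurement discriminate crosstalk contributions as a function of depth.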
The evaluation of a sensor's performance in terms of signal-to-noise ratio (SNR) is a big challenge for both camera
phone manufacturers and customers. The former want to predict and assess the performance of their pixel,
while the latter need to be able to benchmark raw sensors and processing pipes. The Reference SNR metric is
very sensitive to crosstalk, whereas for low-light issues the weight of sensitivity should be increased. To evaluate
noise on the final image, the analytical calculation of SNR on the luminance channel has been performed, taking
into account the noise correlation due to the processing pipe. However, this luminance noise does not match the
perception of the human eye, which is also sensitive to chromatic noise. Alternative metrics have been investigated to
find a visual noise metric closer to the human visual system. They have been computed on five pixel technology
nodes with different sensor resolutions and viewing conditions.
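The luminance-noise calculation described above amounts to propagating the channel noise covariance through the linear part of the pipe (CCM followed by luminance weighting): for y = v.x, var(y) = v C v^T, which captures the correlation terms. A small numerical sketch with illustrative covariance and CCM values (not the paper's):

```python
import numpy as np

# Hypothetical per-channel noise covariance after demosaicking (R, G, B);
# the off-diagonal terms model the correlation introduced by the pipe.
C = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 5.0]])

ccm = np.array([[ 1.6, -0.4, -0.2],       # illustrative 3x3 CCM
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])
w_y = np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luminance weights

# Luminance is the linear map y = (w_y @ ccm) @ rgb, so its noise
# variance follows by covariance propagation: var = v C v^T.
v = w_y @ ccm
var_y = v @ C @ v
signal_y = w_y @ ccm @ np.array([100.0, 100.0, 100.0])  # flat gray patch
snr_db = 20 * np.log10(signal_y / np.sqrt(var_y))
```

Ignoring the off-diagonal terms of C would misestimate var_y, which is why the correlation introduced by the processing pipe must be carried through the calculation.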
The image quality evaluation of CMOS sensors is a big challenge for camera module manufacturers. In this paper,
we present an update of the Image Quality Evaluation Tool, a graphical user interface simulating image sensors
to assess the performance of a pixel. The simulated images are computed from operating conditions and the sensor's
characteristics, such as Quantum Efficiency including off-axis effects. The simulation of the QE off-axis impact is based
on characterization data. The method does not require optics, making it suitable for early design phases as well as for
optimizations and investigations. Both the measurement and its implementation in the tool will be explained. The QE
degradation with angle will be highlighted on simulated images. The simulation of a uniform gray scene or of a coloured
image from off-axis QE measurements will help engineers to calculate post-processing digital corrections such as
colour shading correction or a colour correction matrix versus pixel position.
The current CMOS image sensor market trend is to achieve good image resolution at small package size and low price;
thus the CMOS image sensor roadmap is driven by pixel size reduction while maintaining good electro-optical
performance. As both diffraction and electrical effects become of greater importance, it is mandatory to have a
simulation tool able to support early process and design development of next-generation pixels.
We have previously introduced and developed FDTD-based optical simulation methodologies to describe diffraction
phenomena. We have recently coupled them to an electrical simulation tool to take into account carrier diffusion
and precise front-end process simulation. In this paper, we show the advances of this methodology.
After detailing the complete methodology, we present how we reconstruct the spectral quantum efficiency of a
pixel. This methodology requires computationally heavy, realistic 3D modeling for each wavelength: the material optical
properties are described over the full spectral bandwidth by a multi-coefficient model, while the electrical properties are
set by the given process and design. We optically simulate the propagation of a dozen wavelengths at normal
incidence and collect the distribution of the optical generation, then insert it into the electrical simulation tool and
collect the final output quantum efficiency.
In addition, we compare the off-axis performance evaluations of a pixel by simulating its relative illumination at a given
wavelength. In this methodology, several plane waves are propagated with different angles of incidence along a specific
direction.
Microlens arrays are used on CMOS image sensors to focus incident light onto the appropriate photodiode and thus
improve the device quantum efficiency. As the pixel size shrinks, the fill factor of the sensor (i.e. the ratio of the
photosensitive area to the total pixel area) decreases, and one way to compensate this loss of sensitivity is to improve the
microlens photon collection efficiency. This can be achieved by developing zero-gap microlens processes. One elegant
solution to pattern zero-gap microlenses is to use a grayscale reticle with varying optical densities which locally
modulate the UV light intensity, allowing the creation of continuous relief structure in the resist layer after development.
Contrary to conventional lithography, for which a high resist contrast is desirable to achieve straight resist pattern
profiles, grayscale lithography requires a smooth resist contrast curve. In this study, we demonstrate the efficiency of
grayscale lithography to generate sub-2μm diameter microlenses with a positive-tone photoresist. We also show that this
technique is resist- and process-dependent (film thickness, developer normality and exposure conditions). Under the
best conditions, spherical zero-gap microlenses as well as aspherical and off-axis microlenses, which are impossible to
obtain with the conventional reflow method, were obtained with satisfying process latitude.
The evaluation of CMOS sensors performance in terms of color accuracy and noise is a big challenge for camera
phone manufacturers. In this paper, we present a tool developed with Matlab at STMicroelectronics which
allows quality parameters to be evaluated on simulated images. These images are computed based on measured
or predicted Quantum Efficiency (QE) curves and noise model. By setting the parameters of integration time and
illumination, the tool optimizes the color correction matrix (CCM) and calculates the color error, color saturation
and signal-to-noise ratio (SNR). After this color correction optimization step, a Graphics User Interface (GUI)
has been designed to display a simulated image at a chosen illumination level, with all the characteristics of a
real image taken by the sensor with the previous color correction. Simulated images can be a synthetic Macbeth
ColorChecker, for which reflectance of each patch is known, or a multi-spectral image, described by the reflectance
spectrum of each pixel, or an image taken at a high light level. A validation of the results has been performed
with STMicroelectronics sensors under development. Finally, we present two applications: one based on the trade-off
between color saturation and noise by optimizing the CCM, and the other based on demosaicking SNR trade-offs.
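The saturation-versus-noise trade-off mentioned above stems from the fact that a more aggressive CCM (larger off-diagonal terms, needed to desaturate the raw colors) amplifies pixel noise. For uncorrelated, equal-variance channel noise, the amplification per output channel is the L2 norm of the corresponding CCM row; a small sketch with illustrative matrices:

```python
import numpy as np

def noise_gain(ccm):
    """Per-output-channel noise amplification for uncorrelated,
    equal-variance input noise: the L2 norm of each CCM row."""
    return np.sqrt((ccm ** 2).sum(axis=1))

identity = np.eye(3)                        # no color correction
strong = np.array([[ 1.8, -0.6, -0.2],      # illustrative "saturated" CCM
                   [-0.5,  1.9, -0.4],
                   [-0.2, -0.7,  1.9]])

g_id, g_strong = noise_gain(identity), noise_gain(strong)
```

An identity CCM leaves the noise untouched (gain 1 per channel), while the stronger correction trades better color saturation for a noise gain well above 1, which is precisely the knob the optimization balances.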
In this paper, we present the results of rigorous electromagnetic broadband simulations applied to CMOS image sensors
as well as experimental measurements. We firstly compare the results of 1D, 2D, and 3D broadband simulations in the
visible range (380nm-720nm) of a 1.75μm CMOS image sensor, emphasizing the limitations of 1D and 2D simulations
and the need for 3D modeling, particularly to rigorously simulate parameters like Quantum Efficiency. Then we illustrate
broadband simulations by two proposed solutions that improve the spectral response of the sensor: an antireflective
coating, and the reduction of the optical stack. We finally show that results from experimental measurements are in
agreement with the simulated results.
In this paper, we present a versatile characterization method developed at STMicroelectronics for off-axis pixels (i.e. over the image plane) on CMOS image sensors. The solution does not require optics, making it suitable for early design phases as well as for optimizations and investigations. It is based on a specific design of the color filter and microlens masks, which consists of several blocks. Inside each block, the filters and the microlens are shifted by a given amount relative to the pixel. Each block is related to a given chief ray and thus defines a point in chief ray angle space. The performance of these angular points can then be measured by rotating the sensor, using a conventional uniform illumination setup with controlled f-number. It is then possible to map these data onto the image plane, knowing the chief ray angle versus focal plane coordinate function. Finally, we present some characterizations and optimizations based on the fact that the shift is arbitrarily defined during the circuit layout step, so it is possible to test the sensor with higher chief ray angles than those present in the product, or to optimize the shift of the microlens versus the chief ray angle for a given pixel architecture.
This paper presents a new FDTD-based optical simulation model dedicated to describing the optical performance of CMOS image sensors, taking into account diffraction effects.
Following market trends and industrialization constraints, CMOS image sensors must be easily embedded into ever smaller packages, which are now equipped with auto-focus and, in the short term, zoom systems. Due to miniaturization, the ray-tracing models used to evaluate pixel optical performance are no longer accurate enough to describe the light propagation inside the sensor, because of diffraction effects. We therefore adopt a more fundamental description to take these diffraction effects into account: we chose to model the propagation of light with Maxwell's equations, and to use a software with an FDTD-based (Finite Difference Time Domain) engine to solve this propagation.
We present in this article the complete methodology of this modeling: on one hand, incoherent plane waves are propagated to approximate a product-use diffuse-like source; on the other hand, we use periodic conditions to limit the size of the simulated model as well as both memory usage and computation time. After having presented the correlation of the model with measurements, we illustrate its use in the case of the optimization of a 1.75μm pixel.
This paper describes a new methodology we have developed for microlens optimization for CMOS image sensors in order to achieve good optical performances. On one hand, the real pixel is simulated in an optical simulation software and on the other hand simulation results are post-processed with a numerical software.
In the first part, we describe our methodology. We start from the pixel layout description in a standard micro-electronic CAD software and generate a three-dimensional model in an optical ray-tracing software. This optical model aims to be as realistic as possible, taking into account the geometrical shape of all the components of the pixel and the optical properties of the materials. A specific ray source has also been developed to simulate the pixel illumination in real conditions (behind an objective lens). After the optical simulation itself, the results are transferred to another software for more convenient post-processing, where we use as photosensitive area a weighted surface determined by fitting angular response simulation results to the measurements. Using this surface, we count the ray density inside the substrate to evaluate the simulated output signal of the sensor.
We then give some results obtained with that simulation process: first, the optimization of the microlens parameters for different pixel pitches (from 5.6μm to 4μm). We have also studied the polarization effects inside the pixel. Finally, we compare the measured and the simulated vignetting of the sensor, demonstrating the relevance of our optical simulation process and allowing us to study solutions for a pixel pitch of 3μm and less.
CMOS imagers commonly employ pinned photodiode pixels and true correlated double sampling to eliminate kTC noise and achieve low noise performance. Low noise performance also depends on the optimisation of the readout circuitry. This paper investigates the effect of the pixel source follower transistor on the overall noise performance through several characterization methods. The characterization methods are described, and experimental results are detailed. It is shown that the source follower noise can be the limiting factor of the image sensor and requires optimisation.
We briefly recall the principle of the polychromatic laser guide star, which aims at providing measurements of the tilt of incoming wavefronts with 100% sky coverage. We describe the main results of the feasibility study of this concept undertaken within the ELP-OA programme. We finally summarize our plans for a full demonstrator at Observatoire de Haute-Provence.
We describe the current status of the ELP-OA project, in which we try to demonstrate in practice that it is possible to measure the tilt of a wavefront using only a polychromatic laser guide star and no natural guide star. The first phase of ELP-OA, consisting of feasibility experiments, has recently been completed successfully. This paper provides an overview of the results of this first phase and of the continuation of the ELP-OA project.
Adaptive optics at astronomical telescopes aims at correcting in real time the phase corrugations of incoming wavefronts caused by the turbulent atmosphere, as early proposed by Babcock. Measuring the phase errors requires a bright source located within the isoplanatic patch of the program source. The probability that such a reference source exists is a function of the wavelength, of the required image quality (Strehl ratio), of the turbulence optical properties, and of the direction of the observation. It turns out that the sky coverage is disastrously low, in particular in the visible wavelength range where, unfortunately, the gain in spatial resolution brought by adaptive optics is the largest. Foy and Labeyrie have proposed to overcome this difficulty by creating an artificial point source in the sky in the direction of the observation, relying on the backscattered light due to a laser beam. This laser guide star (hereinafter referred to as LGS) can be bright enough to allow us to accurately measure the wavefront phase errors, except for two modes: the piston (not relevant in this case) and the tilt. Pilkington has emphasized that the round-trip time of the laser beam to the mesosphere, where the LGS is most often formed, is significantly shorter than the typical tilt coherence time; the inverse-return-of-light principle then causes the deflections of the outgoing and the ingoing beams to cancel. The apparent direction of the LGS is independent of the tilt. Therefore the tilt cannot be measured from the LGS alone. Until now, the way to overcome this difficulty has been to use a natural guide star to sense the tilt. Although the tilt is sensed through the entire telescope pupil, one cannot use a faint source, because approximately 90% of the variance of the phase error is in the tilt. Therefore, correcting the tilt requires a higher accuracy of the measurements than for higher orders of the wavefront.
Hence current adaptive optics devices coupled with an LGS face low sky coverage. Several methods have been proposed to get a partial sky coverage for the tilt. The only one providing us with full sky coverage is the polychromatic LGS (hereafter referred to as PLGS). We present here a progress report of the R&D program Etoile Laser Polychromatique et Optique Adaptative (ELP-OA) carried out in France to develop the PLGS concept. After a short recall of the principles of the PLGS, we review the goals of ELP-OA and the steps required to bring it into operation. We finally briefly describe the efforts in Europe to develop the LGS.
Adaptive optics at astronomical telescopes aims at correcting in real time the phase corrugations of incoming wavefronts caused by the turbulent atmosphere, as early proposed by Babcock. Measuring the phase errors requires a bright source located within the isoplanatic patch of the program source. The probability that such a reference source exists is a function of the wavelength of the observation, of the required image quality (Strehl ratio), of the turbulence optical properties, and of the direction of the observation. Several papers have addressed the problem of the sky coverage as a function of these parameters (see e.g. Le Louarn et al.). It turns out that the sky coverage is disastrously low, in particular in the short (visible) wavelength range where, unfortunately, the gain in spatial resolution brought by adaptive optics is the largest. Foy and Labeyrie have proposed to overcome this difficulty by creating an artificial point source in the sky in the direction of the observation, relying on the backscattered light due to a laser beam. This laser guide star (hereafter referred to as LGS) can be bright enough to allow us to accurately measure the wavefront phase errors, except for two modes: the piston (which is not relevant in this case) and the tilt. Pilkington has emphasized that the round-trip time of the laser beam to the mesosphere, where the LGS is most often formed, is significantly shorter than the typical tilt coherence time; the inverse-return-of-light principle then causes the deflections of the outgoing and the ingoing beams to cancel. The apparent direction of the LGS is independent of the tilt. Therefore the tilt cannot be measured from the LGS alone. Until now, the way to overcome this difficulty has been to use a natural guide star to sense the tilt. Although the tilt is sensed through the entire telescope pupil, one cannot use a faint source, because approximately 90% of the variance of the phase error is in the tilt.
Therefore, correcting the tilt requires a higher accuracy of the measurements than for higher orders of the wavefront. Hence current adaptive optics devices coupled with an LGS face low sky coverage. Several methods have been proposed to get a partial or total sky coverage for the tilt, such as the dual adaptive optics concept, the elongation perspective method, or the polychromatic LGS (hereafter referred to as PLGS). We present here a progress report of the R&D program Etoile Laser Polychromatique et Optique Adaptative (ELP-OA) carried out in France to develop the PLGS concept. After a short recall of the principles of the PLGS, we review the goals of ELP-OA and the steps required to bring it into operation.
In this paper, we present an experiment for measuring the chromatic differences of the tilt used for the polychromatic laser guide star, a suitable solution to overcome the monochromatic laser guide star limitation: the tilt indetermination. A comparative study between two types of data processing is presented: the classical estimation of the angle of arrival by the image center of gravity, and a new one, the estimation of tilts by fitting a phase map in the polychromatic case. From these studies, the expected precision is derived and simulations are compared with data.
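The classical center-of-gravity estimator mentioned above is simply the intensity-weighted centroid of the spot image. A minimal sketch on a synthetic Gaussian spot (grid size, spot position and width are arbitrary illustration values):

```python
import numpy as np

def center_of_gravity(img):
    """Angle-of-arrival estimate as the intensity-weighted centroid
    of a star image, in pixel coordinates (row, column)."""
    img = np.asarray(img, dtype=float)
    rows, cols = np.indices(img.shape)
    total = img.sum()
    return (rows * img).sum() / total, (cols * img).sum() / total

# Synthetic Gaussian spot centered at (12.3, 20.7) on a 32 x 40 grid.
ys, xs = np.indices((32, 40))
spot = np.exp(-((ys - 12.3) ** 2 + (xs - 20.7) ** 2) / (2 * 2.0 ** 2))
cy, cx = center_of_gravity(spot)
```

The chromatic tilt difference would then be obtained by differencing the centroids measured at two wavelengths; the phase-map fitting approach studied in the paper replaces this simple estimator with a joint fit over the polychromatic data.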