The effects of light intensity on the disparity used for depth extraction in a monochrome CMOS image sensor with offset pixel apertures are investigated. The technology consumes little power because it does not require an external light source. The offset pixel apertures are integrated into each pixel of the monochrome CMOS image sensor to acquire the disparity for depth extraction. Because the monochrome CMOS image sensor contains no color filters, its pixel stack is lower than that of a CMOS image sensor with color filters, resulting in an improved disparity. The monochrome CMOS image sensor with offset pixel apertures was designed and fabricated using a 0.11 μm CMOS image sensor process, and its disparity was measured under various light intensities. With its simple structure, the sensor may be useful for three-dimensional imaging in outdoor applications.
A CMOS image sensor with off-center circular apertures for two-dimensional (2D) and three-dimensional (3D) imaging was fabricated, and its performance, including 2D and 3D imaging results, was evaluated. The pixel, based on a four-transistor active pixel sensor with a pinned photodiode, measures 2.8 μm × 2.8 μm. The designed pixel pattern yields both images with disparity and focused images for depth calculation. The pattern consists of one white subpixel with a left-offset circular aperture, a blue pixel, a red pixel, and another white subpixel with a right-offset circular aperture. The proposed technique was verified by simulation and by measurements using a point light source. In addition, a depth image was produced by calculating depth information from the 2D images.
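The depth calculation above rests on measuring the horizontal disparity between the left- and right-aperture images. As a minimal illustration of this step (not the authors' actual pipeline; the patch size, search range, and sum-of-absolute-differences cost are arbitrary choices here), a basic block matcher in Python:

```python
import numpy as np

def block_match_disparity(left, right, patch=7, max_disp=6):
    """Estimate per-pixel horizontal disparity between left- and
    right-aperture images via sum-of-absolute-differences (SAD)
    block matching. Illustrative sketch only: a real OPA pipeline
    would add calibration and sub-pixel refinement."""
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(max_disp + 1):
                # candidate patch in the right image, shifted left by d
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(ref - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic check: a texture shifted by 3 px mimics the aperture disparity.
rng = np.random.default_rng(0)
tex = rng.random((32, 64))
left = tex
right = np.roll(tex, -3, axis=1)  # right view shifted by 3 px
disp = block_match_disparity(left, right)
```

With a known 3-pixel shift, the interior of `disp` recovers the value 3; converting disparity to absolute depth additionally requires the lens and aperture-offset geometry, which is sensor-specific.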
The effects of aperture size on the performance of a CMOS image sensor with pixel apertures for depth extraction are investigated. In general, the aperture size governs both the depth resolution and the sensitivity of the sensor: as the aperture size decreases, the depth resolution improves while the sensitivity drops. To optimize the aperture size, optical simulations using the finite-difference time-domain (FDTD) method were performed for aperture sizes from 0.3 μm to 1.1 μm, and the optical power versus incidence angle was evaluated as a function of aperture size. Based on the simulation results, the CMOS image sensor was designed and fabricated using a 0.11 μm CMOS image sensor process. The effects of aperture size are analyzed by comparing the simulation and measurement results.
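The resolution/sensitivity tradeoff can be made concrete with a purely geometric toy model; this is an illustrative assumption on my part, not the paper's FDTD simulation, and the stack height is a made-up parameter. Sensitivity is taken as proportional to open aperture area, while the angular acceptance half-angle (a proxy for how sharply the angular response, and hence the disparity, is defined) widens with the aperture:

```python
import math

STACK_HEIGHT_UM = 1.0  # assumed metal-to-photodiode distance (hypothetical)

def toy_response(aperture_um, h=STACK_HEIGHT_UM):
    """Toy geometric model: relative sensitivity ~ open area; the
    acceptance half-angle grows with aperture width, degrading the
    angular selectivity that depth extraction relies on."""
    sensitivity = aperture_um ** 2
    half_angle_deg = math.degrees(math.atan(aperture_um / (2 * h)))
    return sensitivity, half_angle_deg

# Sweep matching the simulated range of 0.3-1.1 um
results = {a: toy_response(a) for a in (0.3, 0.5, 0.7, 0.9, 1.1)}
```

Even this crude ray-optics picture reproduces the qualitative tradeoff stated above; the FDTD simulation is needed because diffraction dominates at sub-micron aperture sizes, which the toy model ignores.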
Three-dimensional (3D) imaging is an important field with applications in face detection, gesture recognition, and 3D reconstruction. Many 3D imaging techniques have been reported, including time of flight (TOF), stereo vision, and structured light, but these methods require an external light source, multiple cameras, or a complex camera system. In this paper, we propose the offset pixel aperture (OPA) technique, implemented on a single chip so that depth can be obtained without increasing hardware cost or adding extra light sources. Three types of pixels are used in the OPA technique: red (R), blue (B), and white (W). The aperture is located on the W pixel, which has no color filter; using W pixels for the OPA and R and B pixels for imaging improves depth performance through the higher sensitivity of the W pixels. The RB pixels produce a defocused, blurred image, whereas the W pixels produce a focused image. The focused image serves as a reference for extracting depth information: comparing it with the defocused RB image allows the depth to be extracted using the depth from defocus (DFD) method. Previously, we proposed the pixel aperture (PA) technique based on DFD. The OPA technique is expected to enable higher depth resolution and range than the PA technique. Pixels with a right-offset aperture and a left-offset aperture are used to generate a stereo image from a single chip. The pixel structure was designed and simulated, and the optical performance of various offset pixel aperture structures was evaluated using optical simulation with the finite-difference time-domain (FDTD) method.
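The DFD comparison described above can be sketched as a blur-estimation problem: under an assumed Gaussian blur model, find the blur level that best maps the focused W-pixel image onto the defocused image. This is a minimal sketch, not the paper's algorithm; the conversion from estimated blur to absolute depth (which needs the lens parameters) is omitted, and all names and sigma values are illustrative.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with reflected borders (assumed blur model)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    pad = np.pad(img, radius, mode='reflect')
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, 'valid'), 0, rows)

def dfd_blur_estimate(focused, defocused, sigmas):
    """Pick the sigma whose blurred version of the focused (reference)
    image best matches the defocused image in the least-squares sense.
    In a full DFD pipeline this blur level would then be mapped to depth."""
    errors = [np.mean((gaussian_blur(focused, s) - defocused) ** 2)
              for s in sigmas]
    return sigmas[int(np.argmin(errors))]

# Synthetic check: blur a random "focused" image by a known sigma,
# then recover that sigma from the candidate list.
rng = np.random.default_rng(1)
sharp = rng.random((48, 48))
blurred = gaussian_blur(sharp, 1.5)
est = dfd_blur_estimate(sharp, blurred, sigmas=[0.5, 1.0, 1.5, 2.0, 2.5])
```

Per-region (rather than whole-image) matching would give a depth map, since the defocus blur varies with the distance of each scene point.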
Three-dimensional (3D) imaging is an important field with applications in face detection, gesture recognition, and 3D reconstruction. In this paper, the extraction of depth information for 3D imaging using a pixel aperture technique is presented. An active pixel sensor (APS) with an in-pixel aperture has been developed for this purpose. In a conventional camera system using a complementary metal-oxide-semiconductor (CMOS) image sensor, the aperture is located behind the camera lens. In our proposed camera system, by contrast, the aperture is implemented with a metal layer of the CMOS process and located on the white (W) pixel, i.e., a pixel without a color filter on top. Four types of pixels are used in the pixel aperture technique: red (R), green (G), blue (B), and white (W). The RGB pixels produce a defocused, blurred image, whereas the W pixels produce a focused image. The focused image serves as a reference for extracting depth information: comparing it with the defocused RGB image allows the depth to be extracted using the depth from defocus (DFD) method. The pixel size of the four-transistor APS is 2.8 μm × 2.8 μm, and the pixel structure was designed and simulated based on a 0.11 μm CMOS image sensor (CIS) process. The optical performance of the pixel aperture technique was evaluated using optical simulation with the finite-difference time-domain (FDTD) method, and the electrical performance was evaluated using TCAD.