KEYWORDS: Optical filters, Sensors, Image sensors, Digital filtering, RGB color model, Image filtering, Diodes, Signal to noise ratio, Reconstruction algorithms, Modulation transfer functions
We propose a modification to the standard Bayer color filter array (CFA) and photodiode structure for CMOS image sensors, which we call 2PFC™ (two pixels, full color). The blue and red filters of the Bayer pattern are replaced by a magenta filter. Under each magenta filter are two stacked, pinned photodiodes; the diode nearest the surface absorbs mostly blue light, and the deeper diode absorbs mostly red light. The magenta filter absorbs green light, improving color separation between the blue and red diodes. We first present a frequency-based demosaicing method that takes advantage of the new 2PFC geometry. Due to the spatial arrangement of red, green, and blue pixels, luminance and chrominance are very well separated in Fourier space, allowing for computationally inexpensive linear filtering. In comparison with state-of-the-art demosaicing methods for the Bayer CFA, we show that our sensor and demosaicing method outperform them in terms of color aliasing, peak signal-to-noise ratio, and zipper effect. As demosaicing alone does not determine image quality, we also analyze the performance of the whole system in terms of resolution and noise.
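As an illustration of the frequency-based idea, the following is a minimal Python sketch of luminance/chrominance separation by linear filtering on a checkerboard, "2PFC-like" mosaic, with green on one phase and the stacked red/blue pair (summarized here as their average) on the other. The Gaussian filter shape, the cutoff, and the simplified mosaic model are assumptions for illustration, not the filters used in the paper.

```python
import numpy as np

def split_luma_chroma(mosaic, sigma=0.25):
    """Split a CFA mosaic into a low-pass luminance estimate and the
    modulated chrominance residual via Fourier-domain linear filtering."""
    h, w = mosaic.shape
    fy = np.fft.fftfreq(h)[:, None]                        # vertical frequencies
    fx = np.fft.fftfreq(w)[None, :]                        # horizontal frequencies
    lowpass = np.exp(-(fx**2 + fy**2) / (2 * sigma**2))    # baseband (luminance) filter
    spectrum = np.fft.fft2(mosaic)
    luma = np.real(np.fft.ifft2(spectrum * lowpass))       # low-frequency content
    chroma = mosaic - luma                                 # high-frequency chrominance
    return luma, chroma

# Toy checkerboard sampling: green on one phase, red/blue on the other.
rgb = np.random.rand(64, 64, 3)
green_phase = (np.indices((64, 64)).sum(axis=0) % 2 == 0)
mosaic = np.where(green_phase, rgb[..., 1], 0.5 * (rgb[..., 0] + rgb[..., 2]))
luma, chroma = split_luma_chroma(mosaic)
```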
A modification to the standard Bayer CFA and photodiode structure for CMOS image sensors is proposed, which we call 2PFC™, meaning "two pixels, full color". The blue and red filters of the Bayer pattern are replaced by magenta filters. Under each magenta filter are two stacked, pinned photodiodes; the diode nearest the surface absorbs mostly blue light, and the deeper diode absorbs mostly red light. The magenta filter absorbs green light, improving color separation between the blue and red diodes. The dopant implant defining the bottom of the red-absorbing region can be made the same as that of the green diodes, simplifying fabrication. Since the spatial resolution is identical for the red, green, and blue channels, color aliasing is greatly reduced. Luminance resolution can also be improved, the thinner diodes lead to higher well capacity and thus better dynamic range, and fabrication costs can be similar to or lower than those of standard Bayer CMOS imagers. In addition, the geometry of the layout lends itself naturally to frequency-based demosaicing.
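The wavelength-dependent split between the two stacked diodes can be illustrated with a simple Beer-Lambert estimate. The absorption coefficients and junction depths below are rough, illustrative values for silicon, not the device parameters of the proposed sensor.

```python
import numpy as np

alpha = {"blue": 2.5, "red": 0.3}   # absorption coefficients in 1/um (approximate)
d_shallow = 0.6                     # um, assumed depth of the shallow diode
d_deep = 3.0                        # um, assumed total collection depth

def absorbed_fraction(a, z0, z1):
    """Fraction of incident photons absorbed between depths z0 and z1 (um)."""
    return np.exp(-a * z0) - np.exp(-a * z1)

for color, a in alpha.items():
    shallow = absorbed_fraction(a, 0.0, d_shallow)   # collected by the top diode
    deep = absorbed_fraction(a, d_shallow, d_deep)   # collected by the deep diode
    print(f"{color}: shallow diode {shallow:.2f}, deep diode {deep:.2f}")
```

With these illustrative numbers, blue light is absorbed mostly in the shallow region while red light penetrates to the deeper diode, which is the separation the stacked structure relies on.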
KEYWORDS: High dynamic range imaging, Digital cameras, Image processing, Cameras, Process modeling, Sensors, Gaussian filters, RGB color model, Data conversion, Image compression
We propose a complete digital camera workflow to capture and render high dynamic range (HDR) static scenes, from RAW sensor data to an output-referred encoded image. In traditional digital camera processing, demosaicing is one of the first operations performed after scene analysis. It is followed by rendering operations, such as color correction and tone mapping. In our workflow, which is based on a model of retinal processing, most of the rendering steps are performed before demosaicing. This reduces the computational complexity, as only one third of the pixel values need to be processed. This is especially important because our tone mapping operator applies both local and global tone corrections, which are usually needed to render high dynamic range scenes well. Our algorithms efficiently process HDR images with different keys and different content.
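A minimal sketch of the "render before demosaic" idea: a retinal-style, Naka-Rushton-like compression with a mixed local/global adaptation level applied directly to the single-channel RAW mosaic, so each pixel is processed once before demosaicing. The kernel size, mixing weight, and functional form are assumptions, not the authors' operator.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_map_mosaic(raw, sigma_local=15.0, k=0.5):
    """Local + global tone compression applied to a linear HDR CFA mosaic."""
    global_level = raw.mean()                            # global adaptation level
    local_level = gaussian_filter(raw, sigma_local)      # local adaptation level
    adapt = k * local_level + (1 - k) * global_level     # mixed adaptation
    return raw / (raw + adapt + 1e-12)                   # Naka-Rushton-style compression

hdr_mosaic = np.random.lognormal(mean=0.0, sigma=2.0, size=(128, 128))
ldr_mosaic = tone_map_mosaic(hdr_mosaic)                 # demosaic afterwards
```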
From image retrieval to image classification, all research shares one common requirement: a good image database to test or train the algorithms. In order to create a large database of images, we set up a project that allowed gathering a collection of more than 33000 photographs with keywords and tags from all over the world. This project was part of the "We Are All Photographers Now!" exhibition at the Musée de l'Élysée in Lausanne, Switzerland. The "Flux," as it was called, gave all photographers, professional or amateur, the opportunity to have their images shown in the museum. Anyone could upload pictures on a website. We required that some simple tags be filled in; keywords were optional. The information was collected in a MySQL database along with the original photos. The pictures were projected at the museum at five-second intervals. A webcam snapshot was taken and sent back to the photographers via email to show how and when their image was displayed at the museum.
During the 14 weeks of the exhibition, we collected more than 33000 JPEG pictures with tags and keywords. These pictures come from 133 countries and were taken by 9042 different photographers. This database can be used for non-commercial research at EPFL. We present some preliminary analysis here.