In this article we discuss the trade-offs in the design, fabrication and interfacing of fast pixel-addressable (random-access) cameras. To benefit most from the random addressability, the interface must be optimized for access through a data-bus/address-bus structure. Measures to correct the camera's inherent non-uniformity must not reduce the interface speed.
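As a rough illustration of the access pattern such an interface supports (the class, names and correction scheme below are our own behavioral model, not taken from the paper), a read can be modeled as one address-bus write followed by one data-bus read, with any fixed-pattern correction applied as a constant-time per-pixel offset so that it does not slow the access:

```python
class RandomAccessSensor:
    """Behavioral sketch of a bus-style interface to a pixel-addressable
    sensor: a read is one address-bus write plus one data-bus read
    (hypothetical model, arbitrary units)."""

    def __init__(self, frame):
        self.frame = frame      # 2-D list standing in for the pixel array
        self.offsets = None     # per-pixel fixed-pattern-noise correction

    def calibrate(self, dark_frame):
        # Store per-pixel offsets once; applying them at read time is a
        # single addition, so the random-access speed is unaffected.
        self.offsets = [[-d for d in row] for row in dark_frame]

    def read(self, row, col):
        value = self.frame[row][col]
        if self.offsets is not None:
            value += self.offsets[row][col]
        return value
```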
A set of test structures designed to characterize and compare the performance of CMOS passive and active pixel image sensors is presented. The test structures are designed so that they can be rapidly ported from one process to another. They are also designed so that individual photodetectors and pixel circuits, as well as entire image sensor arrays, can be characterized and compared on the basis of: quantum efficiency, spectral response, fixed pattern noise, sensitivity, blooming, input-referred read noise, reduction of quantum efficiency caused by silicide/salicide, lag, digital switching noise sensitivity, impact ionization noise sensitivity, dynamic range, and temperature dependency of all measured parameters. Four test chips that include a variety of these structures have been built in two different 0.35 micrometer CMOS processes. The test chips include nineteen types of individual photodetectors and thirty-eight types of 64 by 64 pixel arrays. The test methodology and preliminary test results from these chips are presented.
An APS test circuit including three 32 by 32 arrays with photodiode and photogate pixels has been developed using a 1.2 micrometer double-layer-polysilicon double-layer-metal CMOS process. The first experimental results were published at the AeroSense conference in Orlando (April 1996). In this paper we present the latest experimental results, including radiation hardness, quantum efficiency and spot-scan pixel sensitivity.
The main goal of the ESPRIT project 'microintegrated intelligent optical sensor systems' (MInOSS) was to investigate a design methodology for optical sensor systems. The methodology was applied to the design of a library of modules and general building blocks in a standard CMOS technology aimed at easing the design of future optical sensors. A set of demonstrators was developed, including a linear array of sensors for spectrophotometry and a number of 2D sensor arrays for use in 'intelligent' digital cameras. The main results of the project reviewed in this paper include a library of photodiode arrays and charge amplifiers; three-step flash and algorithmic analog-to-digital converters for on-chip conversion; the architectures of the linear and 2D intelligent sensors that were developed; and guidelines for the practical design of photosensors and pixel arrays in a mixed analogue/digital/optical environment.
On standard CMOS processes, basically two photosensors may be designed: photodiodes or vertical bipolar phototransistors. A trade-off must be found between the area of the sensor, its sensitivity and its bandwidth. In most designs the high sensitivity of the sensor is a key point, which leads to choosing a phototransistor-based solution. However, this choice is made at the expense of the bandwidth of the sensor. For small currents, an analysis shows that the bandwidth is mainly proportional to the collector current and inversely proportional to the base-emitter capacitance Cbe. Hence, in the case of a floating-base bipolar and for a given current, the only way of reducing Cbe is to decrease the emitter area. On the other hand, the sensitivity is to be preserved. We have proposed and tested an original sensor based on the splitting of phototransistors. The basic idea is to use minimum-size-emitter bipolar transistors and to increase their collector-base junction perimeter. Thanks to this design, for a given sensor area, the bandwidth has been improved by a factor of 3 while the sensitivity has been preserved. This solution has been successfully used in an operational retina performing stochastic computations at video rates. In particular, thanks to our design, we have been able to implement a 150 × 50 μm² optoelectronic random generator providing up to 100,000 random variables per second.
The pixel-level adaptive sensitivity technology enables image sensors to acquire wide dynamic range scenes without loss of detail, by adjusting the sensitivity of each individual pixel according to the intensity of light incident upon it. An adaptive sensitivity TDI (time delay and integrate) CCD sensor test circuit has been designed and fabricated. The sensor comprises 18 TDI integration stages, with a horizontal resolution of 32 pixels. The level of charge integrated in each pixel is monitored as the pixel charge packet progresses across the TDI array. If the charge accumulates to above a certain threshold level, the pixel is discharged. Such 'conditional reset' mechanisms are inserted after the thirteenth stage and again after the seventeenth stage. Thus, each individual pixel may be integrated over either 1, 5, or all 18 stages. Since in TDI scanning, as in all linear imaging situations, there is no concept of 'frames' and each pixel is imaged only once, the intensity sensing and the decision on how long to integrate must be performed 'on the fly.' But, while in regular linear sensors the perpendicular fill factor is unlimited and complex control circuits may be placed next to the detectors, the two dimensional nature of TDI sensors presents much more demanding architectural and circuit challenges.
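The conditional-reset scheme can be sketched behaviorally as follows (units and threshold are arbitrary; in the real device the decision is made in the charge domain as the packet moves through the array):

```python
def tdi_conditional_reset(intensity, stages=18, reset_after=(13, 17), threshold=1.0):
    """One pixel's charge packet through an 18-stage TDI column with
    conditional resets after stages 13 and 17 (illustrative units)."""
    charge = 0.0
    effective = stages            # stages that contribute to the final packet
    for stage in range(1, stages + 1):
        charge += intensity       # one stage of integration
        if stage in reset_after and charge > threshold:
            charge = 0.0          # 'conditional reset': discharge the packet
            effective = stages - stage
    return charge, effective
```

Dim pixels keep all 18 stages; brighter ones are reset after stage 13 (5 effective stages) or after both stages 13 and 17 (1 effective stage), which reproduces the 1/5/18-stage choice described above.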
We present different compact analog VLSI motion sensors that compute the 1-D velocity of optical stimuli over a large range and are suitable for integration in focal plane arrays. They have been extensively tested and optimized for robust performance under varying light conditions. Since their output signals are only weakly dependent on contrast, they directly extract optical flow data from an image. Focal plane arrays of such sensors are particularly interesting for application in single-chip systems that perform navigation tasks for moving robots or vehicles, where light weight, low power consumption, and real-time processing are crucial. Several monolithic motion-processing systems based on such velocity sensors have been built and tested. We describe here three chips, designed for the determination of the focus of expansion, the estimation of the time to contact, and the detection of motion discontinuities respectively. The first two systems have been specifically designed for vehicle navigation tasks. The choice of this application domain allows us to make a priori assumptions about the optical flow field that simplify the structure of the systems and improve their overall performance. The motion-discontinuity-detection system can be used more generally to segment images based on the velocities of their different regions with respect to the camera. It is particularly useful for background-foreground segregation in the case of ego-motion of an autonomous system in a static environment. Test results of the three systems are presented and their performance is evaluated.
The design of a CMOS focal plane array with 128 by 128 pixels and analog neural preprocessing is presented. Optical input to the array is provided by substrate-well photodiodes. A two-dimensional neural grid with next-neighbor connectivity, implemented as a differential current-mode circuit, is capable of spatial low-pass filtering combined with contrast enhancement or binarization. The gain, spatial filter and nonlinearity parameters of the neural network are controlled externally using analog currents. This allows the multipliers and sigmoid transducers to be operated in weak inversion for a wide parameter sweep range as well as in moderate or strong inversion for a larger signal to pattern-noise ratio. The cell outputs are sequentially read out by an offset-compensated differential switched-capacitor multiplexer with column preamplifiers. The analog output buffer is designed for pixel rates up to 1 pixel/microsecond and 2 by 100 pF load capacitance. All digital clocks controlling the analog data path are generated on-chip. The clock timing is programmable via a serial computer interface. Using a 1 micrometer double-poly double-metal CMOS process, one pixel cell occupies 96 × 96 μm² and the total chip size is about 2.3 cm2. Operating the neural network in weak inversion, the power dissipation of the analog circuitry is less than 100 mW.
The paper presents a low-cost, miniature sensor that is able to compute in real time (up to 1000 frames/sec) motion parameters such as the degree of translation, expansion or rotation present in the observed scene, as well as the so-called time-to-crash (TTC), that is, the time required for a moving object to collide with the sensor. The sensing principle is to compute and analyze the optical flow projected by the scene onto the sensor focal plane, through a novel algorithmic technique based on sparse sampling of the image and one-dimensional correlation. The hardware implementation of the algorithm is based on two custom VLSI chips: one is a CMOS image sensor with a nonstandard pixel geometry, while the other is a digital correlator that computes the optical flow vectors at high speed. The high-level control and communication tasks are managed by a microcontroller, thus guaranteeing a high level of flexibility and adaptability of the sensor properties to different application requirements and/or variable external conditions.
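The TTC itself needs no range information: to first order it is the ratio of an object's image size to its rate of expansion. A minimal sketch of that relation (the chip derives the expansion from optical flow rather than from explicit object sizes):

```python
def time_to_crash(size_prev, size_curr, dt):
    """First-order TTC estimate from the relative expansion of an object's
    image size between two frames: TTC ≈ s / (ds/dt)."""
    expansion = (size_curr - size_prev) / dt   # image-size growth rate
    if expansion <= 0:
        return float('inf')                    # object not approaching
    return size_curr / expansion
```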
To navigate in an unknown environment, many natural species appear to rely primarily on motion information. Accordingly, the visual system of insects, arguably one of the simplest, is geared towards the detection of motion, which is implemented in mechanisms present at an early stage of visual processing. The interpretation of motion information may then produce percepts which are useful to navigation, such as the warning of an impending collision and the estimation of the distance to potential obstacles through egomotion. The original objective of the badge project was to demonstrate the feasibility of copying motion perception mechanisms onto a mixed-mode analog/digital VLSI device. The initial success of the project led to other devices being designed with a view to improving the performance and stability of the analog circuitry. In retrospect, the original concepts appear to remain the most robust, which suggests that the most effective approach consists of focusing on the simple but fundamental characteristics; this is supported by electrophysiological evidence.
A new image sensor, using CMOS technology, has been designed and fabricated. The pixel distribution of this sensor follows a log-polar mapping: the pixel concentration is maximum at the center and decreases towards the periphery, giving a resolution of 56 rings with 128 pixels per ring. The design of this kind of sensor raises special issues arising from the space-variant nature of the pixel distribution. The main one is the varying pixel size, which requires scaling mechanisms to achieve the same output independently of the pixel size. This paper presents some study results on the scaling mechanisms of such sensors. A mechanism for current scaling is presented. This mechanism has been studied along with the logarithmic response of this special kind of sensing cell. The chip has been fabricated using a standard 0.7 micrometer CMOS technology.
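The log-polar mapping itself can be sketched as follows (the fovea radius r0 and the geometric growth factor are illustrative assumptions; the ring and sector counts match the sensor described above):

```python
import math

def log_polar_index(x, y, r0=1.0, growth=1.05, rings=56, sectors=128):
    """Map a Cartesian image point to a (ring, sector) pixel index on a
    log-polar grid; r0 and growth are illustrative parameters."""
    r = math.hypot(x, y)
    if r < r0:
        return None                           # inside the central fovea
    ring = int(math.log(r / r0) / math.log(growth))
    if ring >= rings:
        return None                           # outside the outermost ring
    sector = int((math.atan2(y, x) % (2 * math.pi)) / (2 * math.pi) * sectors)
    return ring, sector
```

Ring width grows geometrically with eccentricity, so peripheral pixels are physically larger, which is exactly why the current-scaling mechanism studied in the paper is needed to equalize their outputs.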
The authors present a novel technique for color detection using a buried double pn junction (B.D.J.) and a buried triple pn junction (B.T.J.) structure. For the B.D.J., wavelength-dependent photocurrents I1 and I2 can be measured, and the wavelength of monochromatic incident light can be identified from the ratio I2/I1. In the case of the B.T.J., the three colorimetric components of the incident light can be extracted from its wavelength-dependent photocurrents. These structures can be implemented in standard CMOS and BiCMOS technology respectively.
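Since the ratio I2/I1 varies monotonically with wavelength, identification reduces to inverting a measured calibration curve. A sketch with made-up calibration points (the real curve depends on the actual junction depths and must be measured):

```python
def wavelength_from_ratio(ratio, calibration):
    """Estimate the wavelength of monochromatic light from the photocurrent
    ratio I2/I1 by linear interpolation over (ratio, wavelength_nm)
    calibration pairs; clamps outside the calibrated range."""
    pts = sorted(calibration)                 # sort by ratio
    if ratio <= pts[0][0]:
        return pts[0][1]
    if ratio >= pts[-1][0]:
        return pts[-1][1]
    for (r0, w0), (r1, w1) in zip(pts, pts[1:]):
        if r0 <= ratio <= r1:
            return w0 + (w1 - w0) * (ratio - r0) / (r1 - r0)
```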
A system of adaptive photoreceptors has been designed and built in a standard CMOS process. The mechanism of adaptation is based on an analog feedback circuit modelled after the biological example. The system exhibits a large dynamic range of approximately 7 orders of magnitude in light intensity and a pronounced capability to detect moving objects. Simulations and measurements with single adaptive receptors as well as first experiences with a camera system are presented.
Biological information-processing systems are able to solve difficult problems in sensory perception. The retina, the most studied structure in vision, is our main biological reference. A retina model including the main synaptic interactions in the outer plexiform layer (OPL) of the vertebrate retina is briefly presented. Emphasis is on the coupling between photoreceptors (cones), which perform a spatiotemporal regularization that increases the signal-to-noise ratio of the output signal. We have built and tested a set of VLSI retinas. Focus is on the circuits, the design methodology, and the limiting constraints in our fabrication technology. Some experimental results from the fabricated chips are presented.
We propose a motion adaptive sensor for image enhancement and wide dynamic range, whose computational elements are based on a column-parallel architecture. The proposed sensor not only operates at high frequency but also detects motion and saturation of the charge stored on the photodiode independently, pixel by pixel. The motion adaptive sensor is able to set a suitable storage time for each pixel, which results in neither motion blur nor saturation; it is thus expected to provide high temporal resolution in moving areas, high SNR in static areas, and consequently a wide dynamic range. In this paper, we discuss the principle and design of the motion adaptive sensor and some simulation results.
Implementing motion detection algorithms using analog VLSI techniques has proven to be a challenging task due to several obstacles, including the limitations of analog VLSI, the algorithmic limitations brought forward by complex motion detection schemes, and the effect of various types of noise. Insect vision has been an inspiring model for motion detectors, as insects rely heavily on motion detection for navigation, and the natural complexity of their neuro-visual circuitry is less than that of vertebrates. In an effort to implement the so-called template model of insect vision, a comparative study of various analog differentiators was undertaken by implementing different candidates on a test chip. Based on the results, a 64 by 4 motion detector has been designed and fabricated. The chip is designed in a 0.8 micrometer 3M-1P CMOS process, and the 2-D array occupies an area of 1.5 × 3.1 mm². Each cell comprises a bipolar-mode photodetector, an adaptive amplifier, the improved analog differentiator, and thresholding circuits.
We propose a novel integration of compression and sensing in order to enhance the performance of the image sensor. By integrating the compression function on the sensor plane, the image signal that has to be read out from the sensor is significantly reduced, which increases the achievable pixel rate of the sensor. The compression scheme we use is conditional replenishment, which detects and encodes only moving areas. In this paper, we discuss the design and implementation of two architectures for on-sensor compression, one pixel-parallel and the other column-parallel, and compare the two approaches.
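Conditional replenishment can be sketched as follows, simplified here to the pixel level (the sensor works on moving areas, and the threshold value is an arbitrary assumption):

```python
def replenish(reference, current, threshold=10):
    """Conditional replenishment, simplified to pixel level: only samples
    that changed by more than the threshold are encoded as (address, value)
    pairs, and the reference is updated to match the decoder's copy."""
    updates = []
    new_ref = list(reference)
    for i, (r, c) in enumerate(zip(reference, current)):
        if abs(c - r) > threshold:
            updates.append((i, c))    # this is all that leaves the sensor
            new_ref[i] = c
    return updates, new_ref
```

Only the `updates` list is read out, which is how the scheme reduces the off-sensor signal for mostly static scenes.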
This paper describes the development of a compact digital CCD camera that contains image digitization and processing and interfaces to a personal computer (PC) via a standard enhanced parallel port. Precise digitization of pixel samples, together with a single-chip FPGA for data processing, forms the main digital stage of the camera prior to sending the data to the PC. A compression scheme is applied so that the digital images can be transferred within the existing parallel port bandwidth. The data is decompressed in the PC for real-time display of the video images using only native processor resources. Frame capture is built into the camera so that a full uncompressed digital image can be sent for special processing.
We describe a system for detecting moving objects from a moving camera using a simplified background constraint technique. The method is implemented in hardware as an intelligent camera. In order to keep the design small and able to operate at video frame rate, the algorithm is largely implemented in FPGAs. The camera moves in a locally planar environment and is tilted enough to exclude the horizon line from the image. In this paper we assume the camera moves only in translation and that the objects to detect move in the same direction as the camera (and obviously at higher speed). If the instantaneous speed of the camera is known and constant between two successively processed images, the next image to process can be largely predicted using the reverse perspective transform. In order to turn this prediction into a simple translation of the image, and to obtain a uniform spatial resolution in the observed scene, we resample the lines and apply a specific horizontal scale factor to each resampled line. From each transformed image we compute a predicted image. The predicted and true images are subtracted, and objects moving in the scene correspond to noticeable difference areas. The direction and speed of the displacement are estimated with one-dimensional correlation functions between vertical windows in the subtracted images. A study of the limitations and experimental results obtained by simulation with real images are presented.
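The per-line resampling step can be sketched in isolation (nearest-neighbour interpolation, with the scale factor supplied per row; deriving each row's factor from camera tilt and height is the part specific to the paper's geometry and is not reproduced here):

```python
def rescale_row(row, scale):
    """Resample one image row by a horizontal scale factor using
    nearest-neighbour interpolation."""
    n = len(row)
    out_len = max(1, round(n * scale))
    # map each output index back to the nearest source pixel
    return [row[min(n - 1, int(i / scale))] for i in range(out_len)]
```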
The smart camera is an intelligent compact camera system based on concepts taken from the human visual and nervous systems. Images are taken by a smart sensor which performs the first low-level image processing steps. Preprocessed images are then handed over to a digital signal processor (DSP), which is responsible for high-level image processing and further reduces the image data to the essential information. Based on this data, the camera can be programmed to make decisions autonomously, e.g. halt an assembly line or remove an object from the line. Because of its visual capabilities and built-in intelligence, the camera is independent of supervising systems, although it can update or alert a supervising system if necessary.
A modular image capture system with close integration to CCD cameras has been developed. The aim is to produce a system combining CCD sensor, image capture and image processing in a single compact unit. This close integration provides a direct mapping between CCD pixels and digital image pixels. The system has been interfaced to a digital signal processor board for the development and control of image processing tasks. These have included characterization and enhancement of noisy images from an intensified camera, and measurement to subpixel resolutions. A highly compact form of the image capture system is at an advanced stage of development. It consists of a single FPGA device and a single VRAM, providing a two-chip image capture system capable of being integrated into a CCD camera. A miniature compact PC has also been developed using a novel modular interconnection technique, providing a processing unit in a three-dimensional format highly suited to integration into a CCD camera unit. Work is under way to interface the compact capture system to this PC using the same interconnection technique, combining CCD sensor, image capture and image processing into a single compact unit.
Electronic still picture camera systems, mostly using CCD sensors, are ever more widely used both in the photographic world and in professional measurement applications. It is therefore necessary to develop adequate measuring facilities for the characterization and quality control of high-end measuring or imaging systems. In this presentation, a versatile measuring set-up for electronic still picture cameras is described. Several standards are being developed for the characterization of electronic still picture cameras that, among other things, define methods for measuring the spatial frequency response (SFR) using a test chart and the modulation transfer function (MTF). Exemplary results are shown for the measurement of SFR and MTF of black-and-white and color CCD still cameras, and the two properties are compared to one another.
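For reference, the MTF is the normalized magnitude of the Fourier transform of the line-spread function (LSF); the SFR methods in the emerging standards estimate that LSF from a slanted-edge chart. A direct-DFT sketch of the final step:

```python
import cmath

def mtf_from_lsf(lsf):
    """MTF as the magnitude of the DFT of the line-spread function,
    normalized to the zero-frequency value."""
    n = len(lsf)
    mags = []
    for k in range(n // 2 + 1):           # frequencies up to Nyquist
        s = sum(lsf[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
        mags.append(abs(s))
    return [m / mags[0] for m in mags]
```

An ideal (delta-function) LSF yields an MTF of 1 at all frequencies; any blur widens the LSF and pulls the high-frequency response down.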
Due to their local connectivity and wide functional capabilities, cellular nonlinear networks (CNN) are excellent candidates for the implementation of image processing algorithms using VLSI analog parallel arrays. However, the design of general-purpose, programmable CNN chips with the dimensions required for practical applications raises many challenging problems for analog designers, basically because large silicon area means large development cost, large spatial deviations of design parameters and low production yield. CNN designers must address these issues to maintain an adequate accuracy level and production yield together with a reasonably low development cost in large CNN chips. This paper outlines some of these major issues and their solutions.
Near-sensor image processing (NSIP) is a concept in which the temporal behavior of the photodiode is used to perform image processing. It has been shown that many conventional image processing operations, such as convolution and gray-scale morphology, can easily be implemented in NSIP. In this paper we describe the basis of NSIP and how the sensor/processor architecture is used to perform local as well as global operations. An implementation of an NSIP chip is also described. Finally, we show a number of algorithms and applications which have been implemented in our NSIP camera system.
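The photodiode's temporal behavior exploited here is essentially time-to-threshold readout: a precharged diode discharges at a rate proportional to the incident light, and the per-pixel processor observes the crossing time rather than a sampled voltage. A behavioral sketch with arbitrary constants:

```python
def time_to_threshold(intensity, v_pre=3.3, v_th=1.0, drop_per_step=0.01,
                      max_steps=10000):
    """Steps until a precharged photodiode discharges below the comparator
    threshold; the discharge per step is proportional to the incident
    intensity (all constants are illustrative)."""
    v, t = v_pre, 0
    while v >= v_th and t < max_steps:
        v -= intensity * drop_per_step   # photocurrent discharges the node
        t += 1
    return t
```

Brighter pixels cross the threshold sooner, so thresholding, ordering and many gray-scale operations reduce to logic on these binary crossing events over time.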
This paper presents a comparative analysis of different analog-to-digital conversion architectures optimized for operation in close coupling with optical sensor arrays under stringent design constraints such as signal and noise levels, conversion rates and the physical size of the array. Architectures based on a single converter per array and on multiple converters per array are considered. Measurement results on dedicated converters integrated in experimental chips together with optical arrays have proved the validity of the architectures presented, with different trade-off points in terms of power consumption, conversion rate and spatial uniformity.
Collecting global information from a large number of computing elements is a challenge in computing structures encountered within monochip parallel architectures. In this paper we point out that examining the current drawn from the power supply can provide valuable information in such structures. The examples shown here in the area of artificial retinas aim at extracting results from pre-processed images as well as the histogram of the actual image while it is acquired.
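The histogram-while-acquiring idea can be emulated in software. A unit current per pixel and an ideal current measurement are simplifying assumptions here:

```python
import numpy as np

def histogram_from_supply_current(pixels, levels):
    """Emulate histogram extraction by 'measuring' the supply current while
    a global threshold ramps through all gray levels: each pixel whose value
    is at or above the threshold sinks one unit of current, so the measured
    current is a cumulative count; successive differences give the histogram."""
    current = [int(np.sum(pixels >= t)) for t in range(levels)]
    return [current[t] - (current[t + 1] if t + 1 < levels else 0)
            for t in range(levels)]

# Four pixels with gray values {0, 1, 1, 3} over 4 levels.
pixels = np.array([0, 1, 1, 3])
hist = histogram_from_supply_current(pixels, 4)
```

Only one global analog quantity per threshold step needs to leave the array, which is the point of reading the power supply instead of every pixel.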
In this paper, an analog retina-based stereo vision system for car navigation is presented. This stereo vision system uses two charge-mode spatio-frequential visual feature extraction retinas to obtain matching primitives and a digital processor to perform stereo matching and disparity computation. By using the quarter-cycle rule, all the computations are local, implemented in parallel either on the feature extraction retinas or on the digital processor. A prototype system has shown a potential performance of 400 images per second at 256 by 256 resolution.
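As a generic stand-in for the matching step (the quarter-cycle rule is specific to the paper's frequential features and is not reproduced here), a purely local scanline matcher can be sketched as:

```python
import numpy as np

def local_disparity(left, right, max_disp, win=1):
    """Local stereo matching along one scanline: for each left-image
    position, choose the disparity whose right-image window minimizes
    the sum of absolute differences. All decisions use only a small
    neighborhood, so the work parallelizes across positions."""
    n = len(left)
    disp = np.zeros(n, dtype=int)
    for x in range(win, n - win):
        best_cost, best_d = float("inf"), 0
        for d in range(min(max_disp, x - win) + 1):
            cost = np.abs(left[x - win:x + win + 1]
                          - right[x - d - win:x - d + win + 1]).sum()
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp

# A feature shifted by 2 pixels between the views yields disparity 2 there.
right = np.array([0, 0, 5, 9, 5, 0, 0, 0, 0, 0], dtype=float)
left = np.roll(right, 2)
disp = local_disparity(left, right, 4)
```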
The present level of VLSI technology allows the design of reasonably large FPAs in which each photodetector is associated with a processing structure. Furthermore, this structure may be programmable, turning the FPA into an image processing architecture with parallel optical input. With the advent of such devices, dramatic on-sensor information concentration as well as ultrafast reactions may be obtained, thanks to the close coupling between phototransduction and processing operators. How to take full advantage of these remarkable abilities within vision systems raises architectural problems which are addressed here both conceptually and experimentally.
This paper presents a special-purpose VLSI architecture for dominant point extraction along 2-D contours. It is designed to be integrated as part of a machine vision system with real-time edge-extraction and edge-tracking capabilities in order to allow the creation of a high-level database representation of the observed scene. Such dominant points carry useful information for shape analysis and pattern recognition applications since they represent a local shape property and segment object contours into piecewise linear segments and circular arcs. The proposed architecture implements an algorithm based on the curvature primal sketch. It consists of a set of 1-D systolic FIR filters performing a multiresolution analysis of the scene's object contours, a set of finite state machines extracting zero-crossings and extrema of the filtered data, and a set of scale-space integration cells combining the accurate locations provided by the finest filters with the noise rejection properties of the coarsest ones in order to reliably extract relevant dominant points with accurate localization. The overall architecture has been successfully simulated using real edge images. Some of these results are presented and discussed.
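A software sketch of the underlying curvature-extrema idea follows, using Gaussian smoothing at a single scale on a toy square contour; the actual architecture uses banks of systolic FIR filters across many scales and combines them in scale space:

```python
import numpy as np

def smooth(z, sigma):
    """Circular Gaussian smoothing of one contour coordinate (FFT
    convolution here stands in for the hardware's 1-D FIR filters)."""
    n = len(z)
    k = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / sigma ** 2)
    k /= k.sum()
    return np.real(np.fft.ifft(np.fft.fft(z) * np.fft.fft(np.fft.ifftshift(k))))

def curvature(x, y):
    """Signed curvature of a closed contour via periodic central differences."""
    dx = (np.roll(x, -1) - np.roll(x, 1)) / 2
    dy = (np.roll(y, -1) - np.roll(y, 1)) / 2
    ddx = np.roll(x, -1) - 2 * x + np.roll(x, 1)
    ddy = np.roll(y, -1) - 2 * y + np.roll(y, 1)
    return (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5

# Toy square contour (corners at samples 0, 50, 100, 150): after smoothing,
# curvature extrema sit at the corners, which are the dominant points.
s = np.linspace(0, 1, 50, endpoint=False)
x = np.concatenate([s, np.ones(50), 1 - s, np.zeros(50)])
y = np.concatenate([np.zeros(50), s, np.ones(50), 1 - s])
kappa = np.abs(curvature(smooth(x, 3.0), smooth(y, 3.0)))
```

The finite state machines of the architecture then detect these extrema (and zero-crossings) in the filtered streams, rather than computing curvature in floating point as done here.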
The main objective of IBIDEM is to develop a video phone useful for speech reading by hearing-impaired people, based on a new generation of the space-variant sensor and using standard telephone lines. The space-variant nature of the sensor allows it to have high resolution in the area of interest, lips or fingers, while still maintaining a wide field of view in order to perceive, for example, the facial expression of the interlocutor, while drastically reducing the amount of data to be sent over the line. A second objective of IBIDEM is the use of the same equipment for remote monitoring of health status. The system can be used for obtaining information about the status of a client in the form of images and could be extended to include various physiological parameters such as heart rate, blood pressure, etc. The IBIDEM project is constructing a video phone using a camera with the retina-like sensor, a motorized system for moving the point of view of the camera, as well as a display for the transmitted images. This video phone will be a high-quality, low-cost aid for the hearing impaired as well as being useful for remote monitoring.
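The data-reduction argument of a retina-like sensor can be sketched as follows. The ring and sector counts and the radii are illustrative choices; the actual IBIDEM sensor geometry differs:

```python
import numpy as np

def logpolar_samples(rings, sectors, r_min, r_max):
    """Sample centers of a log-polar ('retina-like') layout: ring radii grow
    geometrically, so resolution is high near the fovea and falls off toward
    the periphery while the field of view stays wide."""
    radii = r_min * (r_max / r_min) ** (np.arange(rings) / (rings - 1))
    angles = 2 * np.pi * np.arange(sectors) / sectors
    xs = np.outer(radii, np.cos(angles))
    ys = np.outer(radii, np.sin(angles))
    return xs, ys, radii

# 64 rings x 64 sectors = 4096 samples span a radius-128 field of view;
# a uniform grid at the foveal (radius-1) sampling density would need
# orders of magnitude more pixels to cover the same field.
xs, ys, radii = logpolar_samples(64, 64, 1.0, 128.0)
```

This geometric growth of ring spacing is what lets the camera track a speaker's lips at high resolution while transmitting little enough data for a standard telephone line.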
We propose a new method to improve the design of electro-optical imaging systems using an end-to-end model of the imaging system and a combination of image quality criteria. First, we use an imaging system simulator to produce an output image, which is a distorted version of the input scene. Second, we calculate an objective score for the quality of the imaging system for each parameter set, and compare the results with human evaluations of image quality. This allows us to calibrate our imaging system quality measure (ISQM). Finally, the ISQM is used as a tool to improve system design without any human observer evaluation.
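The end-to-end loop can be mimicked in a few lines. The Gaussian MTF, the additive noise model, and an RMSE-based score are stand-ins for the paper's simulator and its calibrated ISQM:

```python
import numpy as np

def simulate_system(scene, blur_sigma, noise_sigma, rng):
    """Toy end-to-end imaging model: a Gaussian MTF (optics/detector blur)
    applied in the frequency domain, then additive Gaussian read noise."""
    n = scene.shape[0]
    f = np.fft.fftfreq(n)
    mtf = np.exp(-2 * (np.pi * blur_sigma) ** 2 * (f[:, None] ** 2 + f[None, :] ** 2))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * mtf))
    return blurred + rng.normal(0.0, noise_sigma, scene.shape)

def quality_score(scene, output):
    """Stand-in objective metric (negative RMSE, higher is better); the
    paper calibrates its ISQM against human judgments, not attempted here."""
    return -np.sqrt(np.mean((output - scene) ** 2))

# Rank two candidate designs of the simulated system on a test scene.
rng = np.random.default_rng(0)
scene = (np.indices((64, 64)).sum(0) % 2).astype(float)  # checkerboard target
sharp = quality_score(scene, simulate_system(scene, 0.5, 0.01, rng))
soft = quality_score(scene, simulate_system(scene, 2.0, 0.01, rng))
```

Once such a score is calibrated against observers, candidate parameter sets can be ranked automatically, which is the design-loop speed-up the paper aims at.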
The MARC project (methodology of art reproduction in color), funded by the EC and involving eight European partners, aimed to improve the quality of reproduction in art catalogues and books. Those who are familiar with the nature of the problem will be pleased to hear this, because the color of reproductions often bears little resemblance to that of the original paintings. For the general public, as well as for the scholar, this has been an annoying but nevertheless unavoidable characteristic of conventional photographic and printing techniques. With the advent of digital technology came the promise of revolutionary innovations, which we regarded as a great challenge. Besides the numerous improvements and simplifications offered by digital techniques, we hope to be able to use the MARC digital image acquisition technology to replace the repeated photographing of paintings, thereby significantly reducing the burden placed upon them. Beyond this, the digital image will be useful for many other purposes. As no camera was available with a resolution equivalent to a 24 cm by 18 cm Ektachrome film, the format usually used for high-quality reproduction of paintings, one of the tasks of the project was to develop an electronic camera with 20,000 by 20,000 pixels and a color depth of 3 by 12 bits. In the absence of charge-coupled device (CCD) sensors that come even close to such a high resolution, a novel imaging technology combining micro- and macro-scanning with a CCD area sensor was developed. To avoid revolving color filters, this sensor is equipped with a color mosaic mask with filter characteristics closely matched to a linear combination of the XYZ spectral response curves defined by CIE 1931. The features of this camera are covered in great detail. The second step was to characterize the colorimetric response of the printing press.
As the final result, a book, 'Flemish Baroque Painting, Masterpieces of the Alte Pinakothek München', with over 50 paintings and digitally magnified details, was printed and will be shown at the conference.
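The benefit of filters matched to a linear combination of the XYZ curves can be sketched numerically: if the raw channel responses are a fixed linear mix of XYZ, colorimetric values are recovered exactly by a 3x3 inversion, with no spectral estimation error. The matrix below is an invented example, not the MARC camera's characterization:

```python
import numpy as np

def sensor_to_xyz(raw, M):
    """When the mosaic filters are, by design, linear combinations of the
    CIE 1931 XYZ matching curves, raw = M @ xyz for a fixed 3x3 matrix M,
    so XYZ is recovered exactly by solving the linear system."""
    return np.linalg.solve(M, raw)

# Invented mixing matrix and a test color (illustrative values only).
M = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.1, 0.9]])
xyz_true = np.array([0.4, 0.5, 0.3])
raw = M @ xyz_true            # what the sensor would measure
xyz = sensor_to_xyz(raw, M)   # exact colorimetric recovery
```

With arbitrary (unmatched) filters, no such exact 3x3 recovery exists for all spectra, which is why the filter design was matched to CIE 1931 in the first place.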
The introduction of intelligence within focal plane arrays (FPAs) leads to sensory devices, called artificial retinas, which no longer output images but rather lists (or other data structures) of active pixels or points of interest. This results from on-sensor image processing, which may extend to some structural pattern recognition. New, efficient communication operators and techniques are then needed. A novel approach is proposed that allows pixel addresses to be encoded in the pixel neighborhood using a single bit of information per pixel. Addresses are hidden in a 2-D two-valued lattice which features remarkable mathematical properties. By embedding such a structure within an artificial retina, it becomes possible to locate active pixels by asking a few global questions of the whole array and collecting the answers using a global OR.
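A simplified cousin of the idea can be sketched in software: locating a single active pixel by global-OR answers to questions about address bits. This is not the paper's two-valued lattice (which handles the general case with one stored bit per pixel), but it shows how few global queries are needed:

```python
import numpy as np

def locate_single_active(active):
    """Locate the one active pixel in an n x n boolean array using only
    global-OR answers: for each address bit k, ask 'is any active pixel
    in a row (column) whose index has bit k set?' and assemble the bits."""
    n = active.shape[0]
    bits = n.bit_length() - 1      # log2(n) questions per coordinate
    row = col = 0
    for k in range(bits):
        mask = (np.arange(n) >> k) & 1                   # indices with bit k set
        row |= int(np.any(active[mask == 1, :])) << k    # one global OR
        col |= int(np.any(active[:, mask == 1])) << k    # one global OR
    return row, col

# A single active pixel in a 16x16 array is found with 2*log2(16) = 8
# one-bit answers instead of reading out 256 pixels.
a = np.zeros((16, 16), dtype=bool)
a[5, 11] = True
```

With several simultaneously active pixels these bitwise questions alias, which is the case the paper's lattice coding is designed to resolve.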