In this work, we investigate 2D face biometrics to secure areas requiring a high security level. Different approaches based on emerging deep learning methods (more precisely, transfer learning) as well as two classical machine learning techniques (Support Vector Machines and Random Forests) have been used to perform person authentication. Preprocessing filtering steps on the input images have been included before feature extraction and selection. The goal has been to compare these approaches in terms of processing time, storage size, and authentication accuracy, according to the number of input images used for learning and the preprocessing tasks applied. We focus on data-related aspects in order to store biometric information on a remote card with low storage capacity (10 KB), not only in a high-security context but also in terms of privacy control: the proposed solutions guarantee users control over their own biometric data. The study highlights the impact of preprocessing on achieving real-time computation while preserving relevant accuracy and reducing the amount of biometric data. Considering application constraints, the study concludes with a discussion of the tradeoff between available resources and required performance to determine the most appropriate method.
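As a rough illustration of the verification step, the sketch below matches a probe feature vector against the template stored on the card using a cosine-similarity threshold. This is a simplified stand-in, not the paper's pipeline: the actual system extracts features with transfer learning and classifies with an SVM or Random Forest, and the vectors, names, and threshold here are hypothetical.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def authenticate(stored_template, probe_features, threshold=0.9):
    # Accept the probe if it is close enough to the template stored on the card.
    return cosine_similarity(stored_template, probe_features) >= threshold

# Hypothetical 4-dimensional feature vectors; real systems use hundreds of
# CNN-derived dimensions, quantized to fit the ~10 KB card budget.
template = [0.1, 0.8, 0.3, 0.5]
genuine = [0.12, 0.79, 0.31, 0.48]
impostor = [0.9, 0.1, 0.7, 0.05]
```

Keeping only a compact feature template on the card, rather than raw images, is what makes the low-storage and privacy constraints compatible with on-card matching.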
Despite the evolution of technologies, designing high-quality image acquisition systems remains a complex challenge. Indeed, during the acquisition process, the recorded image does not fully represent the real visual scene: the recorded information can be partial due to dynamic range limitations and degraded due to distortions of the acquisition system. Typically, these issues have several origins, such as lens blur or the limited resolution of the image sensor. In this paper, we propose a full image enhancement system that includes lens blur correction based on non-blind deconvolution, followed by spatial resolution enhancement based on a Super-Resolution technique. The lens correction has been implemented in software, whereas the Super-Resolution has been implemented both in software and in hardware (on an FPGA). The two processing steps have been validated using well-known image quality metrics, highlighting improvements in the quality of the resulting images.
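As a minimal illustration of non-blind deconvolution, the sketch below applies a 1D Wiener filter with a known blur kernel. The paper's lens-blur correction operates on 2D images, so this is only a toy model of the principle; the signal and kernel values are made up.

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform (O(N^2), fine for a toy example).
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def wiener_deconvolve(y, h, K=1e-6):
    # X_hat = Y * conj(H) / (|H|^2 + K): inverts the known blur, while the
    # constant K keeps the division stable where H is small (noise control).
    Y, H = dft(y), dft(h)
    X = [Y[k] * H[k].conjugate() / (abs(H[k]) ** 2 + K) for k in range(len(y))]
    return [v.real for v in idft(X)]

# Circularly blur a 1D "signal" with a known 3-tap kernel, then restore it.
signal = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0]
kernel = [0.6, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2]  # symmetric blur, centered at index 0
N = len(signal)
blurred = [sum(signal[(n - m) % N] * kernel[m] for m in range(N)) for n in range(N)]
restored = wiener_deconvolve(blurred, kernel)
```

With no added noise and a kernel whose spectrum has no zeros, the restoration is essentially exact; on real sensor data the regularization constant K trades sharpness against noise amplification.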
Proc. SPIE. 9897, Real-Time Image and Video Processing 2016
KEYWORDS: Signal to noise ratio, Digital signal processing, Modulation, Sensors, Image processing, Digital filtering, Photodiodes, Medical imaging, Image sensors, Transistors, CMOS technology, Analog electronics, Semiconducting wafers, Digital electronics, Double positive medium
This paper presents a digital pixel sensor (DPS) integrating a sigma-delta analog-to-digital converter (ADC) at the pixel level. The digital pixel includes a photodiode, a sigma-delta modulator, and a digital decimation filter. It features an adaptive dynamic range and multiple resolutions (up to 10 bits) with high linearity. Specific row and column decoders are also designed to allow reading any chosen pixel in the matrix together with its 4 × 4 neighborhood. Finally, a complete design in the CMOS 130 nm 3D-IC FaStack Tezzaron technology is also described, revealing a high fill factor of about 80%.
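The pixel-level conversion principle can be sketched as a first-order sigma-delta modulator followed by an averaging decimation filter. This toy model (made-up threshold and sample count, not the chip's actual modulator order or filter) shows how the bitstream mean recovers the analog input:

```python
def sigma_delta_bitstream(x, n_samples):
    # First-order sigma-delta modulator: the integrator accumulates the error
    # between the input (normalized to 0..1) and the 1-bit feedback; the
    # comparator output forms the oversampled bitstream.
    integrator = 0.0
    bits = []
    for _ in range(n_samples):
        integrator += x - (bits[-1] if bits else 0)
        bits.append(1 if integrator >= 0.5 else 0)
    return bits

def decimate(bits):
    # Averaging the bitstream (a crude decimation filter) recovers the input;
    # longer bitstreams give higher effective resolution.
    return sum(bits) / len(bits)
```

The adaptive resolution of the pixel follows directly from this structure: averaging more samples refines the estimate, so the readout can trade conversion time for bit depth.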
Proc. SPIE. 9897, Real-Time Image and Video Processing 2016
KEYWORDS: Human-machine interfaces, High dynamic range image sensors, Detection and tracking algorithms, Cameras, Sensors, Image processing, Video, Image resolution, Digital cameras, Field programmable gate arrays, Image sensors, Range imaging, High dynamic range imaging, Raster graphics
High dynamic range (HDR) image generation from a set of low dynamic range images taken with different exposure times is a low-cost and simple technique. It provides good results for static scenes, but temporal exposure bracketing cannot be applied directly to dynamic scenes, since camera or object motion between bracketed exposures creates ghosts in the resulting HDR image. In this paper, we describe a real-time hardware implementation of ghost removal on the high dynamic range video flow, added to our HDR FPGA-based smart camera, which is able to provide a full-resolution (1280 × 1024) HDR video stream at 60 fps. We present experimental results showing the efficiency of our implemented method in removing ghosts.
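A simple way to picture ghost detection is to compare the radiance estimates a pixel produces under each exposure: in a static scene they agree, while motion makes them diverge. The sketch below is an illustrative heuristic assuming a linear sensor, not the hardware method implemented in the camera; the tolerance value is arbitrary.

```python
def radiance(pixel_value, exposure_time):
    # Linear-sensor assumption: scene radiance ~ pixel value / exposure time.
    return pixel_value / exposure_time

def ghost_mask(frames, exposures, tol=0.2):
    # Flag pixels whose radiance estimates disagree across the bracketed
    # exposures; flagged pixels would fall back to a single reference exposure.
    mask = []
    for i in range(len(frames[0])):
        rads = [radiance(f[i], t) for f, t in zip(frames, exposures)]
        spread = (max(rads) - min(rads)) / (sum(rads) / len(rads) + 1e-9)
        mask.append(spread > tol)
    return mask
```

For example, a static pixel seen as 10, 20, 40 under exposures 1, 2, 4 yields consistent radiance estimates and is kept, while a pixel whose middle exposure jumps to 60 (an object moved through it) is flagged as a ghost.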
Standard cameras capture only a fraction of the information visible to the human visual system. This is especially true for natural scenes that include areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low-quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cost-effective solutions for enhancing the dynamic range in real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
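The HDR creation and tone mapping steps can be sketched per pixel as a weighted log-radiance average in the spirit of Debevec's technique, followed by a Reinhard-style global operator. This is a simplified software model assuming a linear sensor response (the real method first recovers the camera response curve) and is not the hardware version embedded in HDR-ARtiSt:

```python
import math

def weight(z, z_min=0, z_max=255):
    # Triangle weighting: trust mid-range pixel values most, since values near
    # the rails are under- or over-exposed.
    mid = (z_min + z_max) / 2
    return z - z_min if z <= mid else z_max - z

def merge_hdr(pixels, exposures):
    # Weighted average of the log-radiance estimates over the bracketed
    # exposures (linear response assumed for brevity).
    num = den = 0.0
    for z, t in zip(pixels, exposures):
        w = weight(z)
        num += w * (math.log(max(z, 1)) - math.log(t))
        den += w
    return math.exp(num / den)

def tone_map(l, key=0.5):
    # Reinhard-style global operator maps radiance into [0, 1) for display.
    return key * l / (1 + key * l)
```

A static pixel observed as 32, 64, 128 under exposures 1, 2, 4 produces identical radiance estimates, so the merge returns the common value; the tone mapper then compresses the merged radiance into the displayable range.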
In many applications, such as video surveillance or defect detection, the perception of information in a scene is limited in areas with strong contrasts. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make the output more visible than with fixed exposures. A real-time hardware implementation of the HDR technique that reveals more details in both dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capture and HDR video processing from three exposures. The novelty of our work lies in the following points: HDR video capture through multiple exposure control, HDR memory management, and HDR frame generation and representation in a hardware context. Our camera achieves real-time HDR video output at 60 fps with a resolution of 1.3 megapixels and demonstrates the efficiency of our technique through experimental results. Applications of this HDR smart camera include the movie industry, the mass-consumer market, the military, the automotive industry, and surveillance.
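The idea behind automatic exposure selection can be illustrated with a one-step control loop that scales the exposure time toward a target mean brightness. This toy sketch is not the camera's actual exposure-control algorithm; the target value and clamping are illustrative assumptions.

```python
def next_exposure(current_exposure, frame, target_mean=128, sensor_max=255):
    # One step of a simple exposure-control loop: scale the exposure time so
    # that the next frame's mean brightness approaches the target. Running
    # several such loops with different targets yields the bracketed set.
    mean = sum(frame) / len(frame)
    mean = min(max(mean, 1), sensor_max - 1)  # avoid blow-up at the rails
    return current_exposure * target_mean / mean
```

A dark frame (mean 32) captured at 10 ms would be retried at 40 ms, while a saturated frame drives the exposure down, which is how the bracketed exposures adapt from frame to frame.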
This paper describes a high-speed analog VLSI image acquisition and pre-processing system. A 64 × 64 pixel retina is used to extract the magnitude and direction of spatial gradients from images, so the sensor implements some low-level image processing in a massively parallel way in each pixel. Spatial gradients and various convolutions, such as the Sobel filter or the Laplacian, are described and implemented on the circuit. The retina carries out, massively in parallel at the pixel level, various operations based on a four-quadrant multiplier architecture. Each pixel includes a photodiode, an amplifier, two storage capacitors, and an analog arithmetic unit. A maximum output frame rate of about 10,000 frames per second with image acquisition only, and 2,000 to 5,000 frames per second with image processing, is achieved in a standard 0.35 μm CMOS process. The retina provides address-event coded outputs on three asynchronous buses: one dedicated to the gradient and the two others to the pixel values. A prototype based on this principle has been designed. Simulation results obtained with the Mentor Graphics™ software and the AustriaMicroSystems design kit are presented.
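The gradient extraction performed at the pixel level can be modeled in software by Sobel filtering. The sketch below computes the gradient magnitude and direction at an interior pixel of a small synthetic image; it illustrates the operation itself, not the analog four-quadrant-multiplier implementation.

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def filter3x3(img, kernel, r, c):
    # 3x3 correlation at an interior pixel (r, c) — the usual image-processing
    # convention for applying a Sobel mask.
    return sum(img[r + i - 1][c + j - 1] * kernel[i][j]
               for i in range(3) for j in range(3))

def gradient(img, r, c):
    # Magnitude and direction of the spatial gradient, as the retina extracts.
    gx = filter3x3(img, SOBEL_X, r, c)
    gy = filter3x3(img, SOBEL_Y, r, c)
    return math.hypot(gx, gy), math.atan2(gy, gx)

# Vertical edge: left half dark, right half bright.
img = [[0, 0, 1, 1] for _ in range(4)]
mag, direction = gradient(img, 1, 1)
```

On this vertical edge the horizontal mask responds (gx = 4) while the vertical one cancels (gy = 0), so the direction points along the x axis, which is exactly the magnitude/direction pair the sensor encodes on its gradient bus.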
Medical gamma cameras are generally based on the Anger principle. These cameras use a scintillator block coupled to a bulky array of photomultiplier tubes (PMTs). To simplify this, we designed a new integrated CMOS image sensor intended to replace the bulky PMT photodetectors. We studied several photodiode sensors including current-mirror amplifiers. These photodiodes have been fabricated in a 0.6 μm CMOS process from Austria Mikro Systeme (AMS). The sensor pixels in the array occupy 1 mm × 1 mm, 0.5 mm × 0.5 mm, and 0.2 mm × 0.2 mm areas, respectively, with a fill factor of 98%, and the total chip area is 2 mm². The sensor pixels show a logarithmic response to illumination and are capable of detecting very low light levels from a green light-emitting diode (less than 0.5 lux). These results make our sensor suitable for a new solid-state gamma camera concept.
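The logarithmic response can be illustrated with the standard subthreshold-MOS model V ≈ n·V_T·ln(I/I₀), in which equal illumination ratios produce equal voltage steps. The constants below are generic illustrative values, not measurements from the chip:

```python
import math

def log_pixel_voltage(photocurrent, i0=1e-12, n_vt=0.026):
    # Logarithmic pixel model: output voltage grows with the log of the
    # photocurrent, compressing a wide illumination range into a small swing.
    # i0 (dark-level current) and n_vt (n * thermal voltage) are illustrative.
    return n_vt * math.log(photocurrent / i0)
```

Each decade of illumination adds the same voltage step (about n·V_T·ln 10), which is why such pixels remain usable at very low light levels without saturating at high ones.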
This paper describes the main principles of a vision sensor dedicated to detecting and tracking faces in video sequences. For this purpose, a current-mode CMOS active sensor has been designed using an array of pixels amplified by current mirrors in the column amplifiers. The circuit is simulated using Mentor Graphics software with the parameters of a 0.6 μm CMOS process. The design also includes a sequential control unit whose purpose is to capture sub-windows at any location and of any size within the whole image.
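The sub-window capture driven by the sequential control unit can be emulated in software as a clipped 2D slice of the pixel array. This sketch only illustrates the addressing behavior (hypothetical frame layout, not the circuit's readout logic):

```python
def read_subwindow(frame, row, col, height, width):
    # Emulates the sequential control unit: read a sub-window of any size at
    # any position, clipped to the sensor array boundaries.
    h_max, w_max = len(frame), len(frame[0])
    r0, c0 = max(row, 0), max(col, 0)
    r1, c1 = min(row + height, h_max), min(col + width, w_max)
    return [r[c0:c1] for r in frame[r0:r1]]
```

Restricting the readout to a small window around a detected face is what lets such a tracking sensor sustain high frame rates: only the region of interest is scanned instead of the full array.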