In this paper, we present results achieved at ISL that demonstrate how single-photon imaging combined with computational methods differs from classical imaging. We show how new, previously unattainable information can be extracted and reconstructed from scenes.
ISL has investigated passive single-photon counting to reconstruct the photon flux incident on the sensor array. We reconstructed image information, achieved up-scaling through convolutional neural networks, and reduced noise and motion blur with computer vision algorithms. Finally, we extracted modulation frequencies by Fourier analysis and demonstrated event-based neuromorphic imaging.
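The Fourier-analysis step mentioned above can be illustrated with a minimal sketch: a per-frame photon-count series is transformed and the dominant non-DC peak gives the source's modulation frequency. The frame rate, modulation frequency, and Poisson flux model below are illustrative assumptions, not the authors' setup.

```python
import numpy as np

frame_rate = 1000.0            # frames per second (assumed)
t = np.arange(4096) / frame_rate
true_freq = 120.0              # Hz, modulation of the light source (assumed)

# Simulated photon counts: Poisson-distributed flux modulated at true_freq.
rate = 5.0 * (1.0 + 0.8 * np.sin(2 * np.pi * true_freq * t))
counts = np.random.default_rng(0).poisson(rate)

# Fourier analysis: subtract the mean so the DC bin does not dominate,
# then take the strongest remaining spectral peak.
spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
freqs = np.fft.rfftfreq(len(counts), d=1.0 / frame_rate)
estimated = freqs[np.argmax(spectrum)]
print(f"estimated modulation frequency: {estimated:.1f} Hz")
```

The frequency resolution is frame_rate / N ≈ 0.24 Hz here, so the estimate lands within one bin of the true 120 Hz modulation.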
Further, we have studied laser-based active imaging of single photons to measure the round-trip path length of light pulses for ranging and 3D imaging. We have analyzed multi-bounce photon paths to estimate the size of cavities and to improve vision through scattering media such as dense fog. Finally, we investigated SPAD sensing for the reconstruction of objects outside the direct line of sight in non-line-of-sight (NLOS) sensing approaches.
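The ranging principle underlying these active measurements is simple: the target range follows from half the round-trip travel time of a light pulse. A minimal sketch (the 100 ns example value is illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Target range from the measured round-trip time of a laser pulse."""
    return C * t_seconds / 2.0

# A photon returning after 100 ns corresponds to a target ~15 m away.
print(range_from_round_trip(100e-9))
```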
The trial, which serves as the foundation for subsequent data analysis, encompassed a multitude of scenarios designed to challenge the limits of computational imaging technologies. The diverse set of targets, each with its unique set of challenges, allows for the examination of system performance across various environmental and operational conditions.
Computational single photon counting for non-line-of-sight, light in flight, and photon flux imaging
Since 1945, the LRSL, renamed ISL in 1959, has maintained a leading position in the domain of high-speed phenomena. In the early 1960s, the invention of the laser was a true revolution that permitted the emergence of new techniques such as holography and holographic interferometry. With the introduction of semiconductor lasers, ISL took a leading position in range-gated active imaging and now deploys a significant research effort in an emerging domain: computational imaging, which includes topics such as seeing around corners, compressed sensing, and imaging with multiply scattered photons.
In this paper, we report on a new portable, range-gated night-vision goggle operating in the SWIR spectral region. This goggle will be a useful eye-safe device for surveillance and imaging under all weather conditions. At 1.5 μm, it is well known that human skin appears black, making face recognition ineffective. For applications that require facial identification as legal proof, we implemented a bi-wavelength laser from which a pulse of light can also be extracted at a second wavelength (1.06 μm), where the skin has the same reflectance as in the visible spectrum. After a theoretical analysis, we describe the goggle technology and show some lab and outdoor recordings.
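The range-gating principle behind such a goggle can be sketched in a few lines: the sensor is gated open only during a short window delayed relative to the laser pulse, so photons backscattered by fog or foreground clutter (which arrive earlier) are rejected and only a chosen range slab is imaged. The slab values below are illustrative assumptions, not the paper's parameters.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gate_window(range_min_m: float, range_max_m: float) -> tuple[float, float]:
    """Gate delay and duration (s) selecting returns from [range_min, range_max]."""
    delay = 2.0 * range_min_m / C            # round trip to the near edge
    width = 2.0 * (range_max_m - range_min_m) / C
    return delay, width

delay, width = gate_window(150.0, 180.0)     # image a 30 m slab at 150 m
print(f"open gate after {delay * 1e9:.0f} ns for {width * 1e9:.0f} ns")
```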
In this paper, we expand our existing LiDAR-based approach for the detection and tracking of low-flying small objects such as commercial mini/micro UAVs. We show that UAVs can be detected by the proposed methods as long as their movements correspond to the LiDAR sensor's capabilities in scanning performance, range and resolution. The trajectory of the tracked object can further be analyzed to support classification, meaning that UAVs and non-UAV objects can be distinguished by identifying typical movement patterns. Stable tracking of the UAV is achieved by precise prediction of its movement. In addition to this precise prediction of the target's position, object detection, tracking and classification have to be achieved in real time.
For the algorithm development and performance analysis, we analyzed LiDAR data acquired during a field trial. Several different mini/micro UAVs were observed by a system of four 360° LiDAR sensors mounted on a car. Using this specific sensor system, the results show that UAVs can be detected and tracked by the proposed methods, allowing protection of the car against UAV threats within a radius of up to 35 m.
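The track-prediction step mentioned above can be sketched with a constant-velocity model: the last observed positions are extrapolated to the time of the next LiDAR revolution, so new detections can be associated with the existing track. The state layout, scan rate and velocity values below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def predict_position(positions: np.ndarray, times: np.ndarray,
                     t_next: float) -> np.ndarray:
    """Extrapolate the last two 3D positions linearly to time t_next."""
    velocity = (positions[-1] - positions[-2]) / (times[-1] - times[-2])
    return positions[-1] + velocity * (t_next - times[-1])

# Example: a UAV moving at 2 m/s along x, observed at a 10 Hz scan rate.
pos = np.array([[10.0, 5.0, 20.0],
                [10.2, 5.0, 20.0]])
t = np.array([0.0, 0.1])
print(predict_position(pos, t, 0.2))
```

In practice a Kalman filter would smooth the velocity estimate over more than two scans, but the gating idea is the same: search for the next detection near the predicted position.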
The system was configured for operation at a wavelength of 1550 nm, and measurements were performed in a 26 m long fog tunnel facility filled with obscurants of several different types and densities. The system comprised a custom-built scanning transceiver unit, fiber-coupled to a Peltier-cooled InGaAs/InP single-photon avalanche diode (SPAD) detector. A picosecond pulsed laser provided fiber-coupled illumination at 1550 nm with an average optical power of just under 1.5 mW for all measurements.
Bespoke image processing algorithms were developed to reconstruct high-resolution depth and intensity profiles of obscured targets in challenging low-visibility environments. Such algorithms allow target reconstruction at low optical power levels and with shorter data acquisition times, thus enabling image acquisition in the sparse-photon regime.
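A simple way to illustrate depth recovery in the sparse-photon regime (this is an illustrative sketch, not the authors' bespoke algorithm): per-pixel photon arrival times from the pulsed source are histogrammed, and the histogram peak gives the round-trip time even when signal photons are buried in background counts. Bin width, photon counts and target distance are assumed values.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
BIN_PS = 100        # timing-bin width in picoseconds (assumed)

def depth_from_timestamps(timestamps_ps: np.ndarray, n_bins: int = 2000) -> float:
    """Estimate target depth (m) from sparse photon time tags (ps)."""
    hist, edges = np.histogram(timestamps_ps, bins=n_bins,
                               range=(0, n_bins * BIN_PS))
    peak_ps = edges[np.argmax(hist)] + BIN_PS / 2.0   # bin-center arrival time
    return C * peak_ps * 1e-12 / 2.0                  # round trip -> one way

# 30 signal photons near 66.7 ns (~10 m target) among uniform background:
rng = np.random.default_rng(1)
signal = rng.normal(66_700, 50, size=30)           # ps, timing jitter assumed
background = rng.uniform(0, 200_000, size=200)     # ps, ambient/dark counts
print(depth_from_timestamps(np.concatenate([signal, background])))
```

Even with background photons outnumbering signal photons, the signal concentrates in a few adjacent bins while the background spreads thinly over all of them, so the peak remains reliable.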
The scenario of interest concerns the protection of sensitive zones against the potential threat posed by small drones. In the recent past, field trials were carried out to investigate the detection and tracking of multiple UAVs flying at low altitude. Here, we present results achieved using a heterogeneous sensor network consisting of acoustic antennas, small FMCW RADAR systems and optical sensors. While the acoustic and RADAR systems monitored a wide azimuthal area (360°), the optical sensors were used for sequential identification.
The localization results were compared to ground truth data to estimate the efficiency of each detection system. Seven-microphone acoustic arrays allow single-source localization; the mean azimuth and elevation estimation errors were measured to be 1.5° and -2.5°, respectively. The FMCW RADAR allows tracking of multiple UAVs by estimating their range, azimuth and speed. Both technologies can be linked to the electro-optical system for final identification of the detected object.
Rather than seeing through obstacles directly, non-line-of-sight imaging relies on analyzing indirect reflections of light that traveled around the obstacle. In previous work, transient imaging was established as the key mechanism enabling the extraction of useful information from such reflections.
So far, a number of different approaches based on transient imaging have been proposed, with back projection being the most prominent. Different hardware setups have been used to acquire the required data; however, all of them have severe drawbacks such as limited image quality, long capture times or very high cost. In this paper, we propose the analysis of synthetic transient renderings to gain more insight into transient light transport. With simulated data, we are no longer bound to the imperfect output of real systems and gain more flexibility and control over the analysis.
In the second part, we use the insights of our analysis to formulate a novel reconstruction algorithm. It uses an adapted light simulation to pose an inverse problem, which is solved in an analysis-by-synthesis fashion. Through rigorous optimization of the reconstruction, it becomes possible to track known objects outside the line of sight in real time. Due to the forward formulation of the light transport, the algorithm is easily extendable to more general scenarios or different hardware setups. We therefore expect it to become a viable alternative to the classic back projection approach.
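For context, the classic back projection baseline that the proposed method is contrasted against can be sketched as follows: each transient measurement taken at a point on the relay wall "votes" for all hidden-scene voxels whose photon travel time is consistent with it. The 2D toy geometry, grid sizes and ideal spike transients below are assumptions for illustration only.

```python
import numpy as np

def back_project(wall_points, transients, bin_width, grid):
    """Accumulate transient intensity into hidden-scene grid voxels."""
    volume = np.zeros(len(grid))
    for wp, transient in zip(wall_points, transients):
        # Round-trip distance from this wall point to every candidate voxel.
        d = 2.0 * np.linalg.norm(grid - wp, axis=1)
        bins = np.round(d / bin_width).astype(int)
        valid = bins < len(transient)
        volume[valid] += transient[bins[valid]]
    return volume

# Toy setup: 9 wall points on the line y = 0, one hidden point at (0.5, 1.0),
# and ideal transients with a single spike at the correct travel-time bin.
wall = np.array([[x, 0.0] for x in np.linspace(-1, 1, 9)])
hidden = np.array([0.5, 1.0])
bin_width = 0.01
transients = []
for wp in wall:
    tr = np.zeros(600)
    tr[int(np.round(2.0 * np.linalg.norm(hidden - wp) / bin_width))] = 1.0
    transients.append(tr)

# Candidate hidden-scene grid; the reconstruction peaks at the hidden point.
grid = np.array([[x, y] for x in np.linspace(-1, 1, 21)
                         for y in np.linspace(0.5, 1.5, 11)])
volume = back_project(wall, transients, bin_width, grid)
print(grid[np.argmax(volume)])
```

The voxel consistent with all nine measurements collects nine votes, while spurious voxels intersect only a few of the ellipsoidal loci, which is why the maximum localizes the hidden point.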
Theoretical and experimental comparison of flash and accumulation mode in range-gated active imaging