Non-Line-of-Sight (NLOS) imaging uses fast illumination and detection
to reconstruct images of scenes from indirect illumination. Light
reflected off a relay surface is thereby used to view the scene. One
approach to compute these reconstructions from the captured data is to
transform them into line-of-sight wave propagation problems, creating a
virtual wavefront that models a virtual camera at the location of the
relay surface. Our NLOS imaging system samples this virtual wavefront
at the virtual aperture. As in a line of sight camera, scene
reconstruction for the virtual camera is achieved by propagating the
virtual wave back into the scene. In line of sight cameras, this
operation is often performed by a lens. For the virtual camera, we
implement it computationally. This approach allows us to transfer
methods for scene reconstruction, scene inference, and imaging from
existing line of sight imaging approaches to NLOS imaging.
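For a single virtual frequency, this computational back-propagation can be written as a Rayleigh-Sommerfeld diffraction integral (a sketch with our own symbols, not necessarily the exact formulation used in the system):

```latex
I(\mathbf{x}_v) = \left| \int_{S} P(\mathbf{x}_p)\,
  \frac{e^{\,i k \lVert \mathbf{x}_v - \mathbf{x}_p \rVert}}
       {\lVert \mathbf{x}_v - \mathbf{x}_p \rVert}\, d\mathbf{x}_p \right|^2
```

where $P(\mathbf{x}_p)$ is the monochromatic phasor field sampled on the relay surface $S$, $k$ is the virtual wavenumber, and $\mathbf{x}_v$ is a point in the hidden scene.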
In particular, we make use of fast wave propagation algorithms to
create high-speed, memory-efficient NLOS imaging. This allows us to
reconstruct complex scenes in sub-second times for variable hardware
configurations. Notably, our reconstruction methods allow us to
use SPAD arrays in conjunction with laser scanning to improve capture.
There is a large diversity of line of sight imaging approaches with
different properties that probe different aspects of the scene. In
principle, the Phasor Field Virtual Wave formalism allows us to turn
any of them into an NLOS virtual camera. I will cover several examples
of this process that yield different NLOS reconstructions, including
2D NLOS imaging, transient NLOS videos, and visualization of higher
order light paths from fourth and fifth bounces in the hidden scene.
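One family of fast wave propagation algorithms is FFT-based angular spectrum propagation; a minimal numpy sketch for a monochromatic sampled field (names are ours, and a full phasor-field reconstruction involves additional steps such as assembling the virtual wavefront from captured transients):

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a monochromatic field u0 (N x N samples, pitch dx)
    over a distance z with the FFT-based angular spectrum method."""
    n = u0.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)              # spatial frequencies (cycles/unit)
    fxx, fyy = np.meshgrid(fx, fx)
    kz_sq = k**2 - (2 * np.pi * fxx)**2 - (2 * np.pi * fyy)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))      # drop evanescent components
    transfer = np.exp(1j * kz * z)            # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(u0) * transfer)
```

Back-propagation toward the scene corresponds to evaluating the field at negative z (equivalently, conjugating the transfer function); the FFT makes each propagation step O(N^2 log N), which is what enables sub-second reconstruction.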
Single-photon sensor technology is rapidly emerging as the optical sensor technology of choice in specialized low flux imaging applications such as long-range LiDAR, fluorescence microscopy, and non-line-of-sight imaging. We ask the question: Can single-photon sensors be used more broadly as general-purpose image sensors for passive 2D intensity imaging? We derive a photon flux estimator using the number of photons detected in a fixed exposure time by a dead-time-limited single-photon avalanche diode (SPAD) sensor. Unlike a conventional image sensor pixel that has a hard saturation limit due to its full well capacity, our SPAD-based passive imaging method has a non-linear response that never saturates. This enables SPADs to operate not only at extremely low photon flux levels but also at extremely high flux levels, several orders of magnitude higher than the saturation limit of conventional image sensors. We present a comprehensive theoretical analysis of the effect of various design parameters on the signal-to-noise ratio and dynamic range of a passively operated SPAD pixel, and also demonstrate the dynamic range improvement experimentally.
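The non-saturating behavior can be illustrated with the standard nonparalyzable dead-time model (a sketch only; the function name and simplified model are ours, not the exact estimator derived in the paper):

```python
def flux_estimate(n_counts, exposure_s, dead_time_s, efficiency=1.0):
    """Estimate the incident photon flux (photons/s) from the counts of a
    free-running SPAD with nonparalyzable dead time, by inverting the
    mean-count relation N = eta * Phi * T / (1 + eta * Phi * t_dead)."""
    live_time_s = exposure_s - n_counts * dead_time_s
    if live_time_s <= 0:
        raise ValueError("count total exceeds the dead-time limit T / t_dead")
    return n_counts / (efficiency * live_time_s)
```

As the flux grows, the count only approaches the asymptote T / t_dead, yet the estimate keeps responding, mirroring the "never saturates" response described above.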
The excited state lifetime of a fluorophore together with its fluorescence emission spectrum provide information that can yield valuable insights into the nature of a fluorophore and its microenvironment. However, it is difficult to obtain both channels of information in a conventional scheme as detectors are typically configured either for spectral or lifetime detection. We present a fiber-based method to obtain spectral information from a multiphoton fluorescence lifetime imaging (FLIM) system. This is made possible using the time delay introduced in the fluorescence emission path by a dispersive optical fiber coupled to a detector operating in time-correlated single-photon counting mode. This add-on spectral implementation requires only a few simple modifications to any existing FLIM system and is considerably more cost-efficient compared to currently available spectral detectors.
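The wavelength-to-time mapping at the heart of this scheme can be illustrated with a simple linear dispersion model (the constants and names below are illustrative, not taken from the described system):

```python
def delay_to_wavelength(delta_t_ps, fiber_length_km,
                        dispersion_ps_nm_km, ref_wavelength_nm):
    """Convert the extra arrival delay (ps) relative to a reference
    wavelength into an emission wavelength, assuming a constant fiber
    dispersion D (ps/nm/km): delta_t = D * L * (lambda - lambda_ref)."""
    return ref_wavelength_nm + delta_t_ps / (dispersion_ps_nm_km * fiber_length_km)
```

With this mapping, each photon timestamp recorded in time-correlated single-photon counting mode doubles as a coarse spectral channel.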
Standard imaging systems, such as cameras, radars, and lidars, are becoming a big part of our everyday life when it comes to detection, tracking, and recognition of targets that are in the direct line-of-sight (LOS) of the imaging system. Challenges, however, start to arise when the objects are not in the system's LOS, typically when an occluder is obstructing the imager's field of view. This is known as non-line-of-sight (NLOS) imaging, and it is approached in different ways depending on the imager's operating wavelength. We consider an optical imaging system; the literature offers different approaches from both a component and a recovery-algorithm point of view.
In our optical setup, we assume a system comprising an ultra-fast laser and a single photon avalanche diode (SPAD). The former is used to sequentially illuminate different points on a diffuse (relay) wall, causing the photons to scatter uniformly in all directions, including toward the target's location. The latter component collects the scattered photons as a function of time. In post-processing, back-projection based algorithms are employed to recover the target's image. Recent publications have focused on showing the quality of the results, as well as potential algorithm improvements. Here we show results based on a novel theoretical approach (coined "phasor fields"), which suggests treating the NLOS imaging problem as a LOS one. The key feature is to consider the relay wall as a virtual sensor, created by the different points illuminated on the wall. Results show the superiority of this method compared to standard approaches.
The depth resolution achieved by a continuous wave time-of-flight (C-ToF) imaging system is determined by the coding (modulation and demodulation) functions that it uses. We present a mathematical framework for exploring and characterizing the space of C-ToF coding functions in a geometrically intuitive space. Using this framework, we design families of novel coding functions that are based on Hamiltonian cycles on hypercube graphs. The proposed Hamiltonian coding functions achieve up to an order of magnitude higher resolution as compared to the current state-of-the-art. Using simulations and a hardware prototype, we demonstrate the performance advantages of Hamiltonian coding in a wide range of imaging settings.
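A Hamiltonian cycle on the K-dimensional hypercube graph is exactly a cyclic Gray code over K-bit words; a minimal sketch of that construction (illustrative only — the paper's coding functions are built on such cycles, not identical to this):

```python
def gray_cycle(k):
    """Vertices of a Hamiltonian cycle on the k-dimensional hypercube
    graph, via the binary reflected Gray code: consecutive code words
    (including the wrap-around pair) differ in exactly one bit."""
    return [(i >> 1) ^ i for i in range(2 ** k)]
```

Each vertex visited along the cycle can then be read as the K modulation/demodulation levels at one point in the coding period, so adjacent time steps change only one coding function at a time.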
Ranging and imaging in low light level conditions are a key application of active imaging systems. Typically, intensified cameras (ICCD, EBCMOS) are used to sense the intensity of the reflected laser light pulses used for illumination. Recent developments in single photon avalanche diodes (SPADs) show that sensors with single photon counting capabilities are about to revolutionize low light level imaging and laser ranging. These sensors have the ability to count detection events caused by single photons with very high timing precision. Using statistical measurement techniques, the sensitivity of such devices can be increased far beyond that of classical sensing devices, and the required photon flux is significantly lower. New SPAD devices enable the development of novel sensing methods and technologies, and open laser ranging and imaging to new fields of application. Here, we focus on novel hardware structures which are under development, as well as the application of avalanche photodiode detectors for light-in-flight detection and non-line-of-sight imaging.
In an optical Line-of-Sight (LOS) scenario, such as one involving a LIDAR system, the goal is to recover an image of a target in the direct path of the transmitter and receiver. In Non-Line-of-Sight (NLOS) scenarios the target is hidden from both the transmitter and the receiver by an occluder, e.g., a wall. Recent advancements in technology, computer vision, and inverse light transport theory have shown that it is possible to recover an image of a hidden target by exploiting the temporal information encoded in multiply-scattered photons. The core idea is to acquire data using an optical system composed of an ultra-fast laser that emits short pulses (on the order of femtoseconds) and a camera capable of recovering the photons' time-of-flight information (a typical resolution is on the order of picoseconds). We reconstruct 3D images from this data based on the backprojection algorithm, a method typically found in the computational tomography field, which is parallelizable and memory efficient, although it only provides an approximate solution. Here we present improved backprojection algorithms for applications to large scale scenes with a large number of scatterers and diameters from meters to hundreds of meters. We apply these methods to the NLOS imaging of rooms and lunar caves.
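The back-projection idea — each voxel accumulates the transient samples whose time of flight matches the laser-spot-to-voxel-to-sensor path — can be sketched as follows (a naive illustration with our own names and simplified geometry, not the optimized large-scale implementation):

```python
import numpy as np

def backproject(transients, laser_pts, sensor_pt, voxels, bin_width, c=3e8):
    """Naive NLOS back-projection.
    transients: (L, T) photon counts per laser spot and time bin, with
    t = 0 when light leaves the relay-wall laser spot.  Each voxel sums
    the counts in the bins consistent with the path
    laser spot -> voxel -> sensor spot."""
    vol = np.zeros(len(voxels))
    for counts, lp in zip(transients, laser_pts):
        d = (np.linalg.norm(voxels - lp, axis=1) +
             np.linalg.norm(voxels - sensor_pt, axis=1))
        bins = np.round(d / (c * bin_width)).astype(int)
        valid = (bins >= 0) & (bins < counts.shape[0])
        vol[valid] += counts[bins[valid]]
    return vol
```

The loop is embarrassingly parallel over laser spots and needs only the voxel grid in memory, which is why back-projection scales to room- and cave-sized scenes.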
Light scattering is a primary obstacle to optical imaging in a variety of different environments and across many size and time scales. On large scales, scattering complicates imaging through the atmosphere from airborne or spaceborne platforms, through marine fog, or through fog and dust in vehicle navigation, for example in self-driving cars. On smaller scales, scattering is the major obstacle when imaging through human tissue in biomedical applications. Despite the large variety of participating materials and size scales, light transport in all these environments is usually described with very similar scattering models that are defined by the same small set of parameters, including the scattering and absorption lengths and the phase function.
We present a study of scattering and methods of imaging through scattering across different scales and media, particularly with respect to the use of time-of-flight information. We show that using time of flight, in addition to spatial information, provides distinct advantages in scattering environments. By performing a comparative study of scattering across scales and media, we are able to suggest scale models for scattering environments to aid lab research. We can also transfer knowledge and methodology between different fields.
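The phase function mentioned above is commonly modeled with the Henyey-Greenstein function; sampling a scattering angle from it — a basic building block of Monte Carlo scattering simulations — can be sketched as follows (illustrative only, not tied to any specific system above):

```python
import random

def sample_hg_costheta(g, rng=random):
    """Sample the cosine of the scattering angle from the
    Henyey-Greenstein phase function with anisotropy g in (-1, 1);
    the mean of the samples equals g by construction."""
    u = rng.random()
    if abs(g) < 1e-6:
        return 1.0 - 2.0 * u            # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - s * s) / (2.0 * g)
```

A single anisotropy parameter g spans media from tissue (strongly forward, g near 0.9) to near-isotropic scatterers, which is what makes cross-scale scale models feasible.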
Light scattering is a primary obstacle to imaging in many environments. On small scales, in biomedical microscopy and diffuse tomography scenarios, scattering is caused by tissue. On larger scales, scattering from dust and fog provides challenges to vision systems for self-driving cars and naval remote imaging systems. We are developing scale models for scattering environments and investigating methods for improved imaging, particularly using time-of-flight transient information.
With the emergence of Single Photon Avalanche Diode detectors and fast semiconductor lasers, illumination and capture on picosecond timescales are becoming possible in inexpensive, compact, and robust devices. This opens up opportunities for new computational imaging techniques that make use of photon time of flight.
Time-of-flight or range information is used in remote imaging scenarios in gated viewing and in biomedical imaging in time-resolved diffuse tomography. In addition, spatial filtering is popular in biomedical scenarios with structured illumination and confocal microscopy. We present a combination of analytical, computational, and experimental models that allow us to develop and test imaging methods across scattering scenarios and scales. This framework will be used for proof-of-concept experiments to evaluate new computational imaging methods.
The application of nonline-of-sight (NLoS) vision and seeing around a corner has been demonstrated in the recent past on a laboratory level with round trip path lengths on the scale of 1 m as well as 10 m. This method uses a computational imaging approach to analyze the scattered information of objects which are hidden from the sensor’s direct field of view. A detailed knowledge about the scattering surfaces is necessary for the analysis. The authors evaluate the realization of dual-mode concepts with the aim of collecting all necessary information to enable both the direct three-dimensional imaging of a scene as well as the indirect sensing on hidden objects. Two different sensing approaches, laser gated viewing (LGV) and time-correlated single-photon counting, are investigated operating at laser wavelengths of 532 and 1545 nm, respectively. While LGV sensors have high spatial resolution, their application for NLoS sensing suffers from a low temporal resolution, i.e., a minimal gate width of 2 ns. On the other hand, Geiger-mode single-photon counting devices have high temporal resolution (250 ps), but the array size is limited to some thousand sensor elements. The authors present detailed theoretical and experimental evaluations of both sensing approaches.
The application of non-line-of-sight vision and seeing around a corner has been demonstrated in the recent past on a laboratory level with round trip path lengths on the scale of 1 m as well as 10 m. This method uses a computational imaging approach to analyze the scattered information of objects which are hidden from the sensor's direct field of view. Recent demonstrator systems were driven at laser wavelengths (800 nm and 532 nm) which are far from the eye-safe shortwave infrared (SWIR) wavelength band, i.e., between 1.4 μm and 2 μm. Therefore, the application in public or inhabited areas is difficult with respect to international laser safety conventions. In the present work, the authors evaluate the application of recent eye-safe laser sources and sensor devices for non-line-of-sight sensing and give predictions on range and resolution. Further, the realization of a dual-mode concept is studied, enabling both the direct view of a scene and the indirect view of a hidden scene. While recent laser gated viewing sensors have high spatial resolution, their application in non-line-of-sight imaging suffers from too low a temporal resolution due to a minimal sensor gate width of around 150 ns. On the other hand, Geiger-mode single photon counting devices have high temporal resolution, but their spatial resolution is (until now) limited to array sizes of some thousand sensor elements. In this publication, the authors present detailed theoretical and experimental evaluations.
The application of non-line-of-sight vision has been demonstrated in the recent past on a laboratory level with round trip path lengths on the scale of 1 m as well as 10 m. This method uses a computational imaging approach to analyze the scattered information of objects which are hidden from the sensor's direct field of view. In the present work, the authors evaluate the application of recent single photon counting devices for non-line-of-sight sensing and give predictions on range and resolution. Further, the realization of a concept is studied enabling the indirect view of a hidden scene. Different approaches based on ICCD and GM-APD or SPAD sensor technologies are reviewed. Recent laser gated viewing sensors have a minimal temporal resolution of around 2 ns due to sensor gate widths. Single photon counting devices have higher sensitivity and higher temporal resolution.
We discuss new approaches to analyze laser-gated viewing data for non-line-of-sight vision with a frame-to-frame back-projection as well as feature selection algorithms. While first back-projection approaches used time transients for each pixel, our method has the ability to calculate the projection of imaging data onto the voxel space for each frame. Further, different data analysis algorithms and their sequential application were studied with the aim of identifying and selecting signals from different target positions. A slight modification of commonly used filters leads to a powerful selection of local maximum values. It is demonstrated that the choice of the filter has an impact on the selectivity, i.e., multiple target detection, as well as on the localization precision.
In the present paper, we discuss new approaches to analyze laser-gated viewing data for non-line-of-sight vision with a novel frame-to-frame back-projection as well as feature selection algorithms. While first back-projection approaches used time transients for each pixel, our new method has the ability to calculate the projection of imaging data onto the obscured voxel space for each frame. Further, four different data analysis algorithms were studied with the aim of identifying and selecting signals from different target positions. A slight modification of commonly used filters leads to a powerful selection of local maximum values. It is demonstrated that the choice of the filter has an impact on the selectivity, i.e., multiple target detection, as well as on the localization precision.
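A minimal example of such a local-maximum selection filter (purely illustrative; not the authors' exact filters):

```python
import numpy as np

def local_maxima(signal, radius=1, threshold=0.0):
    """Indices of samples that are the unique maximum within +/- radius
    and meet the threshold - a simple local-maximum selection filter."""
    idx = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        window = signal[lo:hi]
        if (signal[i] >= threshold and signal[i] == window.max()
                and np.count_nonzero(window == signal[i]) == 1):
            idx.append(i)
    return idx
```

Varying the neighborhood radius and threshold trades off selectivity (how many nearby targets survive as separate peaks) against robustness to noise, which is the trade-off studied above.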
Endoscope cameras play an important and growing role as a diagnostic and surgical tool. The endoscope camera is
usually used to provide a view of the scene straight ahead of the instrument to the operator. As is common in many
remotely operated systems, the limited field of view and the inability to pan the camera make it challenging to gain
situational awareness comparable to that of an operator with direct access to the scene. We present a spectral multiplexing
technique for endoscopes that allows for overlay of the existing forward view with additional views at different angles to
increase the effective field of view of the device. Our goal is to provide peripheral vision while minimally affecting the
design and forward image quality of existing systems.
Laser gated viewing is a prominent sensing technology for optical imaging in harsh environments and can be applied for vision through fog, smoke, and other degraded environmental conditions, as well as for vision through sea water in submarine operation. Direct imaging of nonscattered (or ballistic) photons is limited in range and performance by the free optical path length, i.e., the length over which a photon can propagate without interaction with scattering particles or object surfaces. The imaging and analysis of scattered photons can overcome these classical limitations, making it possible to realize nonline-of-sight imaging. The spatial and temporal distributions of scattered photons can be analyzed by means of computational optics, and the information they carry about the scenario can be restored. In particular, the information outside the line of sight or outside the visibility range is of high interest. We demonstrate nonline-of-sight imaging with a laser gated viewing system and different illumination concepts (point and surface scattering sources).
Laser Gated Viewing is a prominent sensing technology for optical imaging in harsh environments and can be applied to vision through fog, smoke, and other degraded environmental conditions, as well as to vision through sea water in submarine operation. Direct imaging of non-scattered (or ballistic) photons is limited in range and performance by the free optical path length, i.e., the length over which a photon can propagate without interaction with scattering particles or object surfaces. The imaging and analysis of scattered photons can overcome these classical limitations, making it possible to realize non-line-of-sight imaging. The spatial and temporal distributions of scattered photons can be analyzed by means of computational optics, and the information they carry about the scenario can be restored. In the case of Lambertian scattering sources, the scattered photons carry information about the complete environment. Especially the information outside the line of sight or outside the visibility range is of high interest. Here, we discuss approaches for non-line-of-sight active imaging with different indirect and direct illumination concepts (point, surface, and volume scattering sources).