Standard imaging systems, such as cameras, radar, and lidar, play an ever larger role in everyday tasks involving the detection, tracking, and recognition of targets in the direct line-of-sight (LOS) of the imaging system. Challenges arise, however, when objects are not in the system's LOS, typically because an occluder obstructs the imager's field of view. This scenario is known as non-line-of-sight (NLOS) imaging, and it is approached in different ways depending on the imager's operating wavelength. We consider an optical imaging system; the literature offers a range of approaches in terms of both hardware components and recovery algorithms.
In our optical setup, we assume a system comprising an ultra-fast laser and a single-photon avalanche diode (SPAD). The former sequentially illuminates different points on a diffuse (relay) wall, causing the photons to scatter in all directions, including toward the target's location. The latter collects the returning scattered photons as a function of time. In post-processing, back-projection-based algorithms are employed to recover the target's image. Recent publications have focused on the quality of the reconstructions, as well as on potential algorithmic improvements. Here we show results based on a novel theoretical approach (coined "phasor fields"), which treats the NLOS imaging problem as an LOS one. The key idea is to regard the relay wall as a virtual sensor, created by the different points illuminated on the wall. Our results show that this method outperforms standard approaches.
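To make the virtual-sensor idea concrete, the following is a minimal sketch (not the authors' implementation) of a phasor-field-style reconstruction: the time-resolved data at each relay-wall point is reduced to a complex phasor at a chosen virtual illumination frequency, which is then back-propagated to a hidden-scene voxel as if the wall were an ordinary sensor array. All names, grid sizes, the virtual wavelength, and the placeholder data are assumptions for illustration.

```python
import numpy as np

# Hypothetical acquisition parameters (assumed, not from the paper).
c = 3e8                          # speed of light [m/s]
dt = 4e-12                       # SPAD time-bin width [s] (assumed)
lambda_v = 0.04                  # virtual wavelength [m] (assumed)
omega = 2 * np.pi * c / lambda_v # virtual angular frequency

rng = np.random.default_rng(0)
n_points, n_bins = 16, 512
# Relay-wall sample points on the plane z = 0 (placeholder geometry).
wall_pts = rng.uniform(0.0, 1.0, (n_points, 3))
wall_pts[:, 2] = 0.0
# H[p, t]: transient histogram at wall point p, time bin t (placeholder data).
H = rng.random((n_points, n_bins))

t = np.arange(n_bins) * dt
# 1) Reduce each transient to its Fourier component at the virtual frequency:
#    this turns the wall points into a monochromatic virtual sensor array.
P = H @ np.exp(1j * omega * t)   # shape (n_points,)

# 2) Back-propagate the phasors to a hidden-scene voxel, summing
#    spherical-wave contributions from every wall point.
def image_voxel(v):
    r = np.linalg.norm(wall_pts - v, axis=1)   # wall-to-voxel distances
    return np.abs(np.sum(P * np.exp(-1j * omega * r / c) / r))

voxel = np.array([0.5, 0.5, 0.5])
intensity = image_voxel(voxel)
```

Repeating step 2 over a voxel grid yields an image of the hidden scene; the sketch deliberately uses a single frequency, whereas a full phasor-field reconstruction propagates a band of frequencies defining a virtual pulse.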
In an optical line-of-sight (LOS) scenario, such as one involving a lidar system, the goal is to recover an image of a target in the direct path of the transmitter and receiver. In non-line-of-sight (NLOS) scenarios the target is hidden from both the transmitter and the receiver by an occluder, e.g. a wall. Recent advances in technology, computer vision, and inverse light transport theory have shown that it is possible to recover an image of a hidden target by exploiting the temporal information encoded in multiply scattered photons. The core idea is to acquire data using an optical system composed of an ultra-fast laser that emits short pulses (on the order of femtoseconds) and a camera capable of recording photon time-of-flight information (with a typical resolution on the order of picoseconds). We reconstruct 3D images from these data using the backprojection algorithm, a method typically found in computed tomography, which is parallelizable and memory efficient, although it only provides an approximate solution. Here we present improved backprojection algorithms for large-scale scenes with a large number of scatterers and diameters ranging from meters to hundreds of meters. We apply these methods to the NLOS imaging of rooms and lunar caves.
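The backprojection step described above can be sketched as follows, under a simplified confocal assumption (laser and sensor focused on the same wall point, so the path length to a voxel is a round trip). Each time bin constrains the scatterer to an ellipsoidal shell; accumulating histogram values over all wall points makes occupied voxels stand out where many shells intersect. All names, grid sizes, and the placeholder data are illustrative assumptions.

```python
import numpy as np

# Hypothetical acquisition parameters (assumed, not from the paper).
c = 3e8                          # speed of light [m/s]
dt = 4e-12                       # time-bin width [s] (assumed)

rng = np.random.default_rng(1)
n_points, n_bins = 16, 512
# Relay-wall sample points on the plane z = 0 (placeholder geometry).
wall_pts = rng.uniform(0.0, 1.0, (n_points, 3))
wall_pts[:, 2] = 0.0
# H[p, t]: transient measured when illuminating and sensing wall point p.
H = rng.random((n_points, n_bins))

# Coarse voxel grid for the hidden volume (illustration only).
xs = np.linspace(0.0, 1.0, 8)
grid = np.array([[x, y, z] for x in xs for y in xs for z in xs])

volume = np.zeros(len(grid))
for p, wp in enumerate(wall_pts):
    r = np.linalg.norm(grid - wp, axis=1)          # wall-to-voxel distance
    bins = np.round(2.0 * r / (c * dt)).astype(int)  # round-trip time bin
    valid = bins < n_bins                          # drop out-of-range bins
    volume[valid] += H[p, bins[valid]]             # smear histogram onto shell
# Large values of `volume` mark voxels where many constant-time
# ellipsoids intersect, i.e. likely scatterer locations.
```

The loop over wall points is embarrassingly parallel and touches only one histogram row at a time, which is the parallelism and memory efficiency the text refers to; filtering the result (e.g. with a Laplacian) is commonly needed to sharpen the approximate reconstruction.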