Non-line-of-sight imaging is a fascinating emerging area of research that is expected to have an impact on numerous application fields, including civilian and military sensing. In future scenarios, human perception and situational awareness could be extended by sensing shapes and movement around a corner.
Rather than seeing through obstacles directly, non-line-of-sight imaging relies on analyzing indirect reflections of light that traveled around the obstacle. In previous work, transient imaging was established as the key mechanism enabling the extraction of useful information from such reflections.
So far, a number of different approaches based on transient imaging have been proposed, with back projection being the most prominent one. Different hardware setups have been used to acquire the required data; however, all of them suffer from severe drawbacks such as limited image quality, long capture times, or very high cost. In this paper, we propose the analysis of synthetic transient renderings to gain more insight into transient light transport. With simulated data, we are no longer bound to the imperfect output of real systems, and we gain more flexibility and control over the analysis.
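As a rough illustration of the back projection idea mentioned above, the sketch below accumulates transient intensity into a grid of candidate hidden-scene positions. The geometry, the bin size, and all names are hypothetical, not the setup of any particular system: each wall point carries a transient histogram over round-trip path length, and every candidate position collects the histogram values at the bins its own round-trip distance maps to.

```python
import numpy as np

# Temporal bin width, expressed directly as path length in meters (assumed).
BIN_SIZE = 0.05

def backproject(transients, wall_points, grid):
    """Toy back projection.

    transients : (N, B) array, one histogram over path length per wall point
    wall_points: (N, 2) positions on the visible wall
    grid       : (M, 2) candidate hidden-scene positions
    Returns a heat value per candidate position.
    """
    heat = np.zeros(len(grid))
    for hist, w in zip(transients, wall_points):
        d = 2.0 * np.linalg.norm(grid - w, axis=1)  # wall -> voxel -> wall
        bins = (d / BIN_SIZE).astype(int)
        valid = bins < hist.shape[0]
        heat[valid] += hist[bins[valid]]
    return heat
```

A position consistent with all measured echoes accumulates a contribution from every wall point, so a hidden object shows up as a maximum of `heat`.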
In the second part, we use the insights from our analysis to formulate a novel reconstruction algorithm. It uses an adapted light simulation to formulate an inverse problem, which is solved in an analysis-by-synthesis fashion. Through rigorous optimization of the reconstruction, it then becomes possible to track known objects outside the line of sight in real time. Due to the forward formulation of the light transport, the algorithm is easily extensible to more general scenarios or different hardware setups. We therefore expect it to become a viable alternative to the classic back projection approach in the future.
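To make the analysis-by-synthesis idea concrete, here is a deliberately minimal sketch under strong simplifying assumptions (the hidden object is a single scattering point, the forward model is a hand-rolled pulse renderer, and all names are made up): render a synthetic transient for each candidate position and keep the one with the smallest residual against the measurement.

```python
import numpy as np

WALL = np.array([[-1.0, 0.0], [0.0, 0.0], [1.0, 0.0]])  # observed wall points
BINS, BIN_SIZE = 256, 0.05                               # histogram layout

def forward(pos):
    """Simplistic forward model: one round-trip pulse per wall point,
    attenuated by inverse-square falloff over the path length."""
    h = np.zeros((len(WALL), BINS))
    d = 2.0 * np.linalg.norm(WALL - pos, axis=1)
    for i, dist in enumerate(d):
        b = int(dist / BIN_SIZE)
        if b < BINS:
            h[i, b] = 1.0 / max(dist, 1e-9) ** 2
    return h

def track(measured, candidates):
    """Analysis by synthesis: pick the candidate position whose synthetic
    transient minimizes the squared residual against the measurement."""
    errors = [np.sum((forward(p) - measured) ** 2) for p in candidates]
    return candidates[int(np.argmin(errors))]
```

In a real-time setting the residual would be minimized by a proper optimizer over a continuous pose rather than by exhaustive search; the point of the forward formulation is that swapping in a different scene or sensor model only changes `forward`.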
The application of non-line-of-sight vision has been demonstrated in the recent past at the laboratory level, with round-trip path lengths on the scale of 1 m as well as 10 m. This method uses a computational imaging approach to analyze light scattered by objects that are hidden from the direct sensor field of view. In the present work, the authors evaluate the application of recent single-photon-counting devices for non-line-of-sight sensing and give predictions on range and resolution. Furthermore, a concept enabling an indirect view of a hidden scene is studied. Different approaches based on ICCD and GM-APD or SPAD sensor technologies are reviewed. Recent laser-gated viewing sensors have a minimal temporal resolution of around 2 ns, limited by their sensor gate widths; single-photon-counting devices offer higher sensitivity and higher temporal resolution.
The bidirectional texture function (BTF) has proven to be a valuable model for representing complex spatially varying material reflectance. Its image-based nature, however, makes material BTFs extremely cumbersome to acquire: in order to adequately sample high-frequency details, many thousands of images of a given material, as seen and lit from different directions, have to be obtained. Additionally, long exposure times are required to account for the wide dynamic range exhibited by the reflectance of many real-world materials.
We propose to significantly reduce the required exposure times by using illumination patterns instead of single light sources ("multiplexed illumination"). A BTF can then be produced by solving an appropriate linear system, exploiting the linearity of the superposition of light. Where necessary, we deal with signal-dependent noise by using a simple linear model derived from an existing database of material BTFs as a prior. We demonstrate the feasibility of our method for a number of real-world materials in a camera dome scenario.
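The demultiplexing step can be sketched in a few lines, assuming seven light sources and the standard Hadamard-derived 0/1 on-off patterns (the S-matrix construction); the matrix size and the noise-free setting are illustrative simplifications of the actual capture process.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# S-matrix: 0/1 illumination patterns in which each capture turns on about
# half of the lights, so each image collects far more photons (and needs a
# far shorter exposure) than a single-light capture would.
S = (1.0 - hadamard(8)[1:, 1:]) / 2.0   # (7, 7), invertible, entries in {0, 1}

rng = np.random.default_rng(0)
x_true = rng.uniform(0.2, 1.0, 7)   # per-light responses of one pixel
y = S @ x_true                      # simulated multiplexed captures
x_hat = np.linalg.solve(S, y)       # demultiplex: superposition is linear
```

In practice `y` carries signal-dependent noise, which is where a prior such as the linear model mentioned above comes in; the noise-free solve shows only the algebra.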
Numerous applications in computer graphics and beyond benefit from accurate models of the visual appearance of real-world materials. Data-driven models like photographically acquired bidirectional texture functions (BTFs) suffer from limited sample sizes, enforced by the common assumption of far-field illumination. Many materials, such as leather, structured wallpaper, or wood, contain structural elements on scales not captured by typical BTF measurements. We propose a method, extending recent research by Steinhausen et al., to extrapolate BTFs for large-scale material samples from a measured and compressed BTF of a small fraction of the material sample, guided by a set of constraints. We propose combining color constraints with surface descriptors similar to normal maps as part of the constraints guiding the extrapolation process. This helps narrow down the search space for suitable ABRDFs per texel to a large extent. To acquire surface descriptors for nearly flat materials, we build upon the idea of photometrically estimating normals. Inspired by recent work by Pan and Skala, we obtain images of the sample in four different rotations with an off-the-shelf flatbed scanner and derive surface curvature information from these. Furthermore, we simplify the extrapolation process by using a pixel-based texture synthesis scheme, reaching computational efficiency similar to texture optimization.
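The photometric idea behind the four-rotation scan can be sketched as classic Lambertian photometric stereo. The fixed lamp elevation of 30 degrees and all names below are assumptions for illustration, not the calibration used by the authors: rotating the sample in 90-degree steps under the scanner's one-sided lamp is equivalent to lighting it from four azimuths at one elevation, and under a Lambertian model the scaled normal follows per pixel from a 4x3 least-squares system.

```python
import numpy as np

ELEV = np.deg2rad(30.0)                       # assumed lamp elevation
az = np.deg2rad([0.0, 90.0, 180.0, 270.0])    # the four scan rotations
L = np.stack([np.cos(az) * np.cos(ELEV),
              np.sin(az) * np.cos(ELEV),
              np.full(4, np.sin(ELEV))], axis=1)  # (4, 3) light directions

def scaled_normals(images):
    """Lambertian photometric stereo, I_i = rho * (n . l_i).

    images: (4, H, W) intensities -> (H, W, 3) scaled normals g = rho * n.
    """
    I = images.reshape(4, -1)
    g, *_ = np.linalg.lstsq(L, I, rcond=None)  # solve L @ g ~= I per pixel
    return g.T.reshape(images.shape[1:] + (3,))
```

Surface curvature descriptors then follow from spatial derivatives of the recovered normal field.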
Many computer vision tasks are hindered by image formation itself, a process governed by the so-called plenoptic integral. By averaging the light falling into the lens over space, angle, wavelength, and time, a great deal of information is irreversibly lost. The emerging idea of transient imaging operates on a time resolution fast enough to resolve non-stationary light distributions in real-world scenes. It enables the discrimination of light contributions by the optical path length from light source to receiver, a dimension unavailable in mainstream imaging to date. Until recently, such measurements required high-end optical equipment and could only be acquired under extremely restricted lab conditions. To address this challenge, we introduced a family of computational imaging techniques operating on standard time-of-flight image sensors, for the first time allowing the user to "film" light in flight in an affordable, practical and portable way. Just as impulse responses have proven a valuable tool in almost every branch of science and engineering, we expect light-in-flight analysis to impact a wide variety of applications in computer vision and beyond.
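The path-length discrimination described above amounts to a simple conversion between time bins and optical path length; the 50 ps bin width below is an assumed figure for illustration, not a specification of any particular sensor.

```python
# A transient pixel is a histogram over time of flight: bin k corresponds to
# an optical path length d = c * k * dt. Returns that pile up in a single
# conventional pixel therefore separate into distinct temporal peaks.
C = 299_792_458.0   # speed of light, m/s
DT = 50e-12         # assumed bin width: 50 ps, i.e. ~1.5 cm of path per bin

def bin_to_path(k):
    return C * k * DT

def path_to_bin(d):
    return round(d / (C * DT))

# A direct reflection at 4 m total path and an indirect bounce at 7 m total
# path land in clearly separated bins:
direct_bin, indirect_bin = path_to_bin(4.0), path_to_bin(7.0)
```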
Conference Committee Involvement (5)
Optoelectronic Imaging and Multimedia Technology V
11 October 2018 | Beijing, China
Optoelectronic Imaging and Multimedia Technology IV
12 October 2016 | Beijing, China
Measuring, Modeling, and Reproducing Material Appearance 2015
9 February 2015 | San Francisco, California, United States
Optoelectronic Imaging and Multimedia Technology III
9 October 2014 | Beijing, China
Measuring, Modeling, and Reproducing Material Appearance
3 February 2014 | San Francisco, California, United States