3D sensing devices are becoming increasingly prevalent in robotics, self-driving cars, human-computer interfaces, and consumer electronics. Recent years have seen single-photon avalanche diodes (SPADs) emerge as one of the key technologies underlying 3D time-of-flight sensors, with the capability to capture accurate 3D depth maps in a range of environmental conditions and with low computational overhead. In particular, direct ToF (dToF) SPADs, which measure the return time of back-scattered laser pulses, form the backbone of many automotive LIDAR systems. Here we consider an advanced direct ToF SPAD imager with a 3D-stacked structure, integrating significant photon processing. The device generates photon timing histograms in-pixel, resulting in a maximum throughput of hundreds of gigaphotons per second. This advance enables 3D frames to be captured at rates in excess of 1000 frames per second, even under high ambient light levels. By exploiting the re-configurable nature of the sensor, higher-resolution intensity (photon counting) data may be obtained in alternate frames, and the depth maps upscaled accordingly. We present a compact SPAD camera based on the sensor, enabling high-speed object detection and classification in both indoor and outdoor environments. The results suggest significant potential in applications requiring fast situational awareness.
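As an illustrative sketch only (the sensor's actual on-chip processing is not detailed here), depth can be recovered from an in-pixel timing histogram by locating the return peak and converting the round-trip time to range; the bin width and histogram values below are assumptions, not the device's specification.

import numpy as np

C = 299_792_458.0          # speed of light (m/s)
BIN_WIDTH_S = 250e-12      # assumed histogram bin width (250 ps)

def depth_from_histogram(hist, bin_width_s=BIN_WIDTH_S):
    """Estimate depth from a dToF timing histogram via peak detection."""
    hist = np.asarray(hist, dtype=float)
    background = np.median(hist)               # crude ambient-light estimate
    peak_bin = int(np.argmax(hist - background))
    tof = peak_bin * bin_width_s               # round-trip time of flight
    return C * tof / 2.0                       # halve: out-and-back path

# Example: a 16-bin histogram with a return peak in bin 9 (~0.34 m)
hist = [3, 4, 2, 3, 5, 3, 4, 6, 3, 42, 11, 4, 3, 2, 4, 3]
print(f"estimated depth = {depth_from_histogram(hist):.2f} m")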
3D imaging is used in a wide range of applications such as robotics, computer interfaces, autonomous driving, or even capturing the flight of birds. Current systems are often based on stereoscopy or structured-light approaches, which impose limitations on standoff distance (range) and require texture in the scene or accurate projection patterns. Furthermore, there may be significant computational requirements for the generation of 3D maps. This work considers a system based on the alternative approach of time-of-flight. A state-of-the-art single-photon avalanche diode (SPAD) image sensor is used in combination with pulsed, flood-type illumination. The sensor generates photon timing histograms in-pixel, achieving a photon throughput of hundreds of gigaphotons per second. This in turn enables the capture of 3D maps at frame rates >1 kFPS, even in high ambient conditions and with minimal latency. We present initial results on processing data frames from the sensor (in the form of 64×32, 16-bin timing histograms, and 256×128 photon counts) using convolutional neural networks, with a view to localizing and classifying objects in the field of view with low latency. In tests involving three different hand signs, with data frames acquired with millisecond exposures, a classification accuracy of >90% is obtained, with histogram-based classification consistently outperforming intensity-based processing, despite the former’s relatively low lateral resolution. The total, GPU-assisted, processing time for detecting and classifying a sign is under 25 ms. We believe these results are relevant to robotics or self-driving cars, where fast perception, exceeding human reaction times, is often desired.
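The network architecture is not specified above; the following is a minimal PyTorch sketch of one plausible way to classify 64×32, 16-bin histogram frames into three hand-sign classes, treating the time bins as input channels. All layer sizes are assumptions made for illustration.

import torch
import torch.nn as nn

class HistogramSignNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x32 -> 32x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x16 -> 16x8
        )
        self.classifier = nn.Linear(64 * 16 * 8, num_classes)

    def forward(self, x):                  # x: (batch, 16 time bins, 64, 32)
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = HistogramSignNet()
frame = torch.randn(1, 16, 64, 32)         # one synthetic histogram frame
print(model(frame).shape)                  # torch.Size([1, 3])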
Single-photon detector array technologies have advanced significantly in recent years. Cameras now exist that are not only sensitive to single photons but whose individual pixels provide photon time-of-arrival information in the picosecond regime. Such unprecedented sensitivity and temporal resolution opens up a number of exciting new applications, such as light-in-flight imaging, looking around corners with laser echoes, and seeing through dense scattering media. I will discuss recent developments in the camera technology and present our latest results. I will give details of our latest field trials, where we have been using single-photon detector array sensors to see through fog and smoke. I will also discuss our latest results for high-speed imaging in three dimensions. The latest sensor is able to capture 3D data at frame rates greater than 1000 frames per second. This technology is relevant for the analysis of rapidly changing systems where three-dimensional information is necessary.
Since its first demonstration in 1995, ghost imaging has provided amazing insights into both classical and quantum physics as well as having found application in, for example, microscopy and imaging under low light conditions. Traditional ghost imaging uses correlations between two photons to reconstruct an image of an object from two systems which each individually know nothing about the object. In the quantum case, the state of the two photons is typically a symmetric, entangled state. Here we investigate the effect that changing the two-photon state's symmetry has on the reconstructed object, by using Dove prisms and a Hong-Ou-Mandel filter. Interestingly, it appears that post-selecting on the anti-symmetric Bell state results in a `double image': a juxtaposition of the original image rotated both clockwise and anti-clockwise. Furthermore, we consider a 4-photon experiment in which two photons, which originate from different entanglement sources and are hence completely independent initially, acquire correlations by way of entanglement swapping via appropriate post-selection on the remaining two photons. In such a setup, post-selecting on the symmetric Bell states results in the original object, but post-selecting on the anti-symmetric Bell state results in a contrast-reversed image of the object. These studies highlight the fundamental importance that state symmetry plays in quantum imaging.
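For reference, the exchange symmetry in question can be written for a generic pair of single-photon modes a and b (the abstract does not fix the basis):

\[
|\Psi^{\pm}\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(\,|a\rangle_1|b\rangle_2 \;\pm\; |b\rangle_1|a\rangle_2\,\bigr),
\]

so that |\Psi^{-}\rangle changes sign under exchange of the two photons (anti-symmetric) while |\Psi^{+}\rangle does not (symmetric). In Hong-Ou-Mandel interference only the anti-symmetric combination sends the two photons into different output ports of the beam splitter, which is what allows it to be post-selected.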
Natural and man-made obscurants such as fog, cloud, smoke and dust are an impediment to the conduct of military operations, preventing effective pilotage, denying the ability to carry out surveillance and reconnaissance, and restricting situational awareness. Additionally, there is growing interest in the ability to penetrate haze and fog for the safe navigation of autonomous vehicles.
There are several electro-optic technologies that offer improved ability to image through obscurants [1,2]. In this study the authors assessed four different active imaging technologies in the presence of an artificial smoke and obtained 3D imagery of targets at ranges from 100 m out to 1400 m. The four systems tested were:
• a scanned time-correlated single-photon counting (TCSPC) sensor using an InGaAs/InP single-photon avalanche diode (SPAD) detector operating at ~1.55 µm;
• a 32 × 32 InGaAs/InP SPAD array using TCSPC at ~1.55 µm;
• a coherent frequency modulated continuous wave (FMCW) scanned lidar system operating at ~1.55 µm;
• a CMOS SPAD array camera operating as a time-gated imager at ~670 nm.
The selection of sensors enables comparisons to be drawn between scanning and staring systems, between direct and coherent detection, and between short-wave infrared and visible wavelengths.
Three-dimensional structured targets were placed at ranges of 100-150 m and smoke was introduced between the targets and the sensors. The smoke transmission was measured with a separate laser device to correlate the imagery with the level of attenuation presented by the smoke, and thereby relate the image quality to the degree of optical loss in the system. For the coherent lidar system, long-range 3D images were obtained out to a distance of 1400 m, and imaging through smoke of a target at 900 m was achieved. Under the test conditions, at least two of the systems have demonstrated the ability to obtain images through greater than 4 attenuation lengths of obscurant between transceiver and target, and work is progressing on image processing approaches to reconstruct images at greater levels of loss.
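For orientation, the number of attenuation lengths N quoted above follows from the measured one-way smoke transmission T through the simple Beer-Lambert relation

\[
T = e^{-N} \quad\Longleftrightarrow\quad N = -\ln T,
\]

so greater than 4 attenuation lengths between transceiver and target corresponds to a one-way transmission below roughly 1.8% (e^{-4} ≈ 0.018).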
Imagery from the systems will be presented, the relative merits of the different techniques discussed, and the prospects for future practical systems will be explored.
[1] “Demonstration of frequency modulated continuous wave (FMCW) eye-safe, coherent LIDAR to See Through Clouds”, M. Silver, P. Feneyrou, L. Leviander, A. Martin and J. Parsons, Optro, January 2018.
[2] “Depth imaging through obscurants using time-correlated single-photon counting”, R. Tobin, A. Halimi, A. McCarthy, M. Laurenzis, F. Christnacher and G. S. Buller, Proc. SPIE Vol. 10659, April 2018.
Technology at the quantum limit promises significant advances in computing, communication, sensing and metrology, and imaging. The UK and many other countries around the world have recently provided significant investment in the development and realisation of such quantum technologies. In this talk, I will highlight recent activities in applied and fundamental quantum science, specifically focussing on advances in imaging, and metrology.
Much of our work relies on the detection of single photons via single-photon detectors, either in single-point or array formats. Single-photon avalanche diode (SPAD) arrays offer unprecedented sensitivity to light and picosecond temporal resolution, with the main advantage that they provide instantaneous data across their many pixels. I will discuss recent measurements that demonstrate sub-centimeter depth measurements with a visible CMOS SPAD sensor at long ranges. The system is based on a pulsed visible illumination source at 670 nm and a 320 × 240 pixel SPAD array sensor. The camera operates in a gated detection mode, and depth information is gained by taking multiple images at different gate delays. After processing, we are able to achieve sub-centimeter resolution in all three spatial dimensions at a distance of 150 meters. This work demonstrates the capability of such sensors for measuring depth at long distances and illustrates the potential for extremely high resolution imaging at distance.
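The processing used to turn the gate-delay scan into depth is not detailed above; the sketch below shows one plausible per-pixel estimator (an intensity-weighted centroid of the gate response), with all parameters assumed rather than taken from the actual system.

import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def depth_from_gate_scan(frames, gate_delays_s):
    """Per-pixel depth from a stack of time-gated intensity images.

    frames: array (n_gates, H, W) of photon counts at each gate delay.
    gate_delays_s: array (n_gates,) of gate-opening times (seconds).
    Returns an (H, W) depth map from an intensity-weighted centroid of
    the gate-scan response (one simple estimator among several options).
    """
    frames = np.asarray(frames, dtype=float)
    delays = np.asarray(gate_delays_s, dtype=float)
    frames = frames - frames.min(axis=0, keepdims=True)   # crude background removal
    weights = frames.sum(axis=0) + 1e-12                   # avoid divide-by-zero
    t_centroid = np.tensordot(delays, frames, axes=(0, 0)) / weights
    return C * t_centroid / 2.0                            # round trip -> range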
Advances in LIDAR-based methods have enabled the detection and reconstruction of images of static objects hidden from the direct line-of-sight [1, 2]. One of the drawbacks of the technology used in these demonstrations is the requirement for long acquisition times. More recently, Gariepy et al. have shown that it is possible to detect and track a moving hidden object, albeit with no information about the object’s form. Applications of this include, but are not limited to, search and rescue, and hazard detection.
We present a real-time tracking system that enables the detection of moving objects that are outside the direct line-of-sight. Our active imaging system is a single-pixel variant of the technology reported by Gariepy et al.: it replaces the 1024-pixel single-photon avalanche diode (SPAD) camera with a small number of individual SPAD detectors to detect light back-scattered from the hidden object. The flexibility of the single-pixel detectors provides an increased field of view, allowing us to detect and simultaneously track the object with better precision than a SPAD array. The use of single-pixel detectors also has the advantage of high detection efficiency.
We perform two proof-of-concept experiments using three pixels and a single pulsed laser to interrogate a “room” for a hidden object. In the first experiment, we demonstrate that we can accurately locate the position of a hidden object. In the second experiment, we use the same system and demonstrate that we can accurately track the motion of a hidden object in real time.
The “room” is a purpose-built box measuring 102×102×77 cm. Optical access is provided by a 28×12 cm window. The target object is a 15×15 cm textured viewing screen that we move along a designated ground track outside the line-of-sight of our system. In our experiments, we send a train of light pulses through the window to the back of the room. The pulses scatter off the wall as a spherical wavefront that propagates in all directions. Some of this light reaches our hidden object and is scattered back towards the rear wall, onto which our three SPAD pixels are imaged. The SPAD detectors are capable of picosecond temporal resolution. Our time-correlated single-photon counting system measures the photon arrival times (64 ps resolution) for the signal returning to each detector. A histogram is built up over 80 million pulses in one second of acquisition time. We use this temporal information to retrieve the position of the hidden object.
We place the object at 11 positions in turn in a seven-minute experiment, and localise its position at each. We then perform real-time tracking, moving the object around the hidden scene for approximately one minute and processing the target position retrieval every 1.5 s.
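As a rough illustration of the target position retrieval, the sketch below solves for a 2D object position from the extra path delays measured at three detectors, assuming the fixed legs to and from the sensor have been subtracted. The geometry and values are invented for illustration and do not correspond to the actual experiment.

import numpy as np
from scipy.optimize import least_squares

C = 0.2998  # speed of light in metres per nanosecond

# Assumed 2D (floor-plane) geometry, in metres: the laser illuminates one
# spot on the back wall; each SPAD pixel views a different patch of the wall.
laser_spot = np.array([0.0, 1.0])
view_spots = np.array([[-0.3, 1.0], [0.0, 1.0], [0.3, 1.0]])

def model_times(obj_xy):
    """Path time (ns) laser spot -> hidden object -> viewed wall patch,
    for each of the three detector pixels."""
    d1 = np.linalg.norm(obj_xy - laser_spot)
    d2 = np.linalg.norm(obj_xy - view_spots, axis=1)
    return (d1 + d2) / C

def locate(measured_times_ns, guess=(0.2, 0.5)):
    res = least_squares(lambda p: model_times(p) - measured_times_ns, guess)
    return res.x

true_pos = np.array([0.25, 0.45])
print(locate(model_times(true_pos)))   # recovers ~[0.25, 0.45]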
The recent development of 2D arrays of single-photon avalanche diodes (SPADs) has driven the development of applications based on the ability to capture light in motion. Such arrays are typically composed of 32×32 SPAD detectors, each having the ability to detect single photons and measure their time of arrival with a resolution of about 100 ps. Thanks to the single-photon sensitivity and the high temporal resolution of these detectors, it is now possible to image light as it travels on a centimetre scale. This opens the door to the direct observation and study of dynamics evolving over picosecond and nanosecond timescales, such as laser propagation in air, laser-induced plasma and laser propagation in optical fibres. Another interesting application enabled by the ability to image light in motion is the detection of objects hidden from view, based on the recording of scattered waves originating from objects hidden by an obstacle. Similarly to LIDAR systems, the temporal information acquired at every pixel of a SPAD array, combined with the spatial information it provides, allows the position of an object located outside the line-of-sight of the detector to be pinpointed. Non-line-of-sight tracking can be a valuable asset in many scenarios, including search and rescue missions and safer autonomous driving.
The ability to detect motion and to track a moving object that is hidden around a corner or behind a wall
provides a crucial advantage when physically going around the obstacle is impossible or dangerous. One recently
demonstrated approach to achieving this goal makes use of non-line-of-sight picosecond pulse laser ranging.
This approach has recently become interesting due to the availability of single-photon avalanche diode (SPAD)
receivers with picosecond time resolution. We present a time-resolved non-sequential ray-tracing model and its
application to indirect line-of-sight detection of moving targets. The model makes use of the Zemax optical
design programme's capabilities in stray light analysis where it traces large numbers of rays through multiple
random scattering events in a 3D non-sequential environment. Our model then reconstructs the generated
multi-segment ray paths and adds temporal analysis. Validation of this model against experimental results is
shown. We then exercise the model to explore the limits placed on system design by available laser sources and
detectors. In particular we detail the requirements on the laser's pulse energy, duration and repetition rate, and
on the receiver's temporal response and sensitivity. These are discussed in terms of the resulting implications
for achievable range, resolution and measurement time while retaining eye-safety with this technique. Finally,
the model is used to examine potential extensions to the experimental system that may allow for increased
localisation of the position of the detected moving object, such as the inclusion of multiple detectors and/or
There has been a considerable effort recently in the development of planar chiral metamaterials. Owing to the lack of inversion symmetry, these materials have been shown to display interesting physical properties such as negative index of refraction and giant optical activity. However, the biosensing capabilities of these chiral metamaterials have not been fully explored. Ultrasensitive detection and structural characterization of proteins adsorbed on chiral plasmonic substrates was demonstrated recently using UV-visible circular dichroism (CD) spectroscopy. Second harmonic generation microscopy is an extremely sensitive nonlinear optical probe for investigating the chirality of biomaterials. In this study, we characterize the chiral response of chiral plasmonic metamaterials using second harmonic generation microscopy and CD spectroscopy. These planar chiral metamaterials, fabricated by electron-beam lithography, consist of right-handed and left-handed gold gammadions of length 400 nm and thickness 100 nm, deposited on a glass substrate and arranged in a square lattice with a periodicity of 800 nm.
Using an electron-multiplying CCD camera we observe both image-plane (position) and far-field (momentum) correlations between photon pairs produced by spontaneous parametric down-conversion, using a 201 × 201 two-dimensional array of pixels and a flux of around 0.02 photons per pixel. After background subtraction we characterize the strength of the signal and idler correlations in both transverse dimensions by applying entanglement and EPR criteria, showing good agreement with the theoretical predictions. Such devices could find a wide range of applications in quantum optics, including quantum computation with the spatial degrees of freedom of single photons.
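One commonly used (Reid-type) EPR criterion for these transverse variables, with [x, p] = iħ, is that any state admitting a local description satisfies

\[
\Delta_{\mathrm{inf}}(x_1)\,\Delta_{\mathrm{inf}}(p_1) \;\geq\; \frac{\hbar}{2},
\]

where \(\Delta_{\mathrm{inf}}(x_1)\) is the uncertainty in inferring the signal photon's position from a measurement on the idler (and likewise for momentum), so an observed product below ħ/2 demonstrates EPR correlations. Exact normalisations and the companion entanglement criteria vary between papers.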
The information carried by a photon can be encoded in one or more of many different degrees of freedom. Beyond the two-dimensional space of polarisation (spin angular momentum) our interest lies in the unbounded yet discrete state space of Orbital Angular Momentum (OAM). We examine how photon pairs can be generated and measured over a large range of OAM states.
We have developed a new approach to measuring the spatial position of a single photon. Using fibers of different lengths, all connected to a single detector, allows us to use the high timing precision of single-photon avalanche diodes (SPADs) to locate the photon spatially. We have built two 8-element detector arrays to measure the full-field quantum correlations in position, momentum and intermediate bases for photon pairs produced in parametric down-conversion.
The strength of the position-momentum correlations is found to be an order of magnitude below the classical limit.
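The time-multiplexing idea behind the fibre arrangement can be sketched as follows: each detector element is reached through a fibre with a distinct, known delay, so the photon's arrival time at the shared detector indexes its spatial position. The delay values below are assumptions, chosen to be much larger than the SPAD timing jitter.

import numpy as np

# Assumed per-element fibre delays (ns): element i adds roughly i * 5 ns.
FIBRE_DELAYS_NS = np.arange(8) * 5.0

def position_from_arrival_time(t_ns, delays_ns=FIBRE_DELAYS_NS, t0_ns=0.0):
    """Map a photon arrival time to the detector element (spatial bin)
    whose fibre delay best matches it."""
    return int(np.argmin(np.abs((t_ns - t0_ns) - delays_ns)))

print(position_from_arrival_time(10.3))   # -> 2 (third element)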
We report a violation of the CHSH inequality for ghost images. This is achieved by using two spatially separated
phase modulators within the context of a two-photon parametric down-conversion experiment. We obtain edge
enhanced images as a direct consequence of the quantum correlations in the orbital angular momentum (OAM)
of the down-converted photon pairs. For phase objects, with differently orientated edges, we show a violation of
the CHSH Bell-type inequality for an OAM subspace, thereby unambiguously revealing the quantum nature of
our ghost-imaging arrangement.
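The Bell test referred to above uses the standard CHSH combination, here with the "settings" being orientations of the phase masks within the chosen two-dimensional OAM subspace:

\[
S \;=\; E(\theta_1,\theta_2) - E(\theta_1,\theta_2') + E(\theta_1',\theta_2) + E(\theta_1',\theta_2'), \qquad |S| \leq 2
\]

for any local hidden-variable model, with each correlation E built from the measured coincidence rates; quantum mechanics allows values of |S| up to 2√2.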
The angular profile and the orbital angular momentum of a light mode are related by Fourier transform. Any
modification of the angular distribution, e.g. via diffraction off a suitably programmed spatial light modulator,
influences the orbital angular momentum spectrum of the light. This holds true even at the single photon level.
We observe the influence of various angular masks on the orbital angular momentum spectrum, both in the near
and the far field, and describe the resulting patterns in terms of angular diffraction. If photons are entangled in
their orbital angular momentum, diffraction of one photon affects the orbital angular momentum spectrum of
its partner photon, and angular ghost diffraction can be measured in the coincidence counts. We highlight the
role of the angular Fourier relationship for these effects.
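The angular Fourier relationship invoked above can be written explicitly: with ψ(φ) the angular amplitude of the mode, the OAM amplitudes are its Fourier coefficients,

\[
A_\ell \;=\; \frac{1}{\sqrt{2\pi}} \int_{-\pi}^{\pi} \psi(\phi)\, e^{-i\ell\phi}\, \mathrm{d}\phi ,
\]

so masking the angular profile, ψ(φ) → M(φ)ψ(φ), convolves the OAM spectrum with the Fourier coefficients of the mask M, which is the angular-diffraction picture described here.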
The Brownian dynamics of an optically trapped water droplet is investigated across the transition from over- to under-damped oscillations. The spectrum of position fluctuations evolves from a Lorentzian shape typical of overdamped
systems (beads in liquid solvents), to a damped harmonic oscillator spectrum showing a resonance peak.
In this latter under-damped regime, we excite parametric resonance by periodically modulating the trapping
power at twice the resonant frequency. We also derive from Langevin dynamics an explicit numerical recipe
for the fast computation of the power spectra of a Brownian parametric oscillator. The obtained numerical
predictions are in excellent agreement with the experimental data.
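For reference, the spectra discussed above follow from the underdamped Langevin equation for the trapped droplet (two-sided spectrum; prefactor conventions vary between texts):

\[
m\ddot{x} + m\gamma\dot{x} + m\omega_0^2 x = F(t), \qquad \langle F(t)F(t')\rangle = 2 m \gamma k_B T\,\delta(t-t'),
\]
\[
S_x(\omega) \;=\; \frac{2\gamma k_B T/m}{(\omega_0^2-\omega^2)^2 + \gamma^2\omega^2},
\]

which shows a resonance peak near ω₀ in the under-damped regime and reduces to a Lorentzian with corner frequency ω₀²/γ when γ ≫ ω₀.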
In the 1970s, Jones demonstrated photon drag by showing that the translation of a window caused a slight displacement
of a transmitted light beam. Similarly he showed that a spinning medium slightly rotated the polarization state. Rather
than translating the medium, the speed of which is limited by mechanical considerations, we translate the image and
measure its lateral delay with respect to a similar image that has not passed through the window. The equivalence, or
lack of it, of the two frames is subtle and great care needs to be taken in determining whether or not similar results are to be expected.
Holographic or diffractive optical components, such as a spatial
light modulator (SLM), can be used in optical tweezers for the
creation of multiple and modified optical traps. In addition to
this, SLMs can also be used to correct for aberrations within the
optical train resulting in an improved trapping performance.
Typically an electrically addressed SLM may deviate from flatness
by up to 4λ, dominated by astigmatism due to the overall
curvature of the SLM surface. This astigmatism may be corrected by
adding the appropriate hologram to the SLM display resulting in a
dramatic improvement in the fidelity of the focussed spot. The
impact that this correction has on the performance of the optical
trap is most noticeable for small particles. For the SLM used in
this study, the improvement in trap performance for 0.8 μm
diameter particles can be in excess of 25%. However, for 5 μm
diameter particles our results show an improvement of less than
0.5%. This dependence upon particle size is most probably
associated with the relative size of the PSF and the trapped
particle. Once the PSF is significantly smaller than the particle
diameter, further reduction brings little improvement in trap performance.
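A minimal sketch of the correction idea (not the procedure used in the study): the dominant astigmatism term can be removed by adding a Zernike-type phase to whatever trap hologram is displayed. The correction amplitude and SLM dimensions below are assumed for illustration; in practice the amplitude would be tuned by optimising the focal spot.

import numpy as np

WIDTH, HEIGHT = 512, 512          # SLM resolution in pixels (assumed)
CORRECTION_WAVES = 2.0            # astigmatism amplitude to remove (assumed)

# Normalised pupil coordinates and the Zernike primary-astigmatism term,
# rho^2 * cos(2*theta) = x^2 - y^2.
y, x = np.mgrid[-1:1:HEIGHT * 1j, -1:1:WIDTH * 1j]
astig = x**2 - y**2

def corrected_hologram(hologram_phase):
    """Add the astigmatism correction to a trap hologram and wrap the
    result into [0, 2*pi) for display on the SLM."""
    correction = 2 * np.pi * CORRECTION_WAVES * astig
    return np.mod(hologram_phase + correction, 2 * np.pi)

flat = np.zeros((HEIGHT, WIDTH))
print(corrected_hologram(flat).shape)   # (512, 512), values in [0, 2*pi)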
We demonstrate a technique for the multi-point measurement of fluid flow in microscopic geometries. The technique consists of an array of microprobes that can be simultaneously trapped and used to map out the fluid flow
in a microfluidic device. The optical traps are alternately turned on and off such that the probe particles are
displaced by the flow of the surrounding fluid and then re-trapped. The particles' displacements are monitored by
digital video microscopy and directly converted into velocity field values. The techniques described have potential
to be extended to drive an integrated lab-on-chip device, where pumping, flow measurement and optical sensing
could all be achieved by structuring a single laser beam.
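A minimal sketch of the displacement-to-velocity conversion (the frame interval and pixel calibration below are assumptions, not the values used here): the probe particle's positions recorded while its trap is off are fitted with a straight line, giving the local flow velocity directly.

import numpy as np

PIXEL_SIZE_UM = 0.1    # assumed image calibration (microns per pixel)
FRAME_DT_S = 0.002     # assumed video frame interval (500 fps)

def flow_velocity(track_px):
    """Estimate the local flow velocity (um/s) from a probe particle's
    positions (N x 2 array, in pixels) recorded while its trap is off.
    A straight-line fit over the release interval is one simple choice."""
    track = np.asarray(track_px, dtype=float) * PIXEL_SIZE_UM
    t = np.arange(len(track)) * FRAME_DT_S
    vx = np.polyfit(t, track[:, 0], 1)[0]
    vy = np.polyfit(t, track[:, 1], 1)[0]
    return np.array([vx, vy])

# Example: a particle drifting ~2 pixels per frame along +x
track = np.column_stack([np.arange(10) * 2.0, np.zeros(10)])
print(flow_velocity(track))   # ~ [100., 0.] um/s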
Central to the success of microfluidic systems has been the development of innovative methods for the
manipulation of fluids within microchannels. We demonstrate a method for generating flow within a
microfluidic channel using an optically driven pump. The pump consists of two counter rotating birefringent
vaterite particles trapped within a microfluidic channel and driven using optical tweezers. The transfer of spin
angular momentum from a circularly polarised laser beam rotates the particles at up to 10 Hz. We show that the pump is able to displace fluid in microchannels, with flow rates of up to 200 μm³ s⁻¹ (200 fL s⁻¹). The direction of fluid pumping can be reversed by altering the sense of rotation of the vaterite beads. We also
incorporate a novel optical sensing method, based upon an additional probe particle, trapped within separate
optical tweezers, enabling us to map the magnitude and direction of fluid flow within the channel. The
techniques described in the paper have potential to be extended to drive an integrated lab-on-chip device,
where pumping, flow measurement and optical sensing could all be achieved by structuring a single laser beam.
We use holographic optical tweezers to create and monitor the liquid flow within a micro-fluidic device. Using the tweezers to both trap and spin micron-sized beads within a 10-20 micron wide channel creates a fluid flow of the order of 200 cubic microns/sec. We also use the optical tweezers to measure the fluid flow by trapping and releasing probe particles that are imaged with high temporal and spatial resolution. Using the multi-trap capability of the holographic optical tweezers we measure the transverse fluid velocity at many positions simultaneously with an accuracy of better than 1 micron/sec. Such studies are highly pertinent to lab-on-chip systems for various applications and studies within the biosciences.
We have developed an interactive user interface that can be used to generate phase holograms for use with spatial light modulators. The program utilises different hologram design techniques, allowing the user to select an appropriate algorithm. It can be used to generate multiple beams and interference patterns, and for beam steering. We therefore see a major application of the program to be within optical tweezers, to control the position, number and type of optical traps.
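As an illustration of the kind of pattern such a program produces, the sketch below builds a single-trap "gratings and lenses" hologram: a blazed-grating term steers the trap laterally and a Fresnel-lens term moves it axially. All parameter values (SLM size, pixel pitch, wavelength, tilts) are assumptions for illustration only.

import numpy as np

N = 512                                   # SLM resolution (assumed square)
WAVELENGTH = 1.064e-6                     # trapping wavelength (m, assumed)
PIXEL = 15e-6                             # SLM pixel pitch (m, assumed)

y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2] * PIXEL

def trap_hologram(tilt_x=1e-3, tilt_y=0.0, focal_m=0.05):
    """Phase (radians, wrapped to [0, 2*pi)) that deflects the beam by the
    given tilt angles and adds a Fresnel-lens term of focal length focal_m,
    shifting the trap axially."""
    k = 2 * np.pi / WAVELENGTH
    grating = k * (tilt_x * x + tilt_y * y)          # beam-steering term
    lens = k * (x**2 + y**2) / (2 * focal_m)         # Fresnel-lens term
    return np.mod(grating + lens, 2 * np.pi)

phase = trap_hologram()
print(phase.shape, phase.min() >= 0, phase.max() < 2 * np.pi)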
We use holographic optical tweezers to trap multiple micron-sized objects and manipulate them in three dimensions. Trapping multiple objects allows us to create 3-dimensional structures, examples of which include simple cubes that can be rotated or scaled and complex crystal structures such as the diamond lattice; we also demonstrate interactive 3-dimensional control of trapped particles anywhere in the sample volume.