18 February 2021 Windowed region-of-interest non-uniformity correction and range walk error correction of a 3D flash LiDAR camera
Abstract

We present experimental methods and results for photo-response non-uniformity correction (PRNUC) in range for a 3D flash LiDAR camera from non-optimal static-scene calibration data. Range walk is also corrected. This method breaks the camera’s focal plane array (FPA) into 16 × 16 windowed regions of interest that are incrementally captured and stitched together in post-processing across the entire FPA. The illumination was not uniform, thus requiring the additional methods described in this paper to create an acceptable correction. We present the results of a full non-uniformity correction and range walk error correction applied to a set of independently collected validation frames; these validation frames used identical experimental conditions and the same target as the correction data. We show that this experimental approach improves the range accuracy and range precision of the corrected validation frames despite the sub-optimal conditions of the data used to compute the corrections; the single-shot range precision is corrected to 33 cm, as compared to a modeled precision of 15.65 cm, while the accuracy is corrected to 252 cm. This method has implications for simplifying the characterization of non-uniformity and range walk error, and their subsequent correction, in 3D flash LiDAR cameras.

1.

Introduction

We present experimental and computational methods for photo-response non-uniformity correction (PRNUC) in intensity and range returns, enabling range walk error correction, using windowed region of interest (WROI), high-frame-rate data capture of a static, non-optimal scene. The method uses data captured from a 128×128 InGaAs p-i-n diode 3D flash LiDAR camera’s WROI mode of operation, with a 16×16 window at a 10-kHz framing rate. These data had significant variations in illumination, including differences in the mean level of illumination on each region of interest. We correct dark-frame non-uniformity and then photo-response non-uniformity in both intensity and range, leading to a range walk error correction using the corrected frames. The experimental methods for characterizing the dark-frame non-uniformity correction (DFNUC) in intensity and range were identical, with both intensity and range returns being captured from the same 3D flash LiDAR camera; characterization of the PRNUC and range walk error was performed simultaneously. Photo-response data were computed as the median frame of the characterized range walk error, further streamlining data collection. This method processes the PRNUC independently for every region of interest captured, rather than globally for the entire full frame. For the non-uniformity corrected validation data, we show that the range precision and range accuracy are significantly improved; the range walk error corrected data significantly decrease range inaccuracy, while the range precision maintains significant improvements over the NUC corrected data.

Previous work by NASA-Langley and others has provided the experimental basis for the work presented in this paper. NASA-Langley and others have described methods for characterizing a 3D flash LiDAR system, including range walk and non-uniformity.1–5 Photo-response non-uniformity correction for a 2D flash LiDAR camera typically involves using an integrating sphere to create a uniform field across the entire focal plane array (FPA). However, a 3D flash LiDAR camera operates by measuring the time-of-flight of the laser pulse from the source, reflected off the target, and back to the FPA. The time-of-flight information of the pulse would be destroyed by the integrating sphere typically used for 2D flash LiDAR cameras. As such, the 3D flash LiDAR camera requires capturing calibration data of a target scene downrange, or direct, detector-by-detector characterization of the entire FPA. Work on PRNUC to date has focused on characterizing the FPA detector by detector. This research provides experimental methods to circumvent the initial dead range of the 3D flash LiDAR and to capture data that can be used for non-uniformity correction and range walk error correction of the camera.

1.1.

3D Flash LiDAR

The camera used in this research was a Voxtel VOX3D flash LiDAR camera. The camera is a SWIR, InGaAs p-i-n diode 3D flash LiDAR camera that provides 128×128 intensity and range returns with greater than 80% quantum efficiency. A 3D flash LiDAR is a LiDAR system that flood illuminates a scene, capturing a broader area at once than a scanning LiDAR.6,7 3D flash LiDAR systems typically employ time-of-flight ranging. Through timing synchronization with a laser source, the LiDAR can estimate the time-of-flight of the illumination downrange so long as the object is within its range gate. It is noteworthy that the range gate is analogous to the shutter speed of a passive camera and can be described in units of time or, equivalently, distance.8,9 The range gate describes the gated time over which the camera is set to respond to a return, such that there will only be a valid time-of-flight measurement within that period of time. For instance, in the case of this research, the range gate used was 2 μs, equivalent to gating over 300 m of range. The Voxtel VOX3D flash LiDAR camera can operate with range gates of 1 to 4 μs.
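The time-distance equivalence of the range gate follows from the two-way travel of the pulse. A minimal sketch (with illustrative names, not from the authors' software) of the conversion:

```python
# Convert a range gate duration to the distance it spans. The factor of 2
# accounts for the pulse traveling out to the target and back.
C_M_PER_S = 299_792_458.0  # speed of light, m/s

def gate_to_range_m(gate_s: float) -> float:
    """Distance (m) spanned by a range gate of duration gate_s (s)."""
    return C_M_PER_S * gate_s / 2.0

# The 2-us gate used in this research spans roughly 300 m of range.
print(round(gate_to_range_m(2e-6), 1))  # 299.8
```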

This research used a direct-detection, bistatic LiDAR, with a small bistatic angle, using pulsed, wide field of view (FOV) laser illumination to illuminate the scene without the requirement of scanning.7,8,10 The camera is a 3D direct-detection p-i-n LiDAR camera and thus directly detects intensity and time-of-flight information for range using a range-gated system and timing information when synchronizing the laser to the camera.8,10 Direct-detection LiDAR systems, in contrast to spatial or temporal heterodyne systems, directly detect the intensity of the fields incident upon the detector, without any ability to detect phase differences.7 In the current setup, direct detection of range through time-of-flight detection of the return signal on a per detector basis is capable of providing range information across the full 128×128 FPA.3

The Voxtel camera synchronizes to a laser by externally triggering the laser using a timing signal from the camera. This trigger signal to the laser can be delayed relative to the camera timing, allowing the start of the laser pulse to be delayed until deeper into the range gate of the camera. This capability was utilized for this research, as the camera only responds to ranges greater than the dead time of the camera. Also, the Voxtel camera can frame nominally at a maximum of 739 Hz while outputting full 128×128 frames; however, the camera is also capable of windowing down to a 16×16 region of interest, operating at up to a 24-kHz framing rate. For this work, we framed at 10 kHz in order to externally trigger the laser. This WROI mode of operation imparted additional noise on the return beyond what was seen using the full frame, which needed to be accounted for in post-processing.9 The VOX3D flash LiDAR camera used in this research produces two separate sets of data: one set for intensity and one for time-of-flight range information. For the purpose of this research, intensity refers to the intensity return of this camera, and range refers to the time-of-flight return of this camera; the intensity return has been converted to photon counts, while the range return has been converted to meters.

1.2.

Non-Uniformity

Traditional 2D cameras have non-uniformities in intensity that need to be corrected; the time-of-flight of the laser pulse need not be considered, as timing information, and subsequently range, is not measured. Non-uniformity correction in a 3D flash LiDAR camera, therefore, requires considering parameters not of concern for a 2D camera. Characterization of the non-uniformities and range walk error, while typically performed with an integrating sphere, cannot use that traditional industry standard here; integrating spheres destroy timing information, which adds a layer of complication to the development of experimental procedures for non-uniformity correction and range walk error correction. The experimental methods presented in this paper enable the characterization and correction of non-uniformities in both intensity and range from a 3D flash LiDAR camera. These methods are presented in greater detail in Sec. 2.

Non-uniformity in both intensity and range measurements is prevalent in 3D flash LiDAR FPAs, including the FPA used in this research. Dark-frame non-uniformities, also called fixed pattern noise, can display as the sum of dark offset and bias,11 while photo-response non-uniformities are caused by detector gain errors. Dark-frame non-uniformities for both intensity and range arise from imperfections in the CMOS readout, which often display as column-wise spatial non-uniformities. Because the time-of-flight range return is read out from the detection of the intensity return, the concept of detector gain error for the range return is somewhat misleading. The gain errors are in fact not directly related to detection but rather to the conversion between intensity and range information, provided by the timing conversion ramp; as such, for the range return, photo-response non-uniformity is better described as timing conversion gain error, rather than detector gain error as in the intensity return. These key differences between the readouts of the range and intensity returns further ensure that the DFNUC differs between the two returns.

Non-uniformities can also be significantly altered through thermal effects, causing drift in the non-uniformity. This phenomenon can require cooling or temperature stabilization of the camera, or a separate temperature-dependent correction. This paper assumes that the temperature is stable after a significant enough time has elapsed in laboratory conditions and does not proceed to correct the thermal effects for the uncooled p-i-n diode FPA.

DFNUC is the first step of a correction process that typically also involves PRNUC and range walk error correction.7,8,11,12 The DFNUC removes an estimate of the column amplification offset-induced fixed pattern noise, while the PRNUC restores the estimated illumination-dependent amplification gain to its appropriate value, which in the case of the p-i-n diode FPA is a gain equal to one. There is also a weak correlation between increasing intensity and error in range accuracy, known as range walk. Range walk accounts for timing differences between pulses of differing intensities in a detection thresholding system. Range walk is another parameter that requires correction in a 3D sensor system, such as a 3D flash LiDAR camera. It is noteworthy that the correction process for non-uniformity is applied identically to the intensity and range returns; while some modification of the experimental process was required to acquire frames from a 3D flash LiDAR camera for non-uniformity corrections, the processes remain otherwise identical.

This paper focuses on eliminating both sources for non-uniformity in range and intensity returns. The camera used in this experiment was a p-i-n diode architecture camera, thus the data collected for PRNUC should have a mean approximately equal to one.13 The data collected for DFNUC contain the sum of the bias and offset, Dx,y, also known as fixed pattern noise. This offset, Ox,y, is centered around the dark bias, B, in units of photons or meters, depending on the return,

Eq. (1)

Dx,y=Ox,y+B,
a value which was calculated for the same camera in the research presented at SPIE DCS 2018 (Ref. 8) for intensity and range (see Table 1) in the 61-MHz mode of operation of the camera. For this research, the values are determined for a 240-MHz optical bandwidth mode of operation of the camera.

Table 1

Basic parameters for intensity and range returns on VOX3D flash LiDAR camera.

Parameter         Intensity (photons)   Range (m)
Dark noise        16.53                 0.063
Dark bias         413.07                29.92
Dynamic range     3966.90               249.08

The typical PRNUC process involves the ratio of the difference of the photo-response frame, Px,y, and the dark frame, Dx,y, to the difference of their respective means, P̄x,y − D̄x,y,

Eq. (2)

Gx,y = (Px,y − Dx,y)/(P̄x,y − D̄x,y),
which provides the gain per detector on the FPA, provided that the illumination is uniform across the entire FPA. The mean values, P¯x,y and D¯x,y, were computed across the entire 128×128 averaged frame for intensity and range returns.
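Assuming NumPy arrays for the frames, Eq. (2) can be sketched as below; the dark bias (413.07 photons) and dark noise (16.53 photons) follow Table 1, while the illumination level and gain spread are illustrative assumptions:

```python
import numpy as np

# Sketch of Eq. (2): per-detector gain from a photo-response frame P and a
# dark frame D, valid when the illumination is uniform over the region used
# for the means. Frames here are synthetic, not measured data.
def prnuc_gain(P: np.ndarray, D: np.ndarray) -> np.ndarray:
    return (P - D) / (P.mean() - D.mean())

rng = np.random.default_rng(0)
D = 413.07 + rng.normal(0.0, 16.53, (128, 128))    # dark frame (photons)
G_true = 1.0 + rng.normal(0.0, 0.05, (128, 128))   # per-detector gain near 1
P = D + 1000.0 * G_true                            # uniformly illuminated frame
G = prnuc_gain(P, D)
print(abs(G.mean() - 1.0) < 1e-6)                  # True: gain frame averages to 1
```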

The full non-uniformity correction process requires both the DFNUC and the PRNUC. The DFNUC is subtracted from the raw frame data, while the DFNUC corrected frame data are divided by the PRNUC, to create the restored image, Cx,y,

Eq. (3)

Cx,y = (Yx,y − Dx,y)/Gx,y,
where Yx,y is the original, raw image, Dx,y is the DFNUC, and Gx,y is the PRNUC. The experimental methods for finding a DFNUC and PRNUC will be described in greater detail further in this paper.9
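A minimal sketch of applying Eq. (3); the frames are synthetic, with the dark bias from Table 1 and an assumed uniform 1500-photon scene level:

```python
import numpy as np

# Sketch of Eq. (3): restore a raw frame Y using the dark frame D and the
# gain frame G computed from Eq. (2). Values are illustrative.
def nuc_correct(Y: np.ndarray, D: np.ndarray, G: np.ndarray) -> np.ndarray:
    return (Y - D) / G

rng = np.random.default_rng(1)
D = np.full((128, 128), 413.07)              # dark frame (photons)
G = 1.0 + rng.normal(0.0, 0.05, (128, 128))  # gain frame near 1
signal = 1500.0                              # true, uniform scene level (photons)
Y = D + signal * G                           # raw frame with offset and gain errors
C = nuc_correct(Y, D, G)
print(np.allclose(C, signal))                # True: non-uniformity removed
```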

1.3.

Range Walk

Range walk error characterizes how much the range return shifts when the intensity of the return is varied. This shift happens in threshold-triggering systems, where the time-of-flight circuit is triggered earlier by a more intense return. This results in the returned range value “walking” further downrange for a weak return just barely over threshold. The range walk error, given by the difference between the true range (in meters), Rx,y, and the non-uniformity corrected, measured range return, R′x,y, as ERx,y(Φx,y) = Rx,y − R′x,y, is related to the non-uniformity corrected intensity return, Φx,y (in photons),

Eq. (4)

ERx,y(Φx,y) = ax,yΦx,y^(bx,y),
where ax,y and bx,y are fit parameters. Thus, the true range can be determined as

Eq. (5)

Rx,y = R′x,y + ERx,y(Φx,y),
which provides a general range walk error correction algorithm. This correction process has been described by He et al.14 in 2013. The process described here is modified to use data collected using multiple regions of interest without uniform illumination. By combining the processes together, a greater improvement in both range accuracy and range precision is achievable than with just a single set of corrections.
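For a single detector, the fit of Eq. (4) and the correction of Eq. (5) can be sketched as follows; the fit parameters, intensity levels, and true range are synthetic placeholders, and SciPy's curve_fit stands in for whatever fitting routine the authors actually used:

```python
import numpy as np
from scipy.optimize import curve_fit

# Single-detector sketch of Eqs. (4) and (5): fit the range walk error
# E_R = a * Phi**b from NUC-corrected intensity/range pairs, then add the
# fitted error back to the measured range to recover the true range.
def walk(phi, a, b):
    return a * phi**b

a_true, b_true, R_true = 2.0, -0.5, 137.33
phi = np.array([200.0, 700.0, 1200.0])         # three intensity levels (photons)
R_meas = R_true - walk(phi, a_true, b_true)    # measured range walks with intensity
(a, b), _ = curve_fit(walk, phi, R_true - R_meas, p0=(1.0, -0.4))
R_corr = R_meas + walk(phi, a, b)              # Eq. (5)
print(np.allclose(R_corr, R_true))             # True once the fit converges
```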

The data collected in this research were applicable to both PRNUC and range walk error correction. By using a variable attenuator that was constructed onsite, the incident beam was varied over 24.23% of the dynamic range of the intensity return. These data were used to determine the trend in range walk, while the mean of these data, per detector, was used to generate a set of PRNUC frames. Significant improvements in the quality of the corrections and simplification of the data collection process resulted from this research.

1.4.

Metrics of Performance

Informative metrics of performance and quality are important in any research describing the results of corrective processes of sensor systems, including LiDAR systems. For topics related to time-of-flight returns, i.e., “range” returns, it should be noted that the quality metrics used in this paper will be range precision and range accuracy. Range precision is described by the standard deviation of the return; range precision is a description of uncertainty in measurement.

Range precision will be used as a quality metric throughout, most importantly for the non-uniformity corrections and resulting measurements. Relative comparisons of range precision are important to note: the range precision should decrease toward a theoretical minimum as more optimal corrections are applied. Range precision is an important quality metric for range walk as well; however, so is range accuracy.

Range accuracy, on the other hand, is useful for comparing results from range walk corrected returns to returns that have not been corrected for range walk. Range accuracy is described by the mean value of the return relative to the expected, or measured, value; range accuracy is a description of expectation in measurement. As the primary error associated with range walk is in range accuracy, the metric is highly informative of the performance of the correction. However, both range precision and range accuracy should improve with each successively applied correction: first non-uniformity correction and then range walk error correction.
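These two metrics can be sketched directly from their definitions; the 2.5-m bias and 0.33-m spread of the synthetic returns below are illustrative values, not measured data:

```python
import numpy as np

# Sketch of the two quality metrics: range precision as the standard
# deviation of the range return, range accuracy as the mean offset of the
# return from the known target range.
def range_precision(returns: np.ndarray) -> float:
    return float(np.std(returns))

def range_accuracy(returns: np.ndarray, true_range: float) -> float:
    return float(np.mean(returns)) - true_range

rng = np.random.default_rng(2)
returns = rng.normal(137.33 + 2.5, 0.33, 10_000)  # biased, noisy returns (m)
print(round(range_precision(returns), 2))          # ~0.33 m
print(round(range_accuracy(returns, 137.33), 1))   # ~2.5 m bias
```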

2.

Methods

Here, we describe the experimental methods used to capture the data and then the methods used in post-processing the data. The experimental methods are used to collect dark frames and illuminated calibration frames for post-processing into a PRNUC. This section outlines the experimental procedure. The NUC was post-processed in two steps; first, the DFNUC in intensity and range was computed from a set of dark frames, and next, the PRNUC in intensity and range was computed using a set of frames collected of a target downrange. The frames collected for the PRNUC were then used to compute the range walk error correction, as well, after being corrected for non-uniformities.

We captured data in 16×16 regions of interest with non-uniform illumination covering, incrementally, the full 128×128 FPA; each region of interest captured had varying levels of incident illumination. These sub-optimal calibration data were post-processed using DFNUC and PRNUC in intensity and range, enabling a range walk error correction. Ideally, in calibration setups, an integrating sphere is used for characterizing the non-uniformities; however, timing information is lost due to pulse averaging in integrating spheres, which precluded the usage of an integrating sphere for this research. While the intensity data could have been captured in this manner, in order to process the data for range walk error correction, both the intensity and range returns need to be from the same capture sequence; as such, we followed procedures specific to range non-uniformity correction data collection. Another approach is to directly illuminate a single detector at a time on the FPA to characterize the response; while a fairly standard procedure in the field, the process incurs a significant time penalty to characterize the FPA and generate a correction table. The process tested in this paper characterizes the FPA using regions of interest rather than single detectors, thus increasing the speed of the process. Data collection for the correction processes was conducted within 30 h, while data collection for validation frames was conducted within 5 h, once the experimental design and setup were worked through.
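The stitching of the 64 windowed captures into a full frame can be sketched as below; the row-major ordering of the windows over an 8×8 grid is an assumption made for illustration:

```python
import numpy as np

# Sketch of stitching 64 windowed 16x16 region-of-interest captures into a
# single full 128x128 frame, as done in post-processing.
def stitch(rois: list) -> np.ndarray:
    full = np.empty((128, 128))
    for k, roi in enumerate(rois):
        r, c = divmod(k, 8)  # window position on the assumed 8x8 grid
        full[16 * r:16 * (r + 1), 16 * c:16 * (c + 1)] = roi
    return full

# Each synthetic window is filled with its own index so placement is visible.
rois = [np.full((16, 16), float(k)) for k in range(64)]
full = stitch(rois)
print(full.shape, full[0, 0], full[127, 127])  # (128, 128) 0.0 63.0
```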

At SPIE Defense and Commercial Sensing, 2018, full-frame DFNUC and limited range walk characterization of a directly illuminated section of the FPA were demonstrated for this same camera for timing and signal at a detector bandwidth of 61 MHz.8 In this paper, the research is expanded to intensity and range PRNUC for a detector bandwidth of 240 MHz, toward the goal of characterization of range walk across the full FPA. For this paper, a commercial off-the-shelf, tabletop optics design was used for most of the experimental setup. Additional methods developed for this effort, and presented at SPIE Defense and Commercial Sensing, 2019,9 demonstrate the experimental capability to capture data at high frame rate in a WROI setting. The experimental procedure for this work is largely analogous to the SPIE Defense and Commercial Sensing, 2019, proceedings paper; however, the procedure described as follows expands the work of that paper beyond a single region of interest, and to the specific purpose of non-uniformity correction and range walk error correction under similar operating conditions. A description of the methods follows.

2.1.

Dark Frames

Dark frames were collected on the camera using different operational settings than described in the 2018 SPIE DCS paper. Specifically, dark frames were collected across 64 individual 16×16 regions of interest that collectively make up the FPA, at a framing rate of 10 kHz, with the scene subsequently stitched into a single 128×128 dark frame. In all cases, the camera requires thermal stability before running corrections. Thermal stability implies at least a 1-h waiting period while the camera is powered on before beginning experiments, to allow the internal components, which are not cooled, to heat to their stabilization point. Later experiments with this camera will have thermo-electric cooling and characterization of thermal variations in non-uniformity and thus will not require waiting for thermal stability. For all dark frame experiments, the lens and cap were kept on, and all openings in the camera were sealed shut to prevent ambient sources of illumination from affecting results.

Previous research provided a dark bias value,8 allowing dark frames to be collected without adjusting biasing levels on the camera. Therefore, the dark non-uniformity frames were adjusted from the default null return value, which averages to a value near the end of the range gate, to where the mean value equals that of the dark frames presented at SPIE DCS 2018.8 This correction process was applied to both intensity and range returns.
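This re-biasing step can be sketched as a mean shift that preserves the per-detector structure; the dark bias in range is taken from Table 1, while the synthetic frame's default null level near the end of the gate is an illustrative assumption:

```python
import numpy as np

# Sketch of the dark-frame re-biasing: shift the frame so its mean matches
# the previously measured dark bias while keeping per-detector structure.
DARK_BIAS_RANGE_M = 29.92  # dark bias of the range return, Table 1

def rebias(dark_frame: np.ndarray, target_bias: float) -> np.ndarray:
    return dark_frame - dark_frame.mean() + target_bias

rng = np.random.default_rng(3)
raw_dark = 280.0 + rng.normal(0.0, 0.063, (128, 128))   # null return near gate end
adjusted = rebias(raw_dark, DARK_BIAS_RANGE_M)
print(abs(adjusted.mean() - DARK_BIAS_RANGE_M) < 1e-6)  # True
```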

2.2.

Photo-Response and Range Walk Frames

To correct the photo-response non-uniformity of this camera in range and intensity, a flat, scattering target was constructed at a range of 1.18 m, located on the same optical tabletop as the LiDAR system. While this was nominally within the dead timing zone of the camera, a delay of 1.7 μs was placed between the camera T0 and the laser external trigger, which had the effect of pushing back the perceived range by 137.33 m within the 300-m range gate after factoring in the other system delays (Fig. 1). This was largely done to provide a greater return value in range to assist with correcting photo-response non-uniformity, by partially compensating for range walk error incurred by the intensity return. This greater range value is more than 38-fold the maximum range walk of this camera; as photo-response non-uniformity scales multiplicatively, while range walk error is additive, deviation in measurement of photo-response non-uniformities due to range walk error diminishes with increasing range. For example, the maximum range walk error of 2.46 m, comparable to the range resolution of the camera when operating at a bandwidth of 61 MHz, produces a relative error of 16.4% at a 15-m range but only 1.64% at a 150-m range. For the 150-m range, the absolute error, when biased back to the 1.18-m true range, would be 1.93 cm, which is 56.3% of the minimum range dimension of a voxel for this camera with a 300-m range gate, 3.44 cm.
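The multiplicative-versus-additive argument reduces to simple arithmetic, sketched below with the 2.46-m maximum range walk from the text:

```python
# Arithmetic check: a fixed additive range walk error is a much smaller
# relative (multiplicative) error at a longer perceived range.
MAX_WALK_M = 2.46  # maximum range walk of this camera (m)

def relative_error(range_m: float) -> float:
    return MAX_WALK_M / range_m

print(f"{relative_error(15):.1%}")   # 16.4%
print(f"{relative_error(150):.2%}")  # 1.64%
```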

Fig. 1

Timing diagram for the 3D flash LiDAR system. It is noteworthy that the configuration of the system included a secondary delay for the T0 signal triggering the laser. The camera itself was internally triggered, thus the camera T0 signal was used to trigger the laser.


The delay was later subtracted out in post-processing to recover the original range information. By viewing a flat-field target over many smaller 16×16 regions of interest, the illumination of the target was kept uniform, minimizing the effects of illumination variation on the range return, for instance, from range walk. Also, the beam was expanded considerably before illuminating the target, from 4.6 mm to 29 cm, using a single plano-convex lens, with the region of interest’s FOV subtending a cross-sectional area of only about 0.4% of the projected beam’s area. The frame rate, while operating in synchronization with the laser, was required to be at least 9.8 kHz for external triggering of the laser (Fig. 1). A rate of 10 kHz was chosen, and rates up to 16 kHz were tested in synchronization with the laser. In contrast, in full framing mode (128×128), the camera is only operational up to 739 Hz.

Biasing settings on the camera were kept at default for the DFNUC, as well as for experiments involving active imaging. Framing rate was set independently; in windowing down to a 16×16 region of interest, a C++ compiled executable using Matrox Imaging Library 10 was required for frame grabbing, as MATLAB produced a grabbing error that was bypassed by the compiled executable. The C++ compiled executable generated a batch of 16-bit uncompressed TIFF images as output, which were then, using a secondary MATLAB script, combined into a single MAT data file for use in MATLAB. It should be noted that higher framing rates were possible: up to 24 kHz was tested with internal triggering independent of the laser, and up to a 16-kHz frame rate was tested successfully with timing synchronization to a pulsed fiber laser, windowing down to a 16×16 region of interest.

PRNUC is necessary to correct gain error non-uniformities across the FPA. Although ideally a p-i-n detector array should uniformly have a gain of 1, various factors, such as manufacturing variability from detector to detector and the process of die placement, will cause variation in the gain, which can be corrected. The process for correcting this gain is potentially applicable to linear-mode avalanche photodiodes with modifications to the experimental methods and the specific application of the processes described for non-uniformity and range walk error corrections. These modifications take into account the adjustable linear gain in such a camera and require processing and correction of data across multiple gains. The data collected to characterize range walk are the same data collected for the PRNUC. A later run with the same experimental setup was used to collect frames for validation purposes.

A sheet of heavy white paper was attached to a flat piece of metal to act as a flat-field, Lambertian target, as displayed in Fig. 2(b); the experimental setup is displayed in Fig. 2(a). For these experiments, we utilized a NuPhoton fiber laser, operating at a 1550.12-nm central wavelength, in a stable TEM00 mode, with 20-μJ pulse energy, a 10-kHz pulse repetition frequency, and a beam waist of 4.6 mm. The laser was operated synchronously, triggered by the camera, through a Berkeley Nucleonics 575 model digital pulse delay generator. The camera spectral response is typical for InGaAs (950 to 1700 nm), with greater than 80% quantum efficiency at 1550 nm. The laser beam propagates through a half wave plate, mounted in a Zaber motorized rotation stage, and through a polarization beam splitter. The beam is then reflected by a large steering mirror and expanded with a 38-mm effective focal length lens to flood illuminate the target approximately 1.18-m downrange (Fig. 3, Table 2).

Table 2

Equipment list for this research.

Equipment                        Manufacturer          Part number
Detector, test system
  Camera                         Voxtel                VOX3D
Illumination source
  Pulsed fiber laser             NuPhoton              EDFL-Nano-TT-1550-2-20-200mW-COL
Receive optics
  Telephoto lens                 Edmund Optics         #83-165
Variable attenuator
  Half wave plate                Thorlabs              WPH10M-1550
  Motorized rotation stage       Zaber                 T-RSW60C-KT04U
  Rotation stage controller      Zaber                 X-MCB1
  Polarization beam splitter     Thorlabs              PBS124
  Beam trap                      Thorlabs
Control system
  Digital pulse delay generator  Berkeley Nucleonics   BNC-575
Beam expansion
  Plano-convex lens              Newport               KPX079AR.18
Target
  Metal plate                    HP
  Heavy white paper              Georgia-Pacific

Fig. 2

Photographs of the (a) experimental setup and (b) target of table-top experiment for PRNUC for range return are displayed. It is noteworthy that an IR camera is used to assist with realignment of the camera when changing regions of interest.


The pulse delay generator is operated to synchronize the laser and camera at a 10-kHz pulse repetition frequency (PRF) and framing rate, respectively. The camera T0 is fed to the pulse delay generator and used to trigger a transistor-transistor logic (TTL) signal that is fed to the laser; this TTL signal is used to externally trigger the laser at 10 kHz. The camera was in internal triggering operation, thus directly triggering the firing of the laser; this would not have been possible were it not for the low jitter of the pulse delay generator and the laser. The peak-to-peak jitter of the laser was less than 300 ps, while the jitter of the pulse delay generator in a TTL mode of operation was less than 50 ps. A delay of 1.7 μs is applied to the TTL signal triggering the laser; there is also an internal camera delay of 32.5 ns. This delay extends the time-of-flight range to near the end of the 2.0-μs range gate, ensuring the return is not within the dead timing zone of the camera (Fig. 1). By delaying the initiation of a laser pulse relative to the camera’s internal timing, the detection of the pulse is subsequently delayed. Thus, when the pulse is detected, it is perceived by the time-of-flight circuitry as being further downrange. It should be noted that this has implications for additional characterization experiments using this modality, but in this paper, the method provides a way to significantly compact a typically longer-range experiment (Fig. 3).

The camera tilt was adjusted to manually scan the FOV incrementally, while keeping the focus and f-stop constant. The camera tilt was readjusted only when the illumination decreased to where detectability became a concern. Thus, several regions of interest were scanned in sequence without aiming or adjusting the camera tilt, or the FOV of the camera. In doing so, and in considering the usage of a single plano-convex lens for beam expansion, rather than an engineered diffuser, the illumination was significantly non-uniform across the full frame of the return. In processing these frames, consideration of this additional non-uniformity was required for creation of a suitable PRNUC frame.

Fig. 3

A diagram of experimental setup is displayed. The beam is expanded using a plano-convex lens through an attenuator comprised of a half wave plate housed in a motorized rotation stage, 50/50 polarization beam splitter, and beam dump; the linearly polarized laser illumination is rotated incrementally using the half wave plate, which in turn incrementally adjusts what ratio of the beam is transmitted downrange. This illumination is then steered by a large mirror to illuminate the target downrange.


2.3.

Processing Corrections

The data were post-processed into a PRNUC frame using the median frame of the characterized range walk error frames. The procedure required processing of the dark frame and photo-response frame for the computation of the PRNUC frame. The data were then fully corrected for both dark-frame non-uniformity and photo-response non-uniformity in intensity and range, before being processed for the characterization of the range walk.

The correction of non-uniformities involved calculation of the PRNUC frame, Gx,y, using Eq. (2), where P is the photo-response frame [Fig. 4(b)] and D is the dark frame [Fig. 4(a)]. The process to correct a raw frame, Yx,y, to a corrected frame Cx,y, follows Eq. (3), Cx,y = (Yx,y − Dx,y)/Gx,y. Globally, across the FPA, there were three intensity levels captured and fit to Eq. (4) as ERx,y(Φx,y) = ax,yΦx,y^(bx,y), where Φx,y is the intensity, ERx,y(Φx,y) is the range walk error, and ax,y, bx,y are fitting parameters [Figs. 4(c) and 4(d)].14 Using the fitting parameters a and b, the true range can be recovered from a return or sequence of returns by simply computing Rx,y = R′x,y + ERx,y(Φx,y), where R′x,y and Φx,y are the full NUC corrected range and intensity returns.

2.4.

External Quantum Efficiency

The external quantum efficiency was characterized using a sample of 1280 of the 16,384 detectors, covering five 16×16 regions of interest, by measuring the response when attenuated over 10 increments with a maximum attenuation of 0.97 and a minimum of 0.78 (Table 3). This attenuation is in addition to any transmission losses from the optics, which for this system should have a transmission efficiency greater than 0.9. The intensity is normalized by the dynamic range of 3966.9 photons; the resulting data were fit to a curve of the form f(x) = a·x^b + c, with a = 0.3013 ± 0.0388, b = 5.782 ± 2.000, and c = 0.3224 ± 0.0506, where 95% confidence bounds are given for each of a, b, and c (Fig. 5).

Table 3

Input signal attenuation and normalized intensity returns used to characterize the external quantum efficiency.

Nsig (p.u.)    Nret (p.u.)
0.780          0.400
0.808          0.406
0.835          0.423
0.860          0.445
0.883          0.470
0.905          0.495
0.924          0.517
0.942          0.536
0.957          0.555
0.970          0.572

The response curve displayed in Fig. 5 was created from a curve fit to the 10 normalized intensity data points listed in Table 3. The external quantum efficiency was calculated by using the fitted curve to find the value of the return when the signal reaches zero and comparing this value to the dark level. The x-intercept was calculated, and the bth root of this intercept was taken to provide the quantum efficiency value of 82.29%, where b = 5.782 ± 2.000. The 95% confidence bounds quoted above are provided by the curve fit.
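The fit in Fig. 5 can be reproduced from the Table 3 data; this sketch uses `scipy.optimize.curve_fit`, and the initial guess is an assumption made here to aid convergence, not part of the authors' stated procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Table 3: input signal attenuation Nsig vs. normalized intensity return Nret
n_sig = np.array([0.780, 0.808, 0.835, 0.860, 0.883,
                  0.905, 0.924, 0.942, 0.957, 0.970])
n_ret = np.array([0.400, 0.406, 0.423, 0.445, 0.470,
                  0.495, 0.517, 0.536, 0.555, 0.572])

def response(x, a, b, c):
    # Response curve of the form f(x) = a * x**b + c
    return a * x**b + c

# p0 is a hypothetical starting point chosen near the published parameters.
(a, b, c), _ = curve_fit(response, n_sig, n_ret, p0=(0.3, 5.0, 0.3))
```

The recovered parameters land within the published 95% confidence bounds for a, b, and c.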

Fig. 4

(a) The DFNUC frame for the range return (meters) and (b) the PRNUC frame for the range return (p.u.). Also displayed are the fit parameters (c) a (meters/photon) and (d) b (p.u.) for the intensity-to-range-walk-error conversion. The fit parameters convert the corrected intensity return to range walk error, enabling range walk error correction of the non-uniformity corrected range return.

OE_60_2_023103_f004.png

Fig. 5

Response curve of the input signal attenuation versus normalized intensity return (p.u.). The data were fitted to a curve of the form f(x) = a·x^b + c, with a = 0.3013 ± 0.0388, b = 5.782 ± 2.000, and c = 0.3224 ± 0.0506. Each of a, b, and c is in (p.u.).

OE_60_2_023103_f005.png

Table 4

Region of interest method, single shot range precision.

Method         σ (cm)
Uncorrected    391.93
NUC            72.22
Range walk     33.15
Model          15.65

Fig. 6

Single shot range precision modeled for the dynamic range of the VOX3D flash LiDAR camera. The range precision decreases from a maximum of 17.1 to 14.9 cm.

OE_60_2_023103_f006.png

2.5.

Range Precision Modeling

The anticipated range precision for this system was modeled to provide a comparison for the experimental results. The single shot range precision, σr, was modeled from the timing jitter and timing resolution of the detector. The timing resolution, σres², is provided by

Eq. (6)

σres² = ((Vnoise / VDR)·Δtgate)²,
where Vnoise and VDR are the dark noise and dynamic range of the detector, typically in units of V, and Δtgate is the range gate, typically in units of μs. The timing resolution scales linearly with the range gate, which can vary from 150 to 600 m on the camera used in this research; this amounts to a factor of 4 improvement in range precision when using a 150-m range gate instead of a 600-m range gate.

The timing jitter is added to the timing resolution to provide the timing precision. The timing jitter, σjitter², is provided by

Eq. (7)

σjitter² = σt,ref²·(nref / nsig),
where σt,ref² is the reference jitter, nref is the reference signal, and nsig is the input signal on the detector. The timing jitter variance scales as 1/N with signal, a result that follows from the signal-to-noise ratio. Thus, in the high signal limit this term trends toward zero and the timing resolution term contributes increasingly more to the range precision, whereas in the low signal limit the timing jitter term dominates.

The range precision, σr, is therefore provided as the square root of the sum of these two variance terms,

Eq. (8)

σr = √(σjitter² + σres²),

Eq. (9)

σr = √(σt,ref²·(nref / nsig) + ((Vnoise / VDR)·Δtgate)²),
where σjitter², σres², and their respective terms are defined previously in Eqs. (6) and (7).

The value of the range precision can now be estimated. The median signal level from the tested intensity returns was used to estimate the range precision using Eq. (9). The estimated range precision for the experiments conducted in this paper is 15.65 cm (Fig. 6). This value, in turn, will be used for comparison with the experimental results.
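Equations (6)–(9) can be combined into a small numerical model. The detector constants below are placeholders, since Vnoise, VDR, σt,ref, and nref are not listed in the paper; the gate is expressed in range units so the result is a range precision directly, which is an interpretive assumption.

```python
import math

def range_precision(n_sig, n_ref=1.0, sigma_t_ref=10.0,
                    v_noise=0.01, v_dr=1.0, gate=150.0):
    # Eq. (9): combine the jitter variance (Eq. 7) and the timing-resolution
    # variance (Eq. 6) in quadrature. All constants here are illustrative.
    jitter_var = sigma_t_ref**2 * (n_ref / n_sig)
    res_var = ((v_noise / v_dr) * gate)**2
    return math.sqrt(jitter_var + res_var)
```

In the high-signal limit the jitter term vanishes and σr approaches (Vnoise/VDR)·Δtgate, which is where the factor-of-4 improvement from shortening the range gate from 600 to 150 m appears.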

3.

Results and Discussion

Results of the experimental methods are presented here with a discussion of the improvements in range precision and range accuracy, and of how the experimental methodology advances the state of the art for non-uniformity correction and range walk error correction in a 3D flash LiDAR camera. This section discusses the results of correcting a set of validation frames independently collected using methods identical to those used for the correction data; the results are discussed for both a full non-uniformity correction and an additional range walk error correction. These results provide evidence that the experimental methods presented here are a viable alternative to more laborious characterization approaches.

3.1.

Results

Here, we present the results for non-uniformity correction and range walk error correction of a set of data specifically captured for validation purposes. These data were captured using the same methods and experimental setup as the data used to produce the non-uniformity correction and range walk error correction; however, they were captured independently. Because of this, the non-uniformity in illumination, and subsequently the range walk, across the FPA differs from that of the data used in computing the corrections. We present the single shot range precision (Table 4) and median range accuracy (Table 5) of the uncorrected median frame [Fig. 8(b)] and the median frame after a full non-uniformity correction [Fig. 8(c)]. We also present the median range accuracy and single shot precision for the frame after range walk correction is applied to the NUC-corrected frame [Fig. 8(d)].

Statistical analysis of 1250 corrected frames was performed to acquire the single shot range precision and the single shot median (range accuracy). In both cases, the median and standard deviation were computed for each 16×16 region of interest, per individual frame, across the entire sequence of data collected. Single shot range precision was computed as the standard deviation of the 16×16 windowed return in a region of interest, while range accuracy was computed as the median of the same windowed return; the standard deviation and median were calculated independently for every frame collected, with the median of the resulting vectors producing the final reported results. Two filtering processes were used to exclude outliers from the statistical calculations. The first filter removed return values outside the range gate, excluding values greater than 300 m or less than 0 m; the dark bias (Table 1) was subtracted from the uncorrected range return, and in all cases a delay of 137.33 m was subtracted from the return to account for internal and external timing delays (Fig. 1). The second filter removed values greater than three standard deviations from the median of the data passing the first filter. The first filter is calculated globally across all 1250 frames, while the second is calculated independently for each frame; both are calculated independently for every region of interest.
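The per-ROI statistics and two-stage outlier filtering described above can be sketched as follows; the array layout and NumPy implementation are assumptions, not the authors' code.

```python
import numpy as np

def roi_statistics(frames, gate_min=0.0, gate_max=300.0, n_sigma=3.0):
    # frames: (n_frames, 16, 16) range returns for one region of interest.
    # Filter 1 (global over all frames): keep only returns inside the range gate.
    in_gate = (frames > gate_min) & (frames < gate_max)
    stds, meds = [], []
    for frame, mask in zip(frames, in_gate):
        vals = frame[mask]
        if vals.size == 0:
            continue
        med, std = np.median(vals), np.std(vals)
        # Filter 2 (per frame): drop values more than n_sigma standard
        # deviations from the median of the filter-1 data.
        keep = vals[np.abs(vals - med) <= n_sigma * std] if std > 0 else vals
        stds.append(np.std(keep))     # single shot range precision, this frame
        meds.append(np.median(keep))  # single shot range accuracy, this frame
    # Final reported values: median over the per-frame vectors.
    return float(np.median(stds)), float(np.median(meds))
```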

The range precision was significantly improved upon application of the non-uniformity correction. The uncorrected single shot range precision was 391.9 cm, which was improved to a single shot range precision of 72.2 cm with non-uniformity correction; further correction of the range walk error reduced the single shot range precision to 33.2 cm (Fig. 7). The modeled single shot precision was calculated to be 15.65 cm (Fig. 6); additional electronic noise on the FPA caused by operating in region of interest mode on the camera is a significant contributor to the disparity between the modeled single shot range precision and the measured, calculated single shot range precision (Table 4).9

Table 5

Region of interest method, range accuracy.

Method         μ (cm)    Error (%)
NUC            564.33    378.19
Range walk     252.51    113.96
True range     118.0     0.51

The range accuracy was significantly improved by applying the range walk error correction (Table 5). The true range was measured at 118 cm (Fig. 7); the relative error of this true-range measurement was estimated at 1.5 mm per measurement step, or 6 mm total, corresponding to 0.51% relative error, er, calculated using

Eq. (10)

er = |R̄′x,y − R̄x,y| / R̄x,y,
where R̄x,y is the median value of the true range and R̄′x,y is the median value of the measured, corrected range.

The improvement in range accuracy from the non-uniformity corrected frames to the fully range-walk-error-corrected frames is of central significance to this research. Because the uncorrected frame had a bias equal to the median of the DFNUC in range subtracted from every element, which imparts some ambiguity to its median range value, the range accuracy statistics focus only on the non-uniformity corrected and range walk error corrected returns. The median range was 564 cm, or 378% range inaccuracy, for the non-uniformity corrected frame. Applying the range walk correction reduced the median range across the return to 253 cm, or 114% range inaccuracy, relative to the measured true range of 1.18 m (Table 5).
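The inaccuracy percentages in Table 5 follow directly from Eq. (10); a minimal check using the medians reported there reproduces them to within rounding.

```python
def relative_error(r_corrected, r_true):
    # Eq. (10): e_r = |median corrected range - median true range| / true range
    return abs(r_corrected - r_true) / r_true

# Median ranges from Table 5, in cm
TRUE_RANGE = 118.0
err_nuc = relative_error(564.33, TRUE_RANGE)  # NUC-only frame, ~378%
err_rw = relative_error(252.51, TRUE_RANGE)   # range-walk corrected frame, ~114%
```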

3.2.

Discussion

Despite using non-uniformly illuminated data captured from 16×16 regions of interest at high frame rate across the FPA, significant improvements in range precision and range accuracy were achieved. PRNUC was achieved using the data collected for range walk error correction, with only minor adjustments in the processing of this data. PRNUC in intensity and range enabled range walk error correction of this data. The range precision for non-uniformity corrected validation frames improved to 72.2 cm from 391.9 cm for uncorrected frames; range precision was further improved to 33.2 cm when applying the range walk error correction to the set of frames collected for validation purposes (Table 4). Application of the range walk error correction significantly decreased range inaccuracy (Table 5).

The methods described in this paper for collecting frame data to process into a PRNUC and range walk error correction for a 3D flash LiDAR camera significantly simplify the experimental process (Fig. 3). The state of the art for non-uniformity correction has focused either on computational, scene-based methods for minimizing non-uniformities or on calibration methods that characterize the FPA. Because this is a 3D flash LiDAR FPA, calibration methods such as an integrating sphere are not possible when seeking to characterize range return non-uniformities. Typical experiments in this case center on detector-by-detector characterization of the FPA, which is a serious and lengthy investment of time and expense. We have demonstrated a method for characterizing the non-uniformities in a 3D flash LiDAR camera with significantly less effort.

In Fig. 8, the results of the correction process are displayed for the intensity frame shown in Fig. 8(a); the non-uniformity corrected frames [Fig. 8(c)] and the fully range-walk-error-corrected frames [Fig. 8(d)] are additionally displayed. The returns show marked progress from the non-uniformity correction to the range walk error correction, visibly notable in both range precision and range accuracy. This is displayed more clearly in Fig. 7, where the histogram of the uncorrected range return is shown alongside the results with each correction applied; the frames corrected of range walk using the region of interest method have a median value significantly closer to the true range, while the frames corrected of non-uniformity only sit significantly further downrange and thus have greater range inaccuracy. The range precision for both corrections has significantly and visibly improved. These results show that the experimental methods for collecting non-uniformity and range walk error data provide a valid correction process that functions as anticipated.

Fig. 7

The distributions of the uncorrected range return, the return corrected of all non-uniformities, and the return corrected of all non-uniformities and range walk error are displayed. The uncorrected return has a large uncertainty (σ0 = 391.9 cm), and therefore poor range precision, while the non-uniformity corrected (σNUC = 72.2 cm) and subsequently range walk error corrected (σRW = 33.15 cm) range returns progressively improve in range precision.

OE_60_2_023103_f007.png

Fig. 8

(a) Intensity return fully corrected of non-uniformities (photons), (b) uncorrected range return (meters), (c) range return fully corrected of non-uniformities (meters), and (d) range return fully corrected of non-uniformities and range walk errors (meters), for the validation target, captured from a 16×16 WROI at 10-kHz frame rate at 1.18-m range. The range return visibly improves in both range precision and range accuracy from the uncorrected return (b), to the return fully corrected of non-uniformities (c), and finally the return fully corrected of non-uniformities and range walk error (d).

OE_60_2_023103_f008.png

4.

Conclusions

A method for simplifying the collection of data used for non-uniformity correction and range walk error correction of a 3D flash LiDAR camera was presented; the 3D flash LiDAR camera used in this research generates both a 2D intensity return and a time-of-flight range return. The method simplifies data collection by using the range walk error data to compute the PRNUC. The PRNUC in intensity and range returns is applied to the frames collected for the range walk error before they are post-processed into a range walk error correction. This significantly simplifies the experimental process, allowing for the characterization of range walk error and non-uniformity in far less time than would be possible using state-of-the-art alternatives; experimental data for correcting a 128×128 3D flash LiDAR camera can be collected using these methods within 35 h of continuous effort.

Experimental results were examined using data independently collected under identical experimental conditions. The DFNUC, PRNUC, and range walk error correction were all applied to these experimental validation data (Fig. 4). Statistical analysis was performed to determine the single shot range precision and range accuracy (Fig. 7). The single shot range precision improved significantly over the uncorrected data, from a value of 392 cm to 72 cm when the full non-uniformity correction was applied. Once the range walk error correction was applied, the single shot range precision fell to 33 cm (Table 4). This approaches the modeled range precision of 15.7 cm (Fig. 6), but previous research has shown that using this camera in a WROI mode of operation generates additional electronic noise that may account for the disparity between the 15.7-cm modeled result and the 33-cm corrected result.9 Range accuracy was significantly improved for the return corrected for range walk errors and non-uniformities, at 114% inaccuracy, over the return corrected for non-uniformities alone, at 378% (Table 5).

Improvements in single shot range precision fall in line with anticipated results for PRNUC in the range return and range walk error correction. Range accuracy is perhaps more useful as a metric when comparing a non-uniformity corrected range return with the same return further corrected for range walk errors; the range accuracy significantly improved when applying range walk error corrections to the NUC-corrected data. Again, this is what would be anticipated, further validating the method. Further work will investigate non-uniformity correction and range walk error correction collected for a full frame uniformly illuminated using a specialized target, enabling significantly more rapid characterization and correction of non-uniformity and range walk error.

Acknowledgments

This work was performed in collaboration between Voxtel, Inc. and LOCI under National Aeronautics and Space Administration (NASA) Small Business Technology Transfer (STTR) Contract No. NNX16CS78C, “Highly Sensitive Flash LADAR Camera,” under the direction of Dr. Farzin Amzajerdian.

References

1. H. Larsson et al., "Characterization measurements of ASC FLASH 3D ladar," Proc. SPIE 7482, 748207 (2009). https://doi.org/10.1117/12.829964

2. G. D. Hines, D. F. Pierrottet, and F. Amzajerdian, "High-fidelity flash lidar model development," Proc. SPIE 9080, 90800D (2014). https://doi.org/10.1117/12.2050677

3. I. Poberezhskiy et al., "Flash lidar performance testing: configuration and results," Proc. SPIE 8379, 837905 (2012). https://doi.org/10.1117/12.920326

4. V. Roback et al., "Helicopter flight test of 3D imaging flash LIDAR technology for safe, autonomous, and precise planetary landing," Proc. SPIE 8731, 87310H (2013). https://doi.org/10.1117/12.2015961

5. V. E. Roback et al., "3D flash lidar performance in flight testing on the Morpheus autonomous, rocket-propelled lander to a lunar-like hazard field," Proc. SPIE 9832, 983209 (2016). https://doi.org/10.1117/12.2223916

6. P. F. McManamon, LiDAR Technologies and Systems, SPIE, Bellingham, WA (2019).

7. P. McManamon, Field Guide to Lidar, SPIE, Bellingham, WA (2015).

8. A. Reinhardt et al., "Dark non-uniformity correction and characterization of a 3D flash lidar camera," Proc. SPIE 10636, 1063608 (2018). https://doi.org/10.1117/12.2302818

9. C. Bradley et al., "3D imaging with 128×128 eye safe InGaAs p-i-n lidar camera," Proc. SPIE 11005, 1100510 (2019). https://doi.org/10.1117/12.2521981

10. G. M. Williams, "Optimization of eyesafe avalanche photodiode lidar for automobile safety and autonomous navigation systems," Opt. Eng. 56(3), 031224 (2017). https://doi.org/10.1117/1.OE.56.3.031224

11. R. Costantini and S. Susstrunk, "Virtual sensor design," Proc. SPIE 5301, 408–419 (2004). https://doi.org/10.1117/12.525704

12. D. Anderson and H. Herman, "Experimental characterization of commercial flash ladar devices," in Int. Conf. Sens., 3–8 (2005).

13. B. E. Saleh and M. C. Teich, Fundamentals of Photonics, 2nd ed., Wiley (2007).

14. W. He et al., "Range walk error correction using prior modeling in photon counting 3D imaging lidar," Proc. SPIE 8905, 89051D (2013). https://doi.org/10.1117/12.2034059

Biography

Andrew Reinhardt is a PhD student in the Department of Electrical Engineering at the University of Dayton, Dayton, Ohio, USA. He expects to graduate with his doctorate by August 2021. His interests include direct and coherent detection LiDAR systems, image processing and analysis, and optical systems design. He is a graduate of the MS, Physical Science Program at Marshall University, where he also earned his BS degree in physics. He is a member of SPIE.

Cullen Bradley is the research operations manager for Exciting Technology and an electro-optical researcher for the University of Dayton Research Institute in Dayton, Ohio. His research interests include lasercom, 3D LiDAR imaging, continuous beam steering, crystal growth, and beam steering efficiency modeling. He earned his MS degree in electro-optics from the University of Dayton in 2013, having previously earned a BS degree in physics from St. John Fisher College in 2010.

Anna Hecht is a master’s student at the University of Dayton, Ohio, for a degree in electrical engineering after completing her BS degree in electrical engineering in 2019. She currently works as a graduate assistant in the Department of Electro-Optics and Photonics with a research focus in lidar image processing. She expects to complete her MS degree by December 2020 with a thesis discussing non-uniformity correction of flash lidar imagery.

Paul McManamon was chief scientist of the AFRL Sensors Directorate until he retired in 2008. He is president of Exciting Technology, technical director of LOCI, and chief scientist for Lyteloop. He chaired the NAS "Laser Radar" study (2014), was co-chair of "Optics and Photonics" (2012), and vice chair of the 2010 "Seeing Photons" study. He is a fellow of SPIE, IEEE, OSA, AFRL, DEPS, MSS, and AIAA, and was president of SPIE in 2006.

© The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Andrew Reinhardt, Cullen Bradley, Anna Hecht, and Paul McManamon "Windowed region-of-interest non-uniformity correction and range walk error correction of a 3D flash LiDAR camera," Optical Engineering 60(2), 023103 (18 February 2021). https://doi.org/10.1117/1.OE.60.2.023103
Received: 7 November 2020; Accepted: 19 January 2021; Published: 18 February 2021

