Speckle decorrelation effects on motion-compensated, multi-wavelength 3D digital holography: theory and simulations
Matthias T. Banet, James R. Fienup
Open Access | 12 July 2023
Abstract

Digital holography enables 3D imagery after processing frequency-diverse stacks of 2D coherent images obtained from a chirped-frequency illuminator. To compensate for object motion or vibration, which is a common occurrence for long-range imaging, a constant temporal frequency or “pilot-tone” illuminator can act as a reference for each chirped frequency. We examine speckle decorrelation between the chirped and pilot-tone illuminators and its effect on the resultant range images. We show that speckle decorrelation between the two illuminators is more severe for facets of the object’s surface that are more highly sloped, relative to the optical axis, and that this decorrelation results in noise in the range images in the areas of the object that are highly sloped. We develop a theoretical framework along with wave-optics simulations for 3D imaging with a pilot tone, and we examine the severity of this noise as a function of several imaging parameters, including the illumination bandwidth, pulse frequency spacing, and atmospheric turbulence strength; we show that 3D sharpness metric maximization can mitigate some of the noise induced by turbulence, all in a simulated framework.

1.

Introduction

Frequency-diverse 3D imaging with digital holography (DH) is a coherent imaging technique in which the 3D Fourier space of an object is sampled and an inverse Fourier transform is used to generate a 3D image.1,2 Laser frequency diversity provides range information, and cross-range measurements of speckle fields provide angle-angle information, resulting in a fully 3D range-angle-angle image. Marron and Schroeder1 demonstrated this process with lensless imaging and phase-shifting DH, and Shirley and Hallerman2 explained the Fourier relationship in detail in an instructive paper on 3D imaging using a reference point in the image field of view. Frequency-diverse 3D imaging, also referred to as holographic laser radar, requires the detection of 2D coherent, cross-range information (in an image, pupil, or other plane) at different temporal frequencies. One can modulate the illumination laser frequency continuously or in a pulsed fashion such that a stack of frequency-diverse 2D coherent data is obtained over the duration of the chirp or frequency “ramp,” and we simulate the pulsed case here. One potential limitation of frequency-diverse 3D imaging is object motion over the duration of the ramp. Krause et al.3 modified frequency-diverse 3D imaging by including a second illuminator, or “pilot tone,” that acts as a reference for the chirped frequencies, and later Krause4 patented a pulsed version of the system. This method has been demonstrated to significantly improve 3D and range image quality in the presence of object motion and/or object vibration, and it is the focus of the simulations in this paper.

We first consider the case without the pilot tone. Speckle correlation between pulses within a ramp is the key driving phenomenon that determines whether or not an adequate 3D image can be formed by 3D imaging techniques without a pilot tone, which we will refer to as “conventional, multi-wavelength 3D imaging.” In this case, we describe an imaging system that consists of a frequency-chirped laser illuminator (which samples the temporal frequency dimension of the object’s Fourier space) and a finite-width pupil that collects the scattered return from the object (which samples the Fourier-angle space for a given temporal frequency). To form a 3D image without the pilot tone, there needs to be a fixed phase relationship between the speckle patterns in the pupil plane for adjacent frequency pulses, and, in particular, the speckle patterns associated with adjacent pulses in the pulse train must be correlated to some degree (i.e., they cannot be independent speckle realizations).

There are multiple mechanisms by which the speckle in the pupil plane can decorrelate from pulse-to-pulse. Object motion and/or vibration can cause speckle decorrelation. In addition, speckle can vary as a function of illumination frequency. Goodman5 describes two methods by which speckle decorrelation occurs with changing frequency. The first is due to the interaction between the surface roughness of an object and varying illumination frequency. For the scenarios studied here, this variation of the phase is negligible across the narrow fractional bandwidths that we employ. Goodman5 states that, when the absolute difference between two frequencies exceeds c/(2σh), where c is the speed of light in vacuum and σh is the standard deviation of the surface height, complete speckle decorrelation is achieved. For the simulations here, we employ illumination bandwidths in the tens of GHz and assume surface roughness standard deviations around 100  μm, for which no significant speckle decorrelation will occur across the bandwidth due to the interaction of the surface roughness and varying frequencies. The second method by which speckle varies with frequency is by spatial scaling of the speckle pattern as a result of diffraction angles changing with frequency. This second method is relevant to 3D imaging of any object with highly sloped facets relative to the optical axis, as will be described in Sec. 2. For the simulations in this paper, we ignore constant-velocity object motion along the optical axis, for which Doppler effects must be taken into consideration.

Figure 1 shows (a) an example reflectance map of a simulated, scaled truck object and (b) the depth profile of the truck object. One can use reflectance maps and depth profiles to simulate the 3D imaging modalities described here. In the case of the truck object, the reflectance map and depth profile were generated from a computer aided design model of a pickup truck viewed from a slant angle, which was provided by MZA Associates Corporation. We created the reflectance map by treating the truck as a Lambertian reflector such that reflectance goes as cos(θ), where θ is the angle between the surface normal of an object facet and the optical axis, and we created the depth profile by measuring the distance to the object from an arbitrary reference plane with a surface normal that is parallel to the optical axis. We also examined a simple rectangular plate object in a simulation. For each object, we imported the reflectance map and depth profile into MATLAB (or created them in MATLAB for the case of the rectangular plate) and scaled them according to the propagation geometry. For the remainder of the paper, 3D images of the truck object and the rectangular plate are simulated to act as a qualitative gauge of performance.

Fig. 1

(a) True reflectance map of the scaled truck object on a uniform sloped background (arbitrary units). (b) True depth map of the object (in units of meters).


In what follows, we first summarize the pros and cons of the four 3D imaging cases illustrated in Fig. 2. The columns of Fig. 2 show the conventional 3D imaging case in which the pilot tone is off and the motion-compensated 3D imaging case in which the pilot tone is on. The rows show a case in which there is no object motion (and hence no speckle decorrelation due to object motion) and a case in which the object is moving/vibrating such that the speckle patterns are completely independent from pulse-to-pulse. Case 1, in the top left, with pilot tone OFF and motion OFF, was simulated in Refs. 6 and 7. Here, both 2D and 3D irradiance images have speckle noise due to the lack of object motion; range images are robust to speckle and scintillation,8 but have somewhat of a “tiled” appearance due to speckle noise, which is discussed later. Case 2, in the top right, is nearly identical to Case 1 except that the addition of the pilot tone reduces the signal-to-noise ratio (SNR) somewhat via multiplexed digital holography,9 which is also described later in the paper. The images are otherwise identical to Case 1. Case 3, in the bottom left, with motion ON and pilot tone OFF, features relatively speckle-free 2D irradiance images due to object motion, but 3D imaging and range imaging are impossible due to speckle decorrelation.

Fig. 2

Grid diagram showing the outcomes of 3D DH with and without the pilot tone and with and without object motion.


Case 4, which is in the bottom-right of Fig. 2 and is the focus of this paper, describes the case in which the pilot tone is on and object motion causes speckle decorrelation from pulse-to-pulse. Here, all irradiance images are relatively speckle-free due to object motion, but 3D and range imaging are again possible thanks to the addition of the pilot tone. The main downside of this case is noise in the range images, which we refer to as range chatter, that appears specifically over the highly sloped facets of the object. This previously unrecognized noise appears in the imagery even when there is no detector noise present. This paper addresses both the causes of this noise and the trends of the noise with various imaging parameters.

In brief, this paper serves (1) to develop an in-depth theoretical framework of 3D imaging in general and (2) to better understand the underlying causes of the noise present in Case 4, mentioned previously, in both theory and simulation. We used digital wave-optics simulations in MATLAB to study both conventional and motion-compensated 3D imaging in a variety of conditions that are pertinent to long-range imaging of objects in motion. In the remainder of this paper, we describe the theory behind conventional 3D imaging, speckle decorrelation, and motion-compensated 3D imaging with a pilot tone in Sec. 2, with supplementary analysis in the Appendix, explore the theory with simulations in Sec. 3, and show the results of a trade study in Sec. 4. Finally, we conclude the paper and discuss the relevant implications in Sec. 5. This study is an extension of one described in Ref. 10. Here, we build on the previous study by fleshing out the theoretical details and adding some more quantitative results, including some image sharpening experiments.

2.

Theory

2.1.

3D Imaging Parameters

We first introduce some 3D imaging parameters that we use throughout the remainder of this paper. Because the 3D images themselves are difficult to visualize on our 2D displays, we instead typically display two types of 2D images. The first is the frequency-averaged 2D irradiance, I2D(u,v), given by

Eq. (1)

I_{2D}(u,v) = \frac{1}{N}\sum_{n}\left|U_i(u,v;\nu_n)\right|^2,
where (u,v) are the transverse image plane coordinates, N is the number of illumination frequencies, Ui(u,v;νn) is a complex-valued 2D image of the object for a frequency, νn, and n is the index of the frequency. The sum in Eq. (1) can allow speckle averaging to occur in the irradiance, depending on the degree of speckle correlation over the stack of Ui(u,v;νn) images.

The second type of image that we display requires the generation of the 3D image itself, which is formed by taking a Fourier transform of Ui(u,v;νn) over frequency-index coordinate, n, via

Eq. (2)

U_{3D}(u,v;z) = \mathcal{F}_{\nu_n}\left\{U_i(u,v;\nu_n)\right\},
where z is the relative range coordinate of the image, U3D(u,v;z) is the complex-valued 3D image, and Fνn represents a Fourier transform over frequency-index coordinate, n. From here, we generate the 3D irradiance image as

Eq. (3)

I_{3D}(u,v;z) = \left|U_{3D}(u,v;z)\right|^2,
and a range image, R(u,v), via

Eq. (4)

R(u,v) = \operatorname*{arg\,max}_{z}\left\{I_{3D}(u,v;z)\right\},
which is formed by determining the z location of maximum irradiance for each (u,v). We use range images to gauge the 3D imaging process along with the 2D frequency-averaged irradiance images, although we also have access to the 3D images from Eqs. (2) and (3). Ideally, the 2D frequency-averaged irradiance images mimic reflectances like the one shown in Fig. 1(a), and range images mimic depth profiles like the one shown in Fig. 1(b).
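The image-formation chain in Eqs. (1)–(4) maps directly onto array operations. The following sketch (using assumed NumPy conventions, with a random stack standing in for actual holographic data) forms the frequency-averaged irradiance, the 3D image, and the range image:

```python
import numpy as np

# Stand-in stack of N complex 2D coherent images U_i(u, v; nu_n);
# a real stack would come from processing N digital holograms.
rng = np.random.default_rng(0)
N, Nu, Nv = 32, 64, 64
U_i = rng.standard_normal((N, Nu, Nv)) + 1j * rng.standard_normal((N, Nu, Nv))

# Eq. (1): frequency-averaged 2D irradiance.
I_2D = np.mean(np.abs(U_i) ** 2, axis=0)

# Eq. (2): Fourier transform over the frequency index n forms the 3D image.
U_3D = np.fft.fft(U_i, axis=0)

# Eq. (3): 3D irradiance.
I_3D = np.abs(U_3D) ** 2

# Eq. (4): range image = z-bin of maximum irradiance at each (u, v).
R = np.argmax(I_3D, axis=0)
```

For this random stack, R is simply noise; with correlated speckle and a phase linear in n, it reports the object's depth bin at each transverse pixel.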

Due to the Fourier transform relationship between frequency and range that is employed in Eq. (2), the range resolution, δz, is determined by the bandwidth, Δν = νmax − νmin, via

Eq. (5)

\delta z = \frac{c}{2\Delta\nu}.

This finite resolution is due to the reciprocal nature of Fourier transforms and the limited frequency bandwidth. Similarly, the discrete nature of the frequency pulses means that the 3D irradiance images, I3D(u,v;z), repeat themselves in the range dimension at regular intervals. This interval, known as the range ambiguity interval, Δz, is given by

Eq. (6)

\Delta z = \frac{c}{2\,\delta\nu},
where δν is the interval between frequency samples. The range images similarly exhibit values that are modulo Δz.
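Evaluating Eqs. (5) and (6) for representative numbers (an assumed 30 GHz total bandwidth sampled in 1 GHz steps, in line with the tens-of-GHz bandwidths mentioned in Sec. 1):

```python
c = 2.998e8        # speed of light in vacuum (m/s)
Delta_nu = 30e9    # total chirp bandwidth (Hz); assumed value
delta_nu = 1e9     # frequency spacing between pulses (Hz); assumed value

delta_z = c / (2 * Delta_nu)   # Eq. (5): range resolution, ~5 mm
Delta_z = c / (2 * delta_nu)   # Eq. (6): range ambiguity interval, ~15 cm
```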

2.2.

Conventional, Multi-Wavelength 3D Imaging

Here, we describe the basic theory behind conventional 3D imaging. In this setup, coherent images of a distant object are collected via DH. The exact method by which the digital holograms are generated can vary, but the process allows for the generation of complex-valued fields in an image plane via Fourier processing techniques on the hologram data.1,11–13 In the case of 3D digital holography, one obtains a stack of 2D coherent images of the object with different illumination frequencies. Note that the theory here is applicable for long-range imaging with small fractional bandwidths, but the mathematical description provided here could provide insight into how to modify the equations for applications with shorter propagation distances, e.g., microscopy. We describe the 3D image formation process as follows. In general, we have some 3D object denoted by an amplitude reflectance function, Uo(ξ,η;z), where (ξ,η) are the transverse coordinates and z is the range coordinate. Assuming on-axis plane-wave illumination, the reflected field is proportional to Uo(ξ,η;z), so we propagate that field by propagating the reflectance function. After illuminating the object with some general illumination frequency, ν, we collect the scattered light with an aperture some gross distance, z0, from the object. This aperture effectively samples transverse spatial frequencies of the object, whereas the varied temporal frequencies sample the spatial frequencies in the range dimension. We write the complex field in the pupil plane as

Eq. (7)

U_p(x,y) = c_1 \int \frac{e^{i2kz}}{i\lambda z}\exp\left[\frac{i\pi}{\lambda z}\left(x^2+y^2\right)\right] \iint U_o(\xi,\eta;z)\exp\left[\frac{i\pi}{\lambda z}\left(\xi^2+\eta^2\right)\right]\exp\left[-\frac{i2\pi}{\lambda z}\left(x\xi+y\eta\right)\right]\mathrm{d}\xi\,\mathrm{d}\eta\,\mathrm{d}z,
where Up(x,y) is the 2D field in the pupil plane with transverse coordinates (x,y), k is the wavenumber of the source, λ is the wavelength of the source, and c1 is a constant to ensure that Up(x,y) has units of an optical field. Equation (7) is a Fresnel diffraction integral embedded within an integral over z. The field in the pupil plane is the sum of the Fresnel propagations for each relevant z-plane of the object. Note that the Fresnel diffraction integral can be replaced by any 2D propagation integral (e.g., Fraunhofer propagation, angular-spectrum propagation, etc.). Any propagator will have a leading term of exp(i2kz), where the 2 is the result of the round-trip path of the light, assuming a near-monostatic configuration. We make some simplifications to Eq. (7) to make it more digestible. First, we perform the change of variables z → z0 + z, where z0 is the distance from a reference plane in the 3D object to the pupil and the new z is the fine range coordinate relative to that plane. For context, in the scenarios that motivated this work, z0 might be one to several kilometers and the depth of the object might range from one to several meters. Second, everywhere except in the exp[i2k(z0+z)] term, we approximate z0 + z with just z0 because those terms vary negligibly with z for long-range imaging of objects with depths that are much less than z0. This approximation is valid in the scenarios studied here, but not in the general case. This yields

Eq. (8)

U_p(x,y) \approx c_1 \frac{e^{i2kz_0}}{i\lambda z_0}\exp\left[\frac{i\pi}{\lambda z_0}\left(x^2+y^2\right)\right]\int e^{i2kz}\iint U_o(\xi,\eta;z)\exp\left[\frac{i\pi}{\lambda z_0}\left(\xi^2+\eta^2\right)\right]\exp\left[-\frac{i2\pi}{\lambda z_0}\left(x\xi+y\eta\right)\right]\mathrm{d}\xi\,\mathrm{d}\eta\,\mathrm{d}z.

The only terms with a dependence on z that matter are the leading exp(i2kz) term (because this will vary wildly between different z-planes) and the object term Uo(ξ,η;z). Going further, we assume that the object is opaque, and thus, for a given point (ξ,η), light will only reflect from one z plane. As such, we write

Eq. (9)

U_o(\xi,\eta;z) = c_2\, U_{o,\perp}(\xi,\eta)\,\delta\!\left[z - Z(\xi,\eta)\right],
where Uo,⊥(ξ,η) is the transverse amplitude reflectance function of the object, Z(ξ,η) is the 2D depth profile of the object, and c2 is a constant that ensures that Uo(ξ,η;z) has the right units given the presence of the delta function. Replacing Uo(ξ,η;z) in Eq. (8) with Eq. (9), exchanging the order of integration, and integrating over z gives

Eq. (10)

U_p(x,y) \approx c_1 c_2 \frac{e^{i2kz_0}}{i\lambda z_0}\exp\left[\frac{i\pi}{\lambda z_0}\left(x^2+y^2\right)\right]\iint U_{o,\perp}(\xi,\eta)\,e^{i2kZ(\xi,\eta)}\exp\left[\frac{i\pi}{\lambda z_0}\left(\xi^2+\eta^2\right)\right]\exp\left[-\frac{i2\pi}{\lambda z_0}\left(x\xi+y\eta\right)\right]\mathrm{d}\xi\,\mathrm{d}\eta.

At this point, we recognize that this is a 2D Fresnel transform of Uo,⊥(ξ,η)exp[i2kZ(ξ,η)]. To simplify things, we drop the constant factor c1c2, as it is a global constant factor, and write the Fresnel diffraction integral as a general 2D propagator, P[Uo(ξ,η); z0], which takes some input field Uo(ξ,η) and propagates it a distance z0. This modification reflects the fact that we could have used any 2D propagation integral to begin with, and this gives

Eq. (11)

U_p(x,y) \propto e^{i2kz_0}\,\mathcal{P}\!\left[U_{o,\perp}(\xi,\eta)\,e^{i2kZ(\xi,\eta)};\, z_0\right].

The term inside the propagator in Eq. (11) is the product of the 2D amplitude reflectance function of the object and a phase term that contains depth information. Normally one does not concern oneself with the global phase factor exp(i2kz0) in front of the propagator, but as we now vary the temporal frequency of the source, ν, and care about the resulting phase change, this term must remain. Making the ν dependency explicit,

Eq. (12)

U_p(x,y;\nu) \propto \exp\left(i4\pi\nu z_0/c\right)\,\mathcal{P}\!\left\{U_{o,\perp}(\xi,\eta)\exp\left[i4\pi\nu Z(\xi,\eta)/c\right];\, z_0\right\},
where k = 2πν/c for propagation through the atmosphere, where the index of refraction is near unity. Note that we assume that both the function Uo,⊥(ξ,η) and the propagator, P, are independent of ν because we assume a tiny fractional bandwidth. This means that, for the Fresnel propagator, for example, all instances of λ [except in the exp(i2kz0) term] can be replaced by λ0, the mean wavelength of the illumination bandwidth, for reasons similar to those described above when approximating z as z0. For context, the scenarios that motivated this work have (λmax − λmin)/λ0 ≈ 10−5, where λmin and λmax are the minimum and maximum illumination wavelengths in the bandwidth, respectively, which validates replacing λ with λ0.

To form an image, we apply an aperture function, A(x,y), to Up(x,y;ν) and propagate the resultant field to an image plane. After this point, the analysis is identical to conventional 2D coherent imaging, so we denote the 2D coherent image for each frequency, ν, as

Eq. (13)

U_i(u,v;\nu) = \exp\left(i4\pi\nu z_0/c\right)\, h(u,v) * \left\{U_{o,\perp}(u,v)\exp\left[i4\pi\nu Z(u,v)/c\right]\right\},
where h(u,v) is the coherent impulse response function of the imaging system, which we assume to be independent of frequency over our narrow bandwidth and proportional to a Fourier transform of A(x,y), and the in-line asterisk, *, represents a 2D convolution in the transverse coordinates.

Now, we examine the method by which the temporal frequency, ν, of the illumination laser is adjusted, which allows us to sample the spatial frequencies of the range dimension. We illuminate the object with discrete laser pulses within a narrow bandwidth that has a temporal frequency that increases linearly, or is chirped, in time. As such, we define νn = ν0 + nδν, where ν0 is the average temporal frequency over the bandwidth and integer n is the index of the pulse. The pulse index is contained in the interval [−N/2, N/2 − 1], where N is the total number of pulses (assumed to be an even number), such that the total bandwidth Δν is given by Nδν. We also break the depth profile of the object into two components: (1) the coarse depth profile, Zd(ξ,η), of the object that we desire in the final range image and (2) the microscopic surface roughness of the object, Zr,n(ξ,η), which causes speckle in the irradiance images. We denote the total depth profile (in object space) as the sum

Eq. (14)

Z_n(\xi,\eta) = Z_d(\xi,\eta) + Z_{r,n}(\xi,\eta),
where the subscript n in Zr,n(ξ,η) informs us that the surface roughness profile can change over the image collection time, and we add a dependence on n to the total depth profile. Now we replace Z(ξ,η) and ν in Eq. (13) with their more detailed forms, yielding

Eq. (15)

U_i(u,v;\nu_n) = e^{i4\pi\nu_n z_0/c}\, h(u,v) * \left\{U_{o,\perp}(u,v)\exp\left[i4\pi\left(\nu_0+n\,\delta\nu\right)Z_d(u,v)/c\right]\exp\left[i4\pi\left(\nu_0+n\,\delta\nu\right)Z_{r,n}(u,v)/c\right]\right\}.

Note that the product of (ν0+nδν) and Zr,n(u,v) in the final exponential term yields a phase term equal to 4πnδνZr,n(u,v)/c, which, as alluded to in the introduction, is on the order of milliradians of phase and is considered negligible for the bandwidths and surface roughness standard deviations assumed here.5,7 One can use Eq. (15) to simulate 3D images by writing Uo,⊥(u,v) as the square root of the object reflectance map and Zd(u,v) as the depth profile, examples of which are shown in Fig. 1. Going further, if the coarse depth profile Zd(u,v) varies slowly over the width of h(u,v) (a specification that we revisit later), we can remove the exponential term containing Zd(u,v) from inside the convolution, leaving

Eq. (16)

U_i(u,v;\nu_n) = e^{i4\pi\nu_n z_0/c}\exp\left[i4\pi\left(\nu_0+n\,\delta\nu\right)Z_d(u,v)/c\right]\left(h(u,v) * \left\{U_{o,\perp}(u,v)\exp\left[i4\pi\nu_0 Z_{r,n}(u,v)/c\right]\right\}\right).

At this point, we examine Eq. (16) to understand how to form a 3D image from a stack of images, Ui(u,v;νn), with a varied source frequency. First, consider the exponential phase term containing the coarse depth profile, Zd(u,v). For a given image point (transverse coordinates), (u,v), this term contains discrete phase terms that are linear in n across the stack of images, and the slope of this linear term is proportional to Zd(u,v), the coarse depth profile of the object. By this, one can see how a 1D Fourier transform of Ui(u,v;νn) along the n dimension yields a 3D coherent image: because a 1D Fourier transform over frequency-index coordinate, n, acts on every image point (u,v), the phase terms that are linear in frequency shift, in the z direction, the image at each (u,v) proportionally to Zd(u,v), giving us the z-profile of the coherent 3D image. Reference 14 similarly describes the Fourier transform relationship between the optical frequency and range for a single transverse pixel. Here, however, the presence of linear phase terms in the optical frequency domain that are proportional to Zd(u,v) for each (u,v) is made explicit through the above theoretical analysis.
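This linear-phase-to-range relationship can be checked numerically at a single transverse pixel: the phase of Eq. (16) that is linear in n Fourier-transforms to a peak at a z-bin proportional to Zd(u,v). All parameter values below are assumed for illustration:

```python
import numpy as np

c = 3.0e8            # speed of light (m/s)
delta_nu = 1.0e9     # pulse frequency spacing (Hz); assumed
N = 64               # number of pulses; assumed
Z_d = 0.0375         # coarse depth at this pixel (m); assumed
n = np.arange(-N // 2, N // 2)

# Phase linear in n with slope 4*pi*delta_nu*Z_d/c, as in Eq. (16)
# (constant phase factors are dropped since they do not move the peak).
u_pixel = np.exp(1j * 4 * np.pi * n * delta_nu * Z_d / c)

# Fourier transform over n concentrates the energy in one z-bin.
z_bin = np.argmax(np.abs(np.fft.fft(u_pixel)) ** 2)

# Each bin spans c / (2 * N * delta_nu) in range, per Eq. (5).
Z_est = z_bin * c / (2 * N * delta_nu)
```

Here Z_est recovers Z_d exactly because Z_d was chosen to sit on a bin center; in general the peak lands modulo the range ambiguity interval of Eq. (6).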

2.3.

Speckle Decorrelation Mechanisms

The exponential phase term containing Zr,n(u,v) in Eq. (16) cannot be removed from the convolution with h(u,v) because the surface roughness of the object varies rapidly over the width of the convolutional kernel, and this convolution with a random-phase function is what generates speckle in each 2D coherent image, Ui(u,v;νn). To form a proper 3D image, we want this phase term to vary negligibly with frequency index, n. In general, we desire the speckle phase—the phase that results from the convolution of h(u,v) with the exponential capturing the surface roughness—to change negligibly with the frequency index, n, i.e., we want Zr,n(u,v) to remain constant over the image collection time. A speckle phase that varies with n will corrupt the linear phase term described above for each (u,v) as this linear phase term is what allows for the formation of the z-profile of the 3D image.

At this point, we have determined that the interaction between the surface roughness of the object and the variable frequency of the illumination source yields negligible changes in the speckles across the bandwidth. However, the speckle can vary over the bandwidth via two other mechanisms: (1) transverse object motion, rotation, or vibration over the image collection time and (2) the diffraction angles changing with frequency, which is referred to in Refs. 2 and 15.

The first mechanism is well-known5 and is described in detail for a variety of types of object motion by Burrell et al.16,17 Stationary objects will provide a common speckle pattern across all of the 2D images in Ui(u,v;νn) such that the linear phase terms that depend on Zd(u,v) are preserved. By contrast, if object motion is severe enough to provide independent speckle patterns for each Ui(u,v;νn), then a 3D image is impossible to generate. For example, given a propagation distance of 1 km, an aperture diameter of 30 cm, and a pulse repetition frequency of 1 kHz, an object rotating faster than 8.6 deg per second will cause the speckle patterns from pulse-to-pulse to be completely uncorrelated, and no useful range image can be formed using conventional 3D imaging. The speckle patterns will decorrelate more quickly for larger propagation distances, so one can see how objects that move or vibrate over the image collection time can cause enough speckle decorrelation for range images to be severely degraded, especially for longer propagation distances.
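The quoted 8.6 deg/s can be reproduced under a simple (assumed) translation model in which the pupil-plane speckle pattern from a rotating object translates at roughly 2ωz0 for rotation rate ω, so adjacent pulses fully decorrelate once the translation per pulse interval reaches the aperture diameter:

```python
import math

z0 = 1.0e3    # propagation distance (m)
D = 0.30      # aperture diameter (m)
prf = 1.0e3   # pulse repetition frequency (Hz)

# Translation per pulse interval is 2 * omega * z0 / prf; setting it
# equal to D gives the critical rotation rate.
omega = D * prf / (2 * z0)          # rad/s
omega_deg = math.degrees(omega)     # ~8.6 deg/s, matching the text
```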

We examine the second mechanism by which the speckle phase can decorrelate across the stack of 2D images Ui(u,v;νn). For a first illustration, consider an object facet with a linear coarse depth profile such that

Eq. (17)

Z_d(\xi,\eta) = \alpha\xi
over the extent of the object facet, where α is the slope of the facet along the ξ direction. Here, α=tan(θ), where θ is the tilt angle between the object’s surface normal and the optical axis. Consider the effects that this slope has on the field in the pupil for two different frequencies, ν1 and ν2, with frequency difference ν1ν2=Δν1,2. Let us examine the phase imparted to the exponential term inside the propagator in Eq. (12). Because Z(ξ,η) contains Zd(ξ,η), there is a linear phase term of the form

Eq. (18)

\phi_d = 2k\alpha\xi = 4\pi\nu\alpha\xi/c.

This linear phase term will change from frequency to frequency because of the dependence on ν in Eq. (18), so we examine the phase difference between two frequencies, ν1 and ν2, as

Eq. (19)

\Delta\phi_d = 4\pi\,\Delta\nu_{1,2}\,\alpha\xi/c.

The phase difference between the two frequencies, which is also linear in ξ, manifests as a relative shift between the speckle patterns in the pupil plane associated with each frequency via the Fourier shift theorem. We use the shift theorem to calculate the frequency separation Δν1,2 required to separate the pupil-plane speckle patterns from one another by some distance, Δx, as follows. The shift theorem gives

Eq. (20)

4\pi\,\Delta\nu_{1,2}\,\alpha\xi/c = 2\pi\,\Delta x\, f_x,
where fx=ξ/(λ0z0) is a spatial frequency coordinate and λ0 is the average wavelength associated with ν1 and ν2. This gives

Eq. (21)

4\pi\,\Delta\nu_{1,2}\,\alpha\xi/c = 2\pi\,\Delta x\,\frac{\xi}{\lambda_0 z_0},
and after rearranging and substituting ν0 = c/λ0, it yields a term that we define as the slope-bandwidth product (SBP), given by

Eq. (22)

\mathrm{SBP} = \alpha\,\frac{\Delta\nu_{1,2}}{\nu_0} = \frac{\Delta x}{2 z_0}.

We chose the form of Eq. (22) specifically because the right side is the half-angular subtense of the separation, Δx, when viewed from the object, and the left side is the product of the slope of the object and the fractional bandwidth of the two frequencies analyzed—or the SBP. This term on the left side determines much of the speckle correlation behavior for 3D imaging. For example, if the magnitude of the SBP exceeds half the angular subtense of the aperture, then the speckle patterns associated with ν1 and ν2 will be completely decorrelated. Put into the context of 3D imaging, if the SBP associated with the frequencies at the edges of the bandwidth (such that |Δν1,2|=Δν) exceeds half the angular subtense of the aperture, then the speckle patterns at the edges of the bandwidth will be decorrelated from one another, but speckle patterns from smaller differences in frequencies will have some degree of correlation with one another. Inserting these values into Eq. (22), we find, for an aperture of diameter D, that the facet’s slope magnitude, αm, at which the speckles at the edges of the bandwidth completely decorrelate is given by

Eq. (23)

\alpha_m\,\frac{\Delta\nu}{\nu_0} = \frac{D}{2 z_0} \quad\Longrightarrow\quad \alpha_m = \frac{c}{2\Delta\nu}\cdot\frac{D}{\lambda_0 z_0}.

This is a case in which the slope magnitude is equal to the range resolution, c/(2Δν), divided by the transverse resolution, λ0z0/D. Here, the resulting range images exhibit a “tiled” appearance in which the range reports are constant over the width of each speckle (as pertains to Cases 1 and 2 in Fig. 2). In this case, the range image formation process [see Eq. (4)] selects the location of maximum irradiance to be the range at each transverse pixel, and, when the slope magnitude equals αm, the range image formation process will report the brightest location in z of each speckle as the range over the speckle’s entire transverse width, resulting in a tiled appearance. We refer to this as a “range-resolved” case because the object depth within a transverse resolution element is equal to the range resolution.

Next, consider a case for which the SBP for adjacent frequencies within the ramp exceeds half the angular subtense of the aperture, i.e.,

Eq. (24)

\alpha_r\,\frac{\delta\nu}{\nu_0} = \frac{D}{2 z_0} \quad\Longrightarrow\quad \alpha_r = \frac{c}{2\,\delta\nu}\cdot\frac{D}{\lambda_0 z_0},
where, here, the slope magnitude of the facet, αr, is equal to the range ambiguity interval, c/(2δν), divided by the transverse resolution width. For this case, the speckle patterns for each frequency are completely independent, and the resulting range image takes on random values within the range ambiguity interval from speckle-to-speckle, which is to say that it is completely noise. Equation (24) describes a case in which the pupil-plane speckle patterns associated with each frequency are completely decorrelated, even if the object is stationary. In practice, the slopes required to achieve the equivalence in Eq. (24) are quite large (αr ≫ 1 for the transverse resolutions and range ambiguities simulated here), and they usually correspond to image regions that will have a low signal return due to the large slopes (e.g., for a Lambertian reflector).
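Plugging representative numbers into Eqs. (23) and (24) (the wavelength, geometry, and bandwidths below are assumed values) shows that αm is of order unity while αr is far larger, consistent with the statement above:

```python
c = 2.998e8       # speed of light (m/s)
lam0 = 1.55e-6    # mean wavelength (m); assumed
z0 = 1.0e3        # distance to object (m); assumed
D = 0.30          # aperture diameter (m); assumed
Delta_nu = 30e9   # total bandwidth (Hz); assumed
delta_nu = 1e9    # pulse frequency spacing (Hz); assumed

transverse_res = lam0 * z0 / D                    # ~5.2 mm
alpha_m = (c / (2 * Delta_nu)) / transverse_res   # Eq. (23): ~0.97
alpha_r = (c / (2 * delta_nu)) / transverse_res   # Eq. (24): ~29
```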

It is worth mentioning that the derivation of Eq. (22) does not examine the entire picture. The more complete description of these speckle patterns shifting with frequency comes from Goodman,5 who describes that speckle patterns actually contract/expand about a specific angle (the mirror reflection angle) when changing the illumination frequency instead of just translating. This manifests as a translation in our case and validates Eq. (22), which is shown in the Appendix.

2.4.

Motion-Compensated, Multi-Wavelength 3D Imaging

We now describe motion-compensated 3D imaging in which the addition of a second illuminator allows for the negation of one of the methods of speckle decorrelation described above. Recall that, for 3D imaging to be successful, we need a fixed, linearly-varying, phase relationship in the νn dimension for each transverse point across the stack of 2D images Ui(u,v;νn). If the speckle phase varies enough from image-to-image, then this phase relationship is destroyed and a 3D image cannot be formed. The addition of a second illuminator, called a pilot tone, compensates for object motion4 and ensures that the linear phase relationship in νn is preserved even if the object motion induces complete speckle decorrelation from pulse-to-pulse.

The pilot tone works as follows. Every time an illuminating pulse is sent from the chirped illuminator, a pilot-tone pulse is sent at the exact same time and from the same transmitter but with an unchanging frequency.4 This is illustrated in Fig. 3: (a) shows the chirped pulse frequency as a function of time for conventional 3D imaging, and (b) shows the chirped pulse and pilot pulse frequencies as a function of time. The key here is that conventional 3D imaging requires some correlation of the speckle phases between adjacent pulses but that 3D imaging with the pilot tone requires some correlation of the speckle phases between the two illumination frequencies within a pulse pair. The latter requirement is much more realizable if the two illuminator transmitters are co-located in space and time. Because the frequency of the pilot tone does not vary with time, the resulting images formed from the pilot tone are a special case of Eq. (16) with νn=ν0+nδν replaced with νp, giving

Eq. (25)

$U_{\mathrm{pilot},n}(u,v;\nu_p) = \exp\!\left[i4\pi\nu_p z_0/c\right]\exp\!\left[i4\pi\nu_p Z_d(u,v)/c\right] \times \left(h(u,v) * \left\{U_o(u,v)\exp\!\left[i4\pi\nu_p Z_{r,n}(u,v)/c\right]\right\}\right).$

Fig. 3

Diagram that illustrates the pulse train for (a) conventional 3D imaging and (b) motion-compensated 3D imaging with a pilot tone. For (a), some degree of speckle correlation between adjacent pulses is required. Here, decorrelation can occur via object motion and/or via the SBP because adjacent pulses differ in both time and frequency. For (b), some degree of correlation between the chirped illuminator and the pilot tone within each pulse-pair is required. Here decorrelation can occur via the SBP but not via object motion because each pulse within a pulse pair is emitted from the same transmitter at the same time.


For each frequency, the complex conjugate of the pilot-tone images is taken and multiplied by the images from the chirped illuminator, yielding a stack of conjugate-product images as

Eq. (26)

$E(u,v;\nu_n) = U_{\mathrm{pilot},n}^{*}(u,v;\nu_p)\,U_i(u,v;\nu_n).$

Recall that we do not expect the speckle phase to vary much between the pilot tone and the chirped illuminator for a given frequency, νn, which means that the exponential terms inside the convolutions in Eqs. (16) and (25) are nearly identical. Thus we can write

Eq. (27)

$E(u,v;\nu_n) \approx e^{i4\pi(\nu_n-\nu_p)z_0/c}\exp\!\left[i4\pi(\nu_0+n\delta\nu-\nu_p)Z_d(u,v)/c\right] \times \left|h(u,v) * \left\{U_o(u,v)\exp\!\left[i4\pi\nu_0 Z_{r,n}(u,v)/c\right]\right\}\right|^2,$
where we replaced the νn dependence in the exponential term inside the convolution with ν0. Equation (27) contains leading phase terms that are linear in frequency for each (u,v) and a real-valued, non-negative term given by the modulus-squared operation. Thus, even if Zr,n(u,v) varies greatly from pulse-pair to pulse-pair, the linear-in-frequency phase terms, which allow for the extraction of Zd(u,v) and for 3D image generation, are preserved. To form a 3D image from E(u,v;νn), one follows the same steps used for Ui(u,v;νn), i.e., a Fourier transform over frequency. From there, a range image and 3D irradiance image are formed via the same steps given in Eqs. (3) and (4).
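The conjugate-product processing of Eqs. (26) and (27) can be sketched for a single image pixel. The following is an illustrative numpy example (not the authors' code) in which a random speckle phase, common to both members of each pulse pair, is injected per pulse to mimic complete motion-induced decorrelation; parameter values (N, δν, νp = ν0 + δν/2) follow Sec. 3, and the common standoff phase involving z0 is omitted because it only circularly shifts the range axis.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 2.99792458e8                    # speed of light [m/s]
N, dnu, nu0 = 64, 0.469e9, 2.0e14   # pulses, frequency spacing [Hz], start frequency [Hz]
nu = nu0 + np.arange(N) * dnu       # chirped pulse frequencies, nu_n
nu_p = nu0 + dnu / 2                # pilot-tone frequency (Sec. 3 choice)
Zd = 0.10                           # true depth at this pixel [m]

phi = rng.uniform(0, 2 * np.pi, N)  # independent speckle phase per pulse pair (object motion)
U_i = np.exp(1j * (phi + 4 * np.pi * nu * Zd / c))        # chirped-illuminator pixel values
U_pilot = np.exp(1j * (phi + 4 * np.pi * nu_p * Zd / c))  # pilot-tone pixel values, Eq. (25)

E = np.conj(U_pilot) * U_i          # conjugate product, Eq. (26): speckle phase cancels
I3D = np.abs(np.fft.fft(E)) ** 2    # Fourier transform over frequency -> 3D irradiance
k = int(np.argmax(I3D))             # peak bin within the ambiguity interval c/(2*dnu)
Zd_hat = k * c / (2 * N * dnu)      # recovered depth, resolution c/(2*N*dnu) ~ 5 mm
```

Despite the fully randomized per-pulse speckle phase, the conjugate product leaves a phase linear in nδν, and the peak of the Fourier transform recovers Zd to within one range resolution cell; the same FFT applied directly to U_i alone would yield noise.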

Examining Eq. (27) further, there is no speckle phase in E(u,v;νn); instead, the phase at each image point is proportional to Zd(u,v) and to the frequency difference between νn and νp. The amplitude is proportional to the irradiance of the 2D coherent image for each νn. Using the pilot tone makes the 3D imaging process robust against speckle decorrelation due to object motion/vibration; however, the second mechanism of speckle decorrelation described above, decorrelation via the SBP, still affects 3D image generation. Imagine again a facet of the object with a linearly-varying depth profile. For a given chirped frequency, νn, and pilot-tone frequency, νp, the speckle patterns associated with the chirped illuminator and the pilot tone will be completely decorrelated when the SBP exceeds half the angular subtense of the aperture or, from Eq. (23),

Eq. (28)

$\left|\alpha\,\frac{\nu_n-\nu_p}{\nu_0}\right| > \frac{D}{2z_0}.$

When this is the case, we can no longer expect the speckle phases in the images Ui(u,v;νn) and Upilot,n(u,v;νp) to cancel as they did in the formation of E(u,v;νn) in Eq. (27). Thus, it is advantageous to keep the absolute difference between νn and νp as small as possible across the bandwidth; hence νp should be chosen to be near the center of the bandwidth. In this configuration, the speckle patterns associated with the chirped illuminator and pilot tone become completely decorrelated for pulses at the ends of the bandwidth when

Eq. (29)

$|\alpha|\,\frac{\Delta\nu/2}{\nu_0} > \frac{D}{2z_0} \;\Rightarrow\; |\alpha| > \frac{D\nu_0}{z_0\,\Delta\nu} = \alpha_c.$

The parameter αc in Eq. (29) is a critical slope at which we expect to see errors in the range images over object facets with a slope magnitude that exceeds αc.
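As a quick numerical check (an illustrative sketch, not from the paper's code), plugging the Sec. 3 parameters into Eq. (29), with an assumed chirp bandwidth of Δν = 10 GHz, reproduces the critical slope αc ≈ 3 and tilt angle ≈ 71.6 deg quoted in Sec. 4:

```python
import numpy as np

c = 2.99792458e8            # speed of light [m/s]
D, z0 = 0.30, 2.0e3         # aperture diameter [m] and standoff range [m] (Sec. 3)
nu0 = c / 1.5e-6            # illumination frequency for lambda_0 = 1.5 um [Hz]
Delta_nu = 10e9             # assumed chirp bandwidth [Hz]

alpha_c = D * nu0 / (z0 * Delta_nu)        # critical slope, Eq. (29)
theta_c = np.degrees(np.arctan(alpha_c))   # corresponding facet tilt angle [deg]
```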

To summarize, two pertinent speckle decorrelation mechanisms appear for 3D imaging with narrow optical bandwidths: (1) speckle decorrelation due to object motion and (2) speckle decorrelation due to the SBP. Referring to Fig. 2, Cases 1 and 3 (which correspond to conventional 3D imaging) are susceptible to the effects of both mechanisms of speckle decorrelation. Cases 2 and 4 (which correspond to 3D imaging with a pilot tone) are highly robust to speckle decorrelation due to object motion but can exhibit negative consequences from speckle decorrelation due to the SBP. The effects of speckle decorrelation are of interest for Cases 1 and 4 in particular; as such, Sec. 3 shows qualitative speckle decorrelation effects for Cases 1 and 4 as well as qualitative effects of turbulence for Case 4. Section 4 quantifies the effects of speckle decorrelation due to the SBP and turbulence for Case 4 (motion-compensated 3D imaging with a pilot tone), which has not previously been explored in the literature.

3.

Simulation Exploration

To explore the theory presented in Sec. 2, we used wave-optics simulations to generate 3D images conventionally and with the inclusion of a pilot tone. Here, we used the angular-spectrum propagation method to propagate M×M object fields from the object to the pupil plane over a propagation distance of z0 = 2 km. The aperture was circular with diameter D = 30 cm, and the average wavelength was λ0 = 1.5 μm. We examined two objects: the first was the truck object shown in Fig. 1, with the illuminated scene containing the truck spanning 0.5 m in the transverse direction. The second object was a rectangular plate with a maximum transverse extent of 0.24 m. Regardless of the object, the zero-padded object array (computational window) side length, S1, was always 1 m (twice the maximum extent of the object), and the pupil array side length, S2, obeyed

Eq. (30)

$\alpha_{\max}\,\frac{\Delta\nu}{\nu_0} < \frac{S_2-D}{2z_0} \;\Rightarrow\; S_2 > 2\alpha_{\max}\,\frac{\Delta\nu}{\nu_0}\,z_0 + D,$
where αmax was the absolute maximum slope of a facet for a given object scene. Equation (30) ensured that the speckle fields never aliased over the extent of the bandwidths used here, which ranged from 10 to 30 GHz. The number of pixels across the fields, M, was chosen to satisfy critical sampling of the angular-spectrum transfer function (also known as Fresnel scaling):18

Eq. (31)

$M = \frac{S_1 S_2}{\lambda_0 z_0}.$
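The grid-sizing rules of Eqs. (30) and (31) can be sketched numerically with the Sec. 3 values; the worst-case slope αmax = 3 and bandwidth Δν = 10 GHz below are assumed for illustration:

```python
import numpy as np

c = 2.99792458e8
lam0, z0, D, S1 = 1.5e-6, 2.0e3, 0.30, 1.0   # wavelength [m], range [m], aperture [m], object window [m]
nu0 = c / lam0
alpha_max, Delta_nu = 3.0, 10e9              # assumed worst-case facet slope and chirp bandwidth

# Eq. (30): minimum pupil-plane window so speckle never aliases over the chirp
S2 = 2 * alpha_max * (Delta_nu / nu0) * z0 + D
# Eq. (31): critical sampling of the angular-spectrum transfer function
M = int(np.ceil(S1 * S2 / (lam0 * z0)))
```

For these values the pupil window works out to roughly 0.9 m and M to roughly 300 samples per side, consistent with the propagation geometry described above.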

To form the image stacks Ui(u,v;νn) and Upilot,n(u,v;νp), we applied an aperture function to the fields in the pupil plane and used DH in the off-axis image plane recording geometry (IPRG)11 to gather estimates of Ui(u,v;νn) and Upilot,n(u,v;νp). In general, DH is useful for coherent imaging because it provides access to complex-valued field data and because the use of strong reference beams allows for approaching a shot-noise-limited detection regime.11 Many other digital holographic recording schemes could be used,12,13 but we chose the off-axis IPRG due to its straightforward processing of the holograms in the Fourier domain.19,20 To obtain estimates of both Ui(u,v;νn) and Upilot,n(u,v;νp), we used multiplexed digital holography,9,19–21 which allowed both signals to be recorded in a single hologram for each frequency, νn, at the cost of a reduction in SNR proportional to the number of signals in each hologram.9 We set νp = ν0 + δν/2 so that the signals Ui(u,v;νn) and Upilot,n(u,v;νp) never interfered with each other and instead interfered only with their respective reference beams. To simplify the simulations and to examine the fundamental causes of range chatter, no noise was added to the holograms.

Figure 4 shows the approximately monostatic, motion-compensated 3D imaging system simulated here. Light from two laser sources (chirped illuminator and pilot tone) entered a common transmitter that flood illuminated the distant object. The light from each source scattered off the object and was collected by a common receiver that imaged the light from the object onto a camera, where each signal interfered with its respective reference beam. Here, we used multiplexing to encode the signals from each laser source onto a single hologram for each pulse pair.

Fig. 4

Diagram of motion-compensated 3D imaging with a pilot tone simulated in this paper.


This study also contains some simulations that include atmospheric turbulence and sharpness metric maximization (SMM) as a method of correcting images aberrated by turbulence. We simulated atmospheric turbulence by placing Kolmogorov phase screens in the pupil of the imaging system with variable D/r0, which is a gauge of the strength of turbulence.22 We generated the phase screens via the method laid out by Lane et al.23 with added subharmonics. The sharpness metric of choice for these experiments was given by

Eq. (32)

$S = \sum_{u,v,z} I_{3D}^{\beta}(u,v;z),$
where β is the sharpness exponent.24
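For reference, Eq. (32) amounts to the following one-liner (a minimal sketch; the full SMM pipeline of Ref. 7 wraps this metric in forward and reverse models):

```python
import numpy as np

def sharpness(I3D, beta=0.05):
    """Sharpness metric of Eq. (32): sum over (u, v, z) of I3D raised to beta."""
    return np.sum(I3D ** beta)

# Example: a 3D irradiance cube of ones gives S = number of voxels (1**beta = 1)
S_val = sharpness(np.ones((4, 4, 4)))
```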

The SMM algorithm was nearly identical to the one laid out by Banet et al.7 with some modifications made to allow for inclusion of the pilot tone. There were two main changes to the algorithm: (1) the conjugate-product step, Eq. (26), was added to the forward and reverse models, and (2) the sharpness exponent was changed to β = 0.05. Here, the reverse model was used to calculate analytic gradients via reverse-mode algorithmic differentiation,26 which were fed to the optimizer, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm.25

Thurman and Fienup27 showed that the optimal β value for 2D SMM given a single speckle realization was 0.5, and Banet et al.7 used an optimal value of 0.88 for 3D SMM given a highly speckled 3D irradiance. For motion-compensated 3D imaging with the pilot tone, one would expect the optimal β value to be roughly half the optimal β value for conventional 3D imaging because, in this case, I3D(u,v;z) is the modulus-squared of the Fourier transform of E(u,v;νn) over frequency. The amplitude of the stack of conjugate-product images, E(u,v;νn), is proportional to the modulus-squared of the optical field (similar to an irradiance), so I3D(u,v;z) is proportional to the squared irradiance. Thus, one would expect the optimal β values to decrease by a factor of 2. We performed a simulation study with pilot-tone imaging that examined β values from 0.05 to 2 and found that β ≈ 0.25 provided the lowest mean-squared errors for speckled imagery and that β ≈ 0.05 provided the lowest mean-squared errors for speckle-free imagery. Because β = 0.05 also provided relatively low errors for speckled imagery, we used β = 0.05 for all of the simulations here.

3.1.

Qualitative Effects of Speckle Decorrelation on Sloped Objects: Cases 1 and 4

We explored the qualitative effects of conventional versus motion-compensated 3D imaging, specifically when observing sloped objects, i.e., Case 1 versus Case 4 of Fig. 2. Here, we examined rectangular objects (in the transverse dimension) against a dark background with varying depth profiles. Figure 5 shows 2D frequency-averaged irradiance images of the objects in the top row and range images of the objects in the bottom row. Here, the images were formed with stationary objects (no motion) using conventional 3D imaging. Each column displays the images of a different object; the first three columns feature a rectangular object with a constant sloped surface. The slope of the surface is in the left-right direction and increases from 0 in (a) and (e) to 0.75 in (b) and (f) and to 3 in (c) and (g). The speckle noise decreases in the irradiance images from (a) to (c). Because the object is stationary here, this phenomenon is due solely to speckle decorrelation across the bandwidth caused by the SBP (the second mechanism described in Sec. 2.3). Van Zandt et al.28 observed similar speckle contrast reduction on sloped surfaces for 2D images formed with polychromatic illumination, which are analogous to the 2D frequency-averaged irradiance images simulated here. The range images look fairly clean in (e), (f), and (g), with a tiled appearance in (f) and (g). The final object, shown in (d) and (h), has a depth profile for which the right half of the object is flat and the left half has a slope of 3. Here, the range image in (h) looks clean, but the irradiance image has reduced speckle noise only over the left half of the image. Again, this is because the SBP of the left half causes speckle decorrelation from pulse to pulse purely due to diffraction, whereas the SBP of the right half is zero.

Fig. 5

Conventional 3D imaging of tilted rectangular objects with different slopes and with a common speckle realization across the bandwidth (i.e., the objects were stationary). (a)–(d) Frequency-averaged 2D images and (e)–(h) corresponding 2D range images. (a) and (e) have a slope of 0, (b) and (f) have a slope of 0.75, (c) and (g) have a slope of 3. (d) and (h) An object with a left half that has a slope of 3 and a right half that has a slope of 0. Note: all range images (bottom row) have the same colorbar range.


Figure 6 shows the results of our examination of the exact same objects but with two changes. First, we enforced independent speckle realizations from pulse to pulse to mimic a scenario in which the object is moving or vibrating during the image collection time. Second, we used 3D imaging with the pilot tone. The addition of the pilot tone is a necessity in this case because the independent speckle phases over the image collection would result in completely noisy range images if conventional 3D imaging were used. In this case, Figs. 6(a)–6(d) feature greatly reduced speckle noise compared with Figs. 5(a)–5(d) due to the independent speckle from pulse to pulse. Here, each irradiance image is the effective average of N (the number of frequencies in the ramp) independent speckle realizations. If the speckles do not fully decorrelate from pulse to pulse, then the number of independent speckle realizations, and the degree of speckle reduction, will be correspondingly lower. There are some key differences between the range images shown in Fig. 6 and those shown in Fig. 5. In Fig. 6(f), the tiled look is no longer as pronounced as in Fig. 5(f) because the 3D irradiance image, from which the range image is obtained, is now relatively speckle-free, resulting in marginally cleaner range images. Interesting results appear in Figs. 6(g) and 6(h), where there is obvious noise, which we refer to as range chatter, over the highly sloped facets of the object. Fundamentally, this is due to speckle decorrelation between the pilot tone and the chirped illuminator, specifically for the pulse pairs with a larger difference in frequency, |νn−νp|, and thus a larger SBP [see Fig. 3(b)]. From Fig. 6, it is apparent that 3D imaging with the pilot tone completely compensates for speckle decorrelation due to object motion, but not for decorrelation due to diffraction-angle differences between the chirped illuminator and the pilot tone. Recall that the range images in Fig. 6 would be complete noise if not for the addition of the pilot tone.

Fig. 6

Motion-compensated 3D imaging with a pilot tone of tilted rectangular objects with different slopes and with independent speckle realizations across the bandwidth (i.e., the objects were moving/vibrating). (a)–(d) Frequency-averaged 2D images and (e)–(h) corresponding 2D range images. (a) and (e) have a slope of 0, (b) and (f) have a slope of 0.75, (c) and (g) have a slope of 3. (d) and (h) An object with a left half that has a slope of 3 and a right half that has a slope of 0. Note: all range images (bottom row) have the same colorbar range.


3.2.

Blurred and Reconstructed 3D Imagery Using Sharpness Metric Maximization: Case 4

We simulated isoplanatic atmospheric turbulence to discern the effects of transverse system aberrations on range chatter, which can occur for motion-compensated 3D imaging with a pilot tone, i.e., Case 4 of Fig. 2. Reference 7 simulated conventional 3D imaging in atmospheric turbulence and used SMM to correct the aberrations. In this study, the effects from turbulence on range images are fundamentally different with the inclusion of object motion and the pilot tone. In addition to transverse blurring, turbulence exacerbates range chatter over highly sloped object facets because of the increased width of the impulse response function, h(u,v), in Eq. (27). Figure 7 shows imagery of the scaled truck object with and without correction via SMM including (a) an aberrated 2D frequency-averaged irradiance image, (b) a corrected 2D frequency-averaged irradiance image, (c) an aberrated range image, and (d) a corrected range image. Both the aberrated and corrected irradiance images feature speckle contrast reduction due to object motion. In comparison, irradiance images formed by conventional 3D imaging (Case 1 in Fig. 2) have high speckle contrast, except for the case of highly sloped facets [cf., Figs. 5(c) and 5(d)]. In the aberrated range image in Fig. 7(c), there is an obvious range chatter over the sloped background and much of the truck that was not present in the range images in Ref. 7. The range chatter is much less severe over the front bumper of the truck, which has a smaller slope magnitude. From Fig. 7(d), it is clear that SMM improves the transverse blurring of the images as well as the range chatter over the highly sloped facets.

Fig. 7

SMM results for the scaled truck object showing (a) aberrated 2D frequency-averaged irradiance image, (b) corrected 2D frequency-averaged irradiance image, (c) aberrated range image, and (d) corrected range image.


4.

Quantitative Results

Here, we present the results of a trade space exploration to better understand the effects of speckle decorrelation via the SBP for motion-compensated 3D imaging with a pilot tone, i.e., Case 4 in Fig. 2. To gauge 3D imaging performance, we define a range image error metric, σR, given by

Eq. (33)

$\sigma_R = \underset{(u,v)\in W}{\operatorname{std}}\left\{\operatorname{mod}\!\left[\hat{R}(u,v) - R(u,v) + \Delta z/2,\,\Delta z\right] - \Delta z/2\right\},$
where std is a standard deviation operator, W(u,v) is a window over which we calculate the standard deviation of each image, R^(u,v) is the range image generated by 3D imaging, and R(u,v) is the true range profile (known in simulations).
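Equation (33) translates directly into code; the sketch below (illustrative, with a hypothetical boolean window mask W) wraps the range residual into (−Δz/2, Δz/2] before taking the standard deviation, so an offset of a full ambiguity interval counts as zero error:

```python
import numpy as np

def range_error(R_hat, R, dz, W=None):
    """Range-error metric of Eq. (33).

    R_hat : estimated range image; R : true range profile;
    dz : range ambiguity interval; W : optional boolean window mask.
    """
    err = np.mod(R_hat - R + dz / 2, dz) - dz / 2   # wrapped residual in (-dz/2, dz/2]
    if W is not None:
        err = err[W]
    return np.std(err)
```

A perfect estimate, or one offset by an integer number of ambiguity intervals, yields σR = 0, while a fully randomized range image approaches the standard deviation of a uniform distribution of width Δz, i.e., Δz/√12.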

We performed four trade studies to further elucidate the effects of speckle decorrelation via the SBP, all of which examined sloped rectangular objects such as the ones shown in Figs. 5 and 6. We experimented with different spectral weighting functions in the frequency domain to reduce the sidelobes in the range domain, in an effort to mitigate range chatter. Each spectral weighting function, S(ν), weighted the irradiance associated with the conjugate-product images, and all functions had equivalent coherence times as defined by $\tau_c = \int_0^\infty |S(\nu)|^2\,d\nu \,/\, \left|\int_0^\infty S(\nu)\,d\nu\right|^2$, where this definition is equivalent to the one in Eq. (5) of Ref. 29. In turn, the effective bandwidth of each function is inversely proportional to the coherence time, which we define as Δνeff = 1/τc. Here, equivalent effective bandwidths also correspond to equivalent effective range resolutions. Figure 8 shows the four spectral weighting functions, which include rectangle, Gaussian, triangle, and Tukey (raised cosine) profiles.
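The equal-coherence-time construction can be checked numerically. The sketch below (illustrative, not the authors' code) computes Δνeff = 1/τc on a discrete frequency grid and verifies that a rectangle of full width 7.5 GHz, and a Gaussian scaled so that 2σ√π equals 7.5 GHz, share the same effective bandwidth:

```python
import numpy as np

nu = np.linspace(-50e9, 50e9, 200001)   # frequency grid [Hz]
dnu = nu[1] - nu[0]

def dnu_eff(S):
    """Effective bandwidth 1/tau_c for a sampled spectral weighting S(nu)."""
    tau_c = np.sum(S**2) * dnu / (np.sum(S) * dnu) ** 2
    return 1.0 / tau_c

rect = (np.abs(nu) <= 3.75e9).astype(float)   # rectangle of full width 7.5 GHz
sigma = 7.5e9 / (2 * np.sqrt(np.pi))          # Gaussian matched to the same tau_c
gauss = np.exp(-nu**2 / (2 * sigma**2))
```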

Fig. 8

Four spectral weighting profiles used in Fig. 9 relative to the effective bandwidth.


Figure 9(a) shows the resulting range error versus slope for each weighting function. The error for each profile increases monotonically because the SBP grows with slope, resulting in more speckle decorrelation between the chirped and pilot tones at the edges of the bandwidth. We used an effective bandwidth of Δνeff = 7.5 GHz in this case, resulting in a critical slope of αc = 3.00, which corresponds to a tilt angle of θ = 71.6 deg. This agrees with Fig. 9(a), where the range error increases rapidly at or around this slope for all apodization profiles. The range error is highest for the rectangular weighting function, and the errors for the other weighting functions are fairly comparable. Because the non-rectangular weighting functions have larger absolute widths than the rectangle, which would increase speckle decorrelation for the frequencies at their edges, we can infer that sidelobe reduction due to the apodized, non-rectangle weighting functions causes a noticeable (though not large) reduction of range chatter. This trend is seen in all four studies. Note here that the error reaches an asymptote around a value of 0.0923 m, which corresponds to Δz/√12. This value is the standard deviation of a uniform probability distribution of width Δz, and the range error metric cannot exceed this value given the wrapped nature of the range images. Here we used a frequency spacing of 0.469 GHz.
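The 0.0923-m floor can be reproduced in two lines (an illustrative check, using the standard range ambiguity interval Δz = c/(2δν)):

```python
import numpy as np

c = 2.99792458e8
delta_nu = 0.469e9                 # frequency spacing [Hz]
dz = c / (2 * delta_nu)            # range ambiguity interval, about 0.32 m
floor = dz / np.sqrt(12)           # std of a uniform distribution of width dz
```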

Fig. 9

Range error for four spectral weighting profiles for the rectangular objects shown in Figs. 5 and 6 (a) versus slope, (b) versus effective bandwidth, (c) versus frequency spacing, and (d) versus D/r0. Plots (a), (b), and (d) all have the same δν values and thus the same range ambiguity interval. Plot (d) also includes results for images that have been corrected via SMM. Each plot shows the average of 10 independent realizations of speckle and turbulence (where applicable) with error bars corresponding to ±1 standard deviation.


Figure 9(b) shows the range error versus bandwidth for the four weighting functions, and the trends show that range error increases monotonically with the effective bandwidth. This result, along with the results in Fig. 9(a), agrees with the notion that range chatter is caused by speckle decorrelation between the chirped and pilot tones via the SBP, and that increasing the slope of the object or the effective bandwidth will increase the range chatter. Here the object slope was 3, and the frequency spacing was 0.469 GHz.

Figure 9(c) shows the normalized range error versus frequency spacing, δν. The previous two trade studies kept the frequency spacing constant; here, however, the frequency spacing changes, which changes the range ambiguity interval for each data point. As a result, we report the normalized range error on the y-axis, where each point was normalized such that the error is relative to the same range ambiguity interval seen in Figs. 9(a) and 9(b). The normalized range error is defined as σ̄R = σR(Δzref/Δz), where Δzref is the reference range ambiguity interval. The results show that the normalized range error increases as the frequency spacing increases, motivating the use of as many laser pulses within the bandwidth as possible to mitigate range chatter. The reason for this trend is as follows: as the frequency spacing increases, the range ambiguity interval decreases while the effective bandwidth, and thus the range resolution, remains constant. As the range ambiguity interval decreases, the energy in the z domain becomes concentrated into fewer range resolution elements. Because this is a coherent process, speckles in the range domain can become large compared with the main lobe and, in turn, can introduce range chatter. The results for Fig. 9(c) used an object slope of 3 and an effective bandwidth of 15 GHz.

The final study, shown in Fig. 9(d), gives the range error versus D/r0 both with and without aberration correction via SMM. The results confirm the qualitative results in Fig. 7; that is, range chatter increases with the turbulence strength over sloped facets of the object. If the object had a slope of 0, then we would see aberrations in the range image consistent with the transverse blur seen in Ref. 7 instead of range chatter. In addition, we see that SMM reduces range chatter over the object for all D/r0 and that the range chatter after SMM is nearly constant for all turbulence strengths. Here, the object slope was 3, the effective bandwidth was 15 GHz, and the frequency spacing was 0.469 GHz.

5.

Conclusion

This study explained and demonstrated the theory behind conventional, multi-wavelength 3D imaging as well as motion-compensated, multi-wavelength 3D imaging with the addition of a pilot tone, which is pertinent for long-range imaging of objects in motion. We explored the effects of two mechanisms of speckle decorrelation over the bandwidth: decorrelation due to object motion/vibration and decorrelation due to the SBP of a facet of the object. Speckle decorrelation via object motion quickly degraded the range image quality for conventional 3D imaging, whereas motion-compensated 3D imaging with the pilot tone was extremely robust to decorrelation via object motion. However, motion-compensated 3D imaging was still susceptible to the second speckle decorrelation mechanism: decorrelation via the SBP. It is important to note that 2D frequency-averaged irradiance images can experience speckle contrast reduction if either mechanism of speckle decorrelation occurs, whether the pilot tone is used or not. In addition, range chatter only appeared for motion-compensated 3D imaging with a pilot tone, as shown in Figs. 6(g) and 6(h), when both high SBPs and independent speckle from pulse-pair to pulse-pair were present. In that case, conventional 3D imaging completely failed due to the independent speckle realizations, thus demonstrating that range chatter, although an unfavorable effect, is not a tradeoff of motion-compensated 3D imaging when compared with conventional 3D imaging (see Fig. 2). The quantitative results in Fig. 9 showed that the range error increases as a function of the object slope, optical bandwidth, frequency spacing, and D/r0. We also showed that apodization in the frequency domain can reduce range chatter via sidelobe reduction and that SMM can reduce range chatter as well. This study provides a theoretical baseline and a simulation-based characterization for long-range 3D imaging using a pilot tone; as such, it should be compared with experimental results from field data in the future.

6.

Appendix: Derivation of the SBP from the Grating Equation

For a more in-depth analysis of speckle decorrelation with wavelength, we treat the rough object as a superposition of diffraction gratings. Any one bright speckle of the pupil-plane field can be considered to have come from a Fourier component of the object’s scattering function, which is represented as a grating obeying the grating equation given by

Eq. (34)

$\Lambda\left[\sin(\theta_2) - \sin(\theta_1)\right] = m\lambda,$
where Λ is the period of the diffraction grating for that speckle, which diffracts the m'th diffraction order into the outgoing angle θ2 for an incident angle θ1. First, we solve for Λ for a given set of θ1 and θ2, yielding

Eq. (35)

$\Lambda = \frac{m\lambda}{\sin(\theta_2) - \sin(\theta_1)}.$

We now treat Λ as a constant and seek to find the behavior of the outgoing angle θ2 for small changes in λ, that is, dθ2/dλ, as this will determine the behavior of the bright speckle, and hence the speckle pattern, as a function of frequency.

To accomplish this, we differentiate both sides of Eq. (34) while treating everything constant except λ and θ2, yielding

Eq. (36)

$m\,d\lambda = \Lambda\,d\!\left[\sin(\theta_2)\right] = \Lambda\cos(\theta_2)\,d\theta_2 \;\Rightarrow\; \frac{d\theta_2}{d\lambda} = \frac{m}{\Lambda\cos(\theta_2)}.$

Inserting Eq. (35) and simplifying gives

Eq. (37)

$\frac{d\theta_2}{d\lambda} = \frac{\sin(\theta_2) - \sin(\theta_1)}{\lambda\cos(\theta_2)}.$

In our monostatic case, θ2 ≈ −θ1, which simplifies Eq. (37) further. Figure 10 shows this monostatic case, in which the grating component of interest is the one that scatters directly back into the direction of the incident light. Using dθ2 = dx/z0, where dx is the spatial coordinate differential in the pupil plane, and writing dλ in terms of dν, Eq. (37) yields

Eq. (38)

$\frac{dx}{d\nu} = \frac{2\tan(\theta_1)\,z_0}{\nu} = \frac{2\alpha z_0}{\nu}.$

Fig. 10

Diagram of the monostatic imaging setup simulated here. In this case, the speckles in the pupil experience a translation when the illumination frequency changes purely due to diffraction. More explicitly, the speckle pattern contracts or expands about the mirror reflection angle vector, shown by the dashed line, for changes in frequency, and this contraction/expansion manifests as a translation over the small angular subtense of the pupil (when viewed from the object). The translation is proportional to the tilt angle of the object for a small change in frequency.


Because tan(θ1) is the slope, α, of a facet of the object, we see that Eq. (38) is a slightly altered version of Eq. (22) in which Δν1,2 and Δx become differentials. Equation (38) informs us that, purely due to diffraction, the speckle patterns observed in the pupil plane will shift as the source illumination frequency changes. Thus, this effect persists even after dropping the phase term in Eq. (15). Moreover, the expected shift in the pupil plane varies proportionally with α, so we expect speckle decorrelation effects to be more pronounced for highly sloped object facets.
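A finite-difference check of this derivation (an illustrative sketch with assumed values α = 3, z0 = 2 km, ν = 2×10^14 Hz): fix Λ from Eq. (35) for the back-scattering order, step the frequency by dν, re-solve the grating equation for the new outgoing angle, and compare the resulting pupil-plane shift against Eq. (38):

```python
import numpy as np

c = 2.99792458e8
z0, nu = 2.0e3, 2.0e14                  # standoff range [m], illumination frequency [Hz]
theta1 = np.arctan(3.0)                 # incidence angle for a facet slope alpha = 3
lam = c / nu

# Eq. (35) with m = -1 and theta2 = -theta1 (monostatic back-scattering order)
Lam = lam / (2 * np.sin(theta1))

dnu = 1e9                               # small frequency step [Hz]
lam2 = c / (nu + dnu)
sin_t2 = np.sin(theta1) - lam2 / Lam    # grating equation, Eq. (34), at the stepped wavelength
dx = z0 * (np.arcsin(sin_t2) + theta1)  # pupil-plane shift of the back-scattered speckle
dx_pred = 2 * 3.0 * z0 * dnu / nu       # Eq. (38): dx = 2*alpha*z0*dnu/nu
```

The exact grating-equation solution and the differential expression of Eq. (38) agree to well under a percent for this small frequency step.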

We can now see how Eq. (38) agrees with Goodman's statement that speckle patterns expand/contract as the illumination frequency changes. The key here is that, with increasing optical frequency, speckle patterns contract about the mirror-reflection angle vector of a given object facet. For our monostatic setup and for highly sloped facets, the mirror reflection angle is rotated far from the optical axis, so the contraction/expansion effect is more noticeable over the pupil (see Fig. 10). In these simulations, the pupil itself has a small enough angular subtense that the contraction effect can be described as a translation of the speckles over the pupil. For shallow object facets, the contraction effect is much less pronounced because the optical axis and the mirror reflection angle are much closer to each other, and, for the small bandwidths used here, there is negligible movement of the speckles in the pupil.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Acknowledgments

The authors of this paper would like to thank Joe Riley (MZA) for providing the reflectance and range maps for the scaled truck used in this paper and Brian Krause (Lockheed Martin) for many discussions on motion-compensated 3D imaging with a pilot tone. In addition, the authors thank the SMART Scholarship program and the Air Force Research Laboratory (Grant No. FA8649-20-C-0318) for funding this effort. The views expressed in this paper are not necessarily endorsed by the sponsor.

References

1. J. C. Marron and K. S. Schroeder, "Three-dimensional lensless imaging using laser frequency diversity," Appl. Opt. 31, 255–262 (1992). https://doi.org/10.1364/AO.31.000255

2. L. G. Shirley and G. R. Hallerman, "Nonconventional 3D imaging using wavelength-dependent speckle," Lincoln Lab. J. 9(2), 153–186 (1996).

3. B. W. Krause, B. G. Tiemann, and P. Gatt, "Motion compensated frequency modulated continuous wave 3D coherent imaging LADAR with scannerless architecture," Appl. Opt. 51(36), 8745–8761 (2012). https://doi.org/10.1364/AO.51.008745

4. B. W. Krause, "Motion compensated multi-wavelength digital holography," patent (2017).

5. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications, 2nd ed., SPIE Press, Bellingham, Washington (2020).

6. W. E. Farriss et al., "Sharpness-based correction methods in holographic aperture LADAR (HAL)," Proc. SPIE 10772, 107720K (2018). https://doi.org/10.1117/12.2320630

7. M. T. Banet et al., "3D multi-plane sharpness metric maximization with variable corrective phase screens," Appl. Opt. 60(25), G243–G252 (2021). https://doi.org/10.1364/AO.427719

8. M. T. Banet and J. R. Fienup, "Image sharpening on 3D intensity data in deep turbulence with scintillated illumination," Proc. SPIE 11836, 118360E (2021). https://doi.org/10.1117/12.2594986

9. S. T. Thurman and A. Bratcher, "Multiplexed synthetic-aperture digital holography," Appl. Opt. 54(3), 559–568 (2015). https://doi.org/10.1364/AO.54.000559

10. M. T. Banet and J. R. Fienup, "Effects of speckle decorrelation on motion-compensated, multi-wavelength, 3D digital holography," Proc. SPIE 12239, 122390C (2022). https://doi.org/10.1117/12.2633068

11. M. F. Spencer et al., "Deep-turbulence wavefront sensing using digital-holographic detection in the off-axis image plane recording geometry," Opt. Eng. 56(3), 031213 (2016). https://doi.org/10.1117/1.OE.56.3.031213

12. M. T. Banet, M. F. Spencer, and R. A. Raynor, "Digital-holographic detection in the off-axis pupil plane recording geometry for deep-turbulence wavefront sensing," Appl. Opt. 57(3), 465–475 (2018). https://doi.org/10.1364/AO.57.000465

13. D. E. Thornton, M. F. Spencer, and G. P. Perram, "Deep-turbulence wavefront sensing using digital holography in the on-axis phase shifting recording geometry with comparisons to the self-referencing interferometer," Appl. Opt. 58(5), A179–A189 (2019). https://doi.org/10.1364/AO.58.00A179

14. J. W. Stafford, B. D. Duncan, and D. J. Rabb, "Phase gradient algorithm method for three-dimensional holographic LADAR imaging," Appl. Opt. 55(17), 4611–4620 (2016). https://doi.org/10.1364/AO.55.004611

15. N. R. Van Zandt et al., "Polychromatic wave-optics models for image-plane speckle. 1. Well-resolved objects," Appl. Opt. 57(15), 4090–4102 (2018). https://doi.org/10.1364/AO.57.004090

16. D. J. Burrell et al., "Wave-optics simulation of dynamic speckle: I. In a pupil plane," Appl. Opt. 60(25), G64–G76 (2021). https://doi.org/10.1364/AO.427963

17. D. J. Burrell et al., "Wave-optics simulation of dynamic speckle: II. In an image plane," Appl. Opt. 60(25), G77–G90 (2021). https://doi.org/10.1364/AO.427964

18. D. G. Voelz, Computational Fourier Optics: A MATLAB Tutorial, SPIE Press, Bellingham, Washington (2011).

19. M. T. Banet and M. F. Spencer, "Multiplexed digital holography for atmospheric characterization," in Propagation Through and Characterization of Atmospheric and Oceanic Phenomena, PTh1D-2 (2019). https://doi.org/10.1364/PCAOP.2019.PTh1D.2

20. M. T. Banet and M. F. Spencer, "Multiplexed digital holography for simultaneous imaging and wavefront sensing," Proc. SPIE 11135, 1113503 (2019). https://doi.org/10.1117/12.2528940

21. J. Haus et al., "Instantaneously captured images using multiwavelength digital holography," Proc. SPIE 8493, 84930W (2012). https://doi.org/10.1117/12.932280

22. L. C. Andrews and R. L. Phillips, Laser Beam Propagation Through Random Media, SPIE Press, Bellingham, Washington (2005).

23. R. Lane, A. Glindemann, and J. Dainty, "Simulation of a Kolmogorov phase screen," Waves Random Media 2(3), 209 (1992). https://doi.org/10.1088/0959-7174/2/3/003

24. J. R. Fienup and J. J. Miller, "Aberration correction by maximizing generalized sharpness metrics," J. Opt. Soc. Am. A 20(4), 609–620 (2003). https://doi.org/10.1364/JOSAA.20.000609

26. A. S. Jurling and J. R. Fienup, "Applications of algorithmic differentiation to phase retrieval algorithms," J. Opt. Soc. Am. A 31(7), 1348–1359 (2014). https://doi.org/10.1364/JOSAA.31.001348

27. S. T. Thurman and J. R. Fienup, "Phase-error correction in digital holography," J. Opt. Soc. Am. A 25(4), 983–994 (2008). https://doi.org/10.1364/JOSAA.25.000983

28. N. R. Van Zandt, J. E. McCrae, and S. T. Fiorino, "Modeled and measured image-plane polychromatic speckle contrast," Opt. Eng. 55(2), 024106 (2016). https://doi.org/10.1117/1.OE.55.2.024106

29. L. Mandel and E. Wolf, "Coherence properties of optical fields," Rev. Mod. Phys. 37(2), 231 (1965). https://doi.org/10.1103/RevModPhys.37.231

Biography

Matthias T. Banet received his BS degree in physics from New Mexico Institute of Mining and Technology and his MS degree in optics and his PhD from the University of Rochester under James Fienup. His research interests include digital holography, wave optics, 3D imaging, and imaging through atmospheric turbulence. He recently joined the Air Force Research Laboratory, Directed Energy Directorate, as a research physicist. He is an active member of the University of Rochester SPIE Chapter, a member of the Directed Energy Professional Society (DEPS), and a multiple-time recipient of the DEPS graduate research grant.

James R. Fienup received his AB degree from Holy Cross College and his MS degree and PhD in applied physics from Stanford University, where he was a National Science Foundation graduate fellow. After performing research at ERIM, he became the Robert E. Hopkins Professor of Optics at the University of Rochester. He is a fellow of SPIE and Optica/OSA, a member of the National Academy of Engineering, and a recipient of SPIE’s Rudolf Kingslake Medal and Prize.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Matthias T. Banet and James R. Fienup "Speckle decorrelation effects on motion-compensated, multi-wavelength 3D digital holography: theory and simulations," Optical Engineering 62(7), 073103 (12 July 2023). https://doi.org/10.1117/1.OE.62.7.073103
Received: 7 March 2023; Accepted: 22 June 2023; Published: 12 July 2023
Keywords: speckle; stereoscopy; 3D image processing; speckle pattern; simulations; light sources and illumination; optical engineering
