Reconstruction of long horizontal-path images under anisoplanatic conditions using multiframe blind deconvolution
Glen E. Archer, Jeremy P. Bos, and Michael C. Roggemann
Optical Engineering 52(8), 083108 (23 August 2013)
Abstract
All optical systems that operate in or through the atmosphere suffer from turbulence-induced image blur. Terrestrial imaging over very long horizontal paths is of interest for military and civilian surveillance, gun sighting, and target identification, but atmospheric turbulence can blur the resulting images beyond usefulness. This work explores the mean square error (MSE) performance of a multiframe blind deconvolution (MFBD) technique applied under anisoplanatic conditions for both Gaussian and Poisson noise model assumptions. The technique is evaluated for use in reconstructing images of scenes corrupted by turbulence in long horizontal-path imaging scenarios. Performance is evaluated via the reconstruction of a common object from three sets of simulated turbulence-degraded imagery representing low, moderate, and severe turbulence conditions. Each set consisted of 1000 simulated turbulence-degraded images. The MSE performance of the estimator is evaluated as a function of the number of images and the number of Zernike polynomial terms used to characterize the point spread function. A Gaussian noise model-based MFBD algorithm reconstructs objects that show as much as 40% improvement in MSE with as few as 14 frames and 30 Zernike coefficients used in the reconstruction, despite the presence of anisoplanatism in the data. An MFBD algorithm based on the Poisson noise model required a minimum of 50 frames to achieve significant improvement over the average MSE for the data set; reconstructed objects show as much as 38% improvement in MSE using 175 frames and 30 Zernike coefficients in the reconstruction.

1. Introduction

The goal of this article is to use a parameterized, multiframe blind deconvolution (MFBD) technique to reconstruct an object estimate from a set of simulated anisoplanatic images and to examine the mean square error (MSE) performance of the estimator as the parameters are varied. This article examines estimator performance as the number of frames used in the estimation is varied and as the number of Zernike polynomial coefficients used to characterize the phase term of the point spread function is varied, under the assumption of both Gaussian and Poisson noise distributions.

Every optical system using light that has propagated any appreciable distance through the atmosphere suffers, to some degree, from turbulence-induced phase aberrations. In addition to phase errors at the aperture, light propagating over longer distances or through stronger turbulence suffers from anisoplanatism, and possibly scintillation effects as well. Often the image blur induced by these phase aberrations is the limiting factor in the ability to recognize details of objects in the scene.

Under isoplanatic conditions, the light coming from all points in the scene can be assumed to experience similar turbulence-induced aberrations. The isoplanatic angle θ0 is the angular separation between point sources beyond which the phase changes at the aperture can be considered significantly decorrelated. However, in many near-surface surveillance imaging scenarios, it is reasonable to assume that the field of view of the imaging system will subtend an angle wide enough that this assumption is not valid. In this case, we describe the viewing as anisoplanatic. The longer the optical path and the stronger the turbulence, the more severe these aberrations become and the smaller the isoplanatic angle. Increasing the size of the aperture will not improve the quality of the image under anisoplanatic conditions. Unless the seeing conditions are very favorable, anisoplanatism will play a role in most practical horizontal imaging situations, so some technique for reducing its effects is desired.

A variety of techniques have been devised to correct turbulence-induced phase aberrations in vertical imaging applications. Adaptive optics strategies using wavefront sensors to control deformable mirrors have been used in celestial observation systems for many years.1 Techniques that exploit deliberately imposed, known phase diversity2–4 have also been used with some success. Paxman, Schulz, and Fienup explored this problem by creating phase diversity across multiple speckle images. This technique uses two simultaneous measurements: an in-focus image and a second image with a known degree of defocus applied.5 It is limited to fields of view that do not appreciably exceed the isoplanatic angle existing at the moment the image is captured, requires substantial hardware, and divides photons between two detectors. Post-detection processing of wide field-of-view images captured with short exposure times is another alternative. Fraser et al. described a technique for point-by-point registration of anisoplanatic speckle images to reduce motion blur and prepare the images for other deconvolution strategies.6 Ayers and Dainty pioneered the application of an iterative blind deconvolution technique to a single image degraded by atmospheric turbulence.7 Their method relied on the enforcement of positivity and finite support constraints on the object estimate. Schulz extended that method to include multiple input images and developed a penalized-maximum-likelihood algorithm to avoid the trivial solution that incorrectly concludes that the optical system's point spread function (PSF) is a Dirac delta function and the most likely object estimate is the observed turbulent image.8 Hybrid hardware–software strategies offer the potential to produce on-the-fly estimates of scenes but require substantial investment in both hardware and software.9 Bos and Roggemann10 have reported the use of software reconstruction techniques using the bi-spectrum method in nearly real time. The application of these strategies to surveillance imaging is largely unexplored.

This article describes a method of jointly estimating object intensities and imaging-system PSFs from simulated anisoplanatic images that have been corrupted by atmospheric turbulence. The image model that forms the foundation of this estimator is that of a linear shift-invariant PSF and a deterministic object. It is conjectured that the anisoplanatic effects of the turbulent atmosphere are compensated for by the estimator reconstructing a spatially averaged PSF. Bos's11 work using cross-spectrum and bi-spectrum phase reconstructions points to this potential solution, and Carrano12 has also published work in this area that neglects anisoplanatic effects. That investigation will be the subject of another article. The method developed here is applied to three sets of images with varying levels of turbulence, and its effectiveness is assessed by calculating the MSE between the recovered object and the diffraction-limited image.

We find that the MFBD-reconstructed objects show significant improvement in MSE compared with the average MSE between all the images in a data set and the associated diffraction-limited image. The improvement in MSE was 40% for the low turbulence case, 25% for the moderate turbulence case, and 36% for the severe turbulence case. We also provide an estimate of the optimum number of images and Zernike coefficients to use in future work with MFBD reconstructions.

The remainder of this article is organized as follows. In Sec. 2, we discuss the horizontal imaging problem and briefly describe the simulation that produced the data sets used in the study. In Sec. 3, the object recovery method for the Gaussian case is described, followed by the Poisson case. In Sec. 4, the results of both the Gaussian and Poisson case reconstructions are presented. Finally, some thoughts on processing and conclusions regarding the technique are provided in Sec. 5.

2. Background

We now describe the MFBD algorithm for the Gaussian and Poisson noise models. In MFBD, the input is a set of measured noisy, turbulence-corrupted images. In a stack of K turbulence-corrupted but measurement-noise-free images, the k'th image can be described as the convolution of an unchanging object f(x) with the PSF of the optical system s_k(x). Mathematically, this can be expressed as13

Eq. (1)

$g_k(x) = f(x) \ast s_k(x),$
where $\ast$ represents the two-dimensional (2-D) convolution operator and x is a 2-D coordinate in object space. The expression $g_k(x)$ represents the k'th noiseless image, $f(x)$ is the irradiance distribution of the object in the object plane, and $s_k(x)$ is the k'th incoherent PSF.

The PSF is the modulus squared of the coherent impulse response |hk(x)|2, which is in turn the inverse Fourier transform of the generalized pupil function. Mathematically, these relationships are given by

Eq. (2)

$s_k(x) = |h_k(x)|^2 = |\mathcal{F}^{-1}[H_k(u)]|^2,$
where u is a 2-D coordinate in pupil space. The generalized pupil function is described by

Eq. (3)

$H_k(u) = |H(u)|\, e^{j\phi_k(u)},$
where ϕk(u) is the combination of phase aberrations caused by the differences in path length and diffraction in the imaging system. This aberration function can be expressed as the weighted sum of a set of orthonormal functions

Eq. (4)

$\tilde{\phi}_k(u, \boldsymbol{\alpha}_k) \approx \sum_{j=1}^{J} \alpha_{j,k}\, \phi_j(u),$
where the coefficients $\alpha_{j,k}$ serve to weight the basis functions $\phi_j$. Zernike polynomials are a common set of orthonormal basis functions used to characterize both fixed and random aberrations in imaging systems and are frequently used to describe turbulence effects on imaging.14 We will assume that the simulated images $d_k(x)$ are a series of short-exposure images in which the object in the scene remains constant, but the phases $\phi_k(u)$ associated with each PSF are random in each image frame in the stack. This lets us express the generalized pupil function as a function of both the spatial frequency and the vector of Zernike coefficients $\boldsymbol{\alpha}_k$

Eq. (5)

$H_k(u, \boldsymbol{\alpha}_k) = |H(u)|\, e^{j\tilde{\phi}_k(u, \boldsymbol{\alpha}_k)},$
which allows us to approximate the k'th aberrated PSF in terms of the weighted sum of Zernike polynomials

Eq. (6)

$s_k(x, \boldsymbol{\alpha}_k) = |h_k(x, \boldsymbol{\alpha}_k)|^2 = |\mathcal{F}^{-1}[H_k(u, \boldsymbol{\alpha}_k)]|^2.$

In nonblind deconvolution problems, the collected data $d_k(x)$ are used with a known PSF $s_k(x)$ to determine $f(x)$. In blind deconvolution, we are given $d_k(x)$ and use that information to estimate both the object $f(x)$ and the PSFs $s_k(x)$ jointly. There is no closed-form solution to the problem of jointly estimating an object and the aberration parameters for each image frame. Hence, an iterative approach is needed to find the object pixel intensities and Zernike coefficients that are most likely to have produced the simulated data for each image. In the next section, we describe two such approaches, one based on a Gaussian noise model and the other based on a Poisson noise model.
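To make the forward model of Eqs. (1)–(6) concrete, the following is a minimal numerical sketch, assuming a clear circular pupil, periodic (FFT-based) convolution, and an illustrative two-term Zernike set (defocus and one astigmatism term). The grid size, the Zernike ordering, and all function names are our own illustrative choices, not details taken from the paper's simulator.

```python
# Sketch of Eqs. (1)-(6): PSF from a Zernike-parameterized pupil, then image formation.
import numpy as np

N = 128                                       # grid size (illustrative)
u = np.fft.fftfreq(N)                         # pupil-plane coordinates
U, V = np.meshgrid(u, u)
R = np.sqrt(U**2 + V**2) / np.abs(u).max()    # normalized pupil radius
TH = np.arctan2(V, U)
pupil = (R <= 1.0).astype(float)              # |H(u)|: clear circular aperture

# Two example Zernike basis functions on the unit disk (Noll Z4, Z6).
zern = [np.sqrt(3) * (2 * R**2 - 1),          # defocus
        np.sqrt(6) * R**2 * np.cos(2 * TH)]   # astigmatism

def psf_from_coeffs(alpha):
    """Eqs. (2)-(6): s_k = |F^-1[ |H(u)| exp(j*phi_k(u, alpha)) ]|^2."""
    phi = sum(a * z for a, z in zip(alpha, zern))   # Eq. (4): weighted sum
    H = pupil * np.exp(1j * phi)                    # Eq. (5): generalized pupil
    s = np.abs(np.fft.ifft2(H))**2                  # Eq. (6): incoherent PSF
    return s / s.sum()                              # normalize to unit volume

def noiseless_image(f, alpha):
    """Eq. (1): g_k = f * s_k, computed as a product in Fourier space."""
    s = psf_from_coeffs(alpha)
    return np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(s)))
```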

2.1. Data Set

It is common to simulate the effects of the turbulent atmosphere by placing layers of uniform turbulence between the object and the imaging system. The data set of 1000 simulated turbulent images used in this article was created using an image common in the literature. Five Kolmogorov phase screens were generated, and the image was propagated over a distance of 1000 m. Light from each object pixel was projected through the phase screens, in turn, at 200-m separations using a geometric optics approach to account for the effects of anisoplanatism. Phase errors accumulating from each screen are combined at the pupil to create a turbulence-degraded PSF. Each of the PSFs is then scaled by the object pixel intensities to create a turbulence-corrupted image for the low, moderate, and severe turbulence conditions. Parameters for the simulated imaging system include a 10-cm aperture with a 358×358-pixel detector and a 0.7-mm pixel pitch. A fuller description of the simulator used to create this data set is available in Ref. 11.

For the conditions simulated here, a single pixel in the simulated imaging system subtends 2.79 μrad. Comparing the θ₀ values for the low, moderate, and severe turbulence conditions of the simulation with this pixel scale, we see in Table 1 that the isoplanatic patch covers 4, 3, and 2 pixels, respectively, in the simulated imaging system (e.g., 10.8 μrad / 2.79 μrad per pixel ≈ 4 pixels) (Fig. 1).

Table 1

Atmospheric simulation turbulence conditions.

Atmospheric turbulence parameters:

Severity condition | C_n² (m⁻²/³) | r₀, spherical case (cm) | θ₀ (μrad) | θ₀ (pixels)
Low | 2.25×10⁻¹⁴ | 3.33 | 10.8 | 4
Moderate | 3.75×10⁻¹⁴ | 2.45 | 7.75 | 3
High | 5.25×10⁻¹⁴ | 2.01 | 6.63 | 2

Fig. 1

Horizontal imaging simulator output. Single-image representative samples of the horizontal imaging simulator output: (a) diffraction-limited image, (b) image for C_n² = 2.25×10⁻¹⁴ m⁻²/³, (c) image for C_n² = 3.75×10⁻¹⁴ m⁻²/³, and (d) image for C_n² = 5.25×10⁻¹⁴ m⁻²/³.


2.2. Gaussian Noise Model MFBD

Extending the image formation equations described previously in Eqs. (1)–(4), we can describe a set of images that have been corrupted by additive Gaussian noise

Eq. (7)

$d_k(x) = g_k(x) + n_k(x) = f(x) \ast s_k(x, \boldsymbol{\alpha}_k) + n_k(x),$
where $n_k(x)$ represents an additive noise term characterized by independent, identically distributed Gaussian random variables with zero mean and variance $\sigma_n^2$. Under a Gaussian measurement noise model, each image $d_k(x)$ is a random variable with a Gaussian probability density function (pdf). The pdf of $d_k(x)$ is parameterized by the object intensities $f(x)$ and the vector of aberration weighting coefficients $\boldsymbol{\alpha}_k$

Eq. (8)

$p[d_k(x); f(x), \boldsymbol{\alpha}_k] = \frac{1}{(2\pi\sigma_n^2)^{1/2}} \exp\left\{ -\frac{[d_k(x) - g_k(x)]^2}{2\sigma_n^2} \right\}$
and the likelihood of the complete data set consisting of all the pixel intensities in all the corrupted images is given by

Eq. (9)

$p[\{d_k(x)\}; f(x), \{\boldsymbol{\alpha}_k\}] = \prod_{k=1}^{K} \prod_{x \in \chi} \frac{1}{(2\pi\sigma_n^2)^{1/2}} \exp\left\{ -\frac{[d_k(x) - g_k(x)]^2}{2\sigma_n^2} \right\}.$

Taking the natural log of Eq. (9) makes the analysis more tractable, turning the products into summations; neglecting constant terms yields the log-likelihood function

Eq. (10)

$L[f(x), \boldsymbol{\alpha}] = -\sum_{k=1}^{K} \sum_{x \in \chi} [d_k(x) - g_k(x)]^2.$

Although an analytic form of the Hessian is not required, the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) optimization used to maximize the likelihood function of Eq. (10) is more efficient if an analytic form of the gradient is provided. With respect to the pixel intensities, the gradient of the Gaussian log-likelihood function can be represented as

Eq. (11)

$\nabla_f L[f(x), \boldsymbol{\alpha}] = 2 \sum_{k=1}^{K} \sum_{x \in \chi} [d_k(x) - g_k(x)]\, \nabla_f g_k(x)$
and the gradient with respect to the Zernike coefficients is

Eq. (12)

$\nabla_{\boldsymbol{\alpha}} L[f(x), \boldsymbol{\alpha}] = 2 \sum_{k=1}^{K} \sum_{x \in \chi} [d_k(x) - g_k(x)]\, \nabla_{\boldsymbol{\alpha}} g_k(x).$
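As one illustration of how Eqs. (10) and (11) can be supplied to an optimizer, the sketch below evaluates the Gaussian data-fit term and its analytic gradient with respect to the object pixels, negated so that a minimizer can be used. The gradient with respect to the Zernike coefficients, Eq. (12), follows the same pattern with ∇g_k carried through the pupil model. The FFT-based helpers and function names are our own assumptions, not code from the paper.

```python
# Sketch of Eq. (10) and the object-pixel gradient of Eq. (11), negated for a minimizer.
import numpy as np

def conv2(f, s):
    """Periodic 2-D convolution via the FFT (the model g_k = f * s_k)."""
    return np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(s)))

def corr2(r, s):
    """Correlation: the adjoint of convolution, needed for the chain rule."""
    return np.real(np.fft.ifft2(np.fft.fft2(r) * np.conj(np.fft.fft2(s))))

def gauss_neg_loglike(f, psfs, data):
    """-L[f, alpha] = sum_k sum_x [d_k(x) - (f * s_k)(x)]^2, per Eq. (10)."""
    return sum(np.sum((d - conv2(f, s))**2) for d, s in zip(data, psfs))

def gauss_grad_f(f, psfs, data):
    """Gradient of -L with respect to f: -2 sum_k corr(d_k - g_k, s_k)."""
    g = np.zeros_like(f)
    for d, s in zip(data, psfs):
        g -= 2.0 * corr2(d - conv2(f, s), s)
    return g
```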

2.3. Poisson Noise Model MFBD

Not all images are taken in full daylight. At low light levels, photon noise may dominate the image frames, which is often characterized by a speckled quality in the images. Photon noise is described by modeling the number of photons detected at each pixel in an image frame as a Poisson random variable with a mean photon count rate λ that is proportional to the average pixel intensity. For this simulation, the number of photons detected at each detector pixel is assumed to be an independent, Poisson-distributed random variable with a mean rate given by the noiseless image $g_k(x)$; the randomness of the PSF $s_k(x)$ is neglected. The probability of detecting $d_k(x)$ photons at a specific pixel location is given by

Eq. (13)

$p[d_k(x)] = \frac{g_k(x)^{d_k(x)}\, e^{-g_k(x)}}{d_k(x)!}.$

The distribution over the entire set of pixel locations dk is given by

Eq. (14)

$p[\{d_k\}] = \prod_{k=1}^{K} \prod_{x \in \chi} \frac{g_k(x)^{d_k(x)}\, e^{-g_k(x)}}{d_k(x)!}.$

As before, taking the natural log yields a modified log-likelihood function

Eq. (15)

$L_{\text{Poisson}}[f(x), \boldsymbol{\alpha}] = \sum_{k=1}^{K} \sum_{x \in \chi} \{ d_k(x) \ln[g_k(x)] - g_k(x) \} - \sum_{k=1}^{K} \sum_{x \in \chi} \ln[d_k(x)!],$
where the last term is a constant and can be neglected. Taking the derivative with respect to the pixel intensities, the gradient of the Poisson log-likelihood function can be represented as

Eq. (16)

$\nabla_f L_{\text{Poisson}}[f(x), \boldsymbol{\alpha}] = \sum_{k=1}^{K} \sum_{x \in \chi} \left[ \frac{d_k(x)}{g_k(x)} - 1 \right] \nabla_f g_k(x).$

With respect to the Zernike coefficients, the gradient of the Poisson log-likelihood function can be represented as

Eq. (17)

$\nabla_{\boldsymbol{\alpha}} L_{\text{Poisson}}[f(x), \boldsymbol{\alpha}] = \sum_{k=1}^{K} \sum_{x \in \chi} \left[ \frac{d_k(x)}{g_k(x)} - 1 \right] \nabla_{\boldsymbol{\alpha}} g_k(x).$
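A corresponding sketch for the Poisson model evaluates Eq. (15) (with the constant term dropped) and the object-pixel gradient of Eq. (16), again negated for a minimizer. The small floor eps that guards the logarithm and the division is a practical assumption of ours, not a detail given in the paper; conv2 and corr2 repeat the FFT helpers from the Gaussian sketch.

```python
# Sketch of Eq. (15) (constant dropped) and the negated gradient of Eq. (16).
import numpy as np

def conv2(f, s):
    return np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(s)))

def corr2(r, s):
    return np.real(np.fft.ifft2(np.fft.fft2(r) * np.conj(np.fft.fft2(s))))

def poisson_neg_loglike(f, psfs, data, eps=1e-12):
    nll = 0.0
    for d, s in zip(data, psfs):
        g = np.maximum(conv2(f, s), eps)   # model image g_k = f * s_k
        nll -= np.sum(d * np.log(g) - g)   # -(d_k ln g_k - g_k), Eq. (15)
    return nll

def poisson_grad_f(f, psfs, data, eps=1e-12):
    grad = np.zeros_like(f)
    for d, s in zip(data, psfs):
        g = np.maximum(conv2(f, s), eps)
        grad -= corr2(d / g - 1.0, s)      # negated Eq. (16)
    return grad
```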

3. Methods

Our simulations assume that the propagation occurs over horizontally homogeneous conditions with both the object and the imaging system immersed in a turbulent atmosphere. Furthermore, we assume that the height above ground does not vary significantly and that C_n² is constant over the propagation path.15 We assume that the simulated data have effectively frozen the turbulence at the moment each turbulent image is created. Prior to applying the simulated turbulent images to the reconstruction algorithm, they must be recentered, as tilt is not estimated in the MFBD algorithm. This was accomplished by using a correlation filter to compare each image in the stack to the ensemble average image and then shifting the turbulent image to recenter it. To reduce the aliasing associated with the finite support, each frame of the data set was preprocessed to pad the centered image by replicating the edges of the image outward and then adding a border of zeros. The abrupt transitions artificially introduced by the padding process can produce high spatial frequency components, which are sometimes mitigated by the application of spatial filters. Using 15 frames in the reconstruction, the image stack was padded and then a Tukey16 tapered window was applied to the image. Both the tapered and untapered images were applied to the estimator. The elapsed processing time and the MSE of the reconstructed object were determined with the estimator limited to 20 iterations. The amount of padding for subsequent processing was determined by examining the effect on the processing time and the MSE as the amount of padding was varied. All subsequent processing was accomplished by padding each recentered turbulent image without tapering it. The images are applied to the estimator with eight replicated pixels followed by five zero pixels at the margins of each image, bringing the total size of the image to 256×256 pixels, as seen in Fig. 2. These results are summarized in Table 2.
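The preprocessing just described might be sketched as follows, assuming FFT-based correlation for the recentering, numpy-style edge replication for the padding, and a separable 2-D Tukey window; the taper fraction and the helper names are our assumptions.

```python
# Sketch of the preprocessing: recenter against the ensemble average, then
# pad with replicated edges plus a zero border (8 + 5 pixels, per Table 2).
import numpy as np
from scipy.signal import windows

def recenter(frame, reference):
    # correlation via FFT; shift the frame so it aligns with the reference
    xc = np.real(np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(frame))))
    dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
    # FFT indexing puts zero shift at index 0; wrap shifts into signed range
    dy -= xc.shape[0] * (dy > xc.shape[0] // 2)
    dx -= xc.shape[1] * (dx > xc.shape[1] // 2)
    return np.roll(frame, (dy, dx), axis=(0, 1))

def pad_frame(frame, n_edge=8, n_zero=5):
    padded = np.pad(frame, n_edge, mode='edge')      # replicate edges outward
    return np.pad(padded, n_zero, mode='constant')   # then a border of zeros

def tukey_taper(img, alpha=0.5):
    # separable 2-D Tukey window; the taper fraction alpha is an assumption
    wy = windows.tukey(img.shape[0], alpha)
    wx = windows.tukey(img.shape[1], alpha)
    return img * np.outer(wy, wx)
```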

Fig. 2

Padding example: unpadded image (a), and padded image (b).


Table 2

Selection of image padding.

Padding and filtering versus MSE and elapsed time:

Pixels (replicated + zero) | MSE w/o filter | Time w/o filter (s) | MSE w/ filter | Time w/ filter (s) | Total image size (N×N)
8+5 | 585 | 181 | 712 | 166 | 256
9+5 | n/a | n/a | 586 | 306 | 258
10+5 | 586 | 221 | 625 | 202 | 260
15+5 | 586 | 252 | 593 | 230 | 270
20+5 | 583 | 268 | 595 | 276 | 280
25+5 | 588 | 392 | 591 | 361 | 290

3.1. L-BFGS Optimization

The cost functions in Eqs. (10) and (15) are parameterized by the object pixel intensities and aberration coefficients and are applied to a nonlinear optimization MATLAB routine to find the object and aberration coefficients most likely to have produced the images simulated in the data set. The intensities at each pixel location in each image are vectorized. The vectorized initial guesses for each image's Zernike polynomial coefficients are appended to the end of the vector of image intensities, formatted as shown in Table 3. We are jointly processing all images and all Zernike coefficients; thus, for a data set of K N×N images, using J Zernike polynomial terms, there are N² + J×K parameters that must be jointly estimated. The optimization routine returns a vector of the reconstructed object's intensities followed by the estimate of the Zernike coefficients for each frame of the input stack, as shown in Table 4. Optimization over such a large parameter space is impractical using conventional optimization techniques. To make the optimization tractable, we use the L-BFGS method to process the images. L-BFGS is a quasi-Newton "hill-climbing" technique that begins with an initial guess at a solution x₀ and then proceeds along a line in the direction indicated by the gradient of the objective function evaluated at the current estimate. One of the drawbacks to searching along the gradient is the need for the Hessian ∇²f(x) to prevent the estimate from hopping back and forth across the sides of the valley. The limited-memory form of BFGS does not require an explicit expression for ∇²f(x); it estimates the Hessian by maintaining the last few updates of x and ∇f(x). A quasi-Newton line-search optimization can quickly converge to a local minimum of the cost function, but there is no guarantee that the minimum is a global minimum. As the initial object estimate, the average of all the frames used in the trial was applied to the estimator.

Table 3

Input object and Zernike coefficients. The current stack of images being processed is spatially averaged and reshaped into a vector with the intensities at the beginning and the initial guess at each image's Zernike coefficients at the end.

Estimator input parameters
$[\bar{o}_1, \bar{o}_2, \ldots, \bar{o}_{N^2}, \bar{\alpha}_{1,1}, \bar{\alpha}_{1,2}, \ldots, \bar{\alpha}_{1,J}, \ldots, \bar{\alpha}_{K,1}, \bar{\alpha}_{K,2}, \ldots, \bar{\alpha}_{K,J}]$

Table 4

Vectorized reconstructed object and Zernike coefficients. The estimate is returned as a vector with the estimated object pixel intensities at the beginning and the estimate of each input image's Zernike coefficients at the end.

Output object estimate and Zernike coefficient vector
$[\hat{o}_1, \hat{o}_2, \ldots, \hat{o}_{N^2}, \hat{\alpha}_{1,1}, \hat{\alpha}_{1,2}, \ldots, \hat{\alpha}_{1,J}, \ldots, \hat{\alpha}_{K,1}, \hat{\alpha}_{K,2}, \ldots, \hat{\alpha}_{K,J}]$
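The packing scheme of Tables 3 and 4 and the optimizer call might look like the sketch below; scipy's L-BFGS-B routine is substituted here for the MATLAB routine used in the paper, and value_and_grad stands in for either noise model's cost function and packed gradient.

```python
# Sketch of the Table 3/4 parameter layout and the L-BFGS driver.
import numpy as np
from scipy.optimize import minimize

def pack(obj, alphas):
    """Table 3 layout: [o_1 ... o_{N^2}, a_{1,1} ... a_{1,J}, ..., a_{K,J}]."""
    return np.concatenate([obj.ravel(), alphas.ravel()])

def unpack(theta, n, K, J):
    """Table 4 layout: recover the N x N object and the K x J coefficients."""
    return theta[:n * n].reshape(n, n), theta[n * n:].reshape(K, J)

def reconstruct(frames, value_and_grad, J=30, max_iter=25):
    """value_and_grad(obj, alphas, frames) -> (cost, packed gradient)."""
    K, n, _ = frames.shape
    f0 = frames.mean(axis=0)            # initial object: average of the frames
    a0 = 0.5 + np.random.randn(K, J)    # Gaussian guesses, mean 0.5, unit variance

    def cost(theta):
        obj, alphas = unpack(theta, n, K, J)
        return value_and_grad(obj, alphas, frames)

    res = minimize(cost, pack(f0, a0), jac=True, method='L-BFGS-B',
                   options={'maxiter': max_iter})
    return unpack(res.x, n, K, J)
```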

It is necessary to provide the estimator with a stopping criterion. Using the low condition data set with the number of frames set to 5, 15, and 35, the log-likelihood function value was monitored during reconstruction. The results are shown in Fig. 3. Regardless of the number of frames used in the reconstruction, assuming one call to the likelihood function per iteration, the reconstruction is essentially complete after 10 iterations. For the image reconstruction processing in this article, the number of iterations was limited to 25.

Fig. 3

Gaussian MSE versus number of function calls.


3.2. Reconstruction Processing

Reconstruction processing begins by selecting images from the complete data set in groups of K=2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 23, and 25 frames and then stepping each group through the entire data set. At each increment, an initial guess at the object, $\hat{f}_0(x)$, and the phase parameters, $\boldsymbol{\alpha}_k$, is provided to the optimization routine. This initial guess is always the average of the K frames being used in the estimate. The Zernike coefficients provided as an initial guess are random Gaussian numbers with a mean of 0.5 and unity variance.

The recovered image was compared with the diffraction-limited image and the MSE determined. The MSE is averaged over all pixels and is given by

Eq. (18)

$\text{MSE} = \frac{1}{N^2} \sum_{x \in \chi} [f(x) - \hat{f}(x)]^2,$
where $f(x)$ is the normalized diffraction-limited image, $\hat{f}(x)$ is the current normalized estimate of the object, and $N^2$ is the total number of pixels in the image. The image stack was then incremented to begin processing the next group of K turbulent images, and the process was repeated. When the entire data set had been processed, the average of the vector of MSEs for images processed K at a time was calculated.
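A sketch of this evaluation loop is given below, assuming normalization to unit total intensity (the paper does not specify its normalization) and a reconstruct callable like the driver sketched after Table 4 that returns the object estimate for a group of frames.

```python
# Sketch of Eq. (18) and the group-wise evaluation sweep.
import numpy as np

def mse(f_true, f_hat):
    """Eq. (18): pixel-averaged squared error between normalized images."""
    f_true = f_true / f_true.sum()      # unit-intensity normalization (assumed)
    f_hat = f_hat / f_hat.sum()
    return np.sum((f_true - f_hat)**2) / f_true.size

def sweep(frames, f_diff, K, reconstruct):
    """Step a window of K frames through the data set, reconstruct each
    group, and average the per-group MSE against the diffraction-limited
    image f_diff."""
    errs = [mse(f_diff, reconstruct(frames[i:i + K]))
            for i in range(0, len(frames) - K + 1, K)]
    return np.mean(errs)
```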

3.3. Number of Zernike Terms Needed in the Optimization Process

Recovering a common object estimate from the stack of degraded images is computationally intense regardless of the method used. Using more Zernike polynomial terms requires more variables to be estimated, resulting in longer processing. Figure 4(a) shows the processing time required for a fixed number of frames as the number of Zernike polynomial terms is varied. Figure 4(b) shows how the processing time varies for a fixed number of Zernike coefficients as the number of input images is varied. Of greater impact on processing time is the number of images used to recover the object: for a set of K N×N images, the volume of data processed in each iteration grows as K×N². Previous work indicated that 15 images and 35 Zernike terms would provide a good estimate of the object.17 Further exploration over a larger data set yielded similar results and additional insight into the estimator's performance. With the number of images set to 50 in order to reduce the influence of the number of images on the outcome, the number of Zernike coefficients was varied from 10 to 100 terms for all three turbulence conditions, as shown in Fig. 5. For all three turbulence cases of C_n², additional terms beyond 60 do not significantly improve the MSE.

Fig. 4

Processing time versus number of Zernike terms (a), and number of frames used in reconstruction (b).


Fig. 5

Mean square error (MSE) versus number of Zernike coefficients. With the number of frames held constant at 50, the number of Zernike coefficients is varied from 10 to 100 for (a) the low condition, C_n² = 2.25×10⁻¹⁴ m⁻²/³; (b) the moderate condition, C_n² = 3.75×10⁻¹⁴ m⁻²/³; and (c) the severe condition, C_n² = 5.25×10⁻¹⁴ m⁻²/³.


4. Results

4.1. Gaussian Noise Model

4.1.1. Case 1: low condition, C_n² = 2.25×10⁻¹⁴ m⁻²/³

Examining Fig. 6, we see that at N=2 and thereafter the estimator can be expected to perform better than the average MSE for the simulated images, which was 673. Marginal improvement in MSE declines at N=12, reaching a maximum improvement of approximately 40% over the average MSE for the entire data set. However, if processing time is not of consequence, the MSE and its standard deviation continue to improve as additional images are added. Based on the Zernike term evaluation shown in Fig. 5, the estimator was also run using 60 Zernike coefficients to characterize the PSFs; Fig. 6 compares these results with the estimator's performance using 30 Zernike terms. The use of additional Zernike terms does not add as much processing time as using additional frames, but each reconstruction takes longer as additional Zernike terms are used to characterize the PSFs. The incremental improvement in MSE is not worth the additional time consumed. The diffraction-limited image is compared with a sample of the simulated turbulent images and a sample reconstructed object in Fig. 7.

Fig. 6

MSE versus number of frames. Case 1, low turbulence, C_n² = 2.25×10⁻¹⁴ m⁻²/³, with 30 and 60 Zernike terms.


Fig. 7

Case 1 sample images. Compares the (a) diffraction-limited image with (b) a single sample image and (c) a sample reconstructed object.


4.1.2. Case 2: moderate condition, C_n² = 3.75×10⁻¹⁴ m⁻²/³

Figure 8 shows (a) the diffraction-limited image, (b) a sample recorded image, and (c) a sample reconstructed object. We see in Fig. 9 that the MFBD estimator consistently performs better on average than the average image error once the processing window reaches two frames, at N=2 and thereafter. The improvement in MSE available by including additional input frames reaches a maximum of approximately 25% of full scale at N=14. Neither the MSE nor the standard deviation improves significantly as additional input images are added to the stack. Based on the results of the Zernike term sweeps discussed above, the estimator was also run using 60 Zernike coefficients to characterize the PSFs; the results are compared in Fig. 9. The use of additional Zernike terms does not incur as large a computational penalty as adding frames, but each reconstruction takes longer, and the incremental improvement in MSE is not worth the additional processing time.

Fig. 8

Case 2 sample images. Compares the (a) diffraction-limited image with (b) a single sample image and (c) a sample reconstructed object.


Fig. 9

MSE versus number of frames. Case 2, moderate turbulence, C_n² = 3.75×10⁻¹⁴ m⁻²/³, with 30 and 60 Zernike terms.


4.1.3. Case 3: severe condition, C_n² = 5.25×10⁻¹⁴ m⁻²/³

Again we see in Fig. 10 that the estimator requires at least two input frames to reliably produce an estimate of the object with a lower MSE than 1165, the average MSE of the simulated images; at N=2 and thereafter, the estimator can be expected to perform better than the average MSE. The improvement in MSE available by including additional input frames reaches a maximum of approximately 36% of full scale at N=14, and neither the MSE nor the standard deviation improves significantly as additional input images are added to the processing stack. Figure 11 shows (a) the diffraction-limited image, (b) a sample recorded image, and (c) a sample reconstructed object. Based on the results of the Zernike term sweeps discussed above, the estimator was also run using 60 Zernike coefficients to characterize the PSFs; the results are compared in Fig. 10. The use of additional Zernike terms does not incur as large a computational penalty as adding frames, but each reconstruction takes longer. As shown, the incremental improvement in MSE is not worth the additional processing time.

Fig. 10

MSE versus number of frames. Case 3, severe turbulence, C_n² = 5.25×10⁻¹⁴ m⁻²/³, with 30 and 60 Zernike terms.


Fig. 11

Case 3 sample images. Compares the (a) diffraction-limited image with (b) a single sample image and (c) a sample reconstructed object.


4.2. Poisson Noise Model, Mean Photon Rate 2×10⁶

Each set of 1000 turbulent images representing the three turbulence cases was used to generate a set of speckle images with a mean photon count per image of 2×10⁶. Each set of images was processed using the MFBD methods described above, using the cost function and gradients described in Eqs. (15)–(17).
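Such photon-limited frames might be generated as in the sketch below, assuming the photon budget is apportioned in proportion to the noiseless turbulent image and that each pixel's count is drawn independently, per the model of Eq. (13); the function name and random generator choice are ours.

```python
# Sketch: draw a photon-limited frame with a mean photon count of 2e6.
import numpy as np

def photon_limited(g, mean_photons=2e6, rng=None):
    """Scale the noiseless image g to the photon budget and Poisson-sample it."""
    rng = np.random.default_rng() if rng is None else rng
    rate = g * (mean_photons / g.sum())   # per-pixel mean rate, Eq. (13)
    return rng.poisson(rate)
```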

4.2.1. Case 1: low condition, C_n² = 2.25×10⁻¹⁴ m⁻²/³

Examining Fig. 12, we see that, on average, MFBD performance is worse than the input images until 50 input frames are used in each reconstruction. At N=50 and thereafter, the estimator can be expected to produce an estimate with an MSE lower than 2095, the average MSE of the images in the simulated data set. Marginal improvement in MSE continues as additional frames are added to the image stack, reaching a maximum of about 38% improvement over the average MSE across the data set. If processing time is not of consequence, the MSE and its standard deviation continue to improve as additional images are added to the stack presented to the estimator, so further improvement in image quality is available. The diffraction-limited image is compared with a sample simulated turbulent image and a sample reconstructed object in Fig. 13.

Fig. 12

MSE versus number of frames. Poisson case 1, low turbulence, C_n² = 2.25×10⁻¹⁴ m⁻²/³, mean photon rate 2×10⁶.


Fig. 13

Case 1 sample images, mean photon rate 2×10⁶, C_n² = 2.25×10⁻¹⁴ m⁻²/³. Compares the (a) diffraction-limited image with (b) a single sample image and (c) a sample reconstructed object.


4.2.2. Case 2: moderate condition, C_n² = 3.75×10⁻¹⁴ m⁻²/³

Figure 14 shows (a) the diffraction-limited image, (b) a sample recorded image, and (c) a sample reconstructed object using 175 frames to estimate the object. Again we see in Fig. 15 that the MFBD estimator does not perform on average any better than 2285, the average simulated turbulent image error, until the number of images processed reaches 50 frames. At N=50 and thereafter, the estimator can be expected to perform better than the average MSE of the simulated images, reaching a maximum improvement of about 34%. The marginal improvement in MSE available by including additional input frames begins to decline at about N=175, and neither the MSE nor the standard deviation improves significantly as additional input images are processed.

Fig. 14

Case 2 sample images, mean photon rate 2×10⁶, C_n² = 3.75×10⁻¹⁴ m⁻²/³. Compares the (a) diffraction-limited image with (b) a single sample image and (c) a sample reconstructed object.


Fig. 15

MSE versus number of frames. Poisson case 2, moderate turbulence, C_n² = 3.75×10⁻¹⁴ m⁻²/³, mean photon rate 2×10⁶.


4.2.3. Case 3: severe condition, C_n² = 5.25×10⁻¹⁴ m⁻²/³

Again we see in Fig. 16 that the MFBD estimator does not perform on average any better than the average image error until the number of images offered to the estimator reaches 50 frames. At N=50 and thereafter, the estimator can be expected to perform better than the average MSE (2456) of the simulated images, reaching a maximum improvement of about 33%. The marginal improvement in MSE available by including additional input frames begins to decline at about N=175, and neither the MSE nor the standard deviation improves significantly from there as additional input images are added to the stack. Figure 17 shows (a) the diffraction-limited image, (b) a sample recorded image, and (c) a sample reconstructed object.

Fig. 16

MSE versus number of frames. Poisson case 3, severe turbulence, C_n² = 5.25×10⁻¹⁴ m⁻²/³, mean photon rate 2×10⁶.


Fig. 17

Case 3 sample images, mean photon rate 2×10⁶, C_n² = 5.25×10⁻¹⁴ m⁻²/³. Compares the (a) diffraction-limited image with (b) a single sample image and (c) a sample reconstructed object.


5. Conclusions

The performance of an unconstrained optimization-based MFBD estimator was evaluated in terms of the MSE between the reconstructed object and a diffraction-limited image. Three 1000-image data sets of a single image distorted by low, moderate, and severe turbulence were generated using a horizontal imaging simulator that includes anisoplanatic effects. The data sets were then applied to the estimator and its MSE performance evaluated. If a hardware implementation were to be produced with a fixed or limited set of operator options, a wide variety of turbulence cases would be well served by a selection of 14 images and 30 polynomial terms for use with the estimator. Point performance estimates, using a data set of 1000 simulated turbulence-corrupted images, indicate that the algorithm is capable of producing 40%, 25%, and 36% improvements in MSE for the low, moderate, and severe anisoplanatic turbulence cases, respectively, under a Gaussian noise model. For all simulated turbulence cases, significant reductions in MSE were observed with as few as two input images. For the Poisson case, significant results were achieved with as few as 50 frames, but 175 frames would be a reasonable design point for a system intended to cope with a variety of atmospheric turbulence and light levels. For further research, it may be possible to speed up the reconstruction by providing a better initial guess at the object. Simulated annealing techniques could be used to perturb the estimate away from a local minimum and may prove to be an effective answer to local minimum trapping.

References

1. R. J. Noll, "Zernike polynomials and atmospheric turbulence," J. Opt. Soc. Am. 66(3), 207–211 (1976). http://dx.doi.org/10.1364/JOSA.66.000207

2. R. A. Gonsalves and R. Chidlaw, "Wavefront sensing by phase retrieval," Proc. SPIE 207, 32–39 (1979). http://dx.doi.org/10.1117/12.958223

3. B. J. Thelen et al., "Maximum a posteriori estimation of fixed aberrations, dynamic aberrations, and the object from phase-diverse speckle data," J. Opt. Soc. Am. A 16(5), 1016–1025 (1999). http://dx.doi.org/10.1364/JOSAA.16.001016

4. B. J. Thelen et al., "Overcoming turbulence-induced space-variant blur by using phase-diverse speckle," J. Opt. Soc. Am. A 26(1), 206–218 (2009). http://dx.doi.org/10.1364/JOSAA.26.000206

5. R. G. Paxman, T. J. Schulz, and J. R. Fienup, "Joint estimation of object and aberrations by using phase diversity," J. Opt. Soc. Am. A 9, 1072–1085 (1992). http://dx.doi.org/10.1364/JOSAA.9.001072

6. D. Fraser, G. Thorpe, and A. Lambert, "Atmospheric turbulence visualization with wide-area motion-blur restoration," J. Opt. Soc. Am. A 16(7), 1751–1758 (1999). http://dx.doi.org/10.1364/JOSAA.16.001751

7. G. R. Ayers and J. C. Dainty, "Iterative blind deconvolution method and its applications," Opt. Lett. 13(7), 547–549 (1988). http://dx.doi.org/10.1364/OL.13.000547

8. T. J. Schulz, "Multiframe blind deconvolution of astronomical images," J. Opt. Soc. Am. A 10(5), 1064–1073 (1993). http://dx.doi.org/10.1364/JOSAA.10.001064

9. M. A. Vorontsov and G. W. Carhart, "Anisoplanatic imaging through turbulent media: image recovery by local information fusion from a set of short-exposure images," J. Opt. Soc. Am. A 18(6), 1312–1324 (2001). http://dx.doi.org/10.1364/JOSAA.18.001312

10. J. P. Bos and M. C. Roggemann, "Mean squared error performance of speckle-imaging using the bispectrum in horizontal imaging applications," Proc. SPIE 8056, 805603 (2011). http://dx.doi.org/10.1117/12.884093

11. J. P. Bos and M. C. Roggemann, "Technique for simulating anisoplanatic image formation over long horizontal paths," Opt. Eng. 51(10), 101704 (2012). http://dx.doi.org/10.1117/1.OE.51.10.101704

12. C. J. Carrano, "Speckle imaging over horizontal paths," Proc. SPIE 4825, 109–120 (2002). http://dx.doi.org/10.1117/12.453519

13. J. W. Goodman, Introduction to Fourier Optics, McGraw-Hill, Boston, Massachusetts (1996).

14. D. L. Fried, "Limiting resolution looking down through the atmosphere," J. Opt. Soc. Am. 56(10), 1380–1384 (1966). http://dx.doi.org/10.1364/JOSA.56.001380

15. J. C. Wyngaard et al., "Behavior of the refractive-index-structure parameter near the ground," J. Opt. Soc. Am. 61(12), 1646–1650 (1971). http://dx.doi.org/10.1364/JOSA.61.001646

16. F. J. Harris, "On the use of windows for harmonic analysis with the discrete Fourier transform," Proc. IEEE 66(1), 51–83 (1978). http://dx.doi.org/10.1109/PROC.1978.10837

17. G. E. Archer, J. P. Bos, and M. C. Roggemann, "Mean squared error performance of MFBD nonlinear scene reconstruction using speckle imaging in horizontal imaging applications," Proc. SPIE 8399, 83990Q (2012). http://dx.doi.org/10.1117/12.920801

Biography


Glen E. Archer is pursuing a PhD in electrical engineering from Michigan Technological University. He received his BS in electrical engineering from Texas Tech University in 1986 and received a commission in the United States Air Force. In 2001, he retired from the Air Force to take a position in the Electrical and Computer Engineering Department at Michigan Tech, where he currently serves as a principal lecturer and the associate chair. His research interests include image processing and engineering education.


Jeremy P. Bos received his PhD in electrical and computer engineering from Michigan Technological University in August 2012. Before returning to pursue his doctorate at Michigan Tech, he worked in the automotive and defense industries for nearly 10 years. During this time, he became a professional engineer (2006) and earned his MS in electrical engineering from Villanova University (2003). He received his BS in electrical engineering from Michigan Technological University in 2000. He is currently a postdoctoral fellow at the Air Force Research Laboratory under the National Research Council’s Research Associateship Program. His research interests are in the areas of atmospheric optics, image processing and machine intelligence.


Michael C. Roggemann is a professor of electrical engineering at Michigan Tech. He is coauthor of the book “Imaging Through Turbulence” and has authored or coauthored over 60 journal articles and over 50 conference papers. He is a member of the IEEE, and is a fellow of both the Optical Society of America and SPIE. He was also briefly affiliated with the Boeing Corporation, where he served as a senior research scientist from 2002 to 2005; he was also a technical fellow at Boeing. He was an associate professor of engineering physics at the Air Force Institute of Technology, Wright-Patterson AFB, in Ohio from 1992 to 1997. He is an honorably retired Air Force Officer at the rank of major.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Glen Archer, Jeremy P. Bos, and Michael C. Roggemann "Reconstruction of long horizontal-path images under anisoplanatic conditions using multiframe blind deconvolution," Optical Engineering 52(8), 083108 (23 August 2013). https://doi.org/10.1117/1.OE.52.8.083108
Keywords: Turbulence; Image processing; Point spread functions; Imaging systems; Photons; Deconvolution; Diffraction
