## 1. Introduction

A refinement of the Static Scene Statistical Nonuniformity Correction (S3NUC) method and a method of leveraging the nonuniformity correction for radiometric calibration are developed and tested in this research. The refinement of S3NUC to the Statistically Applied Nonuniformity Correction (SANUC) algorithm drastically decreases the processing time requirements for low photocount NUC. Simultaneously, the algorithmic refinement allows radiometric quantification of the data in terms of photoelectron count without substantial additional computational burden.

The S3NUC method computes the gain and bias of CCD pixels based on linear gain and bias assumptions. While it is known that photodetectors do not exhibit a strictly linear response,^{1} there are many examples of successfully modeling them as linear, such as Refs. 2–4, when input signals are held over a small enough range, and these assumptions are retained in the refined SANUC method. The innovative feature of the S3NUC algorithm is the employment of higher-order statistical moments to negate the requirement for uniform calibrated targets or statistically rich datasets. Instead, it requires two sufficiently static datasets of the same scene at different integration times.^{5} However, as originally implemented, the S3NUC algorithm can require thousands of frames to produce sufficiently accurate gain and bias estimates, due explicitly to the need to produce accurate estimates of higher-order moments. It is, therefore, desirable to improve the S3NUC algorithm’s accuracy and calculation speed while reducing its data requirements. To accomplish this, the method is modified to preclude the need for higher-order statistical moments, which in turn reduces the frame counts needed to achieve satisfactory calibration. Other techniques have been introduced for achieving nonuniformity correction,^{2–4} but these methods do not allow absolute radiometry to be achieved without the addition of a calibrated light source. They allow the pixel-to-pixel differences in photodetector response to be removed but do not provide an estimate of the gain of the system that allows the true number of photons to be determined from the detector measurement. This research is concerned with demonstrating absolute radiometry without the use of calibrated light sources. For this reason, the nonuniformity correction capability of the technique is not investigated or compared to other nonuniformity correction algorithms that do not achieve absolute radiometry without a calibrated light source.

## 2. S3NUC Method

The S3NUC method models the data reported by a photodetector as a linear combination of random processes. The method requires two different, statistically independent datasets, ${D}_{1}$ and ${D}_{2}$, encoded by the photodetector according to the following equation:

## Eq. (1)

$${D}_{1}(i,j,k)=G(i,j){K}_{1}(i,j,k)+B(i,j)+{n}_{1}(i,j,k)\phantom{\rule{0ex}{0ex}}{D}_{2}(i,j,k)=G(i,j)({K}_{2}(i,j,k)+\mathrm{\Delta}K(i,j,k))+B(i,j)+{n}_{2}(i,j,k).$$

In this equation, the photon inputs ${K}_{1}$, ${K}_{2}$, and $\mathrm{\Delta}K$ are multiplied by a gain $G$, with an additive bias, $B$, and readout noises, ${n}_{1}$ and ${n}_{2}$, on a per-pixel ($i$, $j$) and per-frame ($k$) basis. The variable $\mathrm{\Delta}K$ represents an unknown difference in the photocount input between the two sets of data ${K}_{1}$ and ${K}_{2}$. The data from each pixel are considered random variables and are assumed to be independent of the data in all other pixels in the array. The initially unknown $G$ and $B$ values are considered to be constant in time, while ${K}_{1}$, ${K}_{2}$, and $\mathrm{\Delta}K$ are assumed to be statistically independent Poisson random variables, hence the need to differentiate ${K}_{1}$ and ${K}_{2}$. Similarly, the zero-mean additive white Gaussian noise (AWGN) terms ${n}_{1}$ and ${n}_{2}$ are assumed to be statistically independent. These definitions of the underlying model terms maintain the statistical independence of the datasets ${D}_{1}$ and ${D}_{2}$.
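As a concrete illustration, the measurement model of Eq. (1) can be simulated directly. The following sketch realizes per-pixel Poisson photocounts and zero-mean Gaussian readout noise under fixed gain and bias maps; the array size and photocount levels are illustrative assumptions, not values from the text:

```python
import numpy as np

def simulate_dataset(G, B, K_mean, sigma_n, frames, rng):
    """Realize Eq. (1): D = G*K + B + n, with Poisson photocounts K and
    zero-mean Gaussian readout noise n, per pixel and per frame."""
    size = (frames,) + G.shape
    K = rng.poisson(K_mean, size)          # Poisson photon input
    n = rng.normal(0.0, sigma_n, size)     # zero-mean AWGN readout noise
    return G * K + B + n

# Illustrative parameters (array size and photocount level are assumptions)
rng = np.random.default_rng(0)
G = rng.normal(1.0, 0.05, (4, 4))   # per-pixel gain map, constant in time
B = rng.normal(100.0, 1.0, (4, 4))  # per-pixel bias map, constant in time
D1 = simulate_dataset(G, B, 50.0, 1.0, 1000, rng)
D2 = simulate_dataset(G, B, 100.0, 1.0, 1000, rng)  # Delta-K-bar equal to K-bar
```

Each dataset is an independent realization, so `D1` and `D2` remain statistically independent, as the model requires.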

The moments of the functions in Eq. (1) are computed to generate a system of equations for solution. The AWGN terms ${n}_{1}$ and ${n}_{2}$ are zero mean, and the means of the first and second datasets, ${\overline{D}}_{1}(i,j)$ and ${\overline{D}}_{2}(i,j)$, the variances, ${\sigma}_{{D}_{1}}^{2}(i,j)$ and ${\sigma}_{{D}_{2}}^{2}(i,j)$, and the third moments, ${\gamma}_{{D}_{1}}(i,j)$ and ${\gamma}_{{D}_{2}}(i,j)$, respectively, are calculated using the Poisson moment theorem.^{6} This theorem allows all higher-order moments of a Poisson random variable to be calculated solely from its mean. The separate Poisson random variables ${K}_{1}$ and ${K}_{2}$ are constrained to share a common mean value $\overline{K}$, producing Eqs. (2)–(7) via appropriate algebraic substitutions

## Eq. (2)

$${\overline{D}}_{1}(i,j)=G(i,j)\overline{K}(i,j)+B(i,j),$$

## Eq. (3)

$${\overline{D}}_{2}(i,j)=G(i,j)[\overline{K}(i,j)+\mathrm{\Delta}\overline{K}(i,j)]+B(i,j),$$

## Eq. (4)

$${\sigma}_{{D}_{1}}^{2}(i,j)=E\{{[{D}_{1}(i,j)-{\overline{D}}_{1}(i,j)]}^{2}\}={G}^{2}(i,j)\overline{K}(i,j)+{\sigma}_{n}^{2},$$

## Eq. (5)

$${\sigma}_{{D}_{2}}^{2}(i,j)={G}^{2}(i,j)[\overline{K}(i,j)+\mathrm{\Delta}\overline{K}(i,j)]+{\sigma}_{n}^{2},$$

## Eq. (6)

$${\gamma}_{{D}_{1}}(i,j)=\frac{E\{{[{D}_{1}(i,j)-{\overline{D}}_{1}(i,j)]}^{3}\}}{{\sigma}_{{D}_{1}}^{3}(i,j)}=\frac{{G}^{3}(i,j)\overline{K}(i,j)}{{\sigma}_{{D}_{1}}^{3}(i,j)},$$

## Eq. (7)

$${\gamma}_{{D}_{2}}(i,j)=\frac{{G}^{3}(i,j)[\overline{K}(i,j)+\mathrm{\Delta}\overline{K}(i,j)]}{{\sigma}_{{D}_{2}}^{3}(i,j)}.$$

The result is a system of six equations in five unknowns, instead of three equations in four unknowns.^{5} The overdetermined system is then algebraically reduced to the set of equations shown in Eqs. (8)–(12), which may be solved for the desired system parameters $G$ and $B$ and the intermediate residual values $\overline{K}$, $\mathrm{\Delta}\overline{K}$, and ${\sigma}_{n}^{2}$

## Eq. (8)

$$G(i,j)=\frac{{\sigma}_{{D}_{2}}^{2}(i,j)-{\sigma}_{{D}_{1}}^{2}(i,j)}{{\overline{D}}_{2}(i,j)-{\overline{D}}_{1}(i,j)},$$

## Eq. (9)

$$\overline{K}(i,j)=\frac{{\gamma}_{{D}_{1}}(i,j){\sigma}_{{D}_{1}}^{3}(i,j)}{{G}^{3}(i,j)},$$

## Eq. (10)

$$B(i,j)={\overline{D}}_{1}(i,j)-G(i,j)\overline{K}(i,j),$$

## Eq. (11)

$$\mathrm{\Delta}\overline{K}(i,j)=\frac{{\overline{D}}_{2}(i,j)-{\overline{D}}_{1}(i,j)}{G(i,j)},$$

## Eq. (12)

$${\sigma}_{n}^{2}={\sigma}_{{D}_{1}}^{2}(i,j)-{G}^{2}(i,j)\overline{K}(i,j).$$

## 3. SANUC Method

The SANUC algorithm represents a special case and specific modification of the S3NUC approach. If $\mathrm{\Delta}\overline{K}(i,j)$ is selected such that the change in the average number of photons between the datasets is equal to $\overline{K}(i,j)$, the average number of photons expected in the first dataset, then the moments reported in Eqs. (2)–(5) reduce to the results shown in Eqs. (13)–(16). In practice, this can be readily accomplished by doubling the integration time of the sensor being used to gather the data

## Eq. (13)

$${\overline{D}}_{1}(i,j)=G(i,j)\overline{K}(i,j)+B(i,j),$$

## Eq. (14)

$${\overline{D}}_{2}(i,j)=2G(i,j)\overline{K}(i,j)+B(i,j),$$

## Eq. (15)

$${\sigma}_{{D}_{1}}^{2}(i,j)=E\{{[{D}_{1}(i,j)-{\overline{D}}_{1}(i,j)]}^{2}\}={G}^{2}(i,j)\overline{K}(i,j)+{\sigma}_{n}^{2},$$

## Eq. (16)

$${\sigma}_{{D}_{2}}^{2}(i,j)=2{G}^{2}(i,j)\overline{K}(i,j)+{\sigma}_{n}^{2}.$$

This modification, implemented by controlling the measurement parameters, reduces the overdetermined S3NUC system of equations, which included an explicit $\mathrm{\Delta}\overline{K}$ term, to a fully determined system of equations in only four unknowns.

The system is then algebraically reduced to the set of equations shown in Eqs. (17)–(20) and may be solved for the desired system parameters $G$ and $B$ and the intermediate residual values $\overline{K}$ and ${\sigma}_{n}^{2}$. In Eq. (17), the solution for the gain, $G$, is identical to that produced by the S3NUC approach

## Eq. (17)

$$G(i,j)=\frac{{\sigma}_{{D}_{2}}^{2}(i,j)-{\sigma}_{{D}_{1}}^{2}(i,j)}{{\overline{D}}_{2}(i,j)-{\overline{D}}_{1}(i,j)}.$$

The solution for $\overline{K}$ is shown in Eq. (18) and differs from the S3NUC solution in that it depends only on first moments of the measured data rather than on the skewness of the data. This significantly reduces the uncertainty, since estimates of first moments are always more reliable than sample estimates of third moments, and in practice drastically reduces the number of data frames necessary to achieve an accurate estimate of the bias, $B$

## Eq. (18)

$$\overline{K}(i,j)=\frac{{\overline{D}}_{2}(i,j)-{\overline{D}}_{1}(i,j)}{G(i,j)},$$

The calculation of the bias, $B$, shown in Eq. (19), is carried out in the same way as in the S3NUC solution. However, since the estimate of $\overline{K}$ is more robust in the SANUC method, the estimate of $B$ is also more accurate than the corresponding S3NUC estimate for lower numbers of data frames

## Eq. (19)

$$B(i,j)={\overline{D}}_{1}(i,j)-G(i,j)\overline{K}(i,j),$$

Finally, Eq. (20) shows that the readout noise variance is again computed in the same way as in the S3NUC approach, but since the more reliable estimate of $\overline{K}$ is utilized, the SANUC solution has significantly lower variance for lower numbers of data frames

## Eq. (20)

$${\sigma}_{n}^{2}={\sigma}_{{D}_{1}}^{2}(i,j)-{G}^{2}(i,j)\overline{K}(i,j).$$
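Taken together, the SANUC recovery chain — gain from the variance and mean differences, then $\overline{K}$, $B$, and ${\sigma}_{n}^{2}$ from the sample moments — can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation; the array size, photocount level, and frame count are assumptions:

```python
import numpy as np

def sanuc_estimate(D1, D2):
    """Estimate per-pixel gain, mean photocount, bias, and readout-noise
    variance from two frame stacks taken with a doubled integration time
    (frame axis 0)."""
    m1, m2 = D1.mean(axis=0), D2.mean(axis=0)
    v1, v2 = D1.var(axis=0, ddof=1), D2.var(axis=0, ddof=1)
    G = (v2 - v1) / (m2 - m1)      # gain from moment differences
    K = (m2 - m1) / G              # mean photocount from first moments only
    B = m1 - G * K                 # bias
    sigma_n2 = v1 - G ** 2 * K     # readout-noise variance
    return G, K, B, sigma_n2

# Synthetic check: build data per Eq. (1) with the doubled-integration condition
rng = np.random.default_rng(1)
shape = (8, 8)
G_true = rng.normal(1.0, 0.05, shape)
B_true = rng.normal(100.0, 1.0, shape)
Kbar, frames = 200.0, 5000
D1 = (G_true * rng.poisson(Kbar, (frames,) + shape)
      + B_true + rng.normal(0.0, 1.0, (frames,) + shape))
D2 = (G_true * rng.poisson(2.0 * Kbar, (frames,) + shape)
      + B_true + rng.normal(0.0, 1.0, (frames,) + shape))
G_est, K_est, B_est, sn2_est = sanuc_estimate(D1, D2)
```

Note that no third-moment (skewness) estimate appears anywhere in the chain, which is the source of the method's reduced frame-count requirement.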

The SANUC method is first validated by directly comparing estimates of the unknown gain, $G$, and bias, $B$, to known simulated data. To generate each of an arbitrary number of $100\times 100$ pixel frames of data, a gain, $G$, is set for each pixel based on a Gaussian distribution with mean 1 and standard deviation 0.05. Similarly, a bias, $B$, is set for each pixel based on a Gaussian distribution with mean 100 and standard deviation 1. A fixed $\mathrm{\Delta}\overline{K}$ equal to $\overline{K}$ is set for all pixels and frames, based on the requirements of Eq. (14). The $\overline{K}$ parameter is used to generate the photonic input in each pixel, in each frame, $K$, by realizing a Poisson random variable with $\overline{K}$ as the mean. The additive noise terms in each pixel, ${n}_{\mathrm{1,2}}$, are drawn independently for each frame from a Gaussian distribution with zero mean, unity variance, and an amplitude weighting of 10. This test simulation is looped over a varying number of frames, from 1 to 500 in increments of 10, to assess the impact of frame count on the gain and bias estimates.
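A sketch of this frame-count sweep follows. Because the SANUC bias solution reduces to $B=2{\overline{D}}_{1}-{\overline{D}}_{2}$ under the doubled-integration-time condition, only first moments are needed; the photocount level $\overline{K}=100$ is an assumption, since the text does not state the value used:

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (100, 100)
G = rng.normal(1.0, 0.05, shape)    # gain map: mean 1, standard deviation 0.05
B = rng.normal(100.0, 1.0, shape)   # bias map: mean 100, standard deviation 1
Kbar = 100.0                        # assumed photocount level (not stated in the text)

def mean_abs_bias_error(frames):
    """Mean absolute SANUC bias error over the array for a given frame count."""
    size = (frames,) + shape
    D1 = G * rng.poisson(Kbar, size) + B + 10.0 * rng.normal(0.0, 1.0, size)
    D2 = G * rng.poisson(2.0 * Kbar, size) + B + 10.0 * rng.normal(0.0, 1.0, size)
    # With doubled integration time, the bias reduces to B = 2*D1bar - D2bar
    B_est = 2.0 * D1.mean(axis=0) - D2.mean(axis=0)
    return np.abs(B_est - B).mean()

errors = {f: mean_abs_bias_error(f) for f in (10, 100, 500)}
```

The error falls off roughly as the inverse square root of the frame count, consistent with an estimator built on sample means.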

In Figs. 1 and 2, data points are spaced every 10 frames, with error bars showing one standard deviation. The results shown in Fig. 1 clearly show that the error in $G$ follows the same curve in both S3NUC and SANUC. This is as expected because the equation to recover the gain is the same in both algorithms.

In Fig. 2, the error of the bias between S3NUC and SANUC shows a stark difference in results. S3NUC demonstrates much higher error than SANUC at low frame count numbers. SANUC also demonstrates comparatively flat error standard deviation. The higher error in S3NUC is attributed to the cascading error in the bias estimate.

Figure 3 shows a similar trend to Fig. 2 in the differences between S3NUC and SANUC. As with the bias, the S3NUC method relies on the recovered gain to calculate the readout noise variance, while SANUC still only uses the statistics of the data itself, negating a cascading error.

It is readily concluded that the SANUC method substantially outperforms the original S3NUC method for low frame counts, producing higher accuracy results with drastically lower frame count requirements. Additionally, the improved reliability of the higher-order variables makes further data exploitation tractable.

## 4. Demonstration of the Radiometric Accuracy of the SANUC Algorithm

The intermediate outputs $\overline{K}$ and $\mathrm{\Delta}\overline{K}$ from S3NUC, Eqs. (9) and (11), and $\overline{K}$ from SANUC, Eq. (18), represent estimates of the number of photons received by a photodetector. In both cases, the accuracy of this estimate is primarily dependent upon the accuracy of the gain estimate. However, due to the overall advantages of SANUC for nonuniformity correction, the radiometric accuracy of the $\overline{K}$ estimate is only assessed against the output of a known light source for the SANUC method. Figure 4 shows the simple imaging system, observing a light source with a known power output, which was used to assess the radiometric accuracy of this method.

Using well-established radiometry techniques,^{7} the expected number of photons received by the sensor from a well-characterized light source and optical train can be calculated analytically as

## Eq. (21)

$$E[K]=\frac{{I}_{p}{r}^{2}{P}_{t}\mathrm{\Delta}t}{{I}_{\mathrm{avg}}h\nu {[\mathrm{tan}({\theta}_{t})R]}^{2}},$$

where ${I}_{p}/{I}_{\mathrm{avg}}$ is the ratio of the peak to the average beam intensity, $r$ is the radius of the aperture, ${P}_{t}$ is the total optical power of the source, $\mathrm{\Delta}t$ is the integration time, $h\nu$ is the photon energy, ${\theta}_{t}$ is the viewing half-angle of the source, and $R$ is the distance from the source to the aperture.

## Table 1

Technical specifications for the Thor Labs LED555L diode.^{8}

Specification | Min | Typical | Max |
---|---|---|---|
Forward voltage at 50 mA (V) | — | 3.5 | 4 |
Continuous operating current (mA) | — | 20 | 30 |
Optical output power at 50 mA (mW) | — | 1 | — |
Viewing half-angle (deg) | — | 20 | — |
Peak wavelength (nm) | 545 | 555 | 565 |
Bandwidth FWHM (nm) | — | 40 | — |

The diode does not produce a uniform illumination pattern. Figure 5 shows the normalized illumination pattern as a function of viewing angle measured by the manufacturer. Numerically integrating this pattern over the $x$-axis yields a value of 0.5, making the average illumination one-half the maximum. Since Eq. (21) computes the number of photons expected for a source that would equally distribute its photons over the range from $-20\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{deg}$ to 20 deg, the theoretical number of photons shown in Table 2 is adjusted upward by a factor of 2, because the $200\text{-}\mu \mathrm{m}$ aperture is placed directly at the center of the maximum intensity in the experiment. The aperture itself has a viewing angle of 0.15 deg in the region around the center of the diode light beam. This implies that over the entire aperture, twice as many photons should be measured as what would be calculated by Eq. (21).
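Substituting the nominal experimental parameters into Eq. (21), with the factor of 2 for the peak-to-average intensity ratio, gives a photon count consistent with the theoretical value reported in Table 2 (small differences arise from rounding of the constants used). A sketch of the calculation:

```python
import math

# Nominal experimental parameters (Table 1, Table 3, and the text)
h = 6.62607015e-34            # Planck constant (J s)
c = 2.99792458e8              # speed of light (m/s)
wavelength = 555e-9           # peak wavelength (m)
P_t = 1.0e-3                  # diode optical power (W)
dt = 100.0e-3                 # integration time (s)
r = 100.0e-6                  # aperture radius: 200-um-diameter pinhole (m)
theta_t = math.radians(20.0)  # viewing half-angle (rad)
R = 4.33                      # diode-to-aperture distance (m)
ratio = 2.0                   # I_p / I_avg, peak-to-average intensity ratio

photon_energy = h * c / wavelength  # h*nu (J)
# Eq. (21) with the peak-to-average ratio applied, since the aperture sits
# at the center of the beam where the intensity is at its maximum
E_K = ratio * r ** 2 * P_t * dt / (photon_energy * (math.tan(theta_t) * R) ** 2)
print(f"expected photons: {E_K:.4e}")
```

The result lands near the $2.24\times {10}^{6}$ theoretical photon count of Table 2.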

## Table 2

SANUC estimates and radiometry results for fully bright LED.

Quantity | Value |
---|---|
Background gain estimate | 661.8228 |
Background bias estimate | 42.46 |
Spot gain estimate | 83.5506 |
Spot bias estimate | 110.56 |
Theoretical photons | $2.2406\times {10}^{6}$ |
Theoretical lower bound | $2.0234\times {10}^{6}$ |
Theoretical upper bound | $2.4745\times {10}^{6}$ |
SANUC estimated photons | $2.1844\times {10}^{6}$ |
Absolute error | 2.7% |

Images of the source are captured by an AVT Stingray F-504B camera through a $200\text{-}\mu \mathrm{m}$-diameter pinhole and an 18-cm focal-length lens, with the source located 4.33 m away. This camera features a monochrome CCD sensor with a $2452\text{\hspace{0.17em}}(h)\times 2056\text{\hspace{0.17em}}(v)$ image-output resolution and pixel dimensions on the CCD sensor measuring $3.45\text{\hspace{0.17em}}\mu \mathrm{m}\times 3.45\text{\hspace{0.17em}}\mu \mathrm{m}$.^{9} The image of the pinhole illuminates an area $\sim 200$ pixels in diameter, which provides a sufficient set of independent pixel measurements for statistical analysis. The base exposure time was set to 100 ms and the software-driven camera gain to 20 dB.

Both the output of the radiometry equation using the source specifications and the magnitudes reported in the images are in units of photoelectrons, and the radiometry equation accounts for all of the photons from the light source that enter the aperture. After the image is corrected by removing the gain and bias via the general SANUC method, the corrected image in $\overline{K}$ is summed to collect the entire photoelectron count. The total photoelectron count is then divided by the quantum efficiency of the detector to convert the number of photoelectrons into photons.

Table 2 details the results of using SANUC for a radiometry estimate on the average of 100 frames of the fully bright LED light source. The experimental results agree with the theoretical radiometry calculation, with only 2.7% absolute error from the nominal computed value for theoretical photons shown in Table 2. Table 2 also shows upper and lower bounds for the theoretical photon value. These bounds are computed by substituting the extreme parameter values shown in Table 3 into Eq. (21). The extreme values for $R$ account for up to 5 mm of error in using the measuring tape. The extreme values for the diode wavelength are obtained from the manufacturer. The extreme values for the diode optical power are obtained from the estimated accuracy of the current meter used to measure the diode current, since the diode optical power is expected to be linearly related to the diode current. The extreme values for the ratio of peak to average beam intensity are obtained from a Riemann sum over the measured beam intensity plot shown in Fig. 5, sampled in increments of 5 deg. The mean value over each subinterval was used to compute the average irradiance in the pattern, and the peak of the pattern was then divided by that average to obtain the nominal value of the ratio. The maximum values on each subinterval were used to compute the minimum value of the ratio, and the minimum values on each subinterval were used to compute the maximum of the ratio. The upper and lower bounds on the integration time were estimated from the number of digits available for controlling the integration time in the camera software.
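The correction-and-summation procedure described above can be sketched as follows; the quantum efficiency value is a placeholder assumption, since the camera's actual QE is not quoted in the text:

```python
import numpy as np

def estimate_photons(frames, G, B, quantum_efficiency):
    """Remove gain and bias from each frame, average to get the corrected
    image in photoelectrons, sum it, and convert to photons via the QE."""
    K_map = ((frames - B) / G).mean(axis=0)   # corrected image in photoelectrons
    return K_map.sum() / quantum_efficiency

# Synthetic check (readout noise omitted for simplicity); QE of 0.5 is a
# placeholder, and the array size and photon level are illustrative
rng = np.random.default_rng(3)
G = rng.normal(1.0, 0.05, (32, 32))
B = rng.normal(100.0, 1.0, (32, 32))
qe = 0.5
photons_per_pixel = 400.0
photoelectrons = rng.poisson(photons_per_pixel * qe, (100, 32, 32))
raw_frames = G * photoelectrons + B
est = estimate_photons(raw_frames, G, B, qe)
```

With the true gain and bias maps supplied, the recovered total is the known photon input to within Poisson counting noise.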

## Table 3

Parameter values used to compute upper and lower bounds for photons in Eq. (21).

Parameter | Value used in lower bound calculation | Value used in upper bound calculation |
---|---|---|
$R$ (diode to camera aperture distance) (m) | 4.335 | 4.325 |
Diode wavelength (nm) | 545 | 565 |
Diode optical power (mW) | 0.95 | 1.05 |
${I}_{p}/{I}_{\mathrm{avg}}$ (ratio of peak radiance to average) | 1.96 | 2.041 |
Integration time (ms) | 99.9 | 100.1 |

## 5. Conclusions

The S3NUC method was refined into the SANUC method, drastically reducing the error in the bias and readout noise variance estimates, as demonstrated on simulated data in Figs. 2 and 3. The refined SANUC method exhibits orders of magnitude better performance than the original S3NUC method for calibrations taken with $\sim 100$ frames of data, improving its suitability for *in-situ* operations and radiometric calibration. The radiometric estimates of the number of photons collected from the diode by the camera in the described experiment are in good agreement, with $<2.7\%$ absolute error, with those predicted by radiometry, as shown in Table 2. The accuracy of the theoretical photocount is bounded by the upper and lower photon calculations, and the number of photons estimated by the SANUC algorithm falls within these bounds, showing that the method produces a photocount estimate consistent with theoretical predictions. This result confirms that with as few as 100 frames of data, an accurate estimate of the number of photons from an optical source can be attained.

## References

“Nonuniformity correction of infrared image sequences using the constant-statistics constraint,” IEEE Trans. Image Process., 8 (8), 1148–1151 (1999). http://dx.doi.org/10.1109/83.777098

“Statistical algorithm for nonuniformity correction in focal-plane arrays,” Appl. Opt., 38 (5), 772–780 (1999). http://dx.doi.org/10.1364/AO.38.000772

“Scene-based nonuniformity correction with video sequences and registration,” Appl. Opt., 39 (8), 1241–1250 (2000). http://dx.doi.org/10.1364/AO.39.001241

“Static scene statistical algorithm for nonuniformity correction in focal-plane arrays,” Opt. Eng., 54 (10), 104111 (2015). http://dx.doi.org/10.1117/1.OE.54.10.104111