Calibration and performance assessment of microgrid polarization cameras
Nathan A. Hagen, Shuhei Shibata, and Yukitoshi Otani
Open Access | Published 23 February 2019
Abstract
We provide a method for calibrating microgrid polarization cameras that is simpler and easier to set up than existing methods. Applying this method to three different commercially available cameras, we compare the mean values and variances in their diattenuation and orientation properties. We derive formulas giving the accuracy with which the pixel polarization properties can be calibrated in both the Gaussian and Poisson noise regimes and demonstrate the statistical instability of the extinction ratio as a parameter. In a series of calibration measurements, we estimate the pixel-to-pixel variation of polarization properties and show how to separate the effects of temporal noise from manufacturing variation.

1.

Introduction

Polarization cameras are division-of-focal-plane imaging polarimeters1 that use an array of micropolarizer filters aligned to the detector array pixels, typically with the micropolarizers oriented at angles 0 deg, 45 deg, 90 deg, and 135 deg. We provide a polarization camera calibration approach that is simpler than existing methods, and which does not require a motorized rotation stage or the use of highly uniform flat-field illumination, such as that produced by an integrating sphere.2–7 Taking advantage of this calibration approach, we show how to separate the effects of temporal noise from manufacturing variation when measuring the camera—a separation that is essential if we wish to fairly compare one camera’s performance to another. As more polarization cameras become commercially available, it becomes increasingly important to have a practical and unbiased method for evaluating and comparing these cameras.

Polarization cameras have conventionally been assembled by manufacturing the detector array and the micropolarizer array separately, then aligning them to one another and fixing them in place. As a result, there is a small vertical displacement between the polarizer layer and the detection layer that allows for cross talk between neighboring pixels when the light is incident from nonzero angles of incidence (see Fig. 1).4 More recently, attempts have been made to manufacture the sensor and polarization filters together as part of an integrated process to minimize cross talk and improve alignment.8 From a user’s standpoint, reduced cross talk here will appear as an increase in the diattenuation value of the individual pixels.

Fig. 1

The detection layer of a polarization camera, with micropolarizers (a) attached above the sensor layer and (b) integrated into the sensor layer. Optical rays shown in blue are cross talk rays from one pixel to its neighbor.


Micropolarizers use finely spaced wire grid patterns that can be difficult to manufacture, so that micropolarizer filters have historically had trouble achieving the polarization purity that monolithic polarization devices readily achieve. Although monolithic polarizers can readily achieve extinction ratios of better than 10³, or even 10⁶, micropolarizers have generally been limited to extinction ratios below about 30.2,4 Moreover, previous reports in the literature have indicated that micropolarizer orientation accuracy can vary by as much as 0.5 deg from pixel to pixel, whereas mounts for monolithic polarizers can readily achieve angular orientation accuracy of better than 0.01 deg.4,9

After introducing our approach for calibrating microgrid polarization cameras, we show measurements obtained from three commercial polarization cameras. Using parameter variance formulas derived from a measurement model, we show how one can separate the effects of measurement noise from manufacturing variations, allowing for quantitative performance assessment and fair comparison among cameras. This procedure is demonstrated on three commercially available polarization cameras, showing the behavior of not just a select number of pixels in the center region of the image, but the entire set of pixels across the array.

2.

Calibration Procedure

A nonideal linear polarizer is a diattenuator defined by its maximum and minimum transmission values q and r as the diattenuator is rotated with respect to incident linearly polarized light. Thus, while an ideal polarizer achieves 100% transmission and 0% transmission at 0-deg and 90-deg orientation angles, the nonideal diattenuator achieves q and r. The Mueller matrix for a diattenuator element with its q axis oriented at angle α can be written as M_ld(q, r, α), as given in Eq. (1), where the diattenuation is D = (q − r)/(q + r) and A = (q + r)/2 is the mean transmission. For a given diattenuator, it is possible to achieve an ideal diattenuation of D = 1 even when the peak transmission of the polarizer is very low (A ≪ 1). The parameter A can thus be considered the “efficiency” of the polarizing element. The extinction ratio X is derived from the diattenuation as X = q/r = (1 + D)/(1 − D).

Eq. (1)

$$
M_{\rm ld}(q,r,\alpha) = \frac{1}{2}
\begin{pmatrix}
q+r & (q-r)\cos 2\alpha & (q-r)\sin 2\alpha & 0 \\
(q-r)\cos 2\alpha & (q+r)\cos^2 2\alpha + 2\sqrt{qr}\,\sin^2 2\alpha & \tfrac{1}{2}\bigl(q+r-2\sqrt{qr}\bigr)\sin 4\alpha & 0 \\
(q-r)\sin 2\alpha & \tfrac{1}{2}\bigl(q+r-2\sqrt{qr}\bigr)\sin 4\alpha & 2\sqrt{qr}\,\cos^2 2\alpha + (q+r)\sin^2 2\alpha & 0 \\
0 & 0 & 0 & 2\sqrt{qr}
\end{pmatrix}
= A
\begin{pmatrix}
1 & D\cos 2\alpha & D\sin 2\alpha & 0 \\
D\cos 2\alpha & 1-\bigl(1-\sqrt{1-D^2}\bigr)\sin^2 2\alpha & \tfrac{1}{2}\bigl(1-\sqrt{1-D^2}\bigr)\sin 4\alpha & 0 \\
D\sin 2\alpha & \tfrac{1}{2}\bigl(1-\sqrt{1-D^2}\bigr)\sin 4\alpha & 1-\bigl(1-\sqrt{1-D^2}\bigr)\cos^2 2\alpha & 0 \\
0 & 0 & 0 & \sqrt{1-D^2}
\end{pmatrix}.
$$
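As a concrete illustration of Eq. (1) and the associated scalar quantities, the following minimal Python sketch (our own, using NumPy and hypothetical transmittance values) builds the diattenuator Mueller matrix and computes A, D, and X from q and r:

```python
import numpy as np

def diattenuator_mueller(q, r, alpha):
    """Mueller matrix of a nonideal linear diattenuator, Eq. (1).

    q, r  : maximum and minimum intensity transmittances
    alpha : orientation of the q (transmission) axis, in radians
    """
    c2, s2, s4 = np.cos(2 * alpha), np.sin(2 * alpha), np.sin(4 * alpha)
    rt = np.sqrt(q * r)
    return 0.5 * np.array([
        [q + r,        (q - r) * c2,                      (q - r) * s2,                      0.0],
        [(q - r) * c2, (q + r) * c2**2 + 2 * rt * s2**2,  0.5 * (q + r - 2 * rt) * s4,       0.0],
        [(q - r) * s2, 0.5 * (q + r - 2 * rt) * s4,       2 * rt * c2**2 + (q + r) * s2**2,  0.0],
        [0.0,          0.0,                               0.0,                               2 * rt]])

# Scalar properties of the same element (hypothetical transmittance values)
q, r = 0.80, 0.02
A = 0.5 * (q + r)            # mean transmission, the "efficiency"
D = (q - r) / (q + r)        # diattenuation
X = q / r                    # extinction ratio, equal to (1 + D) / (1 - D)
```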

To measure the diattenuation properties of a polarization camera, we generate linearly polarized light sequentially oriented at four different angles θ and measure the detected intensity at each of the four positions (see Fig. 2). Thus, the source light Stokes vector is s_src = (I_p, 0, 0, 0)^T, and together with the Mueller matrix of the generating polarizer M_lp(θ) it produces a fully polarized state, s_in = I_p · (1, cos 2θ, sin 2θ, 0)^T, for use in calibrating the camera pixels. The quantity I_p is the light flux in photons/s incident on the given pixel of interest.

Fig. 2

Experimental layout for the calibration polarization state generator and the microgrid polarization camera. A diffuser is used to ensure that the light source is unpolarized. The inset at right shows a 2×2 section of the detector array.


The behavior of a single pixel in the polarization camera can be modeled as a linear diattenuator M_ld(q, r, α) followed by a detection vector d = (η, 0, 0, 0), where η is the quantum efficiency. Thus, the above generated state will be detected as

Eq. (2)

$$
g = \mathbf{d} \cdot M_{\rm ld}(q,r,\alpha) \cdot s_{\rm in} = \mathbf{d} \cdot M_{\rm ld}(q,r,\alpha) \cdot M_{\rm lp}(\theta) \cdot s_{\rm src}.
$$

Setting the generating polarizer to orientations θ=0  deg, 45 deg, 90 deg, and 135 deg, we obtain four measurements:

Eq. (3)

$$g_0 = I_e[1 + D\cos(2\alpha)] + n_0,$$

Eq. (4)

$$g_{45} = I_e[1 + D\sin(2\alpha)] + n_{45},$$

Eq. (5)

$$g_{90} = I_e[1 - D\cos(2\alpha)] + n_{90},$$

Eq. (6)

$$g_{135} = I_e[1 - D\sin(2\alpha)] + n_{135},$$
where g_θ represents the number of detected electrons, and n_θ the number of noise electrons, for incident light polarized at angle θ. To work with both Gaussian and Poisson noise, we will assume that all measurements are scaled to photoelectron units—the same units as the noise terms n_θ. Since the incident light level is

Eq. (7)

$$I_p = \frac{1}{\eta A}\, I_e,$$
the intensity I_e can be considered as the idealized photoelectron number that would be detected if the polarizer were removed (i.e., A = 1). Since the detector element’s quantum efficiency η and the polarizing-element efficiency A appear as a product, we can define η_ext = Aη as the external quantum efficiency of the pixel. If the illumination is known to be uniform a priori, then it is possible to estimate η_ext at each pixel to within an arbitrary constant. To measure the missing constant, however, it is necessary to obtain an independent measurement of I_p, such as with a radiometer.
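The measurement model of Eqs. (3)–(6) is straightforward to simulate. The sketch below (a minimal example of our own, not the authors’ code) draws the four measurements in the pure Poisson regime for an assumed pixel with diattenuation D and orientation α:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_measurements(I_e, D, alpha, n_pixels=1):
    """Draw g_0, g_45, g_90, g_135 (in photoelectrons) for a pixel with
    diattenuation D and orientation alpha, following Eqs. (3)-(6) in the
    pure Poisson (shot-noise) regime."""
    means = I_e * np.array([1 + D * np.cos(2 * alpha),
                            1 + D * np.sin(2 * alpha),
                            1 - D * np.cos(2 * alpha),
                            1 - D * np.sin(2 * alpha)])
    # The Poisson mean plays the role of the noiseless signal; the zero-mean
    # residual about that mean plays the role of the noise term n_theta.
    return rng.poisson(means[:, None] * np.ones(n_pixels))

g0, g45, g90, g135 = simulate_measurements(I_e=5000.0, D=0.95, alpha=np.deg2rad(0.3), n_pixels=8)
```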

Whereas previous calibration methods fitted the pixel diattenuation parameters using images taken at a large number of different input polarization angles,4,5,10 the choice of the four angles (0 deg, 45 deg, 90 deg, and 135 deg) produces simple formulas for estimating the incident intensity I_e and the polarization properties α, D, and X at each pixel:

Eq. (8)

$$\hat{I}_e = \tfrac{1}{4}\left(g_0 + g_{45} + g_{90} + g_{135}\right),$$

Eq. (9)

$$\hat{\alpha} = \tfrac{1}{2}\arctan\!\left[(g_{45} - g_{135})/(g_0 - g_{90})\right],$$

Eq. (10)

$$\hat{D} = \frac{2\left[(g_0 - g_{90})^2 + (g_{45} - g_{135})^2\right]^{1/2}}{g_0 + g_{45} + g_{90} + g_{135}},$$

Eq. (11)

$$\hat{X} = (1 + \hat{D})/(1 - \hat{D}).$$

From these equations, we can see that the estimates for the pixel polarization properties do not depend explicitly on the incident light intensity Ie. As a result, as long as the light intensity does not vary significantly from one pixel to its neighboring pixels, calibrating the camera does not require spatially uniform illumination. While differences in light level at the camera will produce differences in noise at each pixel, this can be made small in comparison to the differences produced by manufacturing variations.
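A per-pixel implementation of Eqs. (8)–(11) is only a few lines of array arithmetic. The following sketch assumes the four background-subtracted calibration images are already in photoelectron units; the use of arctan2 (rather than arctan of the ratio) to resolve the quadrant is our own choice, not something specified in the text:

```python
import numpy as np

def estimate_pixel_parameters(g0, g45, g90, g135):
    """Per-pixel parameter estimates of Eqs. (8)-(11).

    Inputs are the four background-subtracted calibration images (or pixel
    values) in photoelectron units, one for each generator angle."""
    g0, g45, g90, g135 = (np.asarray(g, dtype=float) for g in (g0, g45, g90, g135))
    total = g0 + g45 + g90 + g135
    I_e = 0.25 * total                                    # Eq. (8)
    alpha = 0.5 * np.arctan2(g45 - g135, g0 - g90)        # Eq. (9); arctan2 resolves the quadrant
    D = 2.0 * np.hypot(g0 - g90, g45 - g135) / total      # Eq. (10)
    X = (1.0 + D) / (1.0 - D)                             # Eq. (11); unstable as D approaches 1
    return I_e, np.rad2deg(alpha), D, X
```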

3.

Estimating Parameter Variances

From Eqs. (8)–(11), we can estimate the polarization properties of a pixel from the calibration measurements g_θ. It is also useful to assess how much individual pixels vary from the overall mean—the parameter nonuniformity. Differences in the manufacturing process between, say, the center of the detector array and its edges may result in the pixel diattenuation and orientation at the edge being different from those at the center. If these manufacturing differences are small in comparison to the error induced by measurement noise, then we can safely ignore them. Otherwise, we need to calibrate for the individual variations in pixel properties.

Deriving formulas for the parameter variances involves applying the well-known expression for the variance of an estimator, var(x̂) = ⟨x̂²⟩ − ⟨x̂⟩², where ⟨·⟩ indicates taking the mean value. To proceed, we insert the expression for the estimator x̂ and solve. Starting with Î_e, we obtain

Eq. (12)

$$
\begin{aligned}
\langle \hat{I}_e^2 \rangle
  &= \frac{1}{4^2}\,\bigl\langle (g_0 + g_{45} + g_{90} + g_{135})^2 \bigr\rangle \\
  &= \frac{1}{16}\,\bigl\langle \bigl( I_e[1 + D\cos(2\alpha)] + n_0
      + I_e[1 + D\sin(2\alpha)] + n_{45}
      + I_e[1 - D\cos(2\alpha)] + n_{90}
      + I_e[1 - D\sin(2\alpha)] + n_{135} \bigr)^2 \bigr\rangle \\
  &= I_e^2 + \frac{1}{16}\bigl[\langle n_0^2\rangle + \langle n_{45}^2\rangle + \langle n_{90}^2\rangle + \langle n_{135}^2\rangle\bigr],
\end{aligned}
$$
where we have used the assumption that the noise terms are zero mean: ⟨n_θ⟩ = 0. This causes no difficulties for Poisson-distributed noise, since our definitions of the measurement and noise (g and n) place the mean value of the Poisson-distributed variable in g while n retains the zero-mean stochastic portion.

Since ⟨Î_e⟩ = I_e, we can write var(Î_e) = ⟨Î_e²⟩ − I_e². For independent Gaussian (IG) noise with the same variance at each pixel, this gives

Eq. (13)

$$\mathrm{var}_{\rm IG}(\hat{I}_e) = \tfrac{1}{4}\, v_g,$$
where v_g is the Gaussian noise variance at each pixel. The factor of 1/4 comes from averaging over four measurements to obtain Î_e. In the pure Poisson (PP) shot-noise regime, we can substitute
$$\langle n_0^2 \rangle = \mathrm{var}(n_0) = \langle g_0 \rangle = I_e[1 + D\cos(2\alpha)],$$
and likewise for each of the other noise terms in Eq. (12), producing

Eq. (14)

$$\mathrm{var}_{\rm PP}(\hat{I}_e) = \tfrac{1}{4}\, I_e.$$

Next, we can calculate the variance of α̂ by inserting Eq. (9) into the variance formula. While the result is a nonlinear equation, we can obtain a second-order power series representation by taking the Maclaurin series expansion in the noise variables n_θ and extracting the second-order terms.11,12 This produces a lengthy polynomial expression in the four noise variables, but if we assume that the noise terms are independent of one another, then the ensemble average of all mixed-noise terms (i.e., terms having n_0 n_45 as a factor) becomes zero. This greatly simplifies the expression. Finally, we substitute the IG noise model into the result, giving

Eq. (15)

$$\mathrm{var}_{\rm IG}(\hat{\alpha}) = \frac{v_g}{8 D^2 I_e^2} \;\;(\mathrm{rad}^2),$$
or, substituting the PP noise model,

Eq. (16)

$$\mathrm{var}_{\rm PP}(\hat{\alpha}) = \frac{1}{8 D^2 I_e} \;\;(\mathrm{rad}^2).$$
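Although the text uses the second-order Maclaurin expansion, the leading-order coefficient of Eq. (16) can be checked with ordinary first-order error propagation. The short derivation below is our own consistency check, not the authors’ derivation:

```latex
% First-order propagation check of Eq. (16), pure Poisson case.
% Let S_0 = g_0 - g_{90} and S_1 = g_{45} - g_{135}, so that Eq. (9) reads
% \hat{\alpha} = \tfrac{1}{2}\arctan(S_1/S_0), with mean values
% \langle S_0\rangle = 2 I_e D\cos 2\alpha and \langle S_1\rangle = 2 I_e D\sin 2\alpha.
\begin{gather*}
  \frac{\partial\hat{\alpha}}{\partial g_{45}}
    = -\frac{\partial\hat{\alpha}}{\partial g_{135}}
    = \frac{1}{2}\,\frac{S_0}{S_0^2 + S_1^2}
    = \frac{\cos 2\alpha}{4 I_e D},
  \qquad
  \frac{\partial\hat{\alpha}}{\partial g_{90}}
    = -\frac{\partial\hat{\alpha}}{\partial g_{0}}
    = \frac{\sin 2\alpha}{4 I_e D},
  \\
  \mathrm{var}_{\rm PP}(\hat{\alpha})
    \approx \sum_\theta
      \left(\frac{\partial\hat{\alpha}}{\partial g_\theta}\right)^{\!2}
      \langle g_\theta \rangle
    = \frac{\cos^2 2\alpha}{(4 I_e D)^2}\,(2 I_e)
      + \frac{\sin^2 2\alpha}{(4 I_e D)^2}\,(2 I_e)
    = \frac{1}{8 D^2 I_e}.
\end{gather*}
```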

A similar series-expansion procedure allows us to calculate the variances of D̂ and X̂ as well:

Eq. (17)

$$\mathrm{var}_{\rm IG}(\hat{D}) = \frac{(3D^2 + 4)\, v_g}{16\, I_e^2},$$

Eq. (18)

$$\mathrm{var}_{\rm PP}(\hat{D}) = \frac{4 - D^2}{16\, I_e},$$

Eq. (19)

$$\mathrm{var}_{\rm IG}(\hat{X}) = \frac{(2D^3 + 2D^2 + 4D + 1)\, v_g}{4 (1 - D)^4 D\, I_e^2},$$

Eq. (20)

$$\mathrm{var}_{\rm PP}(\hat{X}) = \frac{D^4 + 6D^3 + 3D^2 + 4D + 1}{4 (1 - D)^4 D\, I_e}.$$

The parameter variance equations are summarized in Table 1.

Table 1

Formulas for the mean and variance of detector array polarization properties.

Parameter | IG noise variance | PP noise variance
var(Î_e) | v_g/4 | I_e/4
var(α̂) | v_g/(8D²I_e²) | 1/(8D²I_e)
var(D̂) | (3D² + 4)v_g/(16I_e²) | (4 − D²)/(16I_e)
var(X̂) | (2D³ + 2D² + 4D + 1)v_g/[4(1 − D)⁴D I_e²] | (D⁴ + 6D³ + 3D² + 4D + 1)/[4(1 − D)⁴D I_e]
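For convenience, the Table 1 formulas can be wrapped in a small helper that returns the theoretical noise-only standard deviations. This is a sketch under the stated IG/PP assumptions; the function name is chosen here only for illustration:

```python
import numpy as np

def noise_only_std(D, I_e, v_g=None):
    """Theoretical noise-only standard deviations from Table 1.

    If v_g (the per-pixel Gaussian noise variance) is given, the independent
    Gaussian (IG) formulas are used; otherwise the pure Poisson (PP) ones.
    The alpha value is returned in radians."""
    if v_g is not None:   # IG regime
        var_I = v_g / 4
        var_a = v_g / (8 * D**2 * I_e**2)
        var_D = (3 * D**2 + 4) * v_g / (16 * I_e**2)
        var_X = (2 * D**3 + 2 * D**2 + 4 * D + 1) * v_g / (4 * (1 - D)**4 * D * I_e**2)
    else:                 # PP (shot-noise) regime
        var_I = I_e / 4
        var_a = 1 / (8 * D**2 * I_e)
        var_D = (4 - D**2) / (16 * I_e)
        var_X = (D**4 + 6 * D**3 + 3 * D**2 + 4 * D + 1) / (4 * (1 - D)**4 * D * I_e)
    variances = {"I_e": var_I, "alpha": var_a, "D": var_D, "X": var_X}
    return {name: np.sqrt(v) for name, v in variances.items()}
```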

4.

Experimental Results

To verify that we can characterize polarization cameras and separate manufacturing variability from noise in the results, we collected calibration data from three polarization cameras (“A,” “B,” and “C”) from three different manufacturers (Table 2). The experimental setup uses a linearly polarized incoherent light source, generated using a white light LED, diffuser, rotatable linear polarizer (Glan–Thompson type), and a narrowband spectral filter: 532±10  nm for cameras A and B, 520±20  nm for camera C (Fig. 2). The spectral filters are needed in order to make the camera comparison as fair as possible, since camera C uses a nonremovable filter in front of its detector array.

Table 2

The three polarization cameras measured.

Property | Camera A | Camera B | Camera C
Pixels | 2464 × 2056 | 1200 × 1800 | 1164 × 874
Pixel size (μm) | 3.45 | 7.4 | 4.65
Frame rate (fps) | 90 | 110 | 20
Bit depth | 12 | 12 | 12
Wavelength range | VIS-NIR | VIS-NIR | 520 ± 20 nm

The initial datasets were collected by setting the generating polarizer to 0 deg and summing over many frames in order to reduce the measurement noise. The same procedure was then used for the 45-deg, 90-deg, and 135-deg orientations of the generating polarizer, and for the estimation of the background. After subtracting the background, the summed images were scaled from digital counts to photoelectron units using a previous calibration of the cameras’ gain values. The radiometric gain values used were 2.2, 8.0, and 1.3 photoelectrons per digital count for cameras A, B, and C, respectively. Taking the sum of the images, rather than the mean as is more common, is done in order to maintain Poisson statistics in the data.
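A minimal sketch of this preprocessing step is shown below; the function name and the assumption that equal numbers of signal and dark frames are summed are ours:

```python
import numpy as np

def counts_to_photoelectrons(signal_frames, dark_frames, gain):
    """Sum a stack of raw frames, subtract the summed background, and scale
    from digital counts to photoelectron units.

    signal_frames, dark_frames : arrays of shape (n_frames, rows, cols), in counts,
                                 assumed to contain the same number of frames
    gain                       : photoelectrons per digital count (e.g., 2.2 for camera A)

    Summing rather than averaging keeps the result Poisson-distributed."""
    signal_sum = signal_frames.sum(axis=0, dtype=np.float64)
    dark_sum = dark_frames.sum(axis=0, dtype=np.float64)
    return gain * (signal_sum - dark_sum)
```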

Unlike previous calibration methods, we do not use an integrating sphere in order to make the illumination field uniform, as exact uniformity is unnecessary for the parameter estimates. The main purpose of the diffuser in the experimental setup is to remove any residual polarization from the light source. With the diffuser in place, the estimated degree of linear polarization of the light source is measured to be <0.1%.

First, we sum together 2000 calibration images for camera A, and 100 images for cameras B and C, into a single “sum image.” With the sum image, we implement the parameter estimation Eqs. (8)–(11) at each pixel. Taking histograms of the resulting parameter images, we calculate the mean and standard deviation of each parameter, giving the results shown in Figs. 3 and 4 and summarized in Table 3. These parameter histograms therefore have a distribution determined by a mix of (temporal) noise at each pixel as well as pixel-to-pixel variation in polarization properties. Note that since the absolute orientation is an easy-to-adjust degree of freedom, we have defined the mean orientation angle of the 0-deg micropolarizers to be exactly zero, so that the remaining orientation angles are defined with respect to it.
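Separating the parameter images into the four micropolarizer families amounts to slicing the repeating 2×2 superpixel pattern. The sketch below is illustrative only; the assumed orientation layout within the 2×2 cell is hypothetical and differs between camera models:

```python
import numpy as np

def split_by_orientation(image, layout=((0, 45), (135, 90))):
    """Split a full-resolution parameter image (e.g., the D-hat image) into
    the four micropolarizer families by slicing the repeating 2x2 superpixel
    pattern.  The default layout is hypothetical; the actual ordering of
    orientations within the 2x2 cell depends on the camera model."""
    return {layout[i][j]: image[i::2, j::2] for i in range(2) for j in range(2)}

# Per-family mean and standard deviation, as reported in Fig. 3 and Table 3:
# stats = {ang: (sub.mean(), sub.std()) for ang, sub in split_by_orientation(D_image).items()}
```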

Fig. 3

Spatial histogram results for the three cameras, calculated after summing 2000 frames for camera A, 100 frames for cameras B and C. Note that all of the orientation angles are adjusted so that the mean of α0 is exactly zero. The calculated mean at the top left of each subfigure is given for each of the four pixel types, followed by the measured standard deviation and the estimated noise-only standard deviation values [obtained from Eq. (16)] in parentheses, i.e., mean (meas_std) (est_std). Gaussian curves for the fitted histogram mean and standard deviations are shown as solid curves overlying the histograms.


Fig. 4

Spatial histogram results for the three cameras, calculated after summing 2000 frames for camera A and 100 frames for cameras B and C. Gaussian curves for the histogram mean and standard deviations are shown as solid curves overlying the histograms. In camera A’s data, the histograms for D̂ have been truncated at 1, and those for X̂ at 2000, to prevent unphysical values. The four colors are coded to the four pixel orientation types: 0 deg, 45 deg, 90 deg, and 135 deg.


Table 3

Measured pixel properties for the three polarization cameras obtained from the 2000-frame-sum images (camera A) or 100-frame-sum images (cameras B and C). Each entry shows the mean value of each parameter taken across the entire array, together with the temporal-noise-removed estimate of the spatial standard deviation (the variation due to differences in manufacture) in square brackets.

Parameter | Camera A | Camera B | Camera C
α_0 (deg) | 0 [0.11] | 0 [0.65] | 0 [0.36]
α_45 (deg) | 44.96 [0.11] | 45.20 [0.52] | 41.75 [0.42]
α_90 (deg) | 89.81 [0.13] | 89.50 [0.20] | 86.34 [0.34]
α_135 (deg) | 135.10 [0.12] | 134.88 [0.23] | 125.70 [0.43]
D_0 | 0.9927 [0.0025] | 0.8495 [0.0152] | 0.6621 [0.0082]
D_45 | 0.9928 [0.0029] | 0.8495 [0.0152] | 0.6545 [0.0092]
D_90 | 0.9894 [0.0031] | 0.8945 [0.0117] | 0.6984 [0.0067]
D_135 | 0.9913 [0.0025] | 0.8853 [0.0108] | 0.6010 [0.0123]

The measurement results show a substantial range of performance in both the parameter means and the spatial variation among the three measured cameras, especially in the estimated micropolarizer extinction ratios. However, it is important to keep in mind that each camera uses pixels of a different size, with different quantum efficiencies, different readout electronics, and different integration times. Thus, the raw measured variation is not by itself a fair basis for comparison; it is necessary to separate the variation due to measurement noise from the variation due to manufacturing differences among the pixels.

Looking at the distributions for Î_e in Fig. 4, we find that the measured standard deviations for Î_e are far larger than the theoretical noise-only standard deviations, i.e., var(Î_e) ≫ Î_e/4. This is an indication that the illumination is significantly nonuniform, or that there is significant pixel-to-pixel variation in the external quantum efficiency ηA [defined in Eq. (7)].

Looking at the histograms for D̂ and X̂ in Fig. 4, we can see that the two distributions are generally similar, but with the value and the width of the probability distribution pr(X̂) increasing rapidly as D̂ approaches 1. This close association between the two distributions is also evident in Eq. (20) for var(X̂), where a factor of (1 − D)⁴ appears in the denominator. Whereas the variance of D̂ is determined primarily by the factor of I_e in the denominator of Eq. (18), the variance of X̂ is amplified by (1 − D)⁻⁴ while the mean value, as given in Eq. (11), is amplified by only (1 − D)⁻¹. If we write the signal-to-noise ratio for the estimate of the extinction coefficient, we obtain

$$\mathrm{SNR}(\hat{X}) = \frac{\mathrm{mean}(\hat{X})}{\sqrt{\mathrm{var}(\hat{X})}} \propto (1 - D)\, I_e^{1/2}.$$

Thus, for a fixed number of measurement photoelectrons, the SNR declines as D approaches 1, and this behavior is the property we see exhibited most clearly in the histogram for camera A—the camera where the diattenuation is the highest.

The tail on the right-hand side of the extinction ratio distribution pr(X̂) moves rapidly to higher values as D comes closer to 1. This is a well-known property of “Gaussian ratio distributions”—in this case the ratio of (1 + D̂) to (1 − D̂), for which the mean becomes undefined and the variance becomes infinite.13,14 While this behavior poses no serious problems for cameras B and C, camera A’s diattenuation is sufficiently close to 1 that, after background subtraction, a small fraction of pixels are left with zero or negative values in the crossed-polarization condition. As a result, the tail of the distribution for D̂ extends past 1, so that for these pixels the extinction coefficient becomes infinite (for D = 1), then wraps around to negative infinity and approaches zero from the negative side (for D > 1). These values are unphysical, of course, so in Fig. 4 we have truncated the distribution for D̂ at 1 and the distribution for X̂ at 1250. Although only a small portion of the distribution tail lies beyond these truncation values, without truncation the distribution mean is dominated by these rare values, producing mean and variance estimates that are highly unstable, just as theory predicts.

As a result of the non-Gaussian shape of pr(X̂), the median and mode of the distribution are more useful summary metrics than the mean, and distribution quantiles can be used in place of the variance to describe the width. In Table 4, we compare the extinction ratio mean, median, and mode for camera A. Whereas for cameras B and C these three metrics are nearly the same, for camera A the mean is biased toward high values by the long one-sided tail of the distribution. In this situation, the median is probably the most useful single metric for camera users to use in evaluating camera pixel properties. For example, if we take the mean of the diattenuation distribution prior to applying the equation for the extinction coefficient, X = (1 + D)/(1 − D), we obtain a result (the final column in Table 4) that closely approximates the median value.

Table 4

Extinction coefficient summary statistics for camera A, using the data shown in Fig. 4.

Pixel orientation (deg) | Mean(X̂) | Median(X̂) | Mode(X̂) | (1 + ⟨D̂⟩)/(1 − ⟨D̂⟩)
0 | 327 | 275 | 236 | 275
45 | 342 | 289 | 246 | 278
90 | 208 | 190 | 172 | 187
135 | 254 | 232 | 216 | 230
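The comparison in Table 4 can be reproduced with a few lines of NumPy; the histogram-based mode estimate and the bin count below are our own choices, not specified in the text:

```python
import numpy as np

def extinction_summary(D_values, bins=200):
    """Mean, median, histogram mode, and (1 + <D>)/(1 - <D>) for the
    extinction-ratio distribution of one pixel family (cf. Table 4)."""
    D = np.asarray(D_values, dtype=float)
    D = D[D < 1.0]                              # drop unphysical D >= 1 before forming X
    X = (1 + D) / (1 - D)
    counts, edges = np.histogram(X, bins=bins)
    mode = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    from_mean_D = (1 + D.mean()) / (1 - D.mean())
    return {"mean": X.mean(), "median": np.median(X), "mode": mode,
            "from_mean_D": from_mean_D}
```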

4.1.

Using Temporal Noise Estimates to Validate the Variance Estimates

To validate the variance formulas, we capture a long sequence of frames and analyze the behavior of the polarization parameters for individual pixels over time. This removes the effect of spatial variability so that only temporal noise is present. For cameras A and B, we collected a sequence of 2000 frames, while for camera C, we were only able to capture 750 frames during a single calibration period. At each individual frame, we calculate the pixel properties Ie, α, D, and X, and from the resulting set of 2000 (or 750) estimates, we calculate the parameter variance. This is shown as the first point at the upper left of the plots in Fig. 5, corresponding to N=1, where N is the number of frames summed before calculating the parameters.

Fig. 5

Temporal standard deviation of each parameter for a single pixel in each camera. The horizontal axis indicates the number N of frames summed prior to calculating the parameter standard deviation. The dots indicate measurements, whereas the curve indicates the theoretical standard deviation value calculated from the measured light level. Note that the camera A subfigure for X^ uses a semilogarithmic plot, while all of the other subfigures use a linear plot.


Next, we take the same dataset and sum every pair of frames (i.e., N=2) before calculating the parameters. For a shot-noise-limited measurement, this is equivalent to doubling the integration time and thus collecting twice the number of photoelectrons. Calculating the variance for this new set of 1000 (or 375) parameter estimates, we obtain the second point shown in the plots of Fig. 5.

Following this procedure for increasing values of N, we simulate the effect of measuring with steadily improving SNR. Using I^e in place of Ie in each of the variance Eqs. (14)–(20), we plot the corresponding noise-only variances predicted from theory as a solid curve.
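A sketch of this frame-grouping procedure is given below; it reuses the estimate_pixel_parameters() helper from the earlier sketch and assumes the per-frame photoelectron values for a single pixel have already been extracted:

```python
import numpy as np

def temporal_std_vs_N(g_series, N_values):
    """Temporal standard deviation of the parameter estimates for a single
    pixel versus the number N of frames summed (the Fig. 5 procedure).

    g_series : tuple (g0, g45, g90, g135) of 1-D arrays holding the per-frame
               photoelectron values for the chosen pixel.
    Reuses estimate_pixel_parameters() from the earlier sketch."""
    n_frames = len(g_series[0])
    results = {}
    for N in N_values:
        m = (n_frames // N) * N                      # keep only complete groups of N frames
        sums = [g[:m].reshape(-1, N).sum(axis=1) for g in g_series]
        I_e, alpha, D, X = estimate_pixel_parameters(*sums)
        results[N] = {"I_e": I_e.std(), "alpha": alpha.std(),
                      "D": D.std(), "X": X.std()}
    return results
```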

4.2.

Separating Spatial Variability from Temporal Noise

Although the temporal noise measurements of Fig. 5 show a close fit to the predictions, we can also see that the measured variances across all of the pixels in the image (Figs. 3 and 4) are much larger than the values predicted from the variance formulas. This is an indication that pixel-to-pixel (deterministic) variability, not temporal (stochastic) noise, dominates the measured variation. To confirm this conclusion, we show a single row of pixels from each of the calibration images (Fig. 6), selecting every other pixel in order to avoid mixing different micropolarizer orientations. Here, we see that the α̂, D̂, and X̂ variations are dominated by a broad systematic variation across the row rather than by uncorrelated noise.

Fig. 6

Parameter estimates for a row through the camera images, calculated after summing 2000 frames for camera A, 100 frames for cameras B and C. The inset numbers give the measured mean and standard deviation, and the theoretical noise-only standard deviation, of the row data.


With the effects of stochastic and deterministic variation now clear, we can use our variance formulas to remove the stochastic portion from a measured variance value, leaving only the spatial pixel-to-pixel differences. If we assume that the two sources of variation are Gaussian-distributed and uncorrelated, then their combined effect is given by a convolution of the two distributions: for stochastic variables x and y, the variance of the sum z=x+y is given as

var(z)=var(x)+var(y).

Therefore, if we know var(z) and var(y) and want to solve for the standard deviation of x, then

$$\mathrm{std}(x) = \sqrt{\mathrm{var}(z) - \mathrm{var}(y)}.$$

If we take the data for the estimated orientation angle α̂_0 from camera C in Fig. 3, then var(z) = (0.365)² and var(y) = (0.065)², so that std(x) = 0.359 deg. In this case, we see that the observed spatial variation is almost entirely due to pixel-to-pixel manufacturing differences rather than to random noise. This and the corresponding results for α̂ and D̂ for each of the three cameras are given in square brackets as the “manufacturing variation” in Table 3. By removing the stochastic component, we now have a direct means of comparing the spatial variability in the pixel polarization parameters.
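This quadrature subtraction is a one-line calculation; the sketch below simply encodes std(x) = sqrt(var(z) − var(y)), with a clip at zero (our own safeguard) for the case where the noise estimate exceeds the measured value:

```python
import numpy as np

def manufacturing_std(measured_std, noise_only_std):
    """Remove the temporal-noise contribution from a measured spatial
    standard deviation, assuming uncorrelated Gaussian contributions:
    std_mfg = sqrt(var_measured - var_noise)."""
    return np.sqrt(max(measured_std**2 - noise_only_std**2, 0.0))

# Example from the text (camera C, alpha_0):
# manufacturing_std(0.365, 0.065)  ->  approximately 0.359 deg
```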

5.

Conclusions

To assess the performance of a microgrid polarization camera, it is natural to look first for the mean angle of orientation and the mean extinction coefficient of each of the four orientations of detector array micropolarizers. The spatial variation in these parameters, however, is also an important concern: any calibration that does not estimate each pixel individually is subject to increasing error as the spatial variability increases. To eliminate this source of error, we can calibrate each pixel individually, but the inability to average over an ensemble of many samples means that the calibration measurements will have a much lower SNR. Using the variance formulas of Eqs. (14)–(20), we have a simple method of estimating parameter calibration accuracy, so that users can determine how many calibration frames of data are needed.

We have also shown that the most popular metric for quantifying the performance of polarizers, the mean of the extinction coefficient, becomes problematic when the polarizer approaches perfection. As the diattenuation value approaches 1, the tail of the distribution for X^ lengthens, so that the mean becomes biased and highly unstable. Unless the mean of the diattenuation distribution is many standard deviations below 1, i.e.,

$$Q = \frac{1 - \langle\hat{D}\rangle}{\mathrm{std}(\hat{D})} \gg 1,$$
the noise will dominate, producing mean and variance estimates that are of little utility. As a result, the meaning of extinction ratios of 10³, or even 10⁶, reported in the literature can be unclear without knowing the conditions of the measurement. If these reported values are means calculated from the data, then they are only useful if the experimenter has made sure that Q ≫ 1, a condition that can be difficult to achieve in practice. For the calibration of camera A, we summed together 2000 frames of data and found that even this quantity of data (amounting to 95 GB) was not nearly sufficient to achieve this condition. Using our variance formulas, we can estimate that increasing the number of frames summed to about 2 × 10⁵ (i.e., 9.5 TB) should be sufficient. In general, for high values of X, the median and mode are much more robust metrics than the mean, allowing them to be used at much more reasonable signal-to-noise ratios.

It is also important to keep in mind that the micropolarizer characteristics will, in general, depend on the wavelength and on the angle of incidence (and therefore on the lens numerical aperture), so that a user may need to recalibrate the sensor for each spectral band in which it is to be used and, to a lesser extent, for each lens.15

The three cameras examined in Sec. 4 above show a wide spread in estimated performance, and it is natural to ask just how important it is to work with a camera whose pixels have an extinction ratio of 300 versus one with, say, an extinction ratio of 10. Tyo and Wei16 and Roussel et al.17 have shown that even for an extinction ratio as low as X = 5, the SNR in the Stokes vector elements is reduced by a factor of only 2.7 relative to that of an ideal diattenuator (X = ∞). This would seem to argue that there is little benefit to be had once the extinction ratio exceeds 10 or 20, but most researchers and engineers persist in pushing hard to get the highest X values. It seems likely that the reason for this gap between the theoretical value of high X and the practical value placed on it by users lies with the ease of calibration. For a polarimeter employing high-extinction-ratio elements, one can approximate the polarizers as ideal diattenuators without a heavy cost in error. A polarimeter with low-extinction-ratio elements, on the other hand, requires a high-accuracy calibration, and the polarimetric estimation equations need to be more complex in order to accommodate the nonideal diattenuation values.

Another place where the higher extinction ratio provides tangible benefits is for imaging polarimetry of natural outdoor scenes. In this situation, many pixels will be seeing light with a degree of polarization close to zero, so that the polarimetric SNR will be poor for measuring the spatially resolved angle of polarization.17 Using a higher extinction ratio will allow for seeing smaller polarization features above the noise level.

References

1. J. S. Tyo, C. F. LaCasse, and B. M. Ratliff, “Total elimination of sampling errors in polarization imagery obtained with integrated microgrid polarimeters,” Opt. Lett. 34, 3187–3189 (2009). https://doi.org/10.1364/OL.34.003187

2. V. Gruev, R. Perkins, and T. York, “CCD polarization imaging sensor with aluminum nanowire optical filters,” Opt. Express 18, 19087–19094 (2010). https://doi.org/10.1364/OE.18.019087

3. M. Kulkarni and V. Gruev, “Integrated spectral-polarization imaging sensor with aluminum nanowire polarization filters,” Opt. Express 20, 22997–23012 (2012). https://doi.org/10.1364/OE.20.022997

4. T. York and V. Gruev, “Characterization of a visible spectrum division-of-focal-plane polarimeter,” Appl. Opt. 51, 5392–5400 (2012). https://doi.org/10.1364/AO.51.005392

5. S. B. Powell and V. Gruev, “Calibration methods for division-of-focal-plane polarimeters,” Opt. Express 21, 21039–21055 (2013). https://doi.org/10.1364/OE.21.021039

6. Z. Chen, X. Wang, and R. Liang, “Calibration method of microgrid polarimeters with image interpolation,” Appl. Opt. 54, 995–1001 (2015). https://doi.org/10.1364/AO.54.000995

7. J. Zhang et al., “Non-uniformity correction for division of focal plane polarimeters with a calibration method,” Appl. Opt. 55, 7236–7240 (2016). https://doi.org/10.1364/AO.55.007236

8. Y. Maruyama et al., “3.2-MP back-illuminated polarization image sensor with four-directional air-gap wire grid and 2.5-μm pixels,” IEEE Trans. Electron Devices 65, 2544–2551 (2018). https://doi.org/10.1109/TED.2018.2829190

9. D. V. Vorobiev, Z. Ninkov, and N. Brock, “Astronomical polarimetry with the RIT polarization imaging camera,” Publ. Astron. Soc. Pac. 130, 64501–64523 (2018). https://doi.org/10.1088/1538-3873/aab99b

10. H. Fei et al., “Calibration method for division of focal plane polarimeters,” Appl. Opt. 57, 4992–4996 (2018). https://doi.org/10.1364/AO.57.004992

11. G. W. Forbes, “Truncation and manipulation of multivariate power series,” J. Comput. Appl. Math. 15, 27–36 (1986). https://doi.org/10.1016/0377-0427(86)90236-0

12. N. Zheng, N. Hagen, and D. J. Brady, “Analytic-domain lens design with proximate ray tracing,” J. Opt. Soc. Am. A 27, 1791–1802 (2010). https://doi.org/10.1364/JOSAA.27.001791

13. G. Marsaglia, “Ratios of normal variables,” J. Stat. Software 16, 4–13 (2006). https://doi.org/10.18637/jss.v016.i04

14. N. Hagen, “Statistics of normalized Stokes polarization parameters,” Appl. Opt. 57, 5356–5363 (2018). https://doi.org/10.1364/AO.57.005356

15. N. Hagen, “Flatfield correction errors due to spectral mismatching,” Opt. Eng. 53(12), 123107 (2014). https://doi.org/10.1117/1.OE.53.12.123107

16. J. S. Tyo and H. Wei, “Optimizing imaging polarimeters constructed with imperfect optics,” Appl. Opt. 45, 5497–5503 (2006). https://doi.org/10.1364/AO.45.005497

17. S. Roussel, M. Boffety, and F. Goudail, “Polarimetric precision of a micropolarizer grid-based camera in the presence of additive and Poisson shot noise,” Opt. Express 26, 29968–29982 (2018). https://doi.org/10.1364/OE.26.029968


© 2019 Society of Photo-Optical Instrumentation Engineers (SPIE)
Nathan A. Hagen, Shuhei Shibata, and Yukitoshi Otani "Calibration and performance assessment of microgrid polarization cameras," Optical Engineering 58(8), 082408 (23 February 2019). https://doi.org/10.1117/1.OE.58.8.082408
Received: 28 November 2018; Accepted: 1 February 2019; Published: 23 February 2019