Radiance is a familiar and important concept in optical engineering; it describes the radiant flux in an optical system as a function of 3D spatial position and ray direction, but only in the limit where diffraction can be ignored. If one knows the radiance throughout a system, other familiar radiometric quantities such as irradiance, radiant intensity and flux through some area can readily be computed (see Sec. 2).
Because x rays and gamma rays have very small wavelengths compared to most structures we encounter in imaging, these same definitions and properties of radiance apply with minor modifications in radiological imaging. On the other hand, diffraction often cannot be ignored at longer wavelengths, and for those cases a physical-optics counterpart of radiance, with similar conservation and propagation formulas, is available. There is even a form of radiance for the emerging field of quantum imaging. Details are given in Sec. 3.
Some basic propagation and conservation laws, discussed in Sec. 4, often allow the calculation of radiance for an arbitrary point in a system and an arbitrary ray direction if the radiance function is known just on a single plane. In imaging, these relations lead to simple ways of calculating the radiance in the image when one knows the radiance of the object, even for very complex imaging systems.
Nothing we have said so far about radiance has any obvious relation to the second term in the title, photon noise, but a common problem in optical engineering is the limited number of photons that can be collected and the resulting Poisson noise in any measurement of interest, including an image. As we shall see in detail in Secs. 5 and 6, any measurement we make in optics or radiology is ultimately described by a Poisson point process (PPP), and the complete statistical properties of this PPP are described by a radiance function. Thus the conservation and propagation rules for radiance tell us all we will ever need to know about the image statistics, at least for a single object.
Image statistics for a single object are important in assessment of image quality, but a full discussion of image quality must also account for the randomness of the objects being imaged and in some cases the randomness of the imaging system itself. This line of research is summarized in Sec. 7, where the concept of information content of a photon is introduced.
A useful general reference for this paper is Barrett and Myers, Foundations of Image Science,1 especially Chapter 10, which deals with radiance, and Chapter 11, which deals with Poisson point processes.
RADIANCE IN GEOMETRICAL OPTICS AND RADIOLOGY
Definitions and notation
As noted in the introduction, radiance is a function of spatial position and ray or flux direction. The position of a point in space is specified by a 3D vector R from some arbitrary origin, and the flux direction is specified by a unit vector ŝ. Thus radiance is denoted L(R, ŝ). Because ŝ is a unit vector, it has only two independent components, say sx and sy; the third component is given by sz = √(1 − sx² − sy²), so that ŝ · ŝ = 1.
To define the radiance for the given point and flux direction, we construct an arbitrary plane P passing through the point. We denote the normal to this plane by the unit vector n̂. The equation of the plane is n̂ · R = constant, so the position of the observation point within the plane is determined by the two components of R perpendicular to n̂. We denote this 2D vector as r; if we choose coordinates so that the reference plane is perpendicular to the z axis (i.e., n̂ = ẑ), then R = (x, y, z) and r = (x, y).
With this geometry, the radiance is defined as

L(R, ŝ) = ∂²Φ / (cos θ ∂A ∂ω),   (2.1)

where Φ is radiant flux (W), dω is the differential solid angle of a cone of flux directions from a point on the plane, and dA is the differential area on the reference plane P (see Fig. 1). The cos θ in (2.1) is defined as the scalar product between the flux direction ŝ and the unit vector n̂ normal to the reference plane:

cos θ = ŝ · n̂.   (2.2)
The cosine can be associated with either of the partial derivatives in (2.1), leading to two alternative definitions of radiance:

L(R, ŝ) = ∂²Φ / (∂Aproj ∂ω) = ∂²Φ / (∂A ∂Ω),   (2.3)

where dAproj is the projected differential area and dΩ is the projected differential solid angle. If we again choose coordinates so that the reference plane is perpendicular to the z axis, these projections are given by

dAproj = cos θ dA,   (2.4)

dΩ = cos θ dω.   (2.5)
The two alternative definitions in (2.3) show that radiance can be interpreted either as the flux per unit projected area per unit solid angle or as the flux per unit projected solid angle per unit actual area (on the reference plane). In both cases the word projected refers to projection onto a plane perpendicular to the flux direction through multiplication with cos θ. With either interpretation, radiance is invariant as the orientation of the reference plane is changed.
Since radiant flux is measured in watts (W), the SI units of radiance are watts per square meter per steradian, which we indicate as [L] = W/(m² · sr).
Other radiometric quantities derived from radiance
Irradiance, radiant intensity and radiant flux can all be found by appropriate integration if the radiance is known on a reference plane.
Irradiance (power per unit area) incident on the plane P is given by

EP(r) = ∫_2π dω cos θ L(R, ŝ),   (2.6)

where the integral runs over the 2π sr of flux directions on the incident side of the plane.
The 3D observation point R is fully specified by the plane P indicated in the superscript and the 2D vector r in that plane. For n̂ parallel to the z axis, for example, EP(r) is interpreted as the power per unit area on a plane of constant z, and r = (x, y). So long as the plane is specified, r and R determine the same point. The notation EP(r) indicates that the irradiance depends on (x, y) and on the location of the plane, z. Moreover, tilting the plane tilts the z axis and changes the irradiance; unlike radiance, the irradiance on a plane does depend on the orientation of the plane.
Radiant exitance, denoted MP(r), has the same definition as irradiance, (2.6), but it refers to flux reflected or emitted from plane P rather than incident on it. For a planar Lambertian source or reflector, where the radiance at the plane is independent of direction ŝ, it follows from (2.6) that M = πL.
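As a quick numerical check of the Lambertian relation M = πL, the exitance integral over the hemisphere can be evaluated directly. This is a sketch with an illustrative radiance value, not a computation from the paper:

```python
import numpy as np

# Numerical check of M = pi*L for a planar Lambertian source, using the
# exitance integral M = ∫ L cos(theta) d(omega) over the hemisphere,
# with d(omega) = sin(theta) d(theta) d(phi).
L0 = 2.5                               # radiance, W/(m^2 sr), direction-independent

n = 200000
dtheta = (np.pi / 2) / n
theta = (np.arange(n) + 0.5) * dtheta  # midpoint rule over the polar angle
# the azimuthal integral contributes a factor of 2*pi for a Lambertian source
M = 2.0 * np.pi * np.sum(L0 * np.cos(theta) * np.sin(theta)) * dtheta

print(M, np.pi * L0)                   # the two values agree closely
```

The same quadrature, with a direction-dependent L inside the sum, evaluates the irradiance integral for non-Lambertian radiance.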
Radiant intensity, used to express the directional dependence of the flux from some area A on plane P, is given by

IP(ŝ) = ∫_A d²r cos θ L(R, ŝ).   (2.7)
The radiant flux (power) through some area A on plane P is given by

Φ = ∫_A d²r ∫_2π dω cos θ L(R, ŝ).   (2.8)
If we wish to specify the spectral dependence of the radiance (per unit wavelength), we define [cf. (2.3)]

Lλ(R, ŝ, λ) = ∂³Φ / (∂Aproj ∂ω ∂λ).   (2.9)

We can also define the spectral radiance in terms of the frequency of the radiation, ν = c/λ:

Lν(R, ŝ, ν) = ∂³Φ / (∂Aproj ∂ω ∂ν),   (2.10)

or in terms of the photon energy ε = hν:

Lε(R, ŝ, ε) = ∂³Φ / (∂Aproj ∂ω ∂ε).   (2.11)
Flux and radiance in photon units
So far radiant flux has been in SI units of watts or joules per second, but for monochromatic radiation or for the spectral radiances defined above, we can express the flux in photons per second just by dividing the flux in joules/s by the photon energy in joules, denoted as εJ. Thus, for example, the spectral radiance per unit wavelength from (2.9) becomes a spectral photon radiance, given by

Lp,λ(R, ŝ, λ) = Lλ(R, ŝ, λ) / εJ.   (2.12)
The word photon in this usage has no quantum-mechanical implications; it is just a way of changing the units of flux and spectral radiance.
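The change of units amounts to a single division. The sketch below uses illustrative values of power and wavelength, with the physical constants hard-coded for self-containment:

```python
# Convert a monochromatic radiant flux from watts to photons per second
# by dividing by the photon energy eps_J = h*c/lambda.
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light in vacuum, m/s

def photon_flux(power_watts, wavelength_m):
    """Photon flux (photons/s) for monochromatic radiation."""
    eps_J = h * c / wavelength_m        # photon energy in joules
    return power_watts / eps_J

# 1 nW of 500-nm light is already a few billion photons per second:
print(f"{photon_flux(1e-9, 500e-9):.3e}")
```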
RADIANCE IN PHYSICAL OPTICS AND QUANTUM OPTICS
Physical optics deals with coherence and diffraction. The basic difficulty in integrating radiometry with coherence theory and scalar diffraction theory is that scalar fields are complex, while radiometric quantities are real and nonnegative.
For quasimonochromatic light of frequency ν (wavelength λ = c/ν), Adriaan Walther2–4 defined the generalized spectral radiance (actually a spectral radiance per unit frequency) at a point R on plane P by

Lν(R, ŝ) = (k/2π)² cos θ ∫_P d²R′ ⟨e*(R − R′/2) e(R + R′/2)⟩ exp(−i k ŝ · R′),   (3.1)
where θ is the usual angle between ŝ and the normal to the plane, k = 2π/λ = 2πν/c, e(R) is a scalar electric field (e.g., one component of the electric field vector), the angle brackets denote a statistical average over an ensemble of scalar fields, and R and R′ are 3D position vectors specifying points in the plane.
The expectation in (3.1) is a familiar quantity from coherence theory, the mutual intensity, and (3.1) states that (within constants) the generalized spectral radiance is the 2D Fourier transform of the mutual intensity. Thus Walther’s radiance definition makes an important connection between radiometry and coherence.
Another connection, noted by Walther in his 1968 paper, is that the integral in (3.1) is a Wigner Distribution Function (WDF), which originated in quantum mechanics but is now an important tool in phase-space optics; see the book by Testorf et al.5 and the review article by Alonso.6
By construction, the WDF is real, but it can take on negative values, which of course the radiance defined by (2.1) cannot. Nevertheless, when one uses the generalized radiance to compute irradiance, radiant intensity or flux with the integral formulas from Sec. 2, the results are nonnegative.
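To see the negativity concretely, one can compute a discrete Wigner distribution for a simple deterministic field. The two-Gaussian field and grid below are illustrative choices, and the shift-correlation-then-FFT discretization is a standard recipe rather than anything specific to this paper:

```python
import numpy as np

# Discrete 1D Wigner distribution of a coherent two-Gaussian field.
# W(x, u) ~ sum over shifts xi of e*(x - xi/2) e(x + xi/2) exp(-2*pi*i*u*xi);
# interference between the two Gaussians makes W negative in places.
N = 256
x = np.linspace(-8.0, 8.0, N)
field = np.exp(-(x - 3.0) ** 2) + np.exp(-(x + 3.0) ** 2)

W = np.zeros((N, N))
s = np.arange(-N // 2, N // 2)            # integer half-shift index
for i in range(N):
    idx_m, idx_p = i - s, i + s
    valid = (idx_m >= 0) & (idx_m < N) & (idx_p >= 0) & (idx_p < N)
    corr = np.zeros(N, dtype=complex)
    corr[valid] = field[idx_m[valid]] * field[idx_p[valid]]   # real field
    spectrum = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(corr)))
    W[i] = spectrum.real                  # imaginary part is negligible here

print(W.min() < 0.0, W.max() > 0.0)       # negative lobes alongside positive ones
```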
In quantum optics the classical correlation in (3.1) is replaced by a normally ordered expectation of field operators,

⟨ê−(R − R′/2) ê+(R + R′/2)⟩,   (3.2)

where ê+ and ê− are electric field operators and the angle brackets now denote a quantum-mechanical expectation. The other radiometric quantities can then be computed just as in the classical case.
An example of a quantum imaging system that can be described radiometrically by (3.2) is shown in Fig. 2. In the optical or infrared realm, the source of entangled photons in this figure can be a process called spontaneous parametric down-conversion (SPDC), but in medical imaging with positron-emission tomography (PET) it can be the two entangled gamma-ray photons produced by electron-positron annihilation.
TRANSPORT AND CONSERVATION OF RADIANCE
Boltzmann transport equation
The Boltzmann equation is an equation for the time derivative of the spectral photon radiance Lp,ε(R, ŝ, ε, t), or Lp,ε for short. This derivative has contributions from the physical processes of absorption, emission, scattering and propagation of radiation, so we have

∂Lp,ε/∂t = (∂Lp,ε/∂t)prop + (∂Lp,ε/∂t)abs + (∂Lp,ε/∂t)emiss + (∂Lp,ε/∂t)scat,   (4.1)

where the subscripts have the obvious meanings. The four terms in this equation are derived in Sec. 10.3 of Ref. 1, and the resulting transport equation is

∂Lp,ε/∂t = −cm ŝ · ∇Lp,ε − cm μtot Lp,ε + cm 𝒦Lp,ε + cm Ξp,ε,   (4.2)
where cm is the speed of light in the medium, μtot is the total attenuation coefficient (with contributions from both absorption and scattering processes), 𝒦 is an integral operator describing the angular and energy dependence of the scattering, and Ξp,ε describes the photon source. All of these quantities can depend in general on R, ŝ, ε and t.
For a self-luminous volumetric source, as in fluorescence microscopy or nuclear medicine, the units of the source term are [Ξp,ε] = (photons/s) / (m³ · sr · eV). If the source emits isotropically, we can write

Ξp,ε(R, ŝ, ε, t) = f(R, ε, t) / 4π,   (4.3)

where f(R, ε, t) is the volumetric source we wish to image. Note that the 4π in (4.3) carries the units of sr.
For planar Lambertian sources (a common model in optics), the source is confined to a plane (say, z = 0), and the source term is written as

Ξp,ε(R, ŝ, ε, t) = (1/π) f(r, ε, t) δ(z).   (4.4)

Now f(r, ε, t) is the planar Lambertian source we wish to image. Note that [δ(z)] = 1/m.
If the source is independent of time, or at least slowly varying, we can derive a steady-state form of the Boltzmann equation by setting ∂Lp,ε/∂t = 0; the result is

ŝ · ∇Lp,ε + μtot Lp,ε − 𝒦Lp,ε = Ξp,ε,   (4.5)

where the source term Ξp,ε is proportional to the object f to be imaged and, within constants, the left side of (4.5) defines the operator that relates the radiance to the source.
If we know the source distribution and the characteristics of the propagation medium and/or imaging system, we can calculate the spectral photon radiance at any point outside the source volume V.
Conservation of radiance along a ray in lossless media
A simple but important example of the Boltzmann formalism is to consider a volumetric, monochromatic, time-independent source f(R) inside a medium with constant speed of light cm and no absorption, scattering or other radiation sources. For points outside the source region, (4.5) becomes

ŝ · ∇Lp,ε(R, ŝ, ε) = 0,   (4.9)

so the radiance at such points is found by integrating (4.5) through the source region along the direction ŝ. If we define ℓ = z − z′ and go back to a general vector notation, we can write this result as

Lp,ε(R, ŝ, ε) = (1/4π) ∫_0^∞ dℓ f(R − ℓŝ, ε).   (4.10)

As illustrated in Fig. 3, the interpretation of this equation is that Lp,ε(R, ŝ, ε) is obtained by integrating the source distribution along a line parallel to ŝ and passing through the point R. The operator that performs this integration is a special case of the operator defined in (4.7), and its output is a spectral photon radiance with the indicated arguments.
It follows immediately from (4.10) that

Lp,ε(R1, ŝ, ε) = Lp,ε(R2, ŝ, ε) = Lp,ε(R3, ŝ, ε),   (4.11)

where R1, R2 and R3 are three different points along the same line through the source, but all three lying outside the source region (see Fig. 3).
Since this expression holds for all photon energies and all units for the flux, it is valid for the total radiance as well as the spectral photon radiance:

L(R1, ŝ) = L(R2, ŝ) = L(R3, ŝ).   (4.12)
Noting that the three radiances in (4.12) have the same angular argument ŝ, we can call this line a ray, and (4.12) shows that radiance is constant along a ray in a homogeneous, lossless medium. Alternatively, we can define a ray as the path along which the radiance is constant, a path found by solving the Boltzmann transport equation.
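These conservation statements are easy to verify numerically: integrating an isotropic source along the same ray from two different exterior points, as in (4.10), must give the same radiance. The Gaussian-blob source and all numerical parameters below are illustrative:

```python
import numpy as np

# Sketch of Eq. (4.10): for an isotropic volumetric source f(R) in a
# lossless medium, the spectral photon radiance at R in direction s_hat is
# (1/4*pi) times the line integral of f along the back-projected ray.

def source(points):
    """Isotropic Gaussian blob centered at the origin; points is (n, 3)."""
    return np.exp(-np.sum(points ** 2, axis=-1))

def radiance(R, s_hat, l_max=60.0, n=30000):
    """Midpoint-rule line integral of the source along R - l*s_hat."""
    dl = l_max / n
    l = (np.arange(n) + 0.5) * dl
    points = R[None, :] - l[:, None] * s_hat[None, :]
    return np.sum(source(points)) * dl / (4.0 * np.pi)

s_hat = np.array([0.0, 0.0, 1.0])
R1 = np.array([0.1, 0.2, 10.0])    # two points on the same ray, both
R2 = np.array([0.1, 0.2, 25.0])    # outside the source region
print(radiance(R1, s_hat), radiance(R2, s_hat))   # equal up to quadrature error
```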
Moreover, this same conclusion holds if we consider a lossless optical system, i.e., one with no absorption, scattering or reflection losses, and trace any ray through it as in Fig. 4. Again, radiance is constant along the ray, but now the ray direction depends on which portion of the ray we examine, so we have to write

L(R1, ŝ1) = L(R2, ŝ2) = L(R3, ŝ3),   (4.13)
so long as all three points are in media of the same refractive index (e.g., air). If the points are in homogeneous media of different refractive indices n1, n2 and n3, but again there are no losses, it can be shown that L/n² is the conserved quantity:

L(R1, ŝ1)/n1² = L(R2, ŝ2)/n2² = L(R3, ŝ3)/n3².   (4.14)
Conservation of radiance in a lossless paraxial imaging system
ABCD matrices are an elementary tool for analyzing paraxial optical systems. If we consider two arbitrary planes perpendicular to the optical axis in a system with rotational symmetry, the ray height x and the angle of the ray with respect to the optical axis, α, in the two planes are well known to be related by

(x2, α2)ᵀ = [A B; C D] (x1, α1)ᵀ,   (4.15)

where the four scalars A, B, C and D characterize the optical elements between the planes.
If the system is paraxial but not rotationally symmetric, then we write

(r2, s⊥2)ᵀ = M (r1, s⊥1)ᵀ,   M = [A B; C D],   (4.16)

where s⊥ is the projection of the flux direction ŝ onto the plane (i.e., perpendicular to n̂), and A, B, C and D are all 2 × 2 matrices, so M is 4 × 4. We refer to matrices in this form, with 2 × 2 matrix blocks, as block ABCD matrices rather than scalar ABCD matrices.
The principle of conservation of radiance in a lossless paraxial system is most easily stated if we write the radiance as a function of position and direction in a plane, L(r, s⊥). With this notation, the radiance on an output plane is related to the radiance on a parallel input plane by

Lout(r2, s⊥2) = Lin(r1, s⊥1),   (r2, s⊥2)ᵀ = M (r1, s⊥1)ᵀ.   (4.17)
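A minimal sketch of this bookkeeping, assuming a free-space/thin-lens/free-space system with illustrative distances and focal length: the output radiance at a phase-space point v = (r, s⊥) is the input radiance at M⁻¹v.

```python
import numpy as np

def free_space(d):
    """4x4 block ABCD matrix for free-space propagation over distance d."""
    M = np.eye(4)
    M[0:2, 2:4] = d * np.eye(2)           # r -> r + d * s_perp
    return M

def thin_lens(f):
    """4x4 block ABCD matrix for a thin lens of focal length f."""
    M = np.eye(4)
    M[2:4, 0:2] = -(1.0 / f) * np.eye(2)  # s_perp -> s_perp - r / f
    return M

# illustrative system: 0.2 m of air, a 0.1 m lens, then 0.3 m of air
M = free_space(0.3) @ thin_lens(0.1) @ free_space(0.2)

def L_in(v):
    """Illustrative smooth input-plane radiance on phase space."""
    return np.exp(-np.dot(v, v))

def L_out(v):
    """Conservation of radiance: pull v back to the input plane."""
    return L_in(np.linalg.solve(M, v))

v = np.array([0.01, -0.02, 0.03, 0.005])
print(L_out(v))   # the input radiance at the conjugate phase-space point
```

Each element matrix has unit determinant, so the composite map preserves phase-space volume, consistent with lossless transport.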
Transport equations for physical optics
A lesser-known application of ABCD matrices is to paraxial diffraction problems. The field propagation from input to output plane is described in the Fresnel approximation by a Collins-type diffraction integral whose kernel is parameterized by the same A, B, C and D matrices.7
Friberg applied this propagator to the Walther generalized radiance and showed that the physical-optics radiance is constant along the ray defined by geometrical optics. The same conclusion was reached by Walther (1978) by a stationary-phase approximation, valid asymptotically in the limit of zero wavelength. Friberg’s derivation does not require this limit (though it does use the Fresnel approximation), and it is valid for any state of coherence of the fields.1, 6
A fully quantum mechanical version of Fresnel diffraction theory has been given by Shapiro.9 Combining this theory with the quantum-mechanical form of the Walther generalized radiance in (3.1) leads to a transport equation for quantum radiance, at least for paraxial propagation.
RADIANCE DETECTORS FOR IMAGING SYSTEMS
All radiation detectors are sensitive to the radiance incident on them. In particular, if we exclude so-called thermal detectors, they all respond in some way to the spectral photon radiance defined in Sec. 2, but in the subsections below we distinguish four different classes of detectors, depending on what information they yield about the spectral photon radiance function.
Integrating detectors

A CCD or CMOS camera produces a pixellated image related to the optically induced electrical charge accumulated at each pixel. A first approximation is that the mean charge (averaged over many images of the same object) is proportional to the mean number of incident photons with energy above the bandgap energy during the exposure time. In fact, however, the mean charge produced is influenced also by the spectral, spatial, angular and temporal distribution of the flux. If the spectral photon radiance is known in the detector plane, and if the sensor array responds linearly to the radiation incident on it, the most general expression for the mean output at pixel m has the form1

ḡm = τ ∫ d²r ∫_2π dω cos θ ∫_0^∞ dε ηm(R, ŝ, ε) Lp,ε(R, ŝ, ε),   (5.1)
where ηm(R, ŝ, ε) specifies the quantum efficiency of the detector, τ is the exposure time and R is a general 3D vector, but the spatial integral is over the two variables needed to specify points in the detector plane.
The reference plane P for the radiance need not coincide with the detector plane; other locations for P may also be chosen. Various optical elements such as lenslets, pinholes, spectral filters, choppers or other optical modulators may be placed in the optical path between P and the detector array. The general form of the mean CCD/CMOS output signal in this case is

ḡm = ∫_0^τ dt ∫_P d²r ∫_2π dω cos θ ∫_0^∞ dε dm(R, ŝ, ε, t) Lp,ε(R, ŝ, ε, t).   (5.2)
The function dm(R, ŝ, ε, t) describes the propagation of radiance through the optical elements between plane P and the detector as well as the response of the detector to the radiance incident on it. One way to summarize (5.2) is to say that each mean pixel output is a linear functional of the radiance in plane P, where the optical elements between the reference plane and the detector give us great freedom in choosing the weighting functions dm(R, ŝ, ε, t). Cell-phone cameras and DSLRs are now available with 50 MP sensors, which means that we can measure 50 million independent linear functionals of the radiance function in an arbitrary plane.
One application of (5.2), of considerable current interest, is the light-field camera, pioneered by Ren Ng10 and illustrated in Fig. 5. The microlens array makes the photosensor array sensitive to the angular dependence of the radiance and not just the irradiance, and that allows the image to be refocused on different planes after it is recorded. Plane P in (5.2) is tangent to the left face of the microlens array in Fig. 5, so the light-field camera essentially captures samples of the radiance L(r, s⊥) in that plane.
Pixellated photon-counting detectors
So-called photon counting detectors have circuitry at each pixel to detect the presence of a photoelectric interaction in the pixel and record the number of interactions (“counts”) during some exposure time for later readout. What is counted, therefore, is photoelectric interactions rather than photons; even a classical electromagnetic field induces photoelectric interactions. Nevertheless, we will continue to use common parlance and refer to a photoelectric interaction as absorption of a photon.
As a practical matter, it is easier to distinguish radiation-induced photoelectric interactions from thermally induced electrons (dark current) if some form of optical gain is used. Intensified CCDs, for example, use an optical image intensifier ahead of the CCD sensor so that a single optical photon absorbed in the photocathode of the intensifier produces a burst of light and hence multiple photons absorbed in the CCD element.
Another way of incorporating gain is to consider an array of avalanche photodiodes (APDs) as the image detector. In this case an absorbed photon produces an electron-hole pair, and each charge carrier is accelerated by an applied electric field to energies greater than the bandgap energy, so it can produce additional electron-hole pairs.
For x rays or gamma rays, the gain can be provided by scintillation materials for which each absorbed high-energy photon produces a burst of low-energy optical photons that can be detected by optical sensors.
Perhaps surprisingly, (5.1) and (5.2) are still valid descriptions of the mean output at pixel m for arrays of photon-counting detectors. The actual outputs, gm without the bar, are integers for photon-counting arrays, but the means are still linear functionals of the radiance so long as the event rate is low enough that there is no loss of counts from temporally overlapping events (pile-up) and no overflow of the registers that store the number of counts at each pixel.
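In a simulation, this description reduces to drawing Poisson integers whose means are linear functionals of a discretized radiance; everything below (grid, weights, radiance values) is illustrative:

```python
import numpy as np

# The outputs of an M-pixel photon-counting array are independent Poisson
# random integers whose means are linear functionals of the radiance.
# Here the radiance and the pixel weighting functions d_m are reduced to
# vectors on a coarse grid.
rng = np.random.default_rng(7)

n_grid, M = 200, 16
radiance = np.exp(-np.linspace(-3, 3, n_grid) ** 2)   # discretized radiance
D = rng.uniform(0.0, 0.2, size=(M, n_grid))           # pixel weights d_m

g_bar = D @ radiance          # mean counts: linear in the radiance
g = rng.poisson(g_bar)        # actual counts: integers, Poisson-distributed

print(g_bar[:4])
print(g[:4])                  # integer realizations scattered about the means
```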
Position-sensitive photon-counting detectors
It is also possible to build photon-counting detectors without pixels. A common configuration is shown in Fig. 6, which shows a large-area silicon photodiode with strip electrodes on the four edges. Light incident in a spot, as shown, creates signals in all four electrodes, and a simple centroid calculation provides a measure of the x-y position of the spot.
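A hedged sketch of the centroid step: assuming the common tetra-lateral model in which photocurrent divides linearly between opposite edge electrodes, the spot position follows from simple signal ratios. The calibration constant and signal values below are illustrative, and a real device requires a measured calibration.

```python
def centroid(s_left, s_right, s_bottom, s_top, half_width=1.0):
    """Estimate the (x, y) spot position from four edge-electrode signals.

    Assumes the tetra-lateral model: charge divides linearly between
    opposite electrodes.  half_width is an assumed calibration constant
    giving the half-size of the active area.
    """
    x = half_width * (s_right - s_left) / (s_right + s_left)
    y = half_width * (s_top - s_bottom) / (s_top + s_bottom)
    return x, y

# a spot 20% of the way toward the right edge, centered vertically
print(centroid(s_left=0.4, s_right=0.6, s_bottom=0.5, s_top=0.5))
```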
The primary application of position-sensitive detectors is to track a beam or localized spot of light, but the addition of an image intensifier makes it possible for them to respond to individual photoelectric interactions. As in Sec. 5.2, a single interaction in the photocathode of the intensifier produces a burst of light and hence measurable signals at each of the four output electrodes.
Thus the x-y coordinates of the individual interactions can be computed and stored in real time, as the events occur; no integration time is needed, and there are no detector pixels. Instead, the data consist of a list of the coordinates, possibly tagged with the time at which the event occurred. This form of data storage is referred to as list mode, in contrast to the pixellated or binned mode implicit in (5.1) and (5.2). Listmode data storage has received considerable attention in the radiology literature,11–13 and it is used routinely in PET.
Photon-processing detectors

A photon-absorption event in a detector can be characterized by several attributes, including the position R of the interaction, the energy ε deposited in the detector by the event, the direction ŝ of the radiant flux that produced the event and possibly the time t at which the event occurs. For the jth event (j = 1, …, J), we can organize these attributes into an attribute vector with 5-7 components (2 or 3 components for Rj, depending on the thickness of the detector; one component for the energy εj; two components for the unit vector ŝj; and possibly the time tj). We denote this attribute vector for the jth event by Aj.
Of course we cannot directly measure the components of an attribute vector, but we can estimate them from suitable data sets. For example, if the position-sensitive detector of Fig. 6 is operated with an image intensifier, the pulse signals from the four electrodes can be used to estimate xj and yj, which are two of the 5-7 components of Aj. To get sufficient information to estimate additional components, we need to add optical elements such as lenslets or color filters in front of the detector to alter its sensitivity to the spectral photon radiance [see (5.2) and Fig. 5].
The goal of a photon-processing detector is to estimate the components of each attribute vector as accurately as possible. We denote the estimated attributes with tildes. Thus, if Aj = (Rj, ŝj, εj), then Ãj = (R̃j, s̃j, ε̃j).
Of course these estimates will not constitute perfect measurements of the true photon attributes, but statistical estimation theory1, 14, 15 provides us with the tools to analyze and minimize the bias and variance of the estimates with respect to the true attributes.
An efficient estimator is one that is unbiased and has the smallest possible variance, as specified by the Cramér-Rao lower bound. It is well known from statistical estimation theory that the maximum-likelihood (ML) estimator is efficient in any problem where an efficient estimator exists, and in any case the ML estimator is asymptotically efficient as more or better data are acquired about each photon-absorption event.
For a review of ML estimation as applied to photon-processing detectors, see Barrett et al.16
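As a toy version of such ML estimation, the sketch below estimates the 1D position of a single event from Poisson sensor signals by maximizing the Poisson log-likelihood over a grid of candidate positions. The Gaussian response model, gain and sensor layout are illustrative assumptions, not the models of Ref. 16:

```python
import numpy as np

# Toy maximum-likelihood position estimate for one photon-absorption event:
# a scintillation burst at x0 produces Poisson signals in a few sensors
# with an assumed Gaussian mean response.
rng = np.random.default_rng(1)

sensor_x = np.array([-1.5, -0.5, 0.5, 1.5])     # sensor positions

def mean_signals(x0, gain=200.0, sigma=0.7):
    """Mean sensor outputs for an event at x0 (assumed response model)."""
    return gain * np.exp(-((sensor_x - x0) ** 2) / (2 * sigma ** 2))

x_true = 0.3
data = rng.poisson(mean_signals(x_true))        # one event's sensor data

# ML estimate by grid search: maximize the Poisson log-likelihood
# sum_m [ g_m log gbar_m(x) - gbar_m(x) ] over candidate positions.
candidates = np.linspace(-2, 2, 4001)
logL = np.array([np.sum(data * np.log(mean_signals(x)) - mean_signals(x))
                 for x in candidates])
x_ml = candidates[np.argmax(logL)]
print(x_ml)    # close to x_true, limited only by photon noise
```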
To summarize what we know about how to implement optimum estimation of photon attributes, an ideal photon-processing detector must:
• Use a gain mechanism and multiple sensors (not just a single pixel) to obtain multiple non-redundant measurements for a single absorbed photon.
• Collect data from all sensors that respond to each event, at full precision; this form of data has been called super listmode.
• Use the super-listmode sensor data and specifically maximum-likelihood estimation (MLE) to estimate multiple attributes (e.g., 3D position, direction, energy, time) of each absorbed photon.
• Store all ML estimates of attributes in a list, at full precision.
Figure 7 shows a conceptual diagram of a photon-processing detector capable of operating in the visible or NIR portions of the spectrum. A practical realization of a photon-processing detector for x rays, gamma rays and charged particles is the intensified Quantum Imaging Detector (iQID) developed by Brian Miller.17, 18 Computer systems for performing the MLE step in real time are discussed by Hesterman et al.19
PROPERTIES OF IMAGE DATA FROM PHOTON-PROCESSING DETECTORS
So far we have discussed the mean values of data obtained by integrating detectors and photon-counting arrays (see Secs. 5.1 and 5.2), but we have not done the same for photon-processing detectors, and we have not discussed the statistical properties of the data in any case.
List-mode data and point processes
In practice, the estimated photon attributes will be stored in listmode, where each list entry is a K-dimensional vector, with K being the number of estimated attributes per event; K = 6 in our example where 3D position, two components of a unit vector and an energy are estimated.
A mathematically equivalent representation of the same information is a random point process, a sum of delta functions defined by

u(R, ŝ, ε) = Σ_{j=1}^{J} δ(R − Rj) δ(ŝ − ŝj) δ(ε − εj).   (6.1)

For a discussion of angular delta functions, δ(ŝ − ŝj), see Sec. 2.4.7 in Barrett and Myers,1 and for a thorough treatment of random point processes, see Sec. 11.3.
Because of the independence of the photon-absorption events, the number of events J is a Poisson random variable, and u(R, ŝ, ε) is called a Poisson random process.
The mean of this point process is obtained by averaging over the statistics of each of the J vectors Aj and over J itself for a given object f that produced the data. The result is a function of A which depends on f, so we can denote it as ū(A|f), or more explicitly as ū(R, ŝ, ε|f). Technically, this function can be regarded as a vector ū(f) in the Hilbert space 𝕃2(ℝK), which is the space of square-integrable functions of K continuous variables.
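Sampling such a process for simulation purposes is straightforward: draw the number of events from a Poisson law and then draw each attribute vector independently from the normalized mean. The attribute distributions below are illustrative placeholders:

```python
import numpy as np

# Sketch of sampling a Poisson point process for listmode data: the number
# of events J is Poisson with mean J_bar, and the attribute vectors A_j
# are drawn independently from the normalized mean, pr(A|f).  Here
# A = (x, y, energy) with a simple separable distribution.
rng = np.random.default_rng(42)

J_bar = 1000.0
J = rng.poisson(J_bar)                 # number of events in this exposure

# attributes: positions uniform on a square, energies normal about 140 keV
xy = rng.uniform(-1.0, 1.0, size=(J, 2))
energy = rng.normal(140.0, 10.0, size=(J, 1))
A = np.hstack([xy, energy])            # listmode data: J rows, K = 3 columns

print(A.shape)                         # one attribute vector per event
```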
Linear imaging of emissive objects
For self-luminous objects as in astronomy, nuclear medicine or fluorescence microscopy, Caucci and Barrett13 have shown that

ū(f) = ℒf,   (6.2)
where ℒ is a linear operator that maps 𝕃2(ℝ3) to 𝕃2(ℝK). We emphasize that ℒ is linear even when the relation between raw data (e.g., sensor signals) and estimated attributes is nonlinear. Linearity depends only on the independence of the individual photon interactions.
For comparison, all linear photon-counting imaging systems that record only the number of counts in each of M bins are described by

ḡ = ℋf,   (6.3)
where now ℋ maps 𝕃2(ℝ3) to an M-dimensional Euclidean space 𝔼M. In the terminology of Barrett and Myers,1 ℋ is a continuous-to-discrete (CD) operator but ℒ is a continuous-to-continuous (CC) operator. The crucial difference between CC and CD operators is that the latter must necessarily have an infinite-dimensional null space; the system ℋ maps a vector in an infinite-dimensional Hilbert space to a finite set of numbers, so there is an infinite set of objects fnull that yield no data at all: ℋfnull = 0. Because real objects almost always have null components for any CD system, the object itself cannot be recovered from the discrete data, and even simple linear functionals such as integrals of the object over voxels cannot be estimated without a possibly infinite bias. A CC operator like ℒ may have a null space, but if K ≥ 3 the null space is not demanded by dimensionality considerations, so it is possible that many of the indeterminacy problems inherent in the world of conventional digital imaging may disappear with photon processing rather than mere photon counting.
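The dimensionality argument can be illustrated with a discretized stand-in: any M × n matrix with M ≪ n has a null space of dimension at least n − M, so a generic CD system matrix annihilates an enormous family of objects. All sizes below are illustrative:

```python
import numpy as np

# Discretize the object on n_grid points (a crude stand-in for an
# infinite-dimensional object space) and model a CD system as an
# M x n_grid matrix H with M << n_grid.  rank(H) <= M, so the null
# space has dimension >= n_grid - M.
rng = np.random.default_rng(0)

n_grid, M = 500, 20
H = rng.normal(size=(M, n_grid))            # generic CD system matrix

rank = np.linalg.matrix_rank(H)
null_dim = n_grid - rank
print(rank, null_dim)                       # small rank, huge null space

# an explicit null object: remove the measurable component of a random f
f = rng.normal(size=n_grid)
U, s, Vt = np.linalg.svd(H)
f_null = f - Vt[:rank].T @ (Vt[:rank] @ f)  # project onto the null space
print(np.max(np.abs(H @ f_null)))           # ~0: f_null yields no data at all
```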
Nonlinear imaging of transmissive objects
In many situations in optics and radiology, an image is formed by illuminating a transmissive object with a known radiation source. In these cases, the object is described by the spatial and spectral distributions of its attenuation and scattering coefficients. In terms of the Boltzmann equation (4.5), the object properties are embedded in the attenuation term cmμtotLp,ε and the scattering term cm𝒦Lp,ε rather than the source term cmΞp,ε. The solution of the Boltzmann equation shows that the radiance is exponentially attenuated as it passes through the object, so the radiance on the image detector depends nonlinearly on the object.
We can still use (6.3) to describe the mean image data with a photon-counting or integrating detector, and (6.2) for a photon-processing detector, but now the operators ℋ and ℒ are nonlinear, and it is much more difficult to say anything about null functions.
Probability density functions for photon-processing data
The full probability law for a univariate Poisson random variable is determined by its mean. The same thing is true for a Poisson random process, but now the mean is a function, ū(R, ŝ, ε), and the probability law is a probability density function or PDF.
The 6D PDF for the attribute estimates is obtained by normalizing the mean:

pr(R, ŝ, ε|f) = ū(R, ŝ, ε|f) / J̄(f),   (6.4)

where J̄(f), the mean number of events, is the integral of ū(R, ŝ, ε|f) over all attributes.
Computation of the conditional mean of the Poisson point process
According to (6.4), the mean of the Poisson point process tells you everything about the statistics of the process, conditional on a specific object f, but how do we compute that mean?
The first approach is to recognize that, within constants, the mean ū(R, ŝ, ε|f) is just a blurred version of the spectral photon radiance Lp,ε in plane P, with the blurring arising from the inaccuracies in estimating the photon attributes from the sensor data. Ways of accounting mathematically for this estimation blur have been discussed by Caucci and Barrett,13 so finding the mean of photon-processing data, and hence the full PDF by (6.4), boils down to computing the spectral photon radiance in plane P.
As we saw in Sec. 4.1, the spectral photon radiance is the solution of the Boltzmann transport equation, and often analytical solutions can be found; a simple example is (4.10). In many other situations the spectral photon radiance in a plane of interest P can be computed from the radiance in another plane, such as a source plane, by simple invocation of the principle of conservation of radiance along a ray, as discussed in Secs. 4.2 and 4.3. As we know from Sec. 4.4, these conservation and propagation laws apply to physical-optics and quantum-optics versions of radiance as well.
For emissive objects, it may be possible to approximate the linear operator ℒ with a very large matrix and find its elements by simulation of the imaging system. For transmissive objects, the operator is nonlinear, but nevertheless the Boltzmann equation can be solved to get the spectral photon radiance on the detector and from there get the mean data for any of the four detection modes discussed in Sec. 5. For an example of this approach as applied to diffuse optical imaging, see Jha et al.20–22
When analytical solutions are not readily available, we can use Monte-Carlo simulation to estimate the spectral photon radiance. There is even a version of Monte-Carlo simulation, called adjoint Monte Carlo,1 that allows direct estimation of the spectral photon radiance blurred by the estimation errors.
RELATION TO IMAGE QUALITY
In much of the medical-imaging community, and increasingly in other areas of imaging, image quality is defined in terms of the performance of some observer on some clinically or scientifically relevant task. Generically, the task can be either classification or estimation. An estimation (or quantitation) task is to determine a numerical value for one or more parameters associated with the object being imaged. Examples include cardiac ejection fraction or the concentration of a dye or contrast agent in some region of interest. In a classification task, the goal is to assign the object to one of two or more classes. A simple example is tumor detection, where the classes are tumor-present and tumor-absent.

The observer for a classification task can be either a human or a mathematical algorithm. The performance of a human observer can be measured by well-established psychophysical methods and analysis of the resulting ROC (receiver operating characteristic) curves. Psychophysical studies are complicated and time-consuming, however, so there is considerable interest in developing mathematical model observers that will predict human performance.

Another form of mathematical observer is the ideal observer, which makes optimal use of the available data in such a way as to maximize an ROC-based figure of merit. If we know the performance of the ideal observer, we can compare it to the performance of a human and determine how well the human can extract the information contained in the data. Often the performance of the true ideal observer is difficult to compute, so we use the optimal linear observer, which we call the Hotelling observer. It is optimal among all observers constrained to perform only linear operations on the data.
An important variant on the Hotelling observer is the channelized Hotelling observer (CHO), which is the Hotelling observer applied not to an image but to a reduced data set consisting of the sampled outputs of a small number of linear filters, called channels. An important example is spatial-frequency-selective channels such as those known to be operative in the human visual system.
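As a small numerical sketch of these two observers (with an entirely hypothetical signal and covariance, not taken from any system discussed here), the Hotelling observer applies the template K⁻¹Δḡ to the data, and the CHO applies the same construction after projecting onto a few channels:

```python
import numpy as np

M = 64                                   # number of pixels (hypothetical)
x = np.arange(M)

# Hypothetical class mean difference (a weak Gaussian signal) and data covariance.
delta_gbar = np.exp(-0.5 * ((x - 32) / 3.0) ** 2)
K = 0.5 * np.eye(M) + 0.1 * np.exp(-np.abs(np.subtract.outer(x, x)) / 5.0)

# Hotelling observer: optimal linear template w = K^{-1} (gbar_2 - gbar_1),
# with detectability SNR^2 = Δḡᵀ K⁻¹ Δḡ.
w = np.linalg.solve(K, delta_gbar)
snr2 = delta_gbar @ w

# Channelized Hotelling observer: reduce the data with a few frequency-selective
# channels (columns of U), then apply the Hotelling construction in channel space.
n_ch = 4
U = np.stack([np.cos(2 * np.pi * (k + 1) * x / M) for k in range(n_ch)], axis=1)
dv = U.T @ delta_gbar                    # channel-space mean difference
Kv = U.T @ K @ U                         # channel-space covariance
snr2_cho = dv @ np.linalg.solve(Kv, dv)
```

Because the CHO is the optimal linear observer within the channel subspace, its SNR² can never exceed that of the full Hotelling observer; with well-chosen channels it comes close at a fraction of the dimensionality.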
For all of these tasks and observers, task performance is limited by the randomness in the data. So far we have discussed randomness arising from photon noise in different kinds of detectors, but there is also an inevitable statistical variability in the objects being imaged,23 and in some cases the imaging system itself introduces a form of noise.24
To account for the object variability in a classification task, we have to understand the statistics of the data conditional on a class C of objects. For any data set g, including the listmode data acquired by photon-processing detectors, it follows from elementary laws of probability that

pr(g|C) = ⟨pr(g|f)⟩_{f|C} ,   (7.1)

where pr(g|f) is the PDF of the data given a particular object f, and the expectation is over all objects drawn from the class.
To visualize this expectation and its impact on classification performance, we can imagine using an accurate simulation program to generate N sample objects from class C. Denoting the nth sample object as fn (n = 1, …, N), we can approximate pr(g|C) by the sample average

pr(g|C) ≈ (1/N) Σ_{n=1}^{N} pr(g|fn) .   (7.2)
For any form of photon noise, which is the subject of this paper, the PDF pr(g|fn) is fully determined by its conditional mean ḡ(fn). For a photon-counting detector with M pixels, therefore, (6.3) and (7.2) show that

pr(g|C) ≈ (1/N) Σ_{n=1}^{N} Π_{m=1}^{M} [ḡm(fn)]^{gm} exp[−ḡm(fn)] / gm! ,   (7.3)
where g is a vector in an M-D space with components (g1, g2, …, gM). These components are statistically independent conditional on an object f, so pr(g|f) is a product of M univariate Poissons, but the average over objects in (7.1) introduces statistical dependence. The accuracy of the approximation in (7.3) increases as the number of sample objects, N, increases.
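A minimal numerical sketch of the approximation (7.3), with hypothetical mean images ḡ(fn) standing in for the output of an object-simulation program, might look like:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)

M, N = 16, 20                        # pixels, sample objects (hypothetical sizes)
# Hypothetical sample objects: each fn is represented by its mean image ḡ(fn).
gbar = rng.uniform(5.0, 15.0, size=(N, M))

def pr_g_given_class(g, gbar):
    """Approximate pr(g|C) = (1/N) Σ_n Π_m Poisson(g_m | ḡ_m(f_n)), cf. (7.3)."""
    # Sum log-pmfs over pixels for each object, then average the likelihoods,
    # working in log space for numerical stability.
    log_like = poisson.logpmf(g[None, :], gbar).sum(axis=1)   # shape (N,)
    m = log_like.max()
    return np.exp(m) * np.exp(log_like - m).mean()

g = rng.poisson(gbar[0])             # one data vector drawn from the first object
p = pr_g_given_class(g, gbar)
```

With many pixels the per-object likelihoods span many orders of magnitude, so averaging them in log space, as above, is essential in practice.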
To display this M-D PDF, we consider two components at a time. For example, we can plot the bivariate PDF pr(gm,gm′|C) as in Fig. 8 for different total exposure times τ. The axes on these plots represent gm and gm′ (not m and m′), and the gray level is the PDF. The leftmost panel is for the largest τ, and the rightmost panel is for the smallest τ.
The points shown by x on all three panels correspond to the conditional means (ḡm(fn), ḡm′(fn)) for the N sample objects used in (7.3). For this illustration, N = 20, and the bivariate PDFs for different objects are well separated on the leftmost panel, where τ is large, so the standard deviation of the counts per pixel is small compared to its mean. For smaller τ, however, the standard deviation of the counts on each pixel increases relative to its mean, and the individual PDFs pr(gm, gm′|fn) merge into one big blob, which for large N will be a good approximation to the class PDF pr(gm, gm′|C). The overall variance and covariance for the class PDF depend on both the exposure time (mean number of counts per pixel for a given object) and the randomness of the objects themselves.
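This dependence can be checked numerically. By the law of total covariance, the class covariance splits into an average Poisson covariance (photon noise, which scales like τ) plus the covariance of the object-dependent means (object variability, which scales like τ²); the following sketch uses purely hypothetical count rates:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 20
# Hypothetical per-object mean count rates for two pixels (m, m').
rates = rng.uniform(2.0, 6.0, size=(N, 2))

ratios = {}
for tau in (100.0, 1.0):
    gbar = tau * rates                          # per-object mean counts at exposure tau
    # Class covariance = photon noise + object variability:
    #   K_C = diag(<ḡ>_C) + Cov_C(ḡ)
    photon_part = np.diag(gbar.mean(axis=0))    # scales like tau
    object_part = np.cov(gbar, rowvar=False)    # scales like tau^2
    ratios[tau] = np.trace(object_part) / np.trace(photon_part)
    print(f"tau = {tau:6.1f}  object/photon variance ratio = {ratios[tau]:.2f}")
```

At large τ the object term dominates and the per-object blobs stand apart; at small τ the Poisson term dominates and the blobs merge, exactly as in Fig. 8.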
A similar illustration holds for photon-processing detectors, but there the discrete index m is replaced by continuous indices like (R, ŝ, ε). The exposure time τ controls the total number of recorded events J in the data list for a particular object, and again the overall variance and covariance for the class PDF depend on both the exposure time and the randomness of the objects. For mathematical details, see Caucci.13
Information content of a photon
Consider an imaging problem where the task is to decide if the object that produced a particular image g was from class C1 or class C2. To perform this task, we need some knowledge of the two class PDFs, pr(g|C1) and pr(g|C2). These densities can again be approximated by expressions like (7.3) if we can accurately simulate objects from the two classes.
A depiction of the data statistics like that in Fig. 8, but for two classes, is given in Fig. 9. It is visually evident that the classification task can be performed with high accuracy at large exposure time (leftmost image in the figure) but not for shorter exposures where the photon noise dominates.
Although a detailed treatment is beyond the scope of this paper, these statements can be made much more precise by specifying the observer that will perform the task (e.g., a human or the Hotelling observer) and by adopting some measure of task performance, such as the area under an ROC curve. If one then plots this performance metric as a function of the mean number of absorbed photons, one can define the information content per photon for a particular imaging system, task and observer. For a full mathematical exposition of this approach, with application to imaging with ionizing radiation, the reader can consult a forthcoming tutorial review article in Physics in Medicine and Biology.25
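As a toy illustration of this idea (a hypothetical one-dimensional object, a fixed known signal, and a simple linear matched-filter observer rather than the full ideal-observer machinery of the cited review), one can estimate the ROC area empirically as a function of the exposure time τ, and hence of the mean number of absorbed photons:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-class problem: mean images differ by a weak Gaussian signal.
M = 32
f1 = np.full(M, 1.0)
f2 = f1 + 0.2 * np.exp(-0.5 * ((np.arange(M) - 16) / 2.0) ** 2)

def empirical_auc(tau, n_trials=2000):
    """ROC area of a matched-filter observer as a function of exposure time tau."""
    g1 = rng.poisson(tau * f1, size=(n_trials, M))   # class-1 Poisson data
    g2 = rng.poisson(tau * f2, size=(n_trials, M))   # class-2 Poisson data
    w = tau * (f2 - f1)                              # linear template: signal difference
    t1, t2 = g1 @ w, g2 @ w                          # scalar test statistics
    # AUC = probability that a class-2 statistic exceeds a class-1 statistic.
    return (t2[:, None] > t1[None, :]).mean()

for tau in (1.0, 10.0, 100.0):
    print(f"tau = {tau:6.1f}  AUC = {empirical_auc(tau):.3f}")
```

The AUC climbs from near 0.5 (guessing) toward 1 as τ, and with it the mean photon count, increases; the slope of such a curve is the raw material for a per-photon information measure.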
SUMMARY AND CONCLUSIONS
This paper has considered two topics that at first blush might appear to be unrelated: radiance and photon noise. In one sense the connection is elementary: All radiation detectors respond to electromagnetic radiation through photoelectric interactions, and the statistical properties of the detector output are related to the inherent Poisson randomness of these interactions. Poisson probability laws are determined by their means, and the mean output from a detector is some functional of the radiance incident on the detector. In practical engineering problems we strive to develop highly linear detectors, so the relation between radiance incident on the detector and the mean of some Poisson process is usually also linear.
In the context of imaging, the radiance incident on the detector is some other functional of the radiance incident on or emitted by the object being imaged; this functional is linear for self-luminous or reflecting objects but nonlinear for transmissive objects. In either case, the basic problem in image analysis or task-based assessment of image quality is to understand the full relationship between object properties and the Poisson statistics of the detector output. This program requires clear definitions of radiance in various circumstances; methods for transport of the radiance from object to imaging detector; mathematical expressions for the mean response of the detector to a given incident radiance; a clear statement of just what constitutes a detector output; an understanding of how the statistics of this output relate to the randomness not only of the photoelectric interactions but also of the objects being imaged, and finally a rigorous way of relating these statistics to image quality.
This paper has attempted to survey the current state of knowledge for all steps in this chain. We started with the usual elementary definition of radiance in terms of radiant flux, in the limit of geometrical optics, and we related this radiance to other radiometric quantities, such as irradiance and radiant intensity in the usual way. A particularly important radiometric quantity for this discussion turned out to be the spectral photon radiance, defined in (2.12).
Then we explored definitions of radiance valid for physical optics and quantum optics, and we studied the transport and conservation of radiance beginning with the Boltzmann transport equation. In particular, the crucial principle that radiance is constant along a ray for lossless optical systems was developed for both geometrical and physical optics.
We discussed four generic types of radiance detector for imaging systems. In addition to the familiar distinction between integrating and photon-counting detectors, we briefly discussed position-sensitive photon-counting detectors, which are usually used for beam-tracking applications but can be used for single-photon counting and position estimation with the addition of an image intensifier or other optical gain medium. An interesting facet of this class of detectors is that the output is a list of estimated interaction positions rather than an array of pixel values. Finally, we introduced a new class of devices, called photon-processing detectors, where multiple attributes of each photon interaction event can be estimated and stored in an expanded list.
The statistical properties of the listmode data from these last two classes of detectors were analyzed by introducing a Poisson point process, defined as a sum of delta functions in photon-attribute space; the mean of this point process is the spectral photon radiance.
We concluded with a brief summary of task-based assessment of image quality, with an emphasis on photon-counting and photon-processing detectors. This analysis led to a definition of the task-based information content of a single photon.
We have benefited significantly from discussions with Abhinav Jha and Eric Clarkson. This research was supported by the National Institutes of Health under grants R37 EB000803 and P41 EB002035.
Barrett, H. H., Hunter, W. C. J., Miller, B. W., Moore, S. K., Chen, Y., and Furenlid, L. R., "Maximum-likelihood methods for processing signals from gamma-ray detectors," plenary paper, IEEE Trans. Nucl. Sci. 56, 725–735 (2009).