Machine learning is emerging as an essential tool in many science and engineering domains, fueled by extraordinarily powerful computers as well as advanced instruments capable of collecting high-resolution and high-dimensional experimental data. However, using off-the-shelf machine learning methods for analyzing scientific and engineering data fails to leverage our vast, collective (albeit partial) understanding of the underlying physical phenomena or models of sensor systems. Reconstructing physical phenomena from indirect scientific observations is at the heart of scientific measurement and discovery, and so a pervasive challenge is to develop new methodologies capable of combining such physical models with training data to yield more rapid, accurate inferences. We will explore these ideas in the context of inverse problems and data assimilation; examples include climate forecasting, uncovering material structure and properties, and medical image reconstruction. Classical approaches to such inverse problems and data assimilation have relied upon insights from optimization, signal processing, and the careful exploitation of forward models. In this talk, we will see how these insights and tools can be integrated into machine learning systems to yield novel methods with significant accuracy and computational advantages over naïve applications of machine learning.
The theory behind compressive sampling presupposes that a given sequence of observations may be exactly represented by a linear combination of a small number of vectors. In practice, however, even small deviations from an exact signal model can result in dramatic increases in estimation error; this is the so-called "basis mismatch" problem. This work provides one possible solution to this problem in the form of an iterative, biconvex search algorithm. The approach uses standard ℓ1-minimization to find the signal model coefficients, followed by a maximum likelihood estimate of the signal model. The process is repeated until a convergence criterion is met. The algorithm is illustrated on harmonic signals of varying sparsity and outperforms the current state of the art.
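As a rough illustration of this kind of biconvex alternation, the sketch below alternates a simple ISTA solver for the ℓ1 coefficient step with a local grid search standing in for the maximum-likelihood update of the harmonic frequencies. It is a minimal toy example, not the algorithm from the paper; all function names, grid sizes, and parameter values are illustrative assumptions.

```python
import numpy as np

def ista_lasso(A, y, lam, n_iter=200):
    """Basic ISTA solver for min_c 0.5*||y - A c||^2 + lam*||c||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ c - y)
        z = c - grad / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return c

def refine_frequencies(t, y, freqs, active, span=1/512, n_grid=21):
    """Locally re-fit each active frequency by least squares (ML under Gaussian noise)."""
    freqs = freqs.copy()
    for k in active:
        best_f, best_res = freqs[k], np.inf
        for f in np.linspace(freqs[k] - span, freqs[k] + span, n_grid):
            trial = freqs.copy()
            trial[k] = f
            A = np.cos(2 * np.pi * np.outer(t, trial[active]))
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            r = np.sum((y - A @ coef) ** 2)
            if r < best_res:
                best_res, best_f = r, f
        freqs[k] = best_f
    return freqs

# Alternate between the two steps (toy harmonic example with off-grid frequencies).
rng = np.random.default_rng(0)
t = np.arange(128)
true = np.cos(2 * np.pi * 0.1037 * t) + 0.5 * np.cos(2 * np.pi * 0.2412 * t)
y = true + 0.05 * rng.standard_normal(t.size)
freqs = np.arange(0, 0.5, 1 / 256)               # nominal on-grid frequencies
for _ in range(5):
    A = np.cos(2 * np.pi * np.outer(t, freqs))
    c = ista_lasso(A, y, lam=1.0)
    active = np.flatnonzero(np.abs(c) > 1e-3)     # atoms selected by the l1 step
    freqs = refine_frequencies(t, y, freqs, active)
```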
The combination of fluorescent contrast agents with microscopy is a powerful technique for obtaining real-time images of
tissue histology without the need for fixing, sectioning, and staining. The potential of this technology lies in the
identification of robust methods for image segmentation and quantitation, particularly in heterogeneous tissues. Our
solution is to apply sparse decomposition (SD) to monochrome images of fluorescently-stained microanatomy to
segment and quantify distinct tissue types. The clinical utility of our approach is demonstrated by imaging excised
margins in a cohort of mice after surgical resection of a sarcoma. Representative images of excised margins were used to
optimize the formulation of SD and tune parameters associated with the algorithm. Our results demonstrate that SD is a
robust solution that can advance vital fluorescence microscopy as a clinically significant technology.
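To make the sparse decomposition (SD) idea concrete, here is a minimal patch-based sketch using scikit-learn's SparseCoder: each image patch is coded against a dictionary split into two sub-dictionaries (one per tissue type) and labeled by which sub-dictionary captures more coefficient energy. The dictionaries D_tumor and D_normal, the patch size, and the sparsity weight are assumptions; in practice they would be built from representative training images, and this is not the formulation tuned in the paper.

```python
import numpy as np
from sklearn.decomposition import SparseCoder
from sklearn.feature_extraction.image import extract_patches_2d

def segment_by_sparse_decomposition(image, D_tumor, D_normal,
                                    patch_size=(8, 8), alpha=1.0):
    """Label each patch by which sub-dictionary absorbs more sparse-coding energy."""
    D = np.vstack([D_tumor, D_normal])            # rows are (unit-norm) dictionary atoms
    coder = SparseCoder(dictionary=D, transform_algorithm='lasso_lars',
                        transform_alpha=alpha)
    patches = extract_patches_2d(image, patch_size)
    X = patches.reshape(len(patches), -1)
    X = X - X.mean(axis=1, keepdims=True)         # remove patch DC before coding
    codes = coder.transform(X)
    k = D_tumor.shape[0]
    tumor_energy = np.abs(codes[:, :k]).sum(axis=1)
    normal_energy = np.abs(codes[:, k:]).sum(axis=1)
    return tumor_energy > normal_energy           # boolean tumor label per patch
```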
This paper describes the design of a deep-UV Raman imaging spectrometer operating with an excitation wavelength
of 228 nm. The designed system will provide the ability to detect explosives (both traditional military explosives
and home-made explosives) from standoff distances of 1-10 meters with an interrogation area of 1 mm x 1 mm to
200 mm x 200 mm. This excitation wavelength provides resonant enhancement of many common explosives, no
background fluorescence, and an enhanced cross-section due to the inverse wavelength scaling of Raman scattering.
A coded-aperture spectrograph combined with compressive imaging algorithms will allow for wide-area
interrogation with fast acquisition rates. Coded-aperture spectral imaging exploits the compressibility of
hyperspectral data-cubes to greatly reduce the amount of acquired data needed to interrogate an area. The resultant
systems are able to cover wider areas much faster than traditional push-broom and tunable filter systems. The full
system design will be presented along with initial data from the instrument. Estimates for area scanning rates and
chemical sensitivity will be presented. The system components include a solid-state deep-UV laser operating at 228
nm, a spectrograph consisting of well-corrected refractive imaging optics and a reflective grating, an intensified
solar-blind CCD camera, and a high-efficiency collection optic.
This paper explores the use of Poisson sparse decomposition methods for computationally separating tumor nuclei from
normal tissue structures in photon-limited microendoscopic images. Sparse decomposition tools are a natural fit for this
application with promising preliminary results. However, there are significant tradeoffs among the different algorithms
used for Poisson sparse decomposition, which are described in detail and demonstrated via simulation.
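One simple family of algorithms for this kind of problem uses multiplicative updates for a Poisson (KL-divergence) data-fit term with an ℓ1 sparsity penalty, as sketched below for a known nonnegative dictionary. This is a generic sketch of one such algorithm, not a reproduction of the specific methods compared in the paper.

```python
import numpy as np

def poisson_sparse_decomposition(y, D, lam=0.1, n_iter=200, eps=1e-12):
    """Multiplicative updates for  min_{a >= 0}  sum(D a - y * log(D a)) + lam * sum(a).

    y : vectorized photon-count image; D : nonnegative dictionary (pixels x atoms).
    For nonnegative coefficients, the l1 penalty simply adds lam to the
    multiplicative denominator (the dictionary column sums).
    """
    a = np.ones(D.shape[1])
    col_sums = D.sum(axis=0) + lam
    for _ in range(n_iter):
        ratio = y / (D @ a + eps)
        a *= (D.T @ ratio) / col_sums
    return a
```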
KEYWORDS: Motion models, Coded apertures, Video, Optical flow, Motion measurement, Fourier transforms, Video compression, Imaging systems, Reconstruction algorithms
This paper describes an adaptive compressive coded aperture imaging system for video based on motion-compensated
video sparsity models. In particular, motion models based on optical flow and sparse deviations from optical flow (i.e.
salient motion) can be used to (a) predict future video frames from previous compressive measurements, (b) perform
reconstruction using efficient online convex programming techniques, and (c) adapt the coded aperture to yield higher
reconstruction fidelity in the vicinity of this salient motion.
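A minimal sketch of the prediction/correction idea in (a) and (b): warp the previous reconstruction along an optical-flow field to predict the next frame, then estimate the sparse deviation (salient motion) from the compressive measurements with an ISTA loop. The warping scheme, measurement matrix A, and parameters are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(frame, flow):
    """Backward-warp a frame along a dense flow field (flow[..., 0]=dy, flow[..., 1]=dx)."""
    H, W = frame.shape
    yy, xx = np.mgrid[0:H, 0:W]
    coords = np.stack([yy - flow[..., 0], xx - flow[..., 1]])
    return map_coordinates(frame, coords, order=1, mode='nearest')

def reconstruct_frame(prev_recon, flow, A, y, lam=0.05, n_iter=100):
    """Estimate frame = warp(prev_recon) + sparse deviation, from measurements y = A @ frame."""
    pred = warp(prev_recon, flow)
    r = y - A @ pred.ravel()                      # measurements not explained by the prediction
    L = np.linalg.norm(A, 2) ** 2
    d = np.zeros(pred.size)                       # sparse salient-motion deviation
    for _ in range(n_iter):
        grad = A.T @ (A @ d - r)
        z = d - grad / L
        d = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return pred + d.reshape(pred.shape)
```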
The emerging field of compressed sensing has potentially powerful implications for the design of optical imaging devices. In particular, compressed sensing theory suggests that one can recover a scene at a higher resolution than is dictated by the pitch of the focal plane array. This rather remarkable result comes with some important caveats, however, especially when practical issues associated with physical implementation are taken into account. This tutorial discusses compressed sensing in the context of optical imaging devices, emphasizing the practical hurdles related to building such devices, and offering suggestions for overcoming these hurdles. Examples and analysis specifically related to infrared imaging highlight the challenges associated with large-format focal plane arrays and how these challenges can be mitigated using compressed sensing ideas.
Traditionally, optical sensors have been designed to collect the most directly interpretable and intuitive measurements possible.
However, recent advances in the fields of image reconstruction, inverse problems, and compressed sensing indicate
that substantial performance gains may be possible in many contexts via computational methods. In particular, by designing
optical sensors to deliberately collect "incoherent" measurements of a scene, we can use sophisticated computational
methods to infer more information about critical scene structure and content. In this paper, we explore the potential of
physically realizable systems for acquiring such measurements. Specifically, we describe how, given a fixed-size focal
plane array, compressive measurements using coded apertures combined with sophisticated optimization algorithms can
significantly increase image quality and resolution.
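A toy numpy model of such a measurement (not the specific optical design in the paper): the scene is convolved with a binary coded aperture and then integrated onto a coarser focal plane array, after which a sparsity-regularized solver can recover the scene at finer-than-detector resolution. All sizes and the random code are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def coded_aperture_measure(scene, code, downsample):
    """Toy coded-aperture forward model: convolve with the mask, then integrate
    onto a coarser focal plane array via block-sum downsampling."""
    blurred = fftconvolve(scene, code, mode='same')
    H, W = blurred.shape
    d = downsample
    blurred = blurred[:H - H % d, :W - W % d]
    return blurred.reshape(blurred.shape[0] // d, d, blurred.shape[1] // d, d).sum(axis=(1, 3))

rng = np.random.default_rng(1)
scene = rng.random((128, 128))
code = rng.integers(0, 2, size=(16, 16)).astype(float)    # random binary mask
y = coded_aperture_measure(scene, code, downsample=4)       # 32x32 FPA measurement
# y would then be passed to a sparsity-regularized solver (e.g., an ISTA-style loop)
# to estimate `scene` at resolution finer than the 32x32 detector pitch.
```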
The observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot
be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As
a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be
accomplished by minimizing a conventional ℓ2-ℓ1 objective function. The problem addressed in this paper is the estimation
of f* from y in an inverse problem setting, where (a) the number of unknowns may potentially be larger than the number
of observations and (b) f* admits a sparse representation. The optimization formulation considered in this paper uses a
negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally
nonnegative). This paper describes computational methods for solving the constrained sparse Poisson inverse problem.
In particular, the proposed approach incorporates key ideas of using quadratic separable approximations to the objective
function at each iteration and computationally efficient partition-based multiscale estimation methods.
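One simple instance of the quadratic separable approximation idea is a proximal-gradient iteration on the negative Poisson log-likelihood with an ℓ1 penalty and a nonnegativity projection, sketched below. This is a minimal sketch under a fixed step-size assumption, not the paper's partition-based multiscale estimator.

```python
import numpy as np

def sparse_poisson_recon(A, y, lam=0.1, alpha=1.0, n_iter=300, eps=1e-10):
    """Minimize  sum(A f - y * log(A f)) + lam * ||f||_1   subject to  f >= 0.

    Each iteration replaces the Poisson negative log-likelihood with a separable
    quadratic surrogate around the current iterate (a gradient step with fixed
    curvature alpha; adaptive choices work better in practice), then applies
    soft-thresholding and projection onto the nonnegative orthant.
    """
    f = np.full(A.shape[1], y.mean() + eps)
    for _ in range(n_iter):
        Af = A @ f + eps
        grad = A.T @ (1.0 - y / Af)                # gradient of the negative log-likelihood
        z = f - grad / alpha
        f = np.maximum(z - lam / alpha, 0.0)       # soft-threshold + nonnegativity in one step
    return f
```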
KEYWORDS: Coded apertures, Video, Cameras, Fourier transforms, Staring arrays, Compressed sensing, Imaging systems, Video compression
Nonlinear image reconstruction based upon sparse representations of images has recently received widespread attention
with the emerging framework of compressed sensing (CS). This theory indicates that, when feasible, judicious selection
of the type of distortion induced by measurement systems may dramatically improve our ability to perform image reconstruction.
However, applying compressed sensing theory to practical imaging systems poses a key challenge: physical
constraints typically make it infeasible to actually measure many of the random projections described in the literature, and
therefore, innovative and sophisticated imaging systems must be carefully designed to effectively exploit CS theory. In
video settings, the performance of an imaging system is characterized by both pixel resolution and field of view. In this
work, we propose compressive imaging techniques for improving the performance of video imaging systems in the presence
of constraints on the focal plane array size. In particular, we describe a novel yet practical approach that combines
coded aperture imaging to enhance pixel resolution with superimposing subframes of a scene onto a single focal plane
array to increase field of view. Specifically, the proposed method superimposes coded observations and uses wavelet-based
sparsity recovery algorithms to reconstruct the original subframes. We demonstrate the effectiveness of this approach by
reconstructing with high resolution the constituent images of a video sequence.
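A minimal sketch of the superposition forward model and a wavelet-domain ISTA recovery loop appears below. It assumes PyWavelets for the sparsifying transform and power-of-two frame sizes; the codes, step size, and thresholds are illustrative, and this is not the paper's reconstruction algorithm.

```python
import numpy as np
import pywt

def superimpose(subframes, codes):
    """Forward model: each subframe is multiplied by its own binary code and all
    coded subframes are summed onto a single focal plane array."""
    return sum(c * f for c, f in zip(codes, subframes))

def recover_subframes(y, codes, lam=0.02, n_iter=100, wavelet='haar'):
    """ISTA with wavelet-domain soft-thresholding applied to each subframe estimate."""
    est = [np.zeros_like(y) for _ in codes]
    step = 1.0 / len(codes)                        # crude step size for the coded-sum operator
    for _ in range(n_iter):
        resid = y - superimpose(est, codes)
        for i, c in enumerate(codes):
            z = est[i] + step * c * resid           # gradient step for subframe i
            coeffs = pywt.wavedec2(z, wavelet, level=3)
            coeffs = [coeffs[0]] + [
                tuple(pywt.threshold(d, lam, mode='soft') for d in detail)
                for detail in coeffs[1:]
            ]
            est[i] = pywt.waverec2(coeffs, wavelet)[:y.shape[0], :y.shape[1]]
    return est
```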
Recent theoretical work in "compressed sensing" can be exploited to guide the design of accurate, single-snapshot, static,
high-throughput spectral imaging systems. A spectral imager provides a three-dimensional data cube in which the spatial
information of the image is complemented by spectral information about each spatial location. In this paper, compressive,
single-snapshot spectral imaging is accomplished via a novel static design consisting of a coded input aperture, a single
dispersive element and a detector. The proposed "single disperser" design described here mixes spatial and spectral information
on the detector by measuring coded projections of the spectral datacube that are induced by the coded input
aperture. The single disperser uses fewer optical elements and requires simpler optical alignment than our dual disperser
design. We discuss the prototype instrument, the reconstruction algorithm used to generate accurate estimates of the
spectral datacubes, and associated experimental results.
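The measurement can be sketched in a few lines of numpy: each spectral band of the data cube is modulated by the coded aperture, sheared by a band-dependent shift representing the disperser, and summed on the 2D detector. The one-column-per-band dispersion and the random binary code are simplifying assumptions for illustration.

```python
import numpy as np

def single_disperser_measure(cube, code):
    """cube: (H, W, L) spectral data cube; code: (H, W) binary coded aperture.
    Each band is masked by the code, shifted by its dispersion offset (one detector
    column per band in this toy model), and all bands sum on the detector."""
    H, W, L = cube.shape
    detector = np.zeros((H, W + L - 1))
    for l in range(L):
        detector[:, l:l + W] += code * cube[:, :, l]
    return detector

rng = np.random.default_rng(2)
cube = rng.random((64, 64, 15))
code = rng.integers(0, 2, size=(64, 64)).astype(float)
y = single_disperser_measure(cube, code)            # a single 64 x 78 snapshot
# Reconstruction inverts this linear map with a sparsity or multiscale prior on the cube.
```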
In this work we develop a spectral imaging system and associated reconstruction methods that have been designed
to exploit the theory of compressive sensing. Recent work in this emerging field indicates that when the
signal of interest is very sparse (i.e. zero-valued at most locations) or highly compressible in some basis, relatively
few incoherent observations are necessary to reconstruct the most significant non-zero signal components.
Conventionally, spectral imaging systems measure complete data cubes and are subject to performance limiting
tradeoffs between spectral and spatial resolution. We achieve single-shot full 3D data cube estimates by using
compressed sensing reconstruction methods to process observations collected using an innovative, real-time,
dual-disperser spectral imager. The physical system contains a transmissive coding element located between
a pair of matched dispersers, so that each pixel measurement is the coded projection of the spectrum in the
corresponding spatial location in the spectral data cube. Using a novel multiscale representation of the spectral
image data cube, we are able to accurately reconstruct 256×256×15 spectral image cubes using just 256×256
measurements.
Infrared camera systems may be made dramatically smaller by simultaneously collecting several low-resolution images with multiple narrow-aperture lenses rather than collecting a single high-resolution image with one wide-aperture lens. Conventional imaging systems consist of one or more optical elements that image a scene on the focal plane. The resolution depends on the wavelength of operation and the f-number of the lens system, assuming diffraction-limited operation. An image of comparable resolution may be obtained by using a multi-channel camera that collects multiple low-resolution measurements of the scene and then reconstructing a high-resolution image. The proposed infrared sensing system uses a three-by-three lenslet array with an effective focal length of 1.9 mm and overall system length of 2.3 mm, and we achieve image resolution comparable to a conventional single-lens system having a focal length of 5.7 mm and overall system length of 26 mm. The high-resolution final image generated by this system is reconstructed from the noisy low-resolution images corresponding to each lenslet; this is accomplished using a computational process known as superresolution reconstruction. The novelty of our approach to the superresolution problem is the use of wavelets and related multiresolution methods within an Expectation-Maximization framework to improve the accuracy and visual quality of the reconstructed image. The wavelet-based regularization reduces the appearance of artifacts while preserving key features such as edges and singularities. The processing method is very fast, making the integrated sensing and processing viable for both time-sensitive applications and massive collections of sensor outputs.
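A minimal sketch of the superresolution setup: each lenslet is modeled as an integer shift followed by block-average downsampling, and the high-resolution image is recovered with Landweber-style updates and wavelet shrinkage standing in for the wavelet-regularized EM iteration described above. The shift model, PyWavelets transform, and parameter values are illustrative assumptions.

```python
import numpy as np
import pywt

def downsample_shifted(hi_res, dy, dx, factor):
    """Toy lenslet model: integer shift followed by block-average downsampling."""
    shifted = np.roll(np.roll(hi_res, dy, axis=0), dx, axis=1)
    H, W = shifted.shape
    return shifted.reshape(H // factor, factor, W // factor, factor).mean(axis=(1, 3))

def superresolve(lowres_list, shifts, factor, n_iter=50, lam=0.01, wavelet='haar'):
    """Landweber updates with wavelet soft-thresholding (a stand-in for the
    wavelet-regularized EM iteration)."""
    H, W = lowres_list[0].shape
    est = np.zeros((H * factor, W * factor))
    for _ in range(n_iter):
        grad = np.zeros_like(est)
        for y, (dy, dx) in zip(lowres_list, shifts):
            resid = y - downsample_shifted(est, dy, dx, factor)
            up = np.kron(resid, np.ones((factor, factor))) / factor**2  # adjoint of block average
            grad += np.roll(np.roll(up, -dy, axis=0), -dx, axis=1)
        est = est + grad / len(lowres_list)
        coeffs = pywt.wavedec2(est, wavelet, level=3)
        coeffs = [coeffs[0]] + [tuple(pywt.threshold(d, lam, mode='soft') for d in c)
                                for c in coeffs[1:]]
        est = pywt.waverec2(coeffs, wavelet)[:H * factor, :W * factor]
    return est
```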
Tree-structured partitions provide a natural framework for rapid and accurate extraction of level sets of a multivariate function f from noisy data. In general, a level set S is the set on which f exceeds some critical value (e.g. S = {x : f(x) ≥ γ}). Boundaries of such sets typically constitute manifolds embedded in the high-dimensional observation space. The identification of these boundaries is an important theoretical problem with applications for digital elevation maps, medical imaging, and pattern recognition. Because set identification is intrinsically simpler than function denoising or estimation, explicit set extraction methods can achieve higher accuracy than more indirect approaches (such as extracting a set of interest from an estimate of the function). The trees underlying our method are constructed by minimizing a complexity regularized data-fitting term over a family of dyadic partitions. Using this framework, problems such as simultaneous estimation of multiple (non-intersecting) level lines of a function can be readily solved from both a theoretical and practical perspective. Our method automatically adapts to spatially varying regularity of both the boundary of the level set and the function underlying the data. Level set extraction using multiresolution trees can be implemented in near linear time and specifically aims to minimize an error metric sensitive to both the error in the location of the level set and the distance of the function from the critical level. Translation invariant "voting-over-shifts" set estimates can also be computed rapidly using an algorithm based on the undecimated wavelet transform.
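A stripped-down sketch of the recursive dyadic partition idea is given below: each square block either keeps a constant fit or splits into four quadrants, minimizing squared error plus a per-leaf penalty, and the level set is read off the fitted piecewise-constant function. This uses a plain squared-error cost rather than the level-set-sensitive error metric analyzed in the work, and assumes square power-of-two images; it is illustrative only.

```python
import numpy as np

def fit_dyadic(data, penalty):
    """Recursively choose between a constant fit on this block and a split into
    four quadrants, minimizing (squared error) + penalty * (number of leaves)."""
    mean = data.mean()
    leaf_cost = ((data - mean) ** 2).sum() + penalty
    n = data.shape[0]                               # assumes a square, power-of-two block
    if n < 2:
        return leaf_cost, np.full(data.shape, mean)
    h = n // 2
    quads = [data[:h, :h], data[:h, h:], data[h:, :h], data[h:, h:]]
    results = [fit_dyadic(q, penalty) for q in quads]
    split_cost = sum(c for c, _ in results)
    if leaf_cost <= split_cost:
        return leaf_cost, np.full(data.shape, mean)
    fit = np.empty_like(data, dtype=float)
    fit[:h, :h], fit[:h, h:], fit[h:, :h], fit[h:, h:] = (f for _, f in results)
    return split_cost, fit

def level_set_estimate(noisy, gamma, penalty=2.0):
    _, fit = fit_dyadic(noisy.astype(float), penalty)
    return fit >= gamma                             # estimated set {x : f(x) >= gamma}
```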
The nonparametric multiscale polynomial and platelet methods presented here are powerful new tools for signal and image denoising and reconstruction. Unlike traditional wavelet-based multiscale methods, these methods are both well suited to processing Poisson or multinomial data and capable of preserving image edges. At the heart of these new methods lie multiscale signal decompositions based on polynomials in one dimension and multiscale image decompositions based on what the authors call platelets in two dimensions. Platelets are localized functions at various positions, scales and orientations that can produce highly accurate, piecewise linear approximations to images consisting of smooth regions separated by smooth boundaries. Polynomial and platelet-based maximum penalized likelihood methods for signal and image analysis are both tractable and computationally efficient. Polynomial methods offer near minimax convergence rates for broad classes of functions including Besov spaces. Upper bounds on the estimation error are derived using an information-theoretic risk bound based on squared Hellinger loss. Simulations establish the practical effectiveness of these methods in applications such as density estimation, medical imaging, and astronomy.
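For intuition, here is a one-dimensional sketch of the piecewise-constant (degree-zero) special case of the multiscale penalized-likelihood idea for Poisson counts: each dyadic interval either keeps a constant rate or splits in half, minimizing penalized negative log-likelihood. The platelet and higher-degree polynomial fits are more involved; this toy version and its penalty value are assumptions for illustration.

```python
import numpy as np

def poisson_leaf_cost(counts, penalty):
    """Negative Poisson log-likelihood (up to constants) of a constant-rate fit."""
    n, total = counts.size, counts.sum()
    rate = max(total / n, 1e-12)
    return n * rate - total * np.log(rate) + penalty

def fit_multiscale_poisson(counts, penalty=2.0):
    """Recursive dyadic choice between a constant Poisson rate and splitting in half,
    minimizing penalized negative log-likelihood."""
    leaf = poisson_leaf_cost(counts, penalty)
    est = np.full(counts.shape, counts.mean())
    if counts.size < 2:
        return leaf, est
    h = counts.size // 2
    cl, el = fit_multiscale_poisson(counts[:h], penalty)
    cr, er = fit_multiscale_poisson(counts[h:], penalty)
    if leaf <= cl + cr:
        return leaf, est
    return cl + cr, np.concatenate([el, er])
```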