Linear unmixing is a method of decomposing a mixed signature to determine the component materials present in a sensor’s field of view, along with the abundances at which they occur. Linear unmixing assumes that energy from the materials in the field of view mixes linearly across the spectrum of interest. Traditional unmixing methods can take advantage of adjacent pixels in the decomposition algorithm, but this is not the case for point sensors. This paper explores several iterative and non-iterative methods for linear unmixing and examines their effectiveness at identifying the individual signatures that make up simulated single-pixel mixed signatures, along with their corresponding abundances. The major hurdle addressed in the proposed method is that no neighboring-pixel information is available for the spectral signature of interest. Testing is performed using two collections of spectral signatures from the Johns Hopkins University Applied Physics Laboratory’s Signatures Database software (SigDB): a hand-selected small dataset of 25 distinct signatures and the larger dataset of approximately 1600 pure visible/near-infrared/short-wave-infrared (VIS/NIR/SWIR) spectra from which it was drawn. Simulated spectra are created from three- and four-material mixtures randomly drawn from a dataset originating from SigDB, where the abundance of one material is swept in 10% increments from 10% to 90%, with the abundances of the other materials equally divided amongst the remainder. For the smaller dataset of 25 signatures, all combinations of three or four materials are used to create simulated spectra, from which the accuracy of the materials returned, as well as the correctness of the abundances, is compared to the inputs. The experiment is expanded to include the signatures from the larger dataset of almost 1600 signatures, evaluated using a Monte Carlo scheme with 5000 draws of three or four materials to create the simulated mixed signatures.
The spectral similarity of the inputs to the output component signatures is calculated using the spectral angle mapper. Results show that iterative methods significantly outperform the traditional methods under the given test conditions.
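The linear mixing model and the spectral angle mapper comparison can be sketched as follows. This is a minimal illustration using non-negative least squares on a synthetic single-pixel mixture; it is not one of the paper's specific iterative algorithms, and the library matrix `E` is random stand-in data rather than SigDB spectra.

```python
# Illustrative single-pixel linear unmixing (synthetic data, not SigDB spectra).
import numpy as np
from scipy.optimize import nnls

def spectral_angle(a, b):
    """Spectral angle mapper (SAM): angle in radians between two spectra."""
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(c, -1.0, 1.0))

def unmix(y, E):
    """Non-negative least-squares abundances, renormalized to sum to one."""
    a, _ = nnls(E, y)
    return a / a.sum()

rng = np.random.default_rng(0)
E = rng.random((50, 4))                  # 50 bands, 4 pure material signatures
true_a = np.array([0.5, 0.2, 0.2, 0.1])  # abundances sum to 1
y = E @ true_a                           # noiseless linear mixture
est = unmix(y, E)
print(np.round(est, 3))
```

With a noiseless mixture and linearly independent library spectra, the recovered abundances match the inputs, and the SAM between the input mixture and the reconstruction is near zero.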
Camera motion is a potential problem when a video camera is used to perform dynamic displacement measurements. If
the scene camera moves at the wrong time, the apparent motion of the object under study can easily be confused with
the real motion of the object. In some cases, it is practically impossible to prevent camera motion, for instance when
a camera is used outdoors in windy conditions. A method to address this challenge is described that provides an
objective means to measure the displacement of an object of interest in the scene, even when the camera itself is moving
in an unpredictable fashion at the same time. The main idea is to synchronously measure the motion of the camera and
to use those data ex post facto to subtract out the apparent motion in the scene that is caused by the camera motion. The
motion of the scene camera is measured by using a reference camera that is rigidly attached to the scene camera and
oriented towards a stationary reference object. For instance, this reference object may be on the ground, which is
known to be stationary. It is necessary to calibrate the reference camera by simultaneously measuring the scene images
and the reference images at times when it is known that the scene object is stationary and the camera is moving. These
data are used to map camera movement data to apparent scene movement data in pixel space and subsequently used to
remove the camera movement from the scene measurements.
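The calibration-and-subtraction procedure described above can be sketched in pixel space as a linear least-squares fit. The example below is synthetic: the coupling matrix `M_true` and the motion signals are hypothetical stand-ins, and a simple linear map is assumed between reference-camera motion and apparent scene motion.

```python
# Sketch of ex post facto camera-motion subtraction (synthetic data; the
# coupling matrix M_true and motions are hypothetical illustrations).
import numpy as np

rng = np.random.default_rng(1)

# Calibration phase: scene object known stationary, camera moving.
ref_cal = rng.normal(size=(200, 2))           # reference-camera (dx, dy) per frame
M_true = np.array([[1.1, 0.2], [-0.1, 0.9]])  # unknown pixel-space coupling
scene_cal = ref_cal @ M_true                  # apparent scene motion, camera only

# Fit the map by least squares: scene ≈ ref @ M.
M, *_ = np.linalg.lstsq(ref_cal, scene_cal, rcond=None)

# Measurement phase: real object motion plus camera-induced apparent motion.
ref = rng.normal(size=(100, 2))
true_motion = np.column_stack([np.sin(np.linspace(0, 4, 100)), np.zeros(100)])
scene = true_motion + ref @ M_true

# Subtract the camera-induced component to recover the real motion.
corrected = scene - ref @ M
print(np.max(np.abs(corrected - true_motion)))
```

Because the synchronously recorded reference data are kept, the subtraction can be performed after the fact, exactly as the abstract describes.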
There are numerous ways to use video cameras to measure 3D dynamic spatial displacements. When the scene
geometry is unknown and the motion is unconstrained, two calibrated cameras are required. The data from both scenes
are combined to perform the measurements using well known stereoscopic techniques. There are occasions where the
measurement system can be simplified considerably while still providing a calibrated spatial measurement of a complex
dynamic scene. For instance, if the sizes of objects in the scene are known a priori, these data may be used to provide
scene specific spatial metrics to compute calibration coefficients. With this information, it is not necessary to calibrate
the camera before use, nor is it necessary to precisely know the geometry between the camera and the scene. Field-of-view
coverage and sufficient spatial and temporal resolution are the main camera requirements. Further simplification
may be made if the 3D displacements of interest are small or constrained enough to allow for an accurate 2D projection
of the spatial variables of interest. With proper camera orientation and scene marking, the apparent pixel movements
can be expressed as a linear combination of the underlying spatial variables of interest. In many cases, a single camera
may be used to perform complex 3D dynamic scene measurements. This paper will explain and illustrate a technique
for using a single uncalibrated video camera to measure the 3D displacement of the end of a constrained rigid body
subject to a perturbation.
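As a toy illustration of the scene-specific calibration idea, a known object size can fix a pixel-to-meter scale for small displacements that project linearly into the image. All numbers below are hypothetical.

```python
# Sketch of scene-specific calibration from an a priori known object size,
# assuming displacements small enough for a linear 2D projection.
import numpy as np

known_length_m = 0.50       # object size known a priori (hypothetical)
measured_length_px = 240.0  # same object measured in the image
scale = known_length_m / measured_length_px   # meters per pixel

pixel_disp = np.array([12.0, -4.0])           # measured (dx, dy) in pixels
spatial_disp = pixel_disp * scale             # calibrated displacement in meters
print(spatial_disp)
```

No prior camera calibration or precise camera-to-scene geometry is needed; the in-scene metric alone supplies the calibration coefficient.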
Triboluminescent phosphors provide a means of converting kinetic signals to optical signals for particle detection. Several methods, including vapor deposition, electron beam, and spray-on, were evaluated for depositing a thin translucent coating of ZnS:Mn phosphor material onto transparent substrates. The objective was to optically detect impact events on the back side of the substrate while still retaining some capacity to view distant optical events. During the experiments, optical detectors within a light-tight test chamber were used to measure the optical signals generated by the coatings. The measurements resulted from optical signals generated by particle impacts and sample phosphorescence, along with electrical interference between the particle sources, the ambient background, and the detectors. Signal levels and translucency measurements from the various coatings are described, along with lessons learned about the coating processes, the detectors, and the limitations of the measurements.
Fiber optic sensors offer many advantages over electrical sensors for use in harsh environments. One advantage over
distributed electrical sensors is the elimination of the need to route electrical power and wiring to the sensors, which, in
general, improves safety and reduces power consumption. Another advantage is that the optical sensors are immune to
electromagnetic interference that may be caused by radio frequency signals used for communications. Another benefit
of using an optical approach for impact detectors is the implicit immunity from false detections that may otherwise be
caused by unrelated mechanical shock or vibration events. Previous studies have documented the characteristics of the
Optical Debris Impact Sensor (ODIS). With the ODIS, the impacts are inferred by detecting the brief triboluminescent
optical pulses generated by the abrupt charge separation within a phosphor that is caused by the particle impacts. The
main limitations of the ODIS are the small detection area and the limited sensitivity. This paper describes a method for
extending the ODIS to accomplish broad area detection on a surface with potentially higher sensitivity. The sensing
element consists of a stack of planar optical waveguides with phosphor-coated strips. The geometry of the design
ensures optical pulses are automatically captured by the waveguides and routed to a fiber optic cable that transports the
signal to a remote high-speed photodetector. Background light levels in the vicinity of the detector are filtered out by
the tailored frequency response of the photodetector.
KEYWORDS: Radiometry, Sensors, Radio optics, Solids, Signal detection, Data modeling, Electromagnetism, Optical spheres, 3D scanning, Temperature metrology
Most radiometers are directionally sensitive. Measuring optical radiation in a given environment is typically done using
a collection aperture pointing in the direction of the optical source. The collection aperture has a limited field of view,
and the collection efficiency decreases as the angle from direct line of sight increases. Thus, radiometers typically have
a limited solid angle for viewing sources. This paper describes a model of an omnidirectional, multi-channel, rotating
radiometer that provides a framework for acquiring spatially comprehensive radiometric data from an environment. By
exploiting the spatial diversity of multiple collection apertures in multiple directions, sources from all directions are
measured via three-dimensional scanning. As the radiometer rotates, data are collected that denote the radiant flux seen
by each collection aperture as a function of time. These waveforms are used to determine the directions and magnitudes
of electromagnetic sources in the environment without requiring a priori knowledge about the directions of specific
sources.
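The rotating-radiometer waveforms can be turned into source bearings with a simple estimator. The sketch below assumes a cosine-lobe collection response and a single synthetic source; the vector-centroid estimator is a plausible stand-in for illustration, not necessarily the model's actual method.

```python
# Sketch of recovering a source bearing from a rotating-aperture waveform
# (synthetic source at 70 degrees; cosine-lobe response is an assumption).
import numpy as np

angles = np.deg2rad(np.arange(0, 360, 1.0))   # aperture pointing angle vs time
source_bearing = np.deg2rad(70.0)

# Collection efficiency falls off with angle from direct line of sight;
# here, a cosine lobe clipped to zero outside the forward hemisphere.
waveform = np.clip(np.cos(angles - source_bearing), 0.0, None)

# Circular (vector) centroid of the waveform recovers the source direction.
est = np.arctan2(np.sum(waveform * np.sin(angles)),
                 np.sum(waveform * np.cos(angles)))
print(np.rad2deg(est))
```

With multiple apertures in multiple directions, the same idea extends to several simultaneous sources without prior knowledge of their bearings.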
KEYWORDS: Particles, Sensors, Signal detection, Fiber optics sensors, Data acquisition, Optical sensors, Waveguides, Active optics, Optical spheres, Signal attenuation
Common sub-millimeter particle impact phenomena range from zero to thousands of joules of impact energy. The
physics of impacts are associated with a wide variety of physical phenomena, including the generation of heat, light, and
sound. Although higher energy impact events may result in vaporization of the impacted material and other easily
detectable effects, lower energy level impacts of interest may occur with little obvious physical effect. Preliminary
research with capacitive sensors provided encouraging results for detecting low-energy impacts. However, vibration
within the sensor mounting structure interfered with the detection of impact events. Research on triboluminescent
phosphors indicated that a thin layer of material could be used to form the basis of an optical sensor to detect small
particle impacts without interference from structural vibrations. A ZnS:Mn phosphor was used as the basis for
developing a triboluminescent fiber optic sensor to detect small particle impact events. Detection of impacts is
accomplished by detecting the optical pulse that is generated by the abrupt charge separation caused by the particle
impact within the phosphor. Laboratory-based experiments were performed to capture the operational characteristics of
the sensor. The data are used to study the characteristic response, sensor repeatability, and spatial homogeneity of the
detection surface. Tests were also performed to identify the energy detection boundary and to assess environmental
survivability. Results of these tests are reported in this paper.