This PDF file contains the front matter associated with SPIE
Proceedings Volume 6979, including the Title Page, Copyright
information, Table of Contents, Introduction, and the
Conference Committee listing.
Subband/wavelet analysis-synthesis filter banks are a major component of many high-performance compression algorithms for images, voice, and video. Typically, such algorithms employ a bank of analysis filters whose coefficients have been designed in advance to enable high-quality reconstruction. The analysis system is followed by subband quantization; on the synthesis side, decoding is performed with a corresponding set of synthesis filters and the subbands are merged together.
For many years, there has been interest in improving the analysis-synthesis filters in order to achieve better coding quality. Adaptive filter banks, in which the analysis and synthesis filter coefficients are changed dynamically in response to the input, have been explored by a number of authors. A degree of performance improvement has been reported, but this approach requires that the analysis system dynamically maintain synchronization with the synthesis system in order to perform reconstruction.
In this paper, we explore a variant of the adaptive filter bank idea, which we refer to as fixed-analysis adaptive-synthesis filter banks. Unlike previously proposed adaptive filter banks, no analysis-synthesis synchronization issue is involved, which implies lower coder complexity and greater coder flexibility. The approach is compatible with existing subband/wavelet encoders. The design methodology and a performance analysis are presented.
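To make the fixed analysis/adaptive synthesis split concrete, here is a minimal numpy sketch of a two-channel filter bank using the fixed Haar pair; in the paper's scheme the analysis side below would stay fixed while the synthesis filters are adapted (this illustrative code uses the matched Haar synthesis bank, which gives perfect reconstruction).

```python
import numpy as np

def haar_analysis(x):
    """Split a signal into low/high subbands with the fixed Haar analysis bank."""
    x = np.asarray(x, dtype=float)
    low = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation subband
    high = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail subband
    return low, high

def haar_synthesis(low, high):
    """Merge the subbands back; with the matched bank, reconstruction is exact."""
    y = np.empty(2 * len(low))
    y[0::2] = (low + high) / np.sqrt(2.0)
    y[1::2] = (low - high) / np.sqrt(2.0)
    return y

x = np.array([4.0, 6.0, 10.0, 12.0, 14.0, 14.0, 16.0, 18.0])
low, high = haar_analysis(x)       # fixed analysis side
xr = haar_synthesis(low, high)     # synthesis side (the part the paper adapts)
assert np.allclose(x, xr)          # perfect reconstruction
```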
This paper proposes an efficient iris recognition algorithm obtained through the fusion of the Haar wavelet and the Circular Mellin operator. The system preprocesses the captured iris image to remove holes or spots of light on the pupillary region, which otherwise hinder pupil localization. The processed image is localized by detecting the inner and outer boundaries from the pupil center using the maximum value of the spectrum image. The eyelids are then detected by fitting a third-degree polynomial to suitable edge segments, and the region occluded by the eyelids is removed from the normalized iris image. Features of the iris pattern are extracted with the Haar wavelet and the Circular Mellin operator: the Haar wavelet decomposition reduces the size of the feature vector, while the Circular Mellin operator provides rotation- and scale-invariant features. Features are compared using the Hamming distance, and fusion is performed at the decision level using the conjunction rule. The recognizer is found to be robust, with an accuracy above 95%.
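The two building blocks of the matching stage, Hamming-distance comparison and decision-level conjunction (AND) fusion, can be sketched as follows; the thresholds and the occlusion mask are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask=None):
    """Fractional Hamming distance between two binary iris codes.
    An optional mask excludes bits occluded by the eyelids."""
    code_a, code_b = np.asarray(code_a), np.asarray(code_b)
    if mask is None:
        mask = np.ones_like(code_a, dtype=bool)
    disagree = (code_a != code_b) & mask
    return disagree.sum() / mask.sum()

def conjunction_fusion(hd_haar, hd_mellin, t_haar=0.32, t_mellin=0.32):
    """Decision-level AND rule: accept only if both matchers accept.
    The thresholds here are hypothetical, chosen for illustration."""
    return bool(hd_haar < t_haar) and bool(hd_mellin < t_mellin)

a = np.array([1, 0, 1, 1, 0, 0, 1, 0])
b = np.array([1, 0, 1, 0, 0, 0, 1, 0])   # one disagreeing bit out of eight
print(hamming_distance(a, b))            # 0.125
print(conjunction_fusion(0.12, 0.20))    # True: both matchers accept
```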
Fingerprint recognition is the most mature field of biometric recognition and serves a wide range of applications, so many studies aim to enhance its performance. In this paper, we study the fingerprint sensor. The fingerprint enters the recognition system through the sensor, and sensor performance directly affects fingerprint image quality; improving the sensor therefore improves overall system performance.
We investigate which sensors can and cannot withstand artificial-fingerprint attacks. We fabricate various artificial fingerprints and test whether the sensors resist them. We analyze the scan results for the various sensors and propose a method based on the histogram of a normal image for measuring sensor performance. The measured characteristics are resolution, shift, gradient, contrast, and rotation. The purpose of this paper is to propose performance-measurement methods that can serve as criteria for evaluating fingerprint sensors.
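As one plausible reading of the histogram-based measurement (not the paper's exact definition), a contrast score can be computed directly from the gray-level histogram of a scan; a sharp ridge/valley image should score higher than a washed-out one.

```python
import numpy as np

def histogram_contrast(image, bins=256):
    """Contrast score from the gray-level histogram of a fingerprint scan:
    the standard deviation of gray levels, computed from the normalized
    histogram. This is an illustrative metric, not the paper's formula."""
    hist, edges = np.histogram(image, bins=bins, range=(0, 256))
    p = hist / hist.sum()                     # normalized histogram
    levels = (edges[:-1] + edges[1:]) / 2.0   # bin centers
    mean = (p * levels).sum()
    return np.sqrt((p * (levels - mean) ** 2).sum())

rng = np.random.default_rng(0)
sharp = rng.choice([20, 235], size=(64, 64))   # high-contrast ridge/valley pattern
flat = rng.integers(120, 136, size=(64, 64))   # low-contrast, washed-out scan
assert histogram_contrast(sharp) > histogram_contrast(flat)
```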
To address the challenges of non-cooperative long-distance human authentication, identification, and verification, we propose an innovative scheme for a robust, automatic long-range biometric recognition system that combines face recognition and iris recognition of non-cooperative individuals in 24/7 operation. The system consists of three cameras. The first is a wide-field-of-view (WFOV) CCD video camera with an infrared (IR) filter and powerful IR illuminators for scanning people over a wide area and from a long distance. The other two are high-resolution narrow-field-of-view (NFOV) video cameras with IR filters and illuminators, mounted on a pan-tilt unit (PTU) to capture frontal views of the face and iris, respectively. The WFOV camera detects the person, and the NFOV cameras extract the details used for identification. Once frontal shots are captured by the NFOV cameras, face and iris models are extracted by applying state-of-the-art face and iris recognizers. In addition, a multimodality fusion approach integrates the face and iris recognition results to improve overall recognition performance.
In this paper we describe design options for implementing sampling rate converters on FPGAs. We first review typical designs using IIR and FFT-based systems, and then show implementations of fractional sampling rate changers ranging from Lagrange and B-spline designs to the recently introduced C-MOMS and O-MOMS designs. Speed, area, and error performance results for the circuits, designed in VHDL, are provided.
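The Lagrange family mentioned above reduces to a short FIR interpolator whose taps depend on the fractional position of each output sample. A minimal floating-point sketch of a cubic (4-tap) Lagrange fractional resampler follows; the `t0` offset is an assumption made here simply to keep the 4-tap stencil inside the input array. Cubic Lagrange interpolation is exact on polynomials up to degree 3, which the test exploits.

```python
import numpy as np

def lagrange_weights(mu):
    """Cubic (4-tap) Lagrange weights for fractional delay mu in [0, 1),
    for sample points at offsets -1, 0, 1, 2."""
    return np.array([
        -mu * (mu - 1) * (mu - 2) / 6.0,
        (mu + 1) * (mu - 1) * (mu - 2) / 2.0,
        -(mu + 1) * mu * (mu - 2) / 2.0,
        (mu + 1) * mu * (mu - 1) / 6.0,
    ])

def resample(x, ratio, n_out, t0=1.0):
    """Fractional sampling-rate change: output sample n sits at input
    time t0 + n*ratio (t0 keeps the 4-tap stencil in range)."""
    y = np.empty(n_out)
    for n in range(n_out):
        t = t0 + n * ratio
        i = int(np.floor(t))
        mu = t - i
        y[n] = lagrange_weights(mu) @ x[i - 1:i + 3]
    return y

k = np.arange(32, dtype=float)
x = k ** 2                              # a quadratic: cubic Lagrange is exact here
y = resample(x, ratio=1.5, n_out=8)
assert np.allclose(y, (1.0 + 1.5 * np.arange(8)) ** 2)
```

An FPGA implementation would typically evaluate the same weights in fixed point via a Farrow structure rather than recomputing the polynomial per sample.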
This is the third paper in a series that introduces a MATLAB/Simulink-based design flow for FPGAs at the undergraduate curriculum level. In the first paper, presented at SPIE 2006, we analyzed the design tools; in the second, presented at SPIE 2007, we reported on appropriate topics for the lectures and labs. In this third paper we first give an overview, based on the 12-year EDA cycle, of why FPGAs have now reached a level where SOPC design is possible and why MATLAB/Simulink is favored by both leaders in the FPGA field: Altera (DSP Builder) and Xilinx (System Generator). We then describe the Xilinx Blackboard educational material development (EMD) that was used in Spring 2007 and Spring 2008 to teach a Xilinx System Generator based course and laboratory.
In this paper we provide a simple and fast hardware implementation of a Support Vector Machine (SVM). By using the CORDIC algorithm and implementing a base-2 exponential kernel that simplifies operations, we overcome the problems caused by the many internal multiplications in the classification process, both when applying the kernel formula and later when multiplying by the weights. We show a simple classification example and analyze the classification speed and accuracy.
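A software sketch of the decision function with such a base-2 kernel follows; the support vectors, weights, and the gamma value are hypothetical toy values, not the paper's. The point of the base-2 form is that in hardware 2^(-u) splits into an integer shift plus a small fractional correction.

```python
import numpy as np

def kernel_pow2(x, sv, gamma=0.25):
    """Base-2 exponential kernel, 2**(-gamma * ||x - sv||^2)."""
    return 2.0 ** (-gamma * np.sum((x - sv) ** 2))

def svm_decision(x, svs, alphas, labels, b=0.0):
    """Kernel SVM decision: sign(sum_i alpha_i * y_i * K(x, sv_i) + b)."""
    f = sum(a * y * kernel_pow2(x, sv) for a, y, sv in zip(alphas, labels, svs))
    return 1 if f + b >= 0 else -1

# hypothetical support vectors around two 1-D clusters
svs = [np.array([0.0]), np.array([1.0]), np.array([5.0]), np.array([6.0])]
alphas = [1.0, 1.0, 1.0, 1.0]
labels = [+1, +1, -1, -1]
print(svm_decision(np.array([0.5]), svs, alphas, labels))   # +1
print(svm_decision(np.array([5.5]), svs, alphas, labels))   # -1
```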
In this paper we address the problem of Support Vector Machine (SVM) learning. We describe an analogue implementation of the Sequential Minimal Optimization (SMO) algorithm that simplifies the hardware requirements of the learning phase. The advantages over a full-training-set circuit are shown, and a test on a simple case demonstrates its effectiveness.
We establish robust stability results for the bacterial heat stress response by using a reduced-order model with different time scales under parameter perturbations, and determine conditions that ensure the existence of asymptotically stable equilibria of the perturbed system. It is assumed that the system uncertainties are limited by upper bounds on their norms. We derive a Lyapunov function for the coupled system and a maximal upper bound for the fast time scale associated with the unfolded-protein state.
Based on his early graduate studies in psychophysics, the author has in recent years applied psychophysics to the study of the organic and motor senses (the two sensory systems deeply embedded inside the human body) and has tried to understand the scientific foundation of oriental health-promoting practices. The preliminary results are promising and are discussed in detail in this paper. Psychophysical studies of the organic and motor senses may provide the connection between Western and Eastern medicine needed to form a balanced holistic approach, and may help us understand the scientific foundation of the mysterious oriental health-promoting practices that serve as alternative medicines for promoting human wellness.
The complexity of gene regulatory networks described by coupled nonlinear differential equations is often an obstacle to analysis. The development of effective model reduction techniques is therefore of paramount importance in systems biology. In this paper, we apply the theory of nonlinear balanced truncation to model reduction of gene regulatory networks using only standard matrix computations. The method finds controllability and observability functions of the nonlinear system and thus obtains a balanced representation whose singular value functions depend on the state. As a result, we obtain a ranked contribution of the states from an input-output perspective.
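For intuition, the linear special case of this procedure can be sketched in a few lines: the controllability and observability functions reduce to Gramians, and the Hankel singular values rank the states from an input-output perspective. The system matrices below are a hypothetical stable toy network, not a model from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable 3-state linear system (the nonlinear method in the
# paper generalizes these Gramians to energy *functions* of the state).
A = np.diag([-1.0, -3.0, -10.0])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 1.0, 1.0]])

# Controllability Gramian P: A P + P A^T + B B^T = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
# Observability Gramian Q: A^T Q + Q A + C^T C = 0
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: states with small values are truncation candidates.
hsv = np.sort(np.sqrt(np.linalg.eigvals(P @ Q).real))[::-1]
print(hsv)
```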
An application of dependent component analysis techniques is reported for the detection and characterization of small indeterminate breast lesions in dynamic contrast-enhanced MRI. These techniques enable the extraction of spatial and temporal features of dynamic MRI data from patients with confirmed lesion diagnoses. By revealing regional properties of contrast-agent uptake characterized by subtle differences in signal amplitude and dynamics, the method provides both a set of prototypical time series and a corresponding set of cluster assignment maps, which in turn provide a segmentation for the identification and regional subclassification of pathological breast tissue lesions.
We present two segmentation methods for evaluating signal-intensity time courses in the differential diagnosis of enhancing lesions in breast MRI. Starting from the conventional methodology, we first introduce the separate concepts of threshold segmentation and dependent component analysis, and in the last step combine the two. The results suggest that the dependent component approach has the potential to increase the diagnostic accuracy of MRI mammography by improving sensitivity without reducing specificity.
Nanotechnology is expected to provide the fundamental basis of the next two generations of
products and processes. Impacts for applications are already being felt in many fields, and there is
interest especially in the aerospace industry, where performance is a major driver of decisions for
applications. Four areas are receiving special emphasis in a program aimed at the Air Force's
strategic focus on materials. The emphasis includes adaptive coatings and surface engineering,
nanoenergetics, electromagnetic sensors, and power generation and storage. Seven universities in
Texas have initiated the CONTACT program of focused research including nine projects in the first
year, with plans for expansion in subsequent years. This paper discusses the focus, progress, and
plans for the second year and opportunities for industry input to the scope and content of the
research. A new model for the creation and guidance of research programs for industry is presented.
The new approach includes interaction with the aerospace industry and the Air Force that provides a
focus for the research. Results to date for the new method and for the research are presented. A
discussion of nanoengineering technology transition into the aerospace industry highlights the
mechanisms for enhancing the process and for dealing with intellectual property.
The detection of trace quantities of aromatic compounds is important to defense and security applications, including the detection of chemical and biological (CB) agents, explosives, and other substances that pose threats to forces and the environment. This paper explores an approach to the detection and identification of quantities as small as single molecules of explosives, which can in principle provide instant warning.
Apertureless near-field scanning optical microscopy (ANSOM) is one of several promising
methods for obtaining spatial resolution below the diffraction limit at various wavelengths,
including in the terahertz regime. By scattering incident light off the junction between a probe
with a sub-wavelength tip and the surface of a sample, spatial resolution on the order of the tip
size can be obtained. For terahertz time-domain spectroscopy where the wavelength-limited
resolution is ~1 millimeter, this is a significant advantage.
In the case of a sufficiently small probe tip and a thin metallic substrate, plasmonic interaction
between the tip and sample provides an enhancement of the near-field in the junction. This
effect is dramatically enhanced for nanometer-scale metal layers, since surface plasmon states
from both sides of the film can contribute to the overall field enhancement.
We present preliminary results of THz plasmonic field enhancements, using a thin (500 nm) gold
film evaporated on glass. We observe an enhancement in the scattered THz wave, which we
attribute to the large density of plasmonic states extending throughout the THz range. This result
indicates a route to single-molecule spectroscopy at terahertz frequencies.
IP Protection of Electronics and Wireless Networks
This paper presents significant improvements to our previous watermarking technique for Intellectual Property
Protection (IPP) of IP cores. The technique relies on hosting the bits of a digital signature at the HDL design level using
resources included within the original system. Thus, any attack trying to change or remove the digital signature will
damage the design. The technique also includes a procedure for secure signature extraction requiring minimal
modifications to the system. The new advances increase the applicability of this watermarking technique to any design, not only those including look-ups, and provide an automatic tool for signature hosting. Synthesis results show that the proposed watermarking strategy causes negligible degradation of system performance and very low area penalties, and that the automated tool, in addition to easing signature hosting, leads to reduced area penalties.
A dynamic watermarking algorithm based on a neural network is presented; compared with single-watermark embedding methods, it is more robust against false-authentication attacks and watermark-tampering operations. (1) Five binary images used as watermarks are coded into a binary array; each 0 or 1 is enlarged fivefold by an information-enlarging technique, so the total number of bits is 5*N, where N is the original number of watermark bits. (2) A seed pixel p(x,y) and its 3x3 neighborhood pixels p(x-1,y-1), p(x-1,y), p(x-1,y+1), p(x,y-1), p(x,y+1), p(x+1,y-1), p(x+1,y), p(x+1,y+1) form one sample: p(x,y) is the neural network target and the eight neighbor values are the inputs. (3) To train the network on this sample space, 5*N pixel values and their closely related neighbor values are chosen at random, under a password, from a color BMP image. (4) A four-layer neural network is constructed to describe the nonlinear mapping between inputs and outputs. (5) Each bit of the array is embedded by adjusting the polarity between a chosen pixel value and the output value of the model. (6) A randomizer generates a number that determines how many watermarks to retrieve. The randomly selected watermarks can be retrieved from the restored network outputs, the corresponding image pixel values, and the restore function, without the original image or watermarks (the restored coded watermark bit is 1 if o(x,y) restored > p(x,y) reconstructed, and 0 otherwise). The retrieved watermarks differ at each extraction. The proposed technique offers more watermarking proofs than single-watermark embedding. Experimental results show that it is robust against common image processing operations and JPEG lossy compression, so the algorithm can be used to protect the copyright of an important image.
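The polarity-embedding idea of steps (2), (5), and (6) can be sketched as follows. For a self-contained, runnable example, the trained four-layer network is replaced by a simple neighborhood-mean predictor, and the embedding sites and strength `delta` are hypothetical; the essential mechanism, setting a pixel above or below the model's prediction and later recovering the bit by comparing against the prediction, is the same.

```python
import numpy as np

def predict_center(img, x, y):
    """Stand-in for the trained four-layer network: predict the center pixel
    from its eight 3x3 neighbors (here simply their mean)."""
    patch = img[x - 1:x + 2, y - 1:y + 2].astype(float)
    return (patch.sum() - patch[1, 1]) / 8.0

def embed_bit(img, x, y, bit, delta=8.0):
    """Adjust the polarity of pixel (x, y) against the model output:
    bit 1 -> pixel above the prediction, bit 0 -> pixel below."""
    pred = predict_center(img, x, y)
    img[x, y] = np.clip(pred + delta if bit else pred - delta, 0, 255)

def extract_bit(img, x, y):
    """Recover a bit without the original image: compare pixel to model output."""
    return 1 if img[x, y] > predict_center(img, x, y) else 0

rng = np.random.default_rng(1)
img = rng.integers(60, 200, size=(16, 16)).astype(float)
sites = [(3, 4), (7, 9), (12, 5)]     # password-selected pixels (hypothetical)
bits = [1, 0, 1]
for (x, y), b in zip(sites, bits):
    embed_bit(img, x, y, b)
assert [extract_bit(img, x, y) for x, y in sites] == bits
```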
An ultra-wideband (UWB) inter-radio ranging technology with a measurement resolution of +/-0.5 ft and a range of up to 0.5 km under FCC regulations was recently introduced. However, the measurement data are extremely erroneous due to stochastic variables in the device and multipath radio-wave reflections. This paper presents fuzzy-logic-tuned double tracking filters as a solution for removing misinformation from the data. The first tracker locates the overall center of the data in the presence of large sporadic noise; fuzzy logic then admits only neighborhood data to a second tracker, which handles the smaller-deviation noise. The fuzzy neighborhood filter approach has been successfully applied to clean up UWB radio ranges, and experimental results are shown.
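The two-stage structure can be sketched as below. For brevity the fuzzy membership is replaced by a crisp gate, and the gains and gate width are illustrative assumptions; the division of labor matches the description above: a coarse tracker rides through the sporadic outliers, and only readings near it reach the fine tracker.

```python
import numpy as np

def double_track(ranges, gate=50.0, a1=0.2, a2=0.1):
    """Two-stage tracker: a first alpha filter follows the overall center
    despite sporadic outliers; a gate (a crisp stand-in for the paper's
    fuzzy neighborhood membership) admits only nearby data to a second,
    finer alpha filter that smooths the small-deviation noise."""
    c1 = c2 = ranges[0]
    for r in ranges[1:]:
        c1 += a1 * (r - c1)          # coarse tracker: overall data center
        if abs(r - c1) < gate:       # neighborhood test
            c2 += a2 * (r - c2)      # fine tracker: admitted data only
    return c2

# true range 100 ft with small noise plus large multipath outliers
data = [100.0, 101.0, 99.0, 500.0, 100.5, 98.5, 450.0, 101.5, 99.5, 100.0] * 5
print(double_track(np.array(data)))   # close to 100 despite the 450/500 spikes
```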
Feature extraction methods based on statistical analysis of the change in event pressure levels over a period, together with the level of ambient pressure excitation, facilitate the development of a robust classification algorithm. The features reliably discriminate mortar and artillery variants via the acoustic signals produced during launch events. Acoustic sensors exploit the sound waveform generated by the blast to identify mortar and artillery variants (type A, and so on) through waveform analysis. Distinct characteristics arise among the different mortar/artillery variants because varying HE mortar payloads and their associated charges produce launch events of varying size. The waveform holds harmonic properties distinct to a given variant that, through advanced signal processing and data mining techniques, can be employed to classify it. Skewness and other statistical processing techniques are used to extract the predominant components from the acoustic signatures at ranges exceeding 3000 m. Exploiting these techniques helps develop a feature set that is largely independent of range, providing discrimination based on acoustic elements of the blast wave. Highly reliable discrimination is achieved with a feed-forward neural network classifier trained on a feature space derived from the distribution of statistical coefficients, the frequency spectrum, and higher-frequency details found within different energy bands. The processes described herein extend current technologies that employ acoustic sensor systems to provide such situational awareness.
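A toy version of this feature extraction, skewness of the waveform plus normalized band energies, can be sketched as follows; the band split and signal are illustrative stand-ins for the paper's feature space, not its definitions.

```python
import numpy as np

def skewness(x):
    """Third standardized moment of a waveform segment."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    return np.mean((x - mu) ** 3) / sigma ** 3

def blast_features(signal, n_bands=4):
    """Toy feature vector: skewness of the waveform plus relative energy in
    equal-width frequency bands (a simplified stand-in for the paper's
    statistical-coefficient and energy-band features)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    energies = np.array([b.sum() for b in np.array_split(spectrum, n_bands)])
    return [skewness(signal)] + list(energies / energies.sum())

t = np.linspace(0, 1, 512)
blast = np.exp(-30 * t) * np.sin(2 * np.pi * 40 * t)   # decaying launch transient
feats = blast_features(blast)
print(feats)
```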
Pulse Coupled Neural Networks (PCNNs) have been shown to be of value in image processing applications, especially for identifying features of small spatial extent at low signal-to-noise ratio. In our use of the PCNN, every pixel in a scene feeds a neuron in a fully connected lateral neural network. Nearest-neighbor neurons contribute to the output of any given neuron through weights that link the neuron and its neighborhood in both a linear and a non-linear fashion. The network is pulsed, and its output at each pulse is a binary mask of active neurons; pulsing drives the network to evaluate its state. The multi-dimensionality and non-linear nature of the network make selecting weights by trial and error a non-trivial problem. It is important that the desired features of the input be identified on a predictable pulse, a problem that has yet to be sufficiently addressed by proponents of the PCNN. Our method overcomes these problems by using a Genetic Algorithm to select the set of PCNN coefficients that identifies the pixels of interest on a predetermined pulse. This enables PCNNs to be trained, a novel capability that makes the method practical for applications.
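A minimal PCNN iteration, the objective a Genetic Algorithm would tune, can be sketched as follows. The coefficient values (linking strength, threshold gain and decay) are illustrative, not tuned values from the paper; the GA's job would be to choose them so that the desired feature appears in the mask on a chosen pulse.

```python
import numpy as np

def pcnn_pulse(S, n_iters=5, beta=0.5, vt=10.0, at=0.2):
    """Minimal pulse-coupled neural network: each pixel feeds a neuron;
    3x3 neighbor pulses form the linking field L, which modulates the
    feeding input non-linearly via U = F * (1 + beta * L). Each pulse
    raises the neuron's dynamic threshold, which then decays."""
    F = S.astype(float)                 # feeding input: the stimulus itself
    theta = np.ones_like(F)             # dynamic threshold
    Y = np.zeros_like(F)                # pulse output
    masks = []
    for _ in range(n_iters):
        # linking field: sum of the 8 neighboring pulses (wraparound edges)
        L = np.zeros_like(F)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    L += np.roll(np.roll(Y, dx, 0), dy, 1)
        U = F * (1 + beta * L)          # non-linear modulation
        Y = (U > theta).astype(float)   # binary mask of active neurons
        theta = np.exp(-at) * theta + vt * Y
        masks.append(Y.copy())
    return masks

S = np.zeros((8, 8)); S[3:5, 3:5] = 2.0   # small bright feature in a dark scene
masks = pcnn_pulse(S)
assert masks[0][3, 3] == 1 and masks[0][0, 0] == 0  # feature fires on pulse 1
```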
Airborne LiDAR data are useful for 3D terrain visualization. Segmentation of the data set is especially important for extracting vector data in scenes containing man-made structures. In a graph-theoretic formulation, each pixel is a node in a connected graph, and the likelihood of an edge existing between two pixels is encoded as the weight between the nodes. Segmentation becomes a graph partitioning that minimizes the weights of the cut links. We combine texture analysis, morphological operators, and the normalized-cut graph-theoretic algorithm to segment LiDAR data sets. Experimental results using collected data demonstrate the efficacy of our method.
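The normalized-cut step can be sketched on a toy 1-D set of LiDAR heights: Gaussian similarities form the weight matrix, and thresholding the second-smallest eigenvector (Fiedler vector) of the normalized Laplacian approximates the minimal normalized cut. The heights, the similarity bandwidth, and the median threshold are illustrative choices.

```python
import numpy as np

# Toy graph-theoretic segmentation: ground returns (~1 m) vs rooftop (~4 m).
heights = np.array([1.0, 1.3, 0.8, 4.0, 4.3, 4.1])
W = np.exp(-(heights[:, None] - heights[None, :]) ** 2 / 2.0)  # edge weights
np.fill_diagonal(W, 0)

d = W.sum(axis=1)                         # node degrees
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_sym = np.eye(len(heights)) - D_inv_sqrt @ W @ D_inv_sqrt     # normalized Laplacian

eigvals, eigvecs = np.linalg.eigh(L_sym)  # eigenvalues in ascending order
fiedler = eigvecs[:, 1]                   # second-smallest eigenvector
labels = (fiedler > np.median(fiedler)).astype(int)  # two-way partition
print(labels)
```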
Hard thresholding seems to work well for denoising signals using higher-order statistics. We statistically examined
the best values for hard thresholding and related this to the fraction of wavelet coefficients set to zero to obtain the
minimum MSE. In addition, we found that the minimum MSE obtained was less sensitive to the threshold when
implemented based on a third-order parameter rather than the noise power. Alternatively, we found that this
approach to thresholding could be implemented by setting a fixed fraction of wavelet coefficients to zero.
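The fixed-fraction variant described above amounts to sorting coefficients by magnitude and zeroing the smallest ones; a short numpy sketch on a synthetic sparse coefficient vector follows (the fraction and signal are illustrative).

```python
import numpy as np

def hard_threshold_fraction(coeffs, frac_zero=0.8):
    """Hard-threshold wavelet coefficients by zeroing a fixed fraction of the
    smallest-magnitude coefficients, instead of picking a threshold from an
    estimate of the noise power."""
    c = np.asarray(coeffs, dtype=float).copy()
    k = int(frac_zero * c.size)             # number of coefficients to kill
    if k > 0:
        cutoff = np.sort(np.abs(c))[k - 1]  # k-th smallest magnitude
        c[np.abs(c) <= cutoff] = 0.0
    return c

rng = np.random.default_rng(0)
clean = np.zeros(100); clean[:5] = [9, -8, 7, 6, -5]   # sparse signal coefficients
noisy = clean + 0.3 * rng.standard_normal(100)
den = hard_threshold_fraction(noisy, frac_zero=0.9)
assert np.count_nonzero(den) <= 10     # at least 90% of coefficients zeroed
assert den[0] != 0                     # the large signal coefficients survive
```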
In this paper we introduce a canonical minimised adder graph (CMAG) representation that can easily be generated by computer. We show that this representation can be used to develop efficient code generation for MAG graphs. Several code optimization methods are developed for the non-output fundamental sum (NOFS) computation, which allows all graphs up to cost-5 to be computed in a reasonable timeframe.
Knowledge-based clustering and autonomous mental development remain high-priority research topics in which neural-network learning techniques are used to achieve optimal performance. In this paper, we present a new framework that can automatically generate a relevance map from sensory data, representing knowledge about objects and inferring new knowledge about novel objects. The proposed model is based on an understanding of the visual "what" pathway in the brain. A stereo saliency map model selectively identifies salient object areas by additionally considering a local symmetry feature. The incremental object perception model builds clusters for the construction of an ontology map in the color and form domains in order to perceive an arbitrary object; it is implemented by the growing fuzzy topology adaptive resonance theory (GFTART) network. Log-polar-transformed color and form features for a selected object are used as inputs to the GFTART network. The clustered information is relevant for describing specific objects, and the proposed model can automatically infer an unknown object by using the learned information. Experimental results with real data demonstrate the validity of this approach.
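As a rough sketch of the ART-style clustering underlying the GFTART network (shown here as plain fuzzy ART; the growing-topology extensions of GFTART are omitted, and the parameter names are the standard ART ones, not the paper's):

```python
import numpy as np

def fuzzy_art_step(I, weights, rho=0.75, alpha=0.001, beta=1.0):
    """One presentation step of a plain fuzzy ART network.

    I       : input feature vector with entries in [0, 1]
    weights : list of category weight vectors (modified in place)
    rho     : vigilance threshold
    Returns the index of the resonating category (a new one if none passes).
    """
    if not weights:
        weights.append(I.copy())
        return 0
    # Category choice: T_j = |I ^ w_j| / (alpha + |w_j|), with ^ = elementwise min
    matches = [np.minimum(I, w) for w in weights]
    T = [m.sum() / (alpha + w.sum()) for m, w in zip(matches, weights)]
    for j in np.argsort(T)[::-1]:                 # try best-matching category first
        if matches[j].sum() / I.sum() >= rho:     # vigilance (resonance) test
            weights[j] = beta * matches[j] + (1 - beta) * weights[j]
            return int(j)
    weights.append(I.copy())                      # no resonance: grow a new category
    return len(weights) - 1
```

In the paper's framework, the inputs would be the log-polar-transformed color and form features of a selected object, and each resonating category contributes a node to the ontology map.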
Breast cancer has been one of the leading causes of cancer deaths among females in developed countries, including the US. While early detection of breast cancer is essential for reducing the death rate, a breast cancer may already contain more than 10^7 cells by the time it can be observed on an X-ray mammogram. In contrast, the passive IR spectrogram proposed by Szu et al. has been shown to be promising for detecting breast cancer several months ahead of mammography. With the energy readings from two IR cameras, one middle-wavelength IR (MIR, 3-5 μm) and one long-wavelength IR (LIR, 8-12 μm), the IR spectrogram may be computed using the blind source separation (BSS) algorithms developed by Szu et al., which reveal the probability of each point on the breast surface being a cancer point. Two important tasks are involved in computing the IR spectrogram. One is an accurate estimate of the ground-state energy in the Helmholtz free energy, H = E - T0S. The other is a correct pair-up of the points in the MIR and LIR images for a better estimate of the IR spectrogram. To minimize the probability of an erroneous estimate of the ground-state energy inherent in the deterministic neighborhood-based BSS algorithm, a spatiotemporal approach is proposed in this paper. It takes into account not only neighborhood information but also temporal information in determining the probability of a point being a cancer point. Furthermore, a new sub-pixel super-resolution registration algorithm incorporating a third energy dimension is proposed to establish better correspondences between the points in the MIR and LIR images. A phantom study has confirmed that sub-pixel registration can be achieved by the proposed registration method. A human-subject study further shows that breast cancer may be detected by the proposed spatiotemporal approach by cross-referencing the IR spectrograms computed from multiple pairs of MIR and LIR images taken at different times.
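A minimal sketch of how source fractions might be chosen for a single two-band (MIR/LIR) pixel by minimizing the Helmholtz free energy H = E - T0S; the mixing matrix A, the quadratic energy term, and the grid search are illustrative assumptions here, not Szu et al.'s actual BSS algorithm:

```python
import numpy as np

def free_energy_unmix(x, A, T0=1.0, n_grid=999):
    """Pick source fractions s = (s1, 1 - s1) for a two-band pixel reading x
    by minimizing the Helmholtz free energy H = E - T0*S, where
      E = ||x - A s||^2            (data-fit energy)
      S = -s1 ln s1 - s2 ln s2     (Shannon entropy of the fractions)
    A is an assumed 2x2 mixing matrix (columns = source signatures).
    """
    s1 = np.linspace(1e-6, 1 - 1e-6, n_grid)
    s = np.stack([s1, 1 - s1])                   # 2 x n_grid candidate fractions
    E = ((x[:, None] - A @ s) ** 2).sum(axis=0)  # energy of each candidate
    S = -(s * np.log(s)).sum(axis=0)             # entropy of each candidate
    H = E - T0 * S                               # free energy
    return s[:, H.argmin()]                      # minimum-free-energy fractions
```

At low temperature T0 the data-fit energy dominates and the fractions track the readings; the entropy term pulls the answer toward the maximum-ignorance split (0.5, 0.5) when the readings are uninformative.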
Robotic technology has advanced considerably, from industrial robotic manipulation to micro- and nanomanipulation. We have proposed a "Nano Laboratory" combining the capabilities of nanofabrication, nanoinstrumentation and nanoassembly to build three-dimensional structures, devices and systems in the nanoworld. First, a nanorobotic manipulation system is introduced to realize these capabilities inside a scanning electron microscope (SEM) and a transmission electron microscope (TEM). Then, precursors are introduced into the small working space so that cutting, bending and fixing operations can be performed in the nanoworld using a nanomanipulator, electron-beam-induced deposition (EBID) and other methodologies, as in the macro world. After the three-dimensional structure is made, nano devices such as sensors and actuators can be fabricated.
Fast Parallel Processing using GPU and Applications in STAP (Space Time Adaptive Processing)
This paper reviews the DARPA MTO STAP-BOY program for both Phases I and II. The STAP-BOY program develops fast covariance factorization and tuning techniques for space-time adaptive processing (STAP) algorithm implementation on graphics processing unit (GPU) architectures for embedded systems.
Emerging capabilities in stream and multi-core computation, along with the high memory bandwidth of commercial GPU architectures, are enabling breakthrough low-cost, low-power teraflop computing solutions to DoD embedded-computing challenges. Under the DARPA MTO STAP-BOY program, SAIC and Duke University,
in cooperation with commercial graphics processor companies, have been mapping complex signal processing
algorithms to GPU architectures. Algorithms undergoing implementation include STAP applications for radar
adaptive beamforming and spin-image surface matching applications for object recognition in 3-D range-image
data.
This paper reviews the implementation of the DARPA MTO STAP-BOY program for both Phases I and II conducted at Science Applications International Corporation (SAIC). The STAP-BOY program develops fast covariance factorization and tuning techniques for space-time adaptive processing (STAP) algorithm implementation on graphics processing unit (GPU) architectures for embedded systems.
The first part of our presentation on the DARPA STAP-BOY program will focus on GPU implementation and
algorithm innovations for a prototype radar STAP algorithm. The STAP algorithm will be implemented on the
GPU, using stream programming (from companies such as PeakStream, ATI Technologies' CTM, and NVIDIA)
and traditional graphics APIs. The algorithm will include fast range-adaptive STAP weight updates and beamforming applications, each of which has been modified to exploit the parallel nature of graphics architectures.
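As a sketch of the covariance-factorization step in the standard STAP weight solve (illustrative NumPy, not the SAIC/Duke GPU implementation; the diagonal-loading constant is an assumption):

```python
import numpy as np

def stap_weights(snapshots, v):
    """Adaptive STAP weights w = R^-1 v / (v^H R^-1 v) via Cholesky factorization.

    snapshots : (N, K) matrix of K training snapshots (N = spatial x temporal DoF)
    v         : length-N space-time steering vector
    """
    N, K = snapshots.shape
    # Sample covariance with diagonal loading for numerical stability
    R = snapshots @ snapshots.conj().T / K + 1e-3 * np.eye(N)
    # Factorize R = L L^H -- the step the program accelerates on the GPU
    L = np.linalg.cholesky(R)
    # Solve R u = v with two triangular solves instead of a full matrix inverse
    u = np.linalg.solve(L.conj().T, np.linalg.solve(L, v))
    return u / (v.conj().T @ u)  # normalize so that w^H v = 1
```

Updating L incrementally as new range gates arrive, rather than refactorizing R from scratch, is the kind of fast weight update the abstract refers to.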
Accurate (subpixel) image registration techniques are critical components of many advanced image-processing systems, such as time-differencing and super-resolution imaging systems. In this paper, several image registration methods are compared and evaluated. Registration accuracy is evaluated on several LWIR and CCD video sequences using two metrics: RMSE (root-mean-square error) and SCNR (signal-to-clutter-noise ratio) gain, i.e., the improvement in SCNR. Promising applications to iris biometric identification, super-resolution image enhancement, and heavy background-clutter suppression for improved moving-target detection are presented.
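One plausible way to compute the two evaluation metrics (the paper's exact definitions may differ):

```python
import numpy as np

def rmse(reference, registered):
    """Root-mean-square intensity error between a reference frame and a
    registered frame; lower means better alignment."""
    diff = reference.astype(float) - registered.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def scnr_gain(diff_before, diff_after, target_mask):
    """SCNR improvement of a frame-difference image after registration.

    SCNR = |mean target signal| / std of the background clutter, so better
    registration suppresses clutter in the difference image and raises SCNR.
    """
    def scnr(d):
        signal = np.abs(d[target_mask].mean())
        clutter = d[~target_mask].std()
        return signal / clutter
    return scnr(diff_after) / scnr(diff_before)
```

For moving-target detection, `diff_before` and `diff_after` would be the difference of two frames without and with subpixel registration, and `target_mask` marks the known target pixels.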