This PDF file contains the front matter associated with SPIE Proceedings Volume 8058, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
There are intrinsic wavelet applications, by which we mean mathematical modeling of a physical phenomenon
in which wavelet theory is the most natural quantitative means of explaining the phenomenon. This is not
the same as the invaluable use of dyadic wavelets, say, as a tool with which to zoom-in or -out with regard to
multi-scale phenomena. An example of an intrinsic wavelet application is wavelet auditory modeling (WAM).
WAM is analyzed herein, and a natural excursion, one of many possibilities, is taken from WAM to applications
of finite frames. This path includes the role of the Discrete Fourier Transform (DFT) in WAM, the emergence
of DFT frames, and their use in analyzing Sigma-Delta (Σ∆) quantization, which itself is a staple in audio engineering as well
as in a host of other applications.
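As background for readers unfamiliar with Σ∆ quantization, the following is a minimal sketch of a first-order Sigma-Delta quantizer (a generic textbook scheme, not the frame-theoretic analysis developed in the paper); the running state carries the quantization error forward so that low-frequency content is preserved in the one-bit stream.

```python
import numpy as np

def first_order_sigma_delta(x):
    """First-order Sigma-Delta quantization of a sequence x with |x[n]| <= 1.
    Each output sample is +/-1; the state u accumulates the quantization error."""
    q = np.empty_like(x, dtype=float)
    u = 0.0
    for n, xn in enumerate(x):
        q[n] = 1.0 if u + xn >= 0 else -1.0   # one-bit quantizer
        u = u + xn - q[n]                     # error is carried to the next step
    return q

# Toy usage: a slowly varying signal; a moving average of the bits approximates the input.
t = np.linspace(0, 1, 2000)
x = 0.5 * np.sin(2 * np.pi * 3 * t)
bits = first_order_sigma_delta(x)
approx = np.convolve(bits, np.ones(50) / 50, mode="same")
```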
We describe a smart surveillance strategy for handling novelty changes. Current sensors tend to keep everything, redundant or not. The Human Visual System's Hubel-Wiesel (wavelet) edge-detection mechanism pays attention to changes in movement, which naturally produce organized sparseness because a stagnant edge is not reported to the brain's visual cortex by the retinal neurons. Sparseness is defined here as an ordered set of ones (movement or no movement) relative to zeros; such sparse patterns can be made pseudo-orthogonal to one another and are therefore suited for fault-tolerant storage and retrieval by means of Associative Memory (AM). The firing is sparse, occurring only at the change locations. Unlike the purely random sparse masks adopted in medical Compressive Sensing, these organized ones have the additional benefit of turning the image changes into retrievable graphical indexes. We have coined this organized sparseness Compressive Sampling: sensing while skipping over redundancy, without altering the original image. We thus illustrate with video the survival tactics that animals roaming the Earth use daily: they acquire nothing but the space-time changes that matter to specific prey-predator relationships. We have noticed the similarity between mathematical Compressive Sensing and this biological survival mechanism, and we have designed a hardware implementation of the Human Visual System's Compressive Sampling scheme. To speed it up further, our mixed-signal circuit design of frame differencing is built into on-chip processing hardware. A CMOS transconductance amplifier is designed here to generate a linear current output from a pair of differential input voltages taken from two photon detectors for change detection, one holding the previous value and the other the subsequent value ("write" the synaptic weights by Hebbian outer products; "read" by inner products and a pointwise nonlinear threshold), in order to localize and track the threat targets.
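A minimal software analogue of the frame-differencing idea described above (the paper's implementation is a mixed-signal CMOS circuit; the array sizes and threshold below are illustrative assumptions):

```python
import numpy as np

def change_mask(prev_frame, next_frame, threshold=12):
    """Report only pixels whose intensity changed between two frames: the 'organized
    sparseness' of ones at change locations, zeros elsewhere."""
    diff = next_frame.astype(np.int16) - prev_frame.astype(np.int16)
    mask = np.abs(diff) > threshold
    return mask, diff * mask                 # sparse update: only the changed values survive

# Toy usage with two hypothetical 8-bit frames containing one small moving patch.
rng = np.random.default_rng(0)
prev_frame = rng.integers(0, 200, (64, 64), dtype=np.uint8)
next_frame = prev_frame.copy()
next_frame[20:24, 30:34] += 40
mask, sparse_update = change_mask(prev_frame, next_frame)
```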
Automatic classification of broadband transient radio frequency (RF) signals is of particular interest in persistent
surveillance applications. Because such transients are often acquired in noisy, cluttered environments, and are
characterized by complex or unknown analytical models, feature extraction and classification can be difficult. We
propose a fast, adaptive classification approach based on non-analytical dictionaries learned from data. Conventional
representations using fixed (or analytical) orthogonal dictionaries, e.g., Short Time Fourier and Wavelet Transforms, can
be suboptimal for classification of transients, as they provide a rigid tiling of the time-frequency space, and are not
specifically designed for a particular signal class. They do not usually lead to sparse decompositions, and require
separate feature selection algorithms, creating additional computational overhead. Pursuit-type decompositions over
analytical, redundant dictionaries yield sparse representations by design, and work well for target signals in the same
function class as the dictionary atoms. The pursuit search however has a high computational cost, and the method can
perform poorly in the presence of realistic noise and clutter. Our approach builds on the image analysis work of Mairal et
al. (2008) to learn a discriminative dictionary for RF transients directly from data without relying on analytical
constraints or additional knowledge about the signal characteristics. We then use a pursuit search over this dictionary to
generate sparse classification features. We demonstrate that our learned dictionary is robust to unexpected changes in
background content and noise levels. The target classification decision is obtained in almost real-time via a parallel,
vectorized implementation.
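For illustration only, the sketch below shows the generic pipeline of learning a data-driven dictionary and generating sparse pursuit features with scikit-learn; the discriminative training term of Mairal et al. and the RF-specific preprocessing are omitted, and the data are random stand-ins.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 128))     # hypothetical windowed transient snippets (rows)

# Learn a redundant, non-analytical dictionary directly from the training data.
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
dico.fit(X_train)
D = dico.components_                      # (64, 128): learned atoms

# Sparse classification features for a new signal via a pursuit (OMP) over the dictionary.
x_new = rng.normal(size=128)
features = orthogonal_mp(D.T, x_new, n_nonzero_coefs=8)   # length-64 sparse coefficient vector
```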
In this article, we introduce the concept of fractional wavelet transform. Using a two-channel unbalanced lifting
structure it is possible to decompose a given discrete-time signal x[n] sampled with period T into two sub-signals
x1[n] and x2[n] whose average sampling periods are pT and qT, respectively. Fractions p and q are rational
numbers satisfying the condition 1/p + 1/q = 1. The low-band sub-signal x1[n] comes from the [0, π/p] band and the high-band wavelet signal x2[n] comes from the (π/p, π] band of the original signal x[n]. Filters used in the lifting structure are designed using the Lagrange interpolation formula. It is straightforward to extend the proposed fractional wavelet transform to two or higher dimensions in a separable or non-separable manner.
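As a numerical illustration (chosen here for concreteness, not quoted from the paper), take p = 3/2 and q = 3:

```latex
\frac{1}{p} + \frac{1}{q} = \frac{2}{3} + \frac{1}{3} = 1 ,
```

so x1[n] retains on average two of every three input samples (average period 1.5T) and x2[n] one of every three (average period 3T), which together preserve the total sample count of x[n].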
This work describes a methodology for the recovery of anomalies and their spectral signatures from compressively
sensed multi-spectral video using Principal Component Pursuit (PCP). In video surveillance, approaches based
on PCP allow anomaly detection against a cluttered background by modeling a sequence of video frames as a
large data matrix composed of a low-rank matrix plus a sparse matrix. The low-rank matrix corresponds to
the stationary background and the sparse matrix captures the anomalies in the foreground. The compressive
spectral video frames are attained by the use of a Coded Aperture Snapshot Spectral Imaging (CASSI) system.
The CASSI system allows the compressive measurement of spectrally rich video content by simply capturing a
sequence of 2D coded aperture video frames. This paper describes improved procedures for the reconstruction
of the video anomalies and their spectra based on the 2-D, aperture-coded, isolated anomalies.
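The following is a minimal NumPy sketch of the generic PCP decomposition via an inexact augmented Lagrangian solver, stated under the usual choice λ = 1/√max(m, n); it illustrates the low-rank-plus-sparse model above but does not reproduce the paper's CASSI-specific reconstruction.

```python
import numpy as np

def pcp(M, lam=None, tol=1e-7, max_iter=500):
    """Principal Component Pursuit: M ~ L (low rank, background) + S (sparse, anomalies)."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    norm_M = np.linalg.norm(M)
    mu = 1.25 / np.linalg.norm(M, 2)
    mu_max, rho = mu * 1e7, 1.5
    Y = M / max(np.linalg.norm(M, 2), np.abs(M).max() / lam)   # dual variable initialization
    L, S = np.zeros_like(M), np.zeros_like(M)
    for _ in range(max_iter):
        # Low-rank update by singular-value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update by elementwise soft thresholding
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Z = M - L - S
        Y = Y + mu * Z
        mu = min(rho * mu, mu_max)
        if np.linalg.norm(Z) / norm_M < tol:
            break
    return L, S

# Usage sketch: stack each reconstructed spectral frame as one column of M; L then models
# the stationary background and S captures the foreground anomalies.
```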
Sparse representations of multidimensional data have gained more and more prominence in recent years, in
response to the need to process large and multi-dimensional data sets arising from a variety of applications in
a timely and effective manner. This is especially important in applications such as remote sensing, satellite
imagery, scientific simulations and electronic surveillance. Directional multiscale systems such as shearlets are
able to provide sparse representations thanks to their ability to approximate anisotropic features much more
efficiently than traditional multiscale representations. In this paper, we show that the shearlet approach is
essentially optimal in representing a large class of 3D data containing discontinuities along surfaces. This is the first
nonadaptive approach to achieve provably optimal sparsity properties in the 3D setting.
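For context, the benchmark usually cited in the 3D shearlet literature (stated here as background, not quoted from this abstract) is that the best N-term shearlet approximation f_N of a cartoon-like 3D function f with piecewise-smooth discontinuity surfaces satisfies

```latex
\| f - f_N \|_2^{2} \;\le\; C\, N^{-1} (\log N)^{2} ,
```

which matches the optimal decay rate N^{-1} up to a logarithmic factor, whereas standard wavelet approximations achieve only roughly N^{-1/2}.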
Volumetric data acquisition and increasingly massive data storage have increased the need to develop better
analysis tools for three-dimensional data sets. These volumetric data sets can provide information beyond that
contained in standard two-dimensional images. Common strategies to deal with such data sets have been based
on sequential use of two-dimensional analysis tools. In this work, we propose using an extension of the wavelet
transform known as the shearlet transform for the purpose of edge analysis and detection in three-dimensions.
This method takes advantage of the shearlet transform's improved capability to identify edges compared to
wavelet-based approaches.
We proposed a novel framework that allows a method optimized for white noise to be used for denoising
CT imagery. We considered low-dose x-ray CT imagery where lowering the dose of x-rays results in an increase in
quantum noise. We first denoised an image independently several times using different parameters. Then, we
selected pixels from those denoised images to form a final composite image. We produced results using block-matching
denoising, but in principle other methods could work within this framework as well. The proposed
method was able to better reproduce regions of low contrast than the conventional BM3D approach.
The application of wavelet transforms to de-noising transient, time-domain optical emission signals generated from
microsamples introduced into an atmospheric-pressure microplasma-on-a-chip is described. The wavelet method of denoising
transient signals is compared with traditional noise-filtering methods such as Fast Fourier Transform (FFT) and
Fast Hartley Transform (FHT) signal processing. For the transient signals of interest to this work, the wavelet method
proved to be superior to both the FFT and the FHT.
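As a generic illustration of wavelet de-noising of a transient (a sketch using PyWavelets with a universal soft threshold; the paper's choice of wavelet, decomposition level, and threshold rule may differ):

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=5):
    """Soft-threshold the detail coefficients; the noise level is estimated from the finest scale."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(signal)))             # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Toy usage: a noisy Gaussian-shaped transient as a stand-in for an emission peak.
t = np.linspace(0, 1, 1024)
clean = np.exp(-((t - 0.3) ** 2) / 0.001)
noisy = clean + 0.1 * np.random.default_rng(0).normal(size=t.size)
recovered = wavelet_denoise(noisy)
```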
Independent component analysis (ICA) for acoustic mixtures has been a challenging problem due to very complex
reverberation involved in real-world mixing environments. In an effort to overcome disadvantages of the
conventional time domain and frequency domain approaches, this paper describes filterbank-based independent
component analysis for acoustic mixtures. In this approach, input signals are split into subband signals and
decimated. A simplified network performs ICA on the decimated signals, and finally independent components
are synthesized. First, a uniform filterbank is employed in the approach for basic and simple derivation and implementation.
The uniform-filterbank-based approach achieves better separation performance than the frequency
domain approach and gives faster convergence speed with less computational complexity than the time domain
approach. Since most natural signals have energy that decreases exponentially or faster as frequency increases, these
spectral characteristics motivate a Bark-scale filterbank, which divides the low-frequency region finely and the
high-frequency region coarsely. The Bark-scale-filterbank-based approach converges faster than the
uniform-filterbank-based one because its inputs in the low-frequency subbands are more whitened. It also improves
separation performance, since the wider high-frequency subbands provide enough data to train the adaptive parameters
accurately.
A method for target speech enhancement based on the degenerate unmixing estimation technique (DUET) is
described. To avoid the requirements of DUET, which needs to know the number of sources in advance
and must estimate the attenuation and delay parameters for all sources, the method assumes that extraction of only
one target signal is required, which is often plausible in real-world applications such as speech enhancement. The
method can efficiently recover the target speech with fast convergence by estimating the parameters for the target
source only. In addition, it does not need to know the number of sources in advance. In order to accomplish robust
speech recognition, we propose an algorithm which employs the cluster-based missing feature reconstruction
technique based on log-spectral features of enhanced speech in the process of extracting mel-frequency cepstral
coefficients (MFCCs). The algorithm estimates missing time-frequency regions by computing the signal-to-noise
ratios (SNRs) from the log-spectral features of the enhanced speech and observed noisy speech and by finding time-frequency segments which have the SNRs smaller than a threshold. The missing time-frequency regions are filled by using bounded estimation based on the log-spectral features that are considered to be reliable and on the knowledge of the log-spectral feature cluster to which the incoming target speech is assumed to belong. Then, the log-spectral features are transformed into cepstral features in the usual fashion of extracting MFCCs. Experimental results show that the proposed algorithm significantly improves recognition performance in noisy environments.
Analysis of bioimaging and biospectra data has received increasing attention in recent years.
Here we will present two experimental results based on independent component analysis (ICA):
differentiation of superparamagnetic iron oxide (SPIO) nanoparticles used as contrast agents in magnetic
resonance imaging (MRI), and differentiation of mixed chemical analytes by surface-enhanced Raman
scattering (SERS). The SPIO nanoparticles have been applied extensively as contrast agents in MRI, for example for
tracking of stem cells and targeted detection of cancer, owing to their biocompatibility and biodegradability.
Differentiation of SPIO from the background signal (e.g., the interface between air and tissue) is made difficult by
signal voids arising from multiple sources. To solve this problem, we assume that the
number of sensors corresponds to the number of acquisitions with different combinations of MR
parameters, i.e., longitudinal and transverse relaxation times. For detection of chemical and biological
analytes, the SERS approach has drawn more interest because of its high sensitivity. SERS spectra of
mixed analytes were acquired at different locations of a silver nanorod array substrate. Because the analytes diffuse
and adsorb non-uniformly, the spectra acquired at different locations contain varying mixtures, and they have been
successfully used to identify the characteristic SERS spectrum of each individual analyte. In both the MRI and SERS data, signal source separation (SPIO or mixed chemical analytes from the background signal) was performed on a pixel-by-pixel basis. The ICA was performed as a spatial analysis using the FastICA method.
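A minimal sketch of the pixel-wise spatial ICA step with scikit-learn's FastICA (synthetic stand-in data; in the paper the rows would be the different MR acquisitions or SERS measurement locations and the columns the pixels or wavenumber bins):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
S_true = np.abs(rng.normal(size=(2, 1000)))          # two latent source patterns (e.g., SPIO vs. background)
A_mix = rng.uniform(0.5, 1.5, size=(4, 2))           # unknown mixing across 4 acquisitions
X = A_mix @ S_true + 0.01 * rng.normal(size=(4, 1000))

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X.T).T                      # estimated independent source maps
A_est = ica.mixing_                                   # estimated mixing matrix (4 x 2)
```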
In the past, autonomic nervous system response has often been determined through measuring Electrodermal Activity
(EDA), sometimes referred to as Skin Conductance (SC). Recent work has shown that high resolution thermal cameras
can passively and remotely obtain an analog to EDA by assessing the activation of facial eccrine skin pores. This paper
investigates a method to distinguish facial skin from non-skin portions on the face to generate a skin-only Dynamic
Mask (DM), validates the DM results, and demonstrates DM performance by removing false pore counts. Moreover,
this paper shows results from these techniques using data from 20+ subjects across two different experiments. In the
first experiment, subjects were presented with primary screening questions for which some had jeopardy. In the second
experiment, subjects experienced standard emotion-eliciting stimuli. The results from using this technique will be shown in relation to data and human perception (ground truth). This paper introduces an automatic end-to-end skin detection approach based on texture feature vectors. In doing so, the paper contributes not only a new capability of tracking facial skin in thermal imagery, but also enhances our capability to provide non-contact, remote, passive, and real-time methods for determining autonomic nervous system responses for medical and security applications.
A common objective with active magnetic, radio frequency, and acoustic sensors is to detect
and sense small signals in the presence of large magnitude clutter or interference signals. When the interference to signal-of-interest ratio is greater than the dynamic range of the linear system, the system response may become non-linear. In this paper an adaptive analog cancellation approach is proposed to track and cancel the interference signal. The circuit's
linear property is then recovered, enabling detection of the small magnitude signals. The concept was realized in an active RF sensor which demonstrated the effectiveness of the strategy.
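A hedged digital analogue of the adaptive cancellation idea (the paper's realization is an analog circuit): a normalized LMS filter tracks the large interferer from a reference channel and subtracts it, restoring headroom for the small signal of interest. All signal parameters below are illustrative.

```python
import numpy as np

def nlms_cancel(primary, reference, n_taps=16, mu=0.5, eps=1e-6):
    """Normalized LMS canceller: primary = small signal + interference; reference is
    correlated with the interference only. Returns the residual after cancellation."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]      # most recent reference samples
        e = primary[n] - w @ x                 # residual = small signal + misadjustment
        w += mu * e * x / (x @ x + eps)        # normalized weight update
        out[n] = e
    return out

# Toy usage: a weak tone buried under a strong interferer seen on both channels.
t = np.arange(20_000) / 10_000
interferer = np.sin(2 * np.pi * 500 * t)
small_signal = 0.01 * np.sin(2 * np.pi * 123 * t)
primary = small_signal + 5.0 * np.roll(interferer, 3)   # interference arrives slightly delayed
reference = interferer
cleaned = nlms_cancel(primary, reference)
```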
Standard unsupervised feature extraction methods such as PCA and ICA provide representative features and latent
variables that minimize the data reconstruction error. These generative features may be common to all data and may
not be optimal for classification tasks. The discriminant ICA (dICA) and discriminant NMF (dNMF) algorithms were recently
proposed; they jointly maximize the Fisher linear discriminant and the negentropy of the extracted features. Motivated by
independence among features and a modified Fisher linear discriminant, the new algorithm extracts features with both
generative and discriminant powers. Then, the features are further fine-tuned by supervised learning. Experimental
results show excellent recognition performance with these features.
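For reference, the classical Fisher linear discriminant that these discriminative objectives build on maximizes the ratio of between-class to within-class scatter along a projection w (standard textbook definition, included as background):

```latex
J(\mathbf{w}) = \frac{\mathbf{w}^{\top} S_B \mathbf{w}}{\mathbf{w}^{\top} S_W \mathbf{w}},
\qquad
S_B = \sum_{c} N_c (\boldsymbol{\mu}_c - \boldsymbol{\mu})(\boldsymbol{\mu}_c - \boldsymbol{\mu})^{\top},
\quad
S_W = \sum_{c} \sum_{i \in c} (\mathbf{x}_i - \boldsymbol{\mu}_c)(\mathbf{x}_i - \boldsymbol{\mu}_c)^{\top}.
```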
Artificial neural networks (ANNs) are being developed for spectral interference correction in optical emission
spectrometry using spectral simulations. The networks are being developed for inductively coupled plasma-atomic
emission spectrometry (ICP-AES) and for optical emission measurements using microplasmas. In this paper,
development of artificial neural networks for spectral interference correction will be described in some detail.
Artificial neural networks are widely used in pattern recognition for sensing systems and other areas.
In this paper, we propose to improve the performance of neural networks from the perspectives
of output encoding rules, determination of training sample sizes, training performance index and evaluation
of generalization error. We propose a new output encoding rule which significantly reduces the
training error as compared to classical output encoding methods. Moreover, we develop a new training
performance index which is closely related to the generalization error and is a smooth function suitable for optimization via nonlinear programming. Furthermore, motivated by the crucial impact of training sample size on the generalization error and the computational complexity of training, we propose a rigorous method for determining an appropriate number of training samples. Since the development of a neural network requires many cycles of training and performance evaluation, we introduce adaptive methods for estimating the generalization error. The new techniques of neural network training and evaluation have the potential to improve the power of modern sensing systems.
This paper surveys the history of tip-based micro/nanomanipulation systems and the contributions of the authors to
this topic. Atomic force microscope and scanning tunneling microscope type instruments have been used as
nanorobotic manipulation systems since 1990. Using single or multiple tips, many mechanical, electrical, and
chemical micro/nanomanipulation applications have been demonstrated. The authors contributed to teleoperated and
automated control of such systems and also developed new tip-based micro/nanomanipulation methods to draw
polymer micro/nanofibers and create nanowires on substrates precisely and repeatedly.
Using carbon nanotubes (CNTs), high-performance infrared detectors have been developed. Since CNTs have
extraordinary optoelectronic properties due to their unique one-dimensional geometry and structure, CNT-based
infrared detectors have extremely low dark current, low noise-equivalent temperature difference (NETD),
short response time, and high dynamic range. Most importantly, they can detect 3-5 um mid-wave infrared
(MWIR) radiation at room temperature. This unique feature can significantly reduce the size and weight of an MWIR
imaging system by eliminating the cryogenic cooling system. However, two major difficulties impede
the application of CNT-based IR detectors in imaging systems. First, the small diameter of the CNTs results in a
low fill factor. Second, it is difficult to fabricate large-scale detector arrays for high-resolution focal planes because
of manufacturing efficiency and cost limitations. In this paper, a new CNT-based IR imaging
system is presented. By integrating the CNT detectors with a photonic crystal resonant cavity, the fill factor
of the CNT-based IR sensor can reach as high as 0.91. Furthermore, using compressive sensing technology,
high-resolution imaging can be achieved with CNT-based IR detectors. The experimental results show
that the new imaging system achieves the superb performance enabled by CNT-based IR detectors while, at
the same time, overcoming their difficulties to achieve high-resolution and efficient imaging.
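To make the compressive-sensing step concrete, here is a minimal iterative soft-thresholding (ISTA) sketch for recovering a sparse scene from a small number of linear measurements; the measurement matrix, dimensions, and sparsity level are illustrative assumptions, not the CNT system's actual sensing model.

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=300):
    """Solve min_x 0.5*||y - Ax||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

# Toy demo: recover a k-sparse vector from m < n random projections.
rng = np.random.default_rng(0)
n, m, k = 256, 80, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = ista(A, y)
```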
The fabrication of integrated nanomachinery systems can enable breakthrough applications in nanoelectronics,
photonics, bioengineering, and drug delivery or disease treatment. Naturally occurring nanomotors are biological
motor proteins powered by catalytic reactions, which convert the chemical energy from the environment into
mechanical energy directly. It has been demonstrated recently that using a simple catalytic reaction and an
asymmetric bimetallic nanorod, one can produce catalytic nanomotors that mimic the autonomous motions of
bionanomotors. Yet the construction of artificial nanomachines remains a major contemporary challenge due to the
lack of a flexible fabrication technique that can design the desired dynamic components. We use a design technique
called dynamic shadowing growth that allows for the fabrication of a wide range of geometries; the asymmetric
placement of the catalyst, which is necessary for directed propulsion, is easily accomplished as well.
Programming nanomotor behavior is possible through geometry-focused design, and incorporating different
materials into the nanomotor structure is also a simple process. A propulsion mechanism based upon bubble
ejection from the catalyst surface is introduced to explain the driving force, and this driving
mechanism is compared with the self-electrophoresis mechanism. Using magnetic interactions, we have also successfully incorporated multiple
parts to form complex nanomotor assemblies that exhibit motions not observed from the individual parts.
For EOIR nanotechnology sensors, we elucidated the quantum mechanical nature of the Einstein photoelectric effect in terms of a field-effect transistor (FET) made of carbon nanotube (CNT) semiconductors. Consequently, we discovered a surprising low-pass band-gap property, as opposed to the traditional sharp band-pass band gaps. In other words, only a minimum amount of photon energy, shining on the middle of the CNT, is necessary to excite the semiconducting CNT electrons. The conduction electrons spiral steadily over the surface to minimize collision recombination when travelling from the cathode end to the anode end, driven by the asymmetric semiconductor-metal (Pd or Al) Schottky interface effect used for readout.
A smart sensor network is composed of a number of sensor nodes that extract meaningful and actionable information
and deliver it to system users in a timely, practical, and intuitive manner. It requires sophisticated and geospatially distributed
infrastructures, centralized supervision, and deployment of large-scale security and surveillance networks to provide
24/7, all-weather security operation in heavily populated environments as well as restricted areas. Technically,
development of sensor networks requires advanced technologies from four different areas: sensing technology,
communication network, on-board Digital Signal Processing (DSP) capability, and sensing data management.
These four key technologies have practical difficulties in various areas including communication covertness,
network discovery, control and routing, collaborative signal and information processing, tasking and querying, and
data management and security. In this paper, a brief history of sensor network development will be addressed first.
Then technology trends and impacts of the sensing networks will be reviewed. This paper also describes the
concept, design guideline and industrial standards of sensor networks which have been made viable by the
convergence of sensor technology, wireless communications, digital electronics, and sensing system management to
make all types of sensors, transducers and sensor data discoverable, accessible, manageable, and useable via the
Web.
The next-generation surveillance system will be equipped with versatile sensor devices and an information focus, and will be capable of
conducting regular and irregular surveillance and security operations in environments worldwide. The
persistent surveillance community must invest its limited energy and money effectively in researching enabling technologies
such as nanotechnology, wireless networks, and micro-electromechanical systems (MEMS) to develop persistent
surveillance applications for the future. Wireless sensor networks can be used by the military for a number of
purposes such as monitoring militant activity in remote areas and force protection. Equipped with appropriate
sensors, these networks can enable detection of enemy movement, identification of enemy forces, and analysis of their
movement and progress. Among these sensor network technologies, covert communication is one of the most challenging
tasks in persistent surveillance, because secured sensor nodes and links are in high demand to guard against
deliberate sabotage. Owing to mature VLSI/DSP technologies, affordable COTS UWB technology with
noise-like direct-sequence (DS) time-domain pulses is a potential solution to support low-probability-of-intercept and
low-probability-of-detection (LPI/LPD) data communication and transmission. This paper describes a number of
technical challenges in wireless persistent surveillance development, including covert communication, network control
and routing, and collaborative signal and information processing. The paper concludes by presenting Hermitian
wavelets to enhance SNR in support of secured communication.
Principal Component Analysis (PCA) is an optimal method for approximating a set of vectors or images, and it
has been used in image processing and computer vision for a number of tasks including face and object recognition. Its
computational complexity and batch calculation nature have limited its applications. Here we discuss two
different effective solutions that sequentially calculate the principal bases, in terms of eigenvectors with respective
eigenvalues using the covariance (or covariance estimate), which is faster in typical applications and is especially
advantageous for image sequences. This principal component basis calculation is processed with much lower delay
and allows for dynamic updating of image databases.
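As a point of comparison (scikit-learn's incremental PCA is one off-the-shelf realization of sequential principal-basis updating, not necessarily the authors' algorithm), the principal basis can be updated batch by batch without revisiting earlier frames:

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
# Hypothetical stream of image mini-batches, each frame flattened to a 64x64 vector.
frame_stream = (rng.normal(size=(32, 64 * 64)) for _ in range(10))

ipca = IncrementalPCA(n_components=16)
for batch in frame_stream:
    ipca.partial_fit(batch)            # update the principal basis from the new frames only

new_frame = rng.normal(size=(1, 64 * 64))
coefficients = ipca.transform(new_frame)   # project a new frame onto the current basis
```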
We mathematically model the mammalian Visual System's (VS) capability of spotting objects. How can a hawk see
a tiny running rabbit from miles above the ground? How could that rabbit see the approaching hawk? This predator-prey
interaction draws parallels with spotting a familiar person in a crowd. We assume that mammalian eyes use
peripheral vision to perceive unexpected changes from our memory, and then use our central vision (fovea) to pay
attention. The difference between an image and our memory of that image is usually small, mathematically known
as a 'sparse representation'. The VS communicates with the brain using a finite reservoir of neurotransmitters,
which produces an on-center, off-surround Hubel/Wiesel Mexican-hat receptive field. This is the basis of
our model. This change detection mechanism could drive our attention, allowing us to hit a curveball. If we are
about to hit a baseball, what information extracted by our HVS tells us where to swing? Physical human features
such as faces, irises, and fingerprints have been successfully used for identification (Biometrics) for decades,
recently including voice and walking style for identification from further away. Biologically, humans must use a
change detection strategy to achieve an ordered sparseness and use a sigmoid threshold for noisy measurements in
our Hetero-Associative Memory [HAM] classifier for fault tolerant recall. Human biometrics is dynamic, and
therefore involves more than just the surface, requiring a 3 dimensional measurement (i.e. Daugman/Gabor iris
features). Such a measurement can be achieved using the partial coherence of a laser's reflection from a 3-D
biometric surface, creating more degrees of freedom (d.o.f.) to meet the Army's challenge of distant Biometrics.
Thus, one might be able to increase the standoff loss of less distinguished degrees of freedom (DOF).
In this paper, we propose the vector SMT, a new decorrelating transform suitable for performing distributed
anomaly detection in wireless sensor networks (WSN). Here, we assume that each sensor in the network
performs vector measurements, instead of scalar ones. The proposed transform decorrelates a sequence
of pairs of vector sensor measurements, until the vectors from all sensors are completely decorrelated. We
perform simulations with a network of cameras, where each camera records an image of the monitored
environment from its particular viewpoint. Results show that the proposed transform effectively decorrelates
image measurements from the multiple cameras in the network. Because it enables joint processing of
the multiple images, our method provides significant improvements to anomaly detection accuracy when
compared to the baseline case when we process the images independently.
Law enforcement agencies needed to make a quick decision in Times Square, New York, on May 1, 2010, in order to quickly
spot the wanted car-bomb suspect in a crowd. This goal requires real-time smart firmware and a smart search algorithm
that knows how to order the faces in a geometric way. We demonstrate that such a sorting problem, the time ordering of N facial
poses, is like the TSP over N cities: NP-complete, having no exact deterministic solution. Here
we demonstrate a heuristic working solution in answer to the ONR grand challenge called Empire 2010.
How could the N boxes of faces, detected and cut out by an efficient COTS parallel color-hue algorithm
without time marks on a single CPU (or even with time marks but with videos collected from multiple vantage
points), be used to determine automatically who speaks what, where, and when? There must be a cross-platform
sensory association between our different sensors.
Biomedical Wellness Award for Applying Computational Intelligence to Image Diagnosis
As a simple observation, the world is composed of human beings, artifacts, and the natural environment. Since the health
of all of these is at issue, in matters such as extending healthy aging, lowering maintenance cost, and reducing energy consumption, the notion of
health management can be extended to apply to all of these entities. In this article, health management technology is
proposed as a general solution framework. Its important aspect is cyclic evolution based on causality, which describes the
conditions of the target systems. The causality can be used as problem-solving knowledge, which is composed of feature
attributes extracted from sensory data and intermediate characteristics. The causality should evolve and be updated
as the sensing and control mechanisms become more sophisticated. It also provides the important property of bidirectional transparency to
humans and machines, which enhances human-machine collaboration. Besides the idea of health
management technology, applications to human health, manufacturing, and energy consumption are also introduced
and discussed. All applications were realized with networks of multiple sensors, which require multivariate time-series analysis.
Some experiments were conducted to investigate the performance of the proposed method.
This paper discusses a data analysis by YURAGI for a non-constraining heart rate monitoring system. Three signals are
employed: the primary signal is obtained by a mat-type sensor placed between the bed and the subject, the second is
obtained by an ultrasonic vibration sensor attached to the bed frame, and the third is Gaussian noise. We compare the results
from the synthesized data of the first and second signals with those of the first signal and the noise. We employ a weighted
sum as the synthesis method, and we regard the Gaussian noise as YURAGI. The extraction algorithm was developed based
on fuzzy logic. The comparison was done on 10 healthy volunteers, and we evaluated the accuracy for various weight
ratios. Accuracy is a major concern here, because a tiny difference in accuracy causes a large difference in the autonomic
nervous system assessment. As a result, the results obtained from both synthesized signals were superior to those from
the mat-type sensor signal alone. Thus, YURAGI analysis is useful for detecting the heart rate with a mat-type sensor.
It has been clarified that abdominal visceral fat accumulation is closely associated with lifestyle diseases and metabolic
syndrome. The gold standard in the medical field is the visceral fat area measured by an X-ray computed tomography (CT) scan
or magnetic resonance imaging. However, these measurements are highly invasive and costly; in particular, a CT scan
causes X-ray exposure. These are the reasons why the medical field needs an instrument for visceral fat measurement that is
minimally invasive, easy to use, and low cost. This article proposes a simple and practical method of visceral fat estimation
that employs bioelectrical impedance analysis and causal analysis. In the method, the abdominal shape and the dual impedances of
the abdominal surface and the total body are measured to estimate the visceral fat area based on a cause-effect structure. The
structure is designed according to the nature of abdominal body composition and fine-tuned by statistical analysis.
Experiments were conducted to investigate the proposed model: 180 subjects were recruited and measured by both a CT
scan and the proposed method. The acquired model explained the measurement principle well, and the correlation
coefficient with the CT scan measurements is 0.88.
Small blood vessels may be difficult to detect in magnetic resonance angiography due to the lack of blood
flow caused by disease or injury. Our method, which uses a block-matching denoising approach to segment blood
vessels, works well in the presence of noise. We examined extended regions of an image to determine whether they
contained blood vessels by fitting a Gaussian mixture model to a region's histogram. Then, dissimilar regions were
denoised separately. This approach was beneficial in low-contrast settings. It can be used to detect higher-order
blood vessels that may be difficult to detect under normal conditions.
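A small sketch of the region-level Gaussian-mixture test described above (synthetic intensities; the component count, separation margin, and weight cutoff are assumptions, not the paper's tuned values):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical intensities from one extended image region (background plus a few bright vessel pixels).
region = np.concatenate([rng.normal(100, 5, 900), rng.normal(160, 8, 100)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(region.reshape(-1, 1))
means, weights = gmm.means_.ravel(), gmm.weights_.ravel()

# One plausible decision rule: flag the region as containing vessels when a well-separated
# bright component carries non-trivial weight; dissimilar regions would then be denoised separately.
has_vessel = (means.max() - means.min() > 30) and (weights[means.argmax()] > 0.05)
```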
Among the many vital signs, heart rate (HR) is an important index for diagnosing a person's health condition. For
instance, HR provides early indications of cardiac disease, autonomic nerve behavior, and so forth. However,
HR is currently measured only in medical checkups and clinical diagnosis, during the rested state, by using an
electrocardiograph (ECG). Thus, some serious cardiac events in daily life could be missed. Therefore, continuous
HR monitoring over 24 hours is desired. Considering use in daily life, the monitoring should be non-invasive
and minimally intrusive. Thus, in this paper, HR monitoring during sleep using air pressure sensors is
proposed. The HR monitoring is realized by employing causal analysis between air pressure and HR. The
causality is described by employing fuzzy logic. According to an experiment on 7 males aged 22-25 (23 on
average), the correlation coefficient against ECG is 0.73-0.97 (0.85 on average). In addition, the cause-effect
structure for HR monitoring is rearranged by employing causal decomposition, and the rearranged causality is
applied to HR monitoring in a sitting posture. According to an additional experiment on 6 males, the correlation
coefficient is 0.66-0.86 (0.76 on average). Therefore, the proposed method is suggested to have sufficient accuracy
and robustness for some daily use cases.
Tai-chi chuan is popular worldwide, especially in China. People practice tai-chi
chuan daily in the faith that they will be rewarded with health and a variety of
other benefits. The Tai Chi Chuan Theory by Master Chang and the Tai Chi Chuan
Theory by Master Wang are translated to serve as the baseline of tai-chi chuan. The
theory described in these two papers clearly shows that the tai-chi power source is the
combination of the two antigravity forces, one exerted by each foot. The yin, yang and
hollow, solid discussed in the papers are the properties and body relationships of the
two antigravity forces. The antigravity forces present inside the body are like air in a
balloon, and are termed chi. Although chi can be generated by any muscle pressing, only the
antigravity forces from the feet are called natural chi, which has the maximum strength of the
person. When a person is soft, like an infant, the natural chi fills the entire body with little
time and effort. The sequence forms were designed for deploying the natural chi with
speed and power. The combination of chi and the tai-chi forms makes tai-chi chuan superior
to other kinds of martial art. In the training process, chi massages the whole body many
times during a sequence-form practice, which stimulates all the organs and could help cure bodily
diseases, reduce weight, postpone the aging process, and remove aging symptoms.
People who practice in the park daily with proper guidance will fulfill their wishes.
Tai-chi exercise can also be applied to other sports, such as dancing and golf; these are discussed at the end of the paper.
This paper proposes a YURAGI-Analysis for brain imaging under the skull. In it, we employ 1.0 MHz and 0.5 MHz
ultrasonic waves. We consider the weighted sum of these waves and attempt to extract the skull depth and image the
sulcus under it. We add the 1.0 MHz and 0.5 MHz waves, and, as the YURAGI analysis, we add the 1.0 MHz waves and
Gaussian noise. We visualize the sulcus and the skull. First, we calculate the thickness of the skull from each of the two synthesized
waves. The thickness is determined from the surface and bottom points found in the wave using fuzzy
inference. The sulcus surface was then extracted from B-mode images for each of the two synthesized waves. In experiments
using a cow scapula as the skull and a steel ditch as the human sulcus, we successfully calculated the skull thickness. We
extracted the sulcus width with an error of 5.86 mm and the depth with an error of 1.94 mm. For imaging the sulcus
under the skull, the highest effectiveness of the synthesized wave is 96.30% when the weight of the 0.5 MHz wave is 0.60,
and that of the YURAGI-Analysis wave is 97.15% when the weight is 0.003. Thus, YURAGI-Analysis is useful for this
study.
This paper describes a biometric personal authentication method based on fuzzy logic using dynamics of sole pressure
distribution while walking. The method employs a pair of right and left sole pressure data. These data are acquired by a
mat type load distribution sensor. The proposed method has two processes. First, we calculate a fuzzy degree of each
sole pressure data. In this process, we extract several gait features based on weight shift and the shape of the footprint. Fuzzy if-then
rules for each registered person are introduced, and their parameters are statistically optimized in a learning process.
Second, we combine the fuzzy degrees of the right and left soles. In this process, we employ five operators. The method
authenticates a walking person with the combined fuzzy degree. We calculate the fuzzy degree of a person of interest for all
registered persons, and identify that person as the registered person with the highest fuzzy degree. Meanwhile, we
verify the person of interest as the target person if the fuzzy degree calculated for the target person is
higher than a threshold. In an experiment on 50 volunteers, we obtained low false rejection and false acceptance rates.
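A toy sketch of the decision stage (the per-foot fuzzy degrees, the choice of combining operator, and the threshold are placeholders; the paper derives the degrees from fuzzy if-then rules over gait features and evaluates five combining operators):

```python
# Hypothetical combined-degree computation for identification and verification.
registered = ["A", "B", "C"]
mu_left  = {"A": 0.82, "B": 0.40, "C": 0.15}   # fuzzy degree from the left sole
mu_right = {"A": 0.78, "B": 0.55, "C": 0.20}   # fuzzy degree from the right sole

def combine(left, right, op="mean"):
    """Combine the two degrees; 'mean' and 'min' are two plausible operators."""
    return {"mean": 0.5 * (left + right), "min": min(left, right)}[op]

degrees = {p: combine(mu_left[p], mu_right[p]) for p in registered}

# Identification: the registered person with the highest combined fuzzy degree.
identified = max(degrees, key=degrees.get)

# Verification: accept the claimed identity if its combined degree exceeds a threshold.
claimed, threshold = "A", 0.6
verified = degrees[claimed] >= threshold
```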
Home security at night is very important, and a system that watches a person's movements is useful for security.
This paper describes a system for classifying adults, children, and other objects from the distance distribution measured by an
infrared laser camera. The camera radiates near-infrared waves and receives the reflected ones; it then converts the time of
flight into a distance distribution. Our method consists of four steps. First, we perform background subtraction and noise rejection
on the distance distribution. Second, we apply fuzzy clustering to the distance distribution and form several clusters. Third,
we extract features such as the height, thickness, aspect ratio, and area ratio of each cluster. Then, we construct fuzzy if-then rules
from knowledge of adults, children, and other objects so as to classify each cluster as an adult, a child, or another object.
Here, we built a fuzzy membership function for each feature. Finally, we classify each cluster into the category with
the highest fuzzy degree among adult, child, and other object. In our experiment, we set up the camera in a room and
tested three cases. The method successfully classified them with real-time processing.
Communication of intent usually requires motor function, which can be limiting during military missions. Determining
a soldier's intent from brain signals rather than using muscles would have numerous applications for
tactical combat. Brain-computer interfaces (BCIs) translate brain signals into machine readable form and could
optimize a soldier's interaction with the surrounding environment. However, current BCI devices have largely
remained laboratory curiosities, because current techniques either require extended training or do not have the
requisite signal fidelity, because they are highly invasive and thus not safe or practical for use in humans, or
because they rely on equipment (such as magnetic resonance imaging scanners) that do not allow for real-time
applications and/or field deployment. The objective of our research program is to create a prototype of a system
for communication and monitoring of orientation that uses brain signals to provide, in real time, an accurate assessment
of the user's intentional focus and imagined speech. We expect that our efforts will provide a prototype
of the first intuitive brain-based communication and orientation system for human use.
This paper presents the use of wavelet cores for a fully reconfigurable electrocardiogram (ECG) signal acquisition system. The system is composed of two reconfigurable devices, an FPGA and an FPAA. The FPAA is in charge of ECG signal acquisition, since this device is a versatile and reconfigurable analog front-end for biosignals. The FPGA is in charge of FPAA configuration, digital signal processing, and information extraction such as heart beat rate, among others. Wavelet analysis has become a powerful tool for ECG signal processing since it fits the ECG signal shape well. The cores have been integrated into the LabVIEW FPGA module development tool, which makes it possible to employ VHDL cores within the usual LabVIEW graphical programming environment, freeing the designer from the tedious and time-consuming design of communication interfaces and enabling rapid testing and graphical representation of results.
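For reference, this kind of wavelet processing can be prototyped offline before being committed to hardware. The sketch below decomposes a synthetic ECG-like trace with a discrete wavelet transform and estimates the beat rate from peaks in a detail-band reconstruction; the PyWavelets package, the db4 wavelet, the retained detail levels, and the synthetic signal are all assumptions, since the paper's actual cores are VHDL.

```python
# Offline prototype of DWT-based beat-rate extraction (illustrative only;
# the paper implements the wavelet cores in VHDL on an FPGA).
import numpy as np
import pywt

fs = 360.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
ecg[(np.arange(12) * fs * 0.8).astype(int)] = 1.0   # synthetic R-peak train, ~75 bpm
ecg = np.convolve(ecg, np.hanning(21), mode="same") + 0.05 * np.random.randn(t.size)

# Multi-level DWT; QRS energy typically concentrates in mid-scale detail bands.
coeffs = pywt.wavedec(ecg, "db4", level=5)
# Keep only the cD4 and cD3 detail bands (assumption) and reconstruct.
kept = [c if i in (2, 3) else np.zeros_like(c) for i, c in enumerate(coeffs)]
qrs_band = pywt.waverec(kept, "db4")[: ecg.size]

# Simple peak picking on the band-limited signal, with a 0.3 s refractory period.
thr = 0.5 * qrs_band.max()
candidates = np.where((qrs_band[1:-1] > thr)
                      & (qrs_band[1:-1] > qrs_band[:-2])
                      & (qrs_band[1:-1] > qrs_band[2:]))[0] + 1
beats = [candidates[0]]
for p in candidates[1:]:
    if p - beats[-1] > 0.3 * fs:
        beats.append(p)
rr = np.diff(beats) / fs                     # RR intervals in seconds
print("estimated heart rate: %.1f bpm" % (60.0 / rr.mean()))
```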
In this paper, we evaluate the Recoursing Energy Efficiency (REE) feature extraction technique on electroencephalograph (EEG) data for human emotion recognition. A protocol has been established to elicit five distinct emotions (joy, sadness, disgust, fear, and surprise) plus a neutral state. EEG signals are collected using a 256-channel system, preprocessed with band-pass filters and a Laplacian montage, and decomposed into five frequency bands using the Discrete Wavelet Transform. The REE is calculated and fed to a Multi-Layer Perceptron network for classification. We compare the performance of REE features with conventional energy-based features.
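As a rough illustration of the feature pipeline, the sketch below decomposes one EEG channel into five wavelet bands and forms a band-to-total energy-ratio feature vector that could be fed to an MLP. The exact REE definition is not given in the abstract, so the energy ratio used here is only a stand-in, and the signal, wavelet, and epoch length are assumptions.

```python
# Illustrative band-energy features from a single EEG channel.
# The ratio used as a stand-in for REE (band energy / total energy) is an
# assumption; the paper's exact REE formula is not given in the abstract.
import numpy as np
import pywt

fs = 256.0
t = np.arange(0, 4, 1 / fs)                       # 4 s epoch (assumed)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # alpha-dominated toy signal

# Five-band decomposition: 4 detail levels plus the approximation band.
coeffs = pywt.wavedec(eeg, "db4", level=4)        # [cA4, cD4, cD3, cD2, cD1]
band_energy = np.array([np.sum(c ** 2) for c in coeffs])
total_energy = band_energy.sum()

ree_like = band_energy / total_energy             # energy-ratio feature per band
feature_vector = ree_like                         # one channel; concatenate over channels in practice
print(np.round(feature_vector, 3))
```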
Advances in high-throughput measurements of biological specimens necessitate the development of biologically driven computational techniques. To understand many human diseases, such as cancer, at the molecular level, lipid quantification has been shown to offer an excellent opportunity to reveal disease-specific regulation. The data analysis of the cell lipidome, however, remains a challenging task and cannot be accomplished solely by intuitive reasoning. We have developed a method to identify a lipid correlation network that is entirely disease-specific. A powerful method to correlate experimentally measured lipid levels across samples is the Gaussian Graphical Model (GGM), which is based on partial correlation coefficients. In contrast to regular Pearson correlations, partial correlations aim to identify only direct correlations while eliminating indirect associations. Conventional GGM calculations on the entire dataset cannot, however, indicate whether a correlation is truly specific to the disease samples rather than one also present in the control samples. We therefore implemented a novel differential GGM approach that retains only the disease-specific correlations, and applied it to the lipidome of immortal Glioblastoma tumor cells. A large set of lipid species was measured by mass spectrometry in order to evaluate lipid remodeling in response to a combination of perturbations inducing programmed cell death, while the other perturbations served solely as biological controls. With the differential GGM, we were able to reveal Glioblastoma-specific lipid correlations to advance biomedical research on novel gene therapies.
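The core quantity behind a GGM is the partial correlation matrix, which can be obtained from the inverse of the sample covariance. A minimal sketch of the differential step is shown below; it uses a plain matrix inverse and an arbitrary edge threshold for clarity, whereas real lipidomics data typically requires regularized covariance estimation.

```python
# Minimal sketch: partial correlations from the precision matrix, and a
# "differential" network keeping edges present in disease but not in control.
# Plain inversion is used for clarity; real lipidomics data usually needs
# shrinkage/regularized covariance estimation.
import numpy as np

def partial_correlations(X):
    """X: samples x variables. Returns the partial correlation matrix."""
    precision = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcor = -precision / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

rng = np.random.default_rng(0)
disease = rng.standard_normal((200, 10))
disease[:, 1] += 0.8 * disease[:, 0]          # toy disease-specific dependency
control = rng.standard_normal((200, 10))

p_dis, p_ctl = partial_correlations(disease), partial_correlations(control)
threshold = 0.2                                # assumed edge threshold
diff_edges = (np.abs(p_dis) > threshold) & (np.abs(p_ctl) <= threshold)
np.fill_diagonal(diff_edges, False)
print(np.argwhere(diff_edges))                 # edges unique to the disease network
```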
Although "gut feeling" is a cliché in English parlance, there is a neuro-physiological basis for the registration of emotions in the gut. Control of the gastro-intestinal (GI) tract involves an integration of neuro-hormonal factors ranging from the local myogenic level to the central nervous system. Gastric contractile activity, which is responsible for the motor properties of the stomach, is regulated by this integrated complex. Signatures of the activity include gastric electrical activity (GEA) and bowel sounds. GEA has two distinct components: a high-frequency spike activity, or post-depolarization potential, termed the electrical response activity, superimposed on a lower-frequency rhythmic depolarization termed the control activity. These signatures are measured in the clinic with contact sensors and are well understood for the diagnosis of gut dysmotility. Can these signatures be measured at standoff and employed for purposes of biometrics, malintent detection, and wellness assessment?
Optoacoustic Imaging (OAI), a novel hybrid imaging technology, offers high contrast, molecular specificity, and excellent resolution to overcome limitations of the current clinical modalities for the detection of solid tumors. The exact time-domain reconstruction formula produces images with excellent resolution but poor contrast. Several approximate time-domain filtered back-projection reconstruction algorithms have been reported to address this problem. A wavelet-transform-based filter can be used to sharpen object boundaries while simultaneously preserving the high contrast of the reconstructed objects. In this paper, several algorithms based on Back Projection (BP) techniques are suggested to process OA images in conjunction with signal filtering for ultrasonic point detectors and integral detectors. We apply these techniques first to a numerically generated sample image and then to the laser-digitalized image of a tissue phantom, obtaining in both cases the best results in resolution and contrast with a wavelet-based filter.
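As a rough illustration of the filtering stage (not the paper's specific filters), the following sketch applies a wavelet-domain band weighting to a simulated point-detector trace before it would enter the back-projection sum. PyWavelets, the chosen wavelet, the decomposition depth, and the band weights are all assumptions.

```python
# Illustrative wavelet filtering of a single optoacoustic detector trace.
# The wavelet, levels, and weighting are assumptions, not the paper's filters.
import numpy as np
import pywt

fs = 20e6                                     # assumed sampling rate (20 MHz)
t = np.arange(0, 10e-6, 1 / fs)
# Toy N-shaped optoacoustic pulse from a spherical absorber plus noise.
center, width = 4e-6, 0.5e-6
signal = np.where(np.abs(t - center) < width, -(t - center) / width, 0.0)
signal += 0.05 * np.random.randn(t.size)

coeffs = pywt.wavedec(signal, "sym8", level=6)
# Emphasize mid-scale details (edge information) and suppress the finest scale (noise).
weights = [1.0, 1.0, 1.5, 1.5, 1.0, 0.5, 0.0]  # [cA6, cD6, ..., cD1], assumed
filtered = pywt.waverec([w * c for w, c in zip(weights, coeffs)], "sym8")[: signal.size]

# 'filtered' would replace the raw trace in the back-projection summation.
print(filtered.shape, signal.shape)
```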
This paper focuses on the hardware acceleration of motion compensation techniques suitable for MPEG video compression. A number of representative motion-estimation search algorithms and new perspectives are introduced. The methods and designs described here are well suited to the medical imaging domain, where larger images are involved, and the structure of the processing systems considered maps well onto reconfigurable acceleration. The system is based on an FPGA platform running the Nios II microprocessor with C2H acceleration. The paper reports results in terms of performance and the resources needed.
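For context, the basic operation being accelerated is block matching. A minimal full-search sketch with a SAD cost is shown below; the block size, search range, and test frames are arbitrary assumptions, and in hardware this inner loop is what the generated accelerator would replace.

```python
# Minimal full-search block-matching motion estimation with a SAD cost.
# Block size and search range are illustrative assumptions.
import numpy as np

def full_search(ref, cur, by, bx, block=16, search=8):
    """Best motion vector (dy, dx) for the block of `cur` at (by, bx) within `ref`."""
    target = cur[by:by + block, bx:bx + block].astype(np.int32)
    best = (0, 0)
    best_sad = np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, -3), axis=(0, 1))       # current frame = reference shifted by (2, -3)
print(full_search(ref, cur, 16, 16))           # expect a motion vector of (-2, 3)
```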
This paper presents a method for detecting mispronunciations with the aim of improving Computer Assisted Language Learning (CALL) tools used by foreign language learners. The algorithm is based on Principal Component Analysis (PCA). It is hierarchical, with each successive step refining the estimate used to classify the test word as either mispronounced or correct. Preprocessing before detection, such as normalization and time-scale modification, is applied to guarantee uniformity of the feature vectors input to the detection system. The performance using various features, including spectrograms and Mel-Frequency Cepstral Coefficients (MFCCs), is compared and evaluated. The best results were obtained using MFCCs, achieving up to 99% accuracy in word verification and 93% in native/non-native classification. Compared with the Hidden Markov Models (HMMs) used pervasively in recognition applications, this approach is computationally efficient and effective when training data is limited.
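One plausible reading of the PCA step (the exact formulation is not spelled out in the abstract) is to model correctly pronounced examples of a word with a principal subspace and to flag test utterances whose reconstruction error is large. The hedged sketch below does this for fixed-length, MFCC-like feature vectors; the dimensions, number of components, and threshold are assumptions.

```python
# Hedged sketch of PCA-based word verification via reconstruction error.
# Feature dimensions, number of components, and the threshold are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, n_train, k = 39 * 20, 40, 10          # flattened MFCC-like vectors, 40 correct examples

# Correct pronunciations live near a low-dimensional subspace (toy model).
basis = rng.standard_normal((d, k))
train = rng.standard_normal((n_train, k)) @ basis.T + 0.1 * rng.standard_normal((n_train, d))

mean = train.mean(axis=0)
# Principal components of the centered training set.
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:k]                       # top-k principal directions

def reconstruction_error(x):
    z = (x - mean) @ components.T
    return np.linalg.norm((x - mean) - z @ components)

correct_test = rng.standard_normal(k) @ basis.T + 0.1 * rng.standard_normal(d)
mispronounced = rng.standard_normal(d)    # off-subspace vector as a stand-in

threshold = 2 * np.mean([reconstruction_error(x) for x in train])
for name, x in [("correct", correct_test), ("mispronounced", mispronounced)]:
    err = reconstruction_error(x)
    print(name, round(err, 2), "-> mispronounced" if err > threshold else "-> correct")
```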
A new approach to optimizing the parameters of a gradient-based optical flow model using a parallel genetic algorithm (GA) is proposed. The main characteristics of the optical flow algorithm are its bio-inspiration and its robustness to contrast, static patterns, and noise, and it works consistently with several optical illusions on which other algorithms fail. The model depends on many parameters, which determine the number of channels, the orientations required, and the length and shape of the kernel functions used in the convolution stage, among others. The GA is used to find a set of parameters that improves the accuracy of the optical flow on inputs for which ground-truth data is available. This set of parameters helps to identify which parameters are better suited to each type of input and can be used to estimate the parameters of the optical flow algorithm for videos that share similar characteristics. The proposed implementation takes advantage of the embarrassingly parallel nature of the GA and uses the OpenMP Application Programming Interface (API) to speed up the estimation of an optimal parameter set. The information obtained in this work can be used to dynamically reconfigure systems, with potential applications in robotics, medical imaging, and tracking.
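The parallelism exploited here is over the fitness evaluations within a generation. The sketch below shows this embarrassingly parallel structure, with Python's multiprocessing pool standing in for the paper's OpenMP threads and a toy quadratic fitness standing in for the optical-flow error against ground truth.

```python
# Sketch of a generational GA with parallel fitness evaluation.
# multiprocessing stands in for OpenMP; the fitness is a toy stand-in for the
# optical-flow error against ground truth.
import numpy as np
from multiprocessing import Pool

N_PARAMS, POP, GENERATIONS = 8, 32, 20
TARGET = np.linspace(0.1, 0.8, N_PARAMS)          # hypothetical "best" parameters

def fitness(params):
    # In the paper this would run the optical flow model and compare with ground truth.
    return -np.sum((params - TARGET) ** 2)

def evolve():
    rng = np.random.default_rng(0)
    pop = rng.random((POP, N_PARAMS))
    with Pool() as pool:                           # embarrassingly parallel evaluation
        for _ in range(GENERATIONS):
            scores = np.array(pool.map(fitness, pop))
            order = np.argsort(scores)[::-1]
            parents = pop[order[: POP // 2]]       # truncation selection
            # Uniform crossover plus Gaussian mutation.
            mates = parents[rng.integers(0, len(parents), len(parents))]
            mask = rng.random(parents.shape) < 0.5
            children = np.where(mask, parents, mates) + 0.02 * rng.standard_normal(parents.shape)
            pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(p) for p in pop])]

if __name__ == "__main__":
    print(np.round(evolve(), 2))                   # should approach TARGET
```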
One of the big challenges in the design of embedded systems today is how to combine design reuse with intellectual property protection (IPP). Strong IP protection schemes such as hardware dongles or layout watermarking usually allow only very limited design reuse across different FPGA/ASIC design platforms, and some techniques also do not fit well with the protection of software in embedded microprocessors. Another approach to IPP that allows easy design reuse at low cost, but with somewhat reduced security, is code "obfuscation." Obfuscation is a method of hiding the design concept or program algorithm contained in C or HDL source code by applying one or more transformations to the original code. Obfuscation methods include, for instance, renaming identifiers and removing comments or formatting from the code; more sophisticated methods include data splitting or merging and control-flow changes. This paper shows the strengths and weaknesses of methods for obfuscating C, VHDL, and Verilog code.
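To make the simplest of these transformations concrete, the toy sketch below strips comments and renames identifiers, which preserves behavior while hiding intent. It operates on Python source purely for self-containment; the paper targets C, VHDL, and Verilog, and the whitelist of names that must not be renamed is a simplifying assumption.

```python
# Toy illustration of identifier renaming + comment stripping, the simplest
# obfuscation transformations mentioned above. Operates on Python source for
# self-containment; the paper discusses C, VHDL, and Verilog.
import io
import tokenize

SOURCE = '''
def average_heart_rate(rr_intervals):
    # mean RR interval in seconds -> beats per minute
    total = sum(rr_intervals)
    mean_rr = total / len(rr_intervals)
    return 60.0 / mean_rr
'''

KEYWORDS = {"def", "return", "sum", "len"}        # names we must not rename (toy whitelist)

def obfuscate(src):
    mapping, out = {}, []
    tokens = tokenize.generate_tokens(io.StringIO(src).readline)
    for tok_type, tok_str, start, end, line in tokens:
        if tok_type == tokenize.COMMENT:
            continue                              # drop comments
        if tok_type == tokenize.NAME and tok_str not in KEYWORDS:
            mapping.setdefault(tok_str, "v%d" % len(mapping))
            tok_str = mapping[tok_str]
        out.append((tok_type, tok_str))
    return tokenize.untokenize(out)

print(obfuscate(SOURCE))
```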
The detection of slowly moving or stationary targets in a heavy-clutter environment is a challenging problem for a surveillance system. Recent research [1-3] shows that polarization diversity provides a means to detect the symmetry of a target in inhomogeneous clutter, especially when discrimination by Doppler effects is not possible, and that detection performance can be further enhanced if the polarization of the transmitted signal is optimally selected to match the target's polarimetric aspects. In this paper, we first address the challenges of threat detection in inhomogeneous clutter such as the riverine wetland environment. Second, a local sequential polarimetric diversity algorithm using dual (horizontal and vertical) polarizations is presented to calculate the singularity of polarization diversity for potential threat detection; this singularity measures how much a target differs from its neighborhood (background), whose extent is set by the size of the sliding processing window. Next, a field test using a Vector Network Analyzer collected dual-polarized scatterings of targets and accomplished multiple-frequency-band (200 MHz - 18 GHz, UHF to Ku bands) threat characterization and detection on the same stationary threats. Finally, we show test results using the local sequential polarimetric diversity algorithm to detect potential threats in inhomogeneous environments.
In this paper, an image fusion technique based on a weighted average of Daubechies wavelet transform (db2) coefficients from visual face images and their corresponding thermal images is presented. Further, a comparative study has been conducted of dimensionality reduction based on Principal Component Analysis (PCA) and Independent Component Analysis (ICA). The fused images thus obtained are classified using a multi-layer perceptron (MLP). The IRIS Thermal/Visual Face Database has been used for the experiments. Experimental results show that the performance of ICA architecture-I is better than the other two approaches, i.e., PCA and ICA-II: the average success rates for PCA, ICA-I, and ICA-II are 91.13%, 94.44%, and 89.72%, respectively. The approaches presented here achieve a maximum success rate of 100% in some cases, especially under varying illumination.
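The fusion step itself is straightforward to prototype. The sketch below averages db2 wavelet coefficients of a visual and a thermal image and inverts the transform; the 0.5/0.5 weighting, the decomposition level, and the random stand-in images are assumptions, since the abstract does not state the weights.

```python
# Sketch of wavelet-domain weighted-average fusion of a visual and a thermal
# face image. The 0.5/0.5 weights and the decomposition level are assumptions.
import numpy as np
import pywt

def fuse(visual, thermal, w=0.5, level=2, wavelet="db2"):
    cv = pywt.wavedec2(visual.astype(float), wavelet, level=level)
    ct = pywt.wavedec2(thermal.astype(float), wavelet, level=level)
    fused = [w * cv[0] + (1 - w) * ct[0]]                       # approximation band
    for (vh, vv, vd), (th, tv, td) in zip(cv[1:], ct[1:]):      # detail bands per level
        fused.append((w * vh + (1 - w) * th,
                      w * vv + (1 - w) * tv,
                      w * vd + (1 - w) * td))
    return pywt.waverec2(fused, wavelet)

visual = np.random.rand(64, 64)      # stand-ins for registered face images
thermal = np.random.rand(64, 64)
print(fuse(visual, thermal).shape)   # fused image, same size as the inputs
```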
Vertebrates are constantly threatened by the invasion of microorganisms and have evolved systems of immunity to eliminate infectious pathogens from the body. Initial sensing of microbial agents is mediated by the recognition of pathogens by means of molecular structures expressed uniquely by microbes of a given type. So-called 'Toll-like receptors', expressed on host epithelial barrier cells, play an essential role in the host defense against microbial pathogens by inducing cell responses (e.g., proliferation, death, cytokine secretion) via activation of intracellular signaling networks. Because these networks, comprising multiple interconnecting dynamic pathways, represent highly complex multi-variate "information processing" systems, the signaling activities most critical for governing the host cell responses are poorly understood and not easily ascertained from a priori theoretical notions. Over the past half-decade we have developed a "data-driven" computational modeling approach, based on a 'cue-signal-response' combined experiment/computation paradigm, to elucidate the key multi-variate signaling relationships governing cell responses. In the example presented here, we study how a canonical set of six kinase pathways combine to effect microbial agent-induced apoptotic death of a macrophage cell line. One modeling technique, partial least-squares regression, yielded the following key insights: (a) the signal combinations most strongly correlated with apoptotic death are orthogonal to those most strongly correlated with release of inflammatory cytokines; (b) the ratio of two key pathway activities is the most powerful predictor of microbe-induced macrophage apoptotic death; and (c) the most influential time window of this signaling activity ratio is surprisingly early: less than one hour after microbe stimulation.
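For readers unfamiliar with the technique, partial least-squares regression relates a matrix of multi-variate signaling measurements (e.g., kinase activities across conditions and time points) to a response variable such as the fraction of apoptotic cells. A generic sketch using scikit-learn is shown below; the synthetic data merely stand in for the paper's measurements.

```python
# Generic PLS regression sketch relating multi-variate signaling measurements
# to a cell-response variable. Data here are synthetic stand-ins.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_conditions, n_signals = 30, 6 * 8        # e.g., 6 kinase pathways x 8 time points
X = rng.standard_normal((n_conditions, n_signals))
true_weights = np.zeros(n_signals)
true_weights[[3, 20]] = [1.0, -1.0]        # toy: response driven by a ratio-like contrast
y = X @ true_weights + 0.1 * rng.standard_normal(n_conditions)   # apoptotic-death surrogate

pls = PLSRegression(n_components=2)
pls.fit(X, y)
print("R^2 on training data:", round(pls.score(X, y), 3))
# Weights on the first latent variable indicate which signals co-vary with the response.
print("top signals:", np.argsort(np.abs(pls.x_weights_[:, 0]))[::-1][:4])
```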
This paper presents two affordable low-tack systems for household biomedical wellness monitoring. The first system, JIKIMI (Korean for 'caregiver'), is a remote monitoring system that analyzes the behavior patterns of elders who live alone. JIKIMI is composed of an in-house sensing system: a set of wireless sensor nodes containing a pyroelectric infrared sensor to detect the motion of elders, an emergency button, and a magnetic sensor that detects the opening and closing of doors. The system is also equipped with a server system, comprising a database and a web server, which provides web-based monitoring for caregivers. The second system, Reader of Bottle Information (ROBI), is an assistant system that advises elders about the contents of bottles. ROBI is composed of bottles with attached RFID tags and an advice system consisting of a wireless RFID reader, a gateway, and a remote database server. The RFID tags, attached to the caps of the bottles, are used in conjunction with the advice system. These systems have been in use for three years and have proven useful in helping caregivers provide more efficient and effective care services.
We solve channel assignment problems (CAPs) with the main objective of minimizing the overall interference level while meeting the channel demand requirements. We use three methods: (1) local search (LS) with an acceptance ratio that re-initializes the search at a predefined threshold; (2) Simulated Annealing (SA); and (3) improved local search (ILS) with two control parameters, namely the Restart (RS) and Stop (ST) thresholds. Simulation results on benchmark CAPs show that these simple methods outperform other, more complex heuristics in both the average and minimum cost of the solutions.
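To make the setting concrete, the sketch below minimizes co-channel and adjacent-channel interference on a toy instance, subject to per-cell channel demands, using a plain local search with random restarts. The instance, the cost weights, and the restart/step budgets are illustrative assumptions and do not reproduce the paper's benchmarks or parameter settings.

```python
# Toy channel assignment: minimize interference between nearby cells while
# meeting per-cell channel demands, via local search with random restarts.
# Instance, cost weights, and thresholds are illustrative assumptions.
import random

N_CELLS, N_CHANNELS = 6, 12
DEMAND = [2, 3, 2, 2, 3, 2]                      # channels required per cell
NEIGHBORS = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)}   # interfering cell pairs

def cost(assign):
    """Count co-channel (weight 2) and adjacent-channel (weight 1) violations."""
    c = 0
    for i, j in NEIGHBORS:
        for a in assign[i]:
            for b in assign[j]:
                if a == b:
                    c += 2
                elif abs(a - b) == 1:
                    c += 1
    return c

def random_assignment(rng):
    return [rng.sample(range(N_CHANNELS), d) for d in DEMAND]

def local_search(restarts=20, steps=2000, seed=0):
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(restarts):                    # restart after a fixed step budget
        assign = random_assignment(rng)
        cur = cost(assign)
        for _ in range(steps):
            cell = rng.randrange(N_CELLS)
            k = rng.randrange(DEMAND[cell])
            old = assign[cell][k]
            new = rng.randrange(N_CHANNELS)
            if new in assign[cell]:
                continue                          # keep channels within a cell distinct
            assign[cell][k] = new
            new_cost = cost(assign)
            if new_cost <= cur:
                cur = new_cost                    # accept improving or sideways moves
            else:
                assign[cell][k] = old             # reject
        if cur < best_cost:
            best, best_cost = [list(a) for a in assign], cur
    return best, best_cost

best_assign, best_cost = local_search()
print("best interference cost:", best_cost, "assignment:", best_assign)
```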
Cells as microorganisms and within multicellular organisms make robust decisions. Knowing how these complex cells
make decisions is essential to explain, predict or mimic their behavior. The discovery of multi-layer multiple feedback
loops in the signaling pathways of these modular hybrid systems suggests their decision making is sophisticated. Hybrid
systems coordinate and integrate signals of various kinds: discrete on/off signals, continuous sensory signals, and
stochastic and continuous fluctuations to regulate chemical concentrations. Such signaling networks can form
reconfigurable networks of attractors and repellors giving them an extra level of organization that has resilient decision
making built in. Work on generic attractor and repellor networks and on the already identified feedback networks and
dynamic reconfigurable regulatory topologies in biological cells suggests that biological systems probably exploit such
dynamic capabilities. We present a simple behavior of the swimming unicellular alga Chlamydomonas that involves
interdependent discrete and continuous signals in feedback loops. We show how to rigorously verify a hybrid dynamical
model of a biological system with respect to a declarative description of a cell's behavior. The hybrid dynamical systems
we use are based on a unification of discrete structures and continuous topologies developed in prior work on
convergence spaces. They involve variables of discrete and continuous types, in the sense of type theory in mathematical
logic. A unification such as that afforded by convergence spaces is necessary if one wants to take account of the effect of the structural relationships within each type on the dynamics of the system.
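As a toy illustration of a hybrid model that couples a discrete mode with continuous dynamics (this is not the paper's verified Chlamydomonas model, whose equations and properties are not given in the abstract), the sketch below simulates a swimmer that switches between forward and backward modes when a continuously varying light intensity crosses a threshold.

```python
# Toy hybrid dynamical system: a discrete mode variable (forward/backward
# swimming) coupled to continuous state (position) and a continuous input
# (light intensity). Purely illustrative; not the paper's verified model.
import math

dt, T = 0.01, 20.0
threshold = 0.8                 # light level that triggers a mode switch (assumed)
speed = {"forward": 1.0, "backward": -0.4}

x, mode = 0.0, "forward"
trace = []
t = 0.0
while t < T:
    light = 0.5 + 0.5 * math.sin(0.7 * t)          # continuous, time-varying input
    # Discrete transition guarded by the continuous input.
    if mode == "forward" and light > threshold:
        mode = "backward"
    elif mode == "backward" and light < threshold / 2:
        mode = "forward"
    # Continuous flow in the current mode (Euler step).
    x += speed[mode] * dt
    trace.append((round(t, 2), mode, round(x, 3)))
    t += dt

print(trace[::400])             # sample of (time, mode, position)
```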
Approximate Nearest Neighbors (ANN) in high dimensional vector spaces is a fundamental, yet challenging
problem in many areas of computer science, including computer vision, data mining and robotics. In this work,
we investigate this problem from the perspective of compressive sensing, especially the dictionary learning aspect.
High dimensional feature vectors are seldom sparse in the feature domain; examples include, but are not limited to, Scale Invariant Feature Transform (SIFT) descriptors, Histograms of Gradients, Shape Contexts, etc.
Compressive sensing advocates that if a given vector has a dense support in a feature space, then there should
exist an alternative high dimensional subspace where the features are sparse. This idea is leveraged by dictionary
learning techniques through learning an overcomplete projection from the feature space so that the vectors are
sparse in the new space. The learned dictionary aids in refining the search for the nearest neighbors to a query
feature vector into the most likely subspace combination indexed by its non-zero active basis elements. Since
the size of the dictionary is generally very large, distinct feature vectors are most likely to have distinct sets of non-zero basis elements. Utilizing this observation, we propose a novel representation of the feature vectors as tuples of non-zero dictionary indices, which reduces the ANN search problem to hashing the tuples into an index table, thereby dramatically improving the speed of the search. A drawback of this naive approach is that it is very sensitive
to feature perturbations. This can be due to two possibilities: (i) the feature vectors are corrupted by noise,
(ii) the true data vectors undergo perturbations themselves. Existing dictionary learning methods address the
first possibility. In this work we investigate the second possibility and approach it from a robust optimization
perspective. This boils down to the problem of learning a dictionary robust to feature perturbations, thus paving
the way for a novel Robust Dictionary Learning (RDL) framework. In addition to the above model, we also
propose a novel LASSO based multi-regularization hashing algorithm which utilizes the consistency properties of
the non-zero active basis for increasing values of the regularization weights. Even though our algorithm is generic
and has wide coverage in different areas of scientific computing, the experiments in the current work are mainly
focused towards improving the speed and accuracy of ANN for SIFT descriptors, which are high-dimensional
(128D) and are one of the most widely used interest point detectors in computer vision. Preliminary results from
SIFT datasets show that our algorithm is far superior to the state-of-the-art techniques in ANN.
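A minimal sketch of the indexing idea is shown below, using scikit-learn's dictionary learning and a plain Python dict as the hash table. The dictionary size, sparsity level, and SIFT-like data are assumptions, and the robust-optimization and LASSO multi-regularization components of the paper are not reproduced.

```python
# Minimal sketch: sparse-code feature vectors over a learned dictionary and
# hash them by the tuple of their non-zero atom indices. Sizes and data are
# toy assumptions; the paper's robust dictionary learning is not reproduced.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
database = rng.random((500, 128)).astype(np.float64)     # stand-in for SIFT descriptors

dico = DictionaryLearning(n_components=256, transform_algorithm="omp",
                          transform_n_nonzero_coefs=4, max_iter=10, random_state=0)
codes = dico.fit_transform(database)                     # sparse codes of the database

def key(code):
    """Hash key: tuple of indices of the non-zero dictionary atoms."""
    return tuple(np.flatnonzero(code))

index = {}
for i, c in enumerate(codes):
    index.setdefault(key(c), []).append(i)               # bucket of candidate neighbors

# Query: encode with the same dictionary, then look up the bucket.
query = database[42] + 0.01 * rng.standard_normal(128)
q_code = sparse_encode(query[None, :], dico.components_,
                       algorithm="omp", n_nonzero_coefs=4)[0]
candidates = index.get(key(q_code), [])
print("bucket size:", len(candidates), "| contains true neighbor:", 42 in candidates)
```

Note that a small perturbation of the query can change its set of active atoms and miss the bucket entirely, which is exactly the sensitivity to feature perturbations discussed above.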
A universal implementation for most behavioral biometric systems is still unknown, since some behaviors are not individual enough for identification. Habitual behaviors that are measurable by sensors are considered 'soft' biometrics (e.g., walking style, typing rhythm), while physical attributes (e.g., iris, fingerprint) are 'hard' biometrics. Thus, biometrics can aid in the identification of a human not only in cyberspace but in the world we live in. Hard biometrics have proven to be a rather successful form of identification, despite the large number of individual signatures to keep track of. Virtually all soft biometric strategies, however, share a common pitfall: instead of the classical pass/fail decision based on the measurements used by hard biometrics, a confidence threshold is imposed, increasing the False Alarm and False Rejection Rates. This unreliability is a major roadblock for large-scale system integration. Common computer security requires users to log in with a six-or-more-digit PIN (Personal Identification Number) to access files on the disk. Commercially available Keystroke Dynamics (KD) software can separately calculate and track the mean and variance of the time travelled between keys (air time) and the time spent pressing each key (touch time). Despite its apparent utility, KD is not yet a robust, fault-tolerant system. We begin with a simple question: how can a pianist quickly control so many different finger and wrist movements to play music? What information, if any, can be gained from analyzing typing behavior over time? Biology has shown that the separation of arm and finger motion is due to three long nerves in each arm, each regulating movement in different parts of the hand. In this paper, we wish to capture the underlying behavioral information of a typist through statistical memory and non-linear dynamics. Our method may reveal an inverse Compressive Sensing mapping: a unique individual signature.
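The air-time and touch-time statistics mentioned above are simple to compute from key-down/key-up timestamps. The sketch below accumulates per-key hold (touch) times and per-key-pair flight (air) times; the event stream is a hypothetical example, since no dataset is described in the abstract.

```python
# Sketch: per-key touch (hold) times and per-key-pair air (flight) times from
# key-down/key-up timestamps. The event stream here is a hypothetical example.
from collections import defaultdict
from statistics import mean, pvariance

# (key, event, time in seconds) -- hypothetical capture of a user typing "cat"
events = [("c", "down", 0.000), ("c", "up", 0.085),
          ("a", "down", 0.210), ("a", "up", 0.290),
          ("t", "down", 0.430), ("t", "up", 0.505)]

touch = defaultdict(list)      # key        -> list of hold durations
air = defaultdict(list)        # (key, key) -> list of up-to-down gaps

last_up = None                 # (key, time) of the most recent key release
down_at = {}
for key, ev, t in events:
    if ev == "down":
        down_at[key] = t
        if last_up is not None:
            air[(last_up[0], key)].append(t - last_up[1])
        # note: overlapping presses (rollover) are ignored in this toy version
    else:
        touch[key].append(t - down_at.pop(key))
        last_up = (key, t)

for key, ts in touch.items():
    print("touch %-12s mean=%.3f var=%.5f" % (key, mean(ts), pvariance(ts)))
for pair, ts in air.items():
    print("air   %-12s mean=%.3f var=%.5f" % (str(pair), mean(ts), pvariance(ts)))
```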