This PDF file contains the front matter associated with SPIE Proceedings Volume 7351, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
Detecting dim targets in infrared imagery remains a challenging task. Several techniques exist for detecting bright, high-contrast targets, such as CFAR detectors, edge detection, and spatial thresholding. However, these approaches often fail to detect targets with low contrast relative to background clutter. In this paper we exploit the transient-capture capability and directional filtering of wavelets to develop a wavelet-based image enhancement method. We
develop an image representation, using wavelet filtered imagery, which facilitates dim target detection. We further
process the wavelet-enhanced imagery using the Michelson visibility operator to perform nonlinear contrast
enhancement prior to target detection. We discuss the design of optimal wavelets for use in the image representation. We
investigate the effect of wavelet choice on target detection performance, and design wavelets to optimize measures of
visual information on the enhanced imagery. We present numerical results demonstrating the effectiveness of the
approach for detection of dim targets in real infrared imagery. We compare target detection performance to performance
obtained using standard techniques such as edge detection. We also compare performance to target detection performed
on imagery enhanced by optimizing visual information measures in the spatial domain. We investigate the stability of
the optimal wavelets and detection performance variation, across perspective changes, image frame sample (for frames
extracted from infrared video sequences), and image scene content types. We show that the wavelet-based approach can
usually detect the targets with fewer false-alarm regions than possible with standard approaches.
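As a minimal sketch of the nonlinear contrast operator the abstract refers to, the Michelson visibility of a local window is (Imax - Imin) / (Imax + Imin); the function and window names below are illustrative, not the paper's implementation.

```python
import numpy as np

def michelson_visibility(window):
    """Michelson visibility (contrast) of a local window: (Imax - Imin) / (Imax + Imin)."""
    imax, imin = float(window.max()), float(window.min())
    if imax + imin == 0:
        return 0.0
    return (imax - imin) / (imax + imin)

def visibility_map(image, size=3):
    """Slide a size x size window over the image and compute local visibility."""
    h, w = image.shape
    out = np.zeros((h - size + 1, w - size + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = michelson_visibility(image[i:i + size, j:j + size])
    return out

img = np.ones((5, 5)) * 0.2
img[2, 2] = 0.4            # dim "target" slightly above a flat background
vmap = visibility_map(img)
```

A flat background yields zero visibility everywhere, so even a weak target produces a distinct response in the visibility map, which is what makes this operator attractive for dim-target enhancement.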
Tracking moving objects is a critical step for smart video surveillance systems. Despite the complexity increase, multiple
camera systems exhibit the undoubted advantages of covering wide areas and handling the occurrence of occlusions by
exploiting the different viewpoints. The technical problems in multiple camera systems are several: installation,
calibration, objects matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of
tracking moving objects in an environment covered by multiple un-calibrated cameras with overlapping fields of view,
typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. Basically, the proposed technique consists of the following steps. We
first perform a single-view tracking algorithm on each camera view, and then apply a consistent object labeling
algorithm on all views. In the next step, we verify objects in each view separately for inconsistencies. Correspondent
objects are extracted through a Homography transform from one view to the other and vice versa. Having found the
correspondent objects of different views, we partition each object into homogeneous regions. In the last step, we apply
the Homography transform to find the region map of first view in the second view and vice versa. For each region (in the
main frame and mapped frame) a set of descriptors are extracted to find the best match between two views based on
region descriptors similarity. This method is able to deal with multiple objects. Track management issues such as
occlusion, appearance and disappearance of objects are resolved using information from all views. This method is capable of tracking rigid and deformable objects, and this versatility makes it suitable for different application scenarios.
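The core operation of mapping corresponding objects between overlapping views is a planar homography. A minimal sketch, with a purely hypothetical homography matrix `H` standing in for one estimated between two real cameras:

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points between camera views with a 3x3 homography matrix."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # lift to homogeneous coordinates
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # divide out the scale factor

# Hypothetical homography between two overlapping views (here a similarity: scale + shift).
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0, -5.0],
              [0.0, 0.0,  1.0]])
pts_view1 = [[0, 0], [4, 3]]
pts_view2 = apply_homography(H, pts_view1)
# Mapping back with the inverse homography recovers the original points ("vice versa").
pts_back = apply_homography(np.linalg.inv(H), pts_view2)
```

The same call maps region maps in either direction, which is how the "first view in the second view and vice versa" step can be realized.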
A modified multiframe restoration algorithm for degraded images is described in this paper, based on the method developed by V. Katkovnik. The projection gradient algorithm based on anisotropic LPA-ICI filtering proposed by Katkovnik could only restore images contaminated by Gaussian noise, and it was also complicated and time consuming. By improving Katkovnik's cost function and applying constraints on image intensity values during iteration, we arrive at a new multiframe recursive iterative restoration scheme in the frequency domain. This method is suitable for reconstructing images degraded by Gaussian noise, Poisson noise, or mixed noise. Experimental results demonstrate that the method works efficiently and can restore images heavily blurred and corrupted by multiple noise types.
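A toy sketch of the general idea, not the paper's algorithm: a multiframe Landweber-style gradient iteration in the frequency domain, with the intensity constraint enforced by projecting the iterate onto [0, 1] each step. The PSF, frame count, and step size are all illustrative assumptions.

```python
import numpy as np

def restore(frames, psf_ffts, n_iter=50, step=1.0):
    """Multiframe recursive restoration sketch: frequency-domain gradient steps on the
    least-squares data misfit, projecting the iterate onto the intensity range [0, 1]."""
    x = np.mean(frames, axis=0)                      # initial estimate: frame average
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        X = np.fft.fft2(x)
        for y, Hf in zip(frames, psf_ffts):
            resid = np.fft.ifft2(Hf * X).real - y    # model mismatch for this frame
            grad += np.fft.ifft2(np.conj(Hf) * np.fft.fft2(resid)).real
        x = np.clip(x - step * grad / len(frames), 0.0, 1.0)   # projection step
    return x

# Toy example: two blurred, noisy frames of a bright square on a dark background.
rng = np.random.default_rng(0)
truth = np.zeros((16, 16)); truth[5:11, 5:11] = 1.0
h = np.zeros((16, 16)); h[0, 0] = h[0, 1] = h[1, 0] = h[1, 1] = 0.25   # 2x2 box blur
Hf = np.fft.fft2(h)
frames = [np.fft.ifft2(Hf * np.fft.fft2(truth)).real
          + 0.01 * rng.standard_normal((16, 16)) for _ in range(2)]
restored = restore(frames, [Hf, Hf])
```

The clipping step is the "constraint on image intensity values in iteration" idea in its simplest form; the actual method additionally modifies the cost function for Poisson and mixed noise.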
In imaging applications the prevalent effects of atmospheric turbulence comprise image dancing and image blurring.
Suggestions from the field of image processing to compensate for these turbulence effects and restore degraded imagery
include Motion-Compensated Averaging (MCA) for image sequences. In isoplanatic conditions, such an averaged image
can be considered as a non-distorted image that has been blurred by an unknown Point Spread Function (PSF) of the
same size as the pixel motions due to the turbulence and a blind deconvolution algorithm can be employed for the final
image restoration. However, when imaging over a long horizontal path close to the ground, conditions are likely to be anisoplanatic, and image dancing will induce local image displacements between consecutive frames rather than global shifts only. Therefore, in this paper, a locally operating variant of the MCA procedure is proposed, utilizing Block
Matching (BM) in order to identify and re-arrange uniformly displaced image parts. For the final restoration a multistage
blind deconvolution algorithm is used and the corresponding deconvolution results are presented and evaluated.
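The Block Matching step can be sketched as an exhaustive sum-of-absolute-differences (SAD) search over a small displacement window; block size and search radius below are illustrative defaults, not the paper's parameters.

```python
import numpy as np

def match_block(ref, frame, top, left, size=8, search=4):
    """Find the displacement of the block ref[top:top+size, left:left+size] in `frame`
    by exhaustive SAD search over a +/- `search` pixel window."""
    block = ref[top:top + size, left:left + size]
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > frame.shape[0] or x + size > frame.shape[1]:
                continue
            sad = np.abs(frame[y:y + size, x:x + size] - block).sum()
            if sad < best:
                best, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx

rng = np.random.default_rng(1)
ref = rng.random((32, 32))
frame = np.roll(np.roll(ref, 2, axis=0), -1, axis=1)   # frame shifted down 2, left 1
dy, dx = match_block(ref, frame, 12, 12)
```

Re-arranging each block by its estimated displacement before averaging is what turns the global MCA procedure into the locally operating variant described above.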
In this paper, a technique is presented to alleviate ghosting artifacts in the decoded video sequences for low-bit-rate
video coding. Ghosting artifacts can be defined as the appearance of ghost-like outlines of an object in a decoded video
frame. Ghosting artifacts result from the use of a prediction loop in the video codec, which is typically used to increase
the coding efficiency of the video sequence. They appear in the presence of significant frame-to-frame motion in the
video sequence, and are typically visible for several frames until they eventually die out or an intra-frame refresh occurs.
Ghosting artifacts are particularly annoying at low bit rates since the extreme loss of information tends to accentuate
their appearance. To mitigate this effect, a procedure with selective in-loop filtering based on motion vector information
is proposed. In the proposed scheme, the in-loop filter is applied only to the regions where there is motion. This is done
so as not to affect the regions that are devoid of motion, since ghosting artifacts only occur in high-motion regions. It is
shown that the proposed selective filtering method dramatically reduces ghosting artifacts in a wide variety of video
sequences with pronounced frame-to-frame motion, without degrading the motionless regions.
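The selective in-loop idea can be illustrated in a few lines: filter only blocks whose motion vector magnitude exceeds a threshold. The 3x3 box smoother below is a stand-in for whatever in-loop filter the codec actually uses; block size and threshold are illustrative.

```python
import numpy as np

def selective_loop_filter(frame, motion_vectors, block=8, threshold=1.0):
    """Smooth only blocks whose motion vector magnitude exceeds `threshold`,
    leaving motionless regions untouched (3x3 box filter as a placeholder)."""
    out = frame.astype(float).copy()
    h, w = frame.shape
    for by in range(h // block):
        for bx in range(w // block):
            mv = motion_vectors[by, bx]
            if np.hypot(mv[0], mv[1]) <= threshold:
                continue                      # static block: no filtering
            y0, x0 = by * block, bx * block
            region = out[y0:y0 + block, x0:x0 + block]
            padded = np.pad(region, 1, mode='edge')
            smoothed = sum(padded[dy:dy + block, dx:dx + block]
                           for dy in range(3) for dx in range(3)) / 9.0
            out[y0:y0 + block, x0:x0 + block] = smoothed
    return out

rng = np.random.default_rng(2)
frame = rng.random((16, 16))
mvs = np.zeros((2, 2, 2)); mvs[0, 0] = (3, 1)   # only the top-left block is moving
filtered = selective_loop_filter(frame, mvs)
```

The guard on the motion vector magnitude is precisely what keeps motionless regions bit-exact, which is the property the abstract emphasizes.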
Indoor Positioning Systems using WLANs have become very popular in recent years. These systems are
spawning a new class of applications like activity recognition, surveillance, context aware computing and location based
services. While Global Positioning System (GPS) is the natural choice for providing navigation in outdoor environment,
the urban environment places a significant challenge for positioning using GPS. The GPS signals can be significantly
attenuated, and often completely blocked, inside buildings or in urban canyons. As the performance of GPS in indoor
environments is not satisfactory, indoor positioning systems based on location fingerprinting of WLANs are being suggested as a viable alternative. Indoor WLAN positioning systems suffer from several phenomena. One problem is the intermittent availability of access points, which directly affects positioning accuracy. Integrity monitoring of WLAN localization, which computes the WLAN position with different sets of access points, is proposed as a solution to this problem. Positioning accuracy will be adequate for sets that contain no faulty or offline access points, while sets containing such access points will fail and report random, inaccurate results. The proposed method identifies proper sets and flags rogue access points using predicted trajectories. The combination of prediction and correct access-point set selection provides a more accurate result. This paper discusses the integrity monitoring method for WLAN devices, how the monitoring is performed, and the development of the application on mobile platforms.
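A simplified stand-in for the subset-based integrity check (using range-residual consistency in place of the paper's trajectory prediction, and ranges in place of RSS fingerprints): estimate the position from every access-point subset, keep only self-consistent subsets, and flag any AP that appears in no consistent subset as rogue.

```python
import numpy as np
from itertools import combinations

def locate(aps, dists):
    """Linearized least-squares multilateration from AP positions and ranges."""
    aps = np.asarray(aps, float); dists = np.asarray(dists, float)
    A = 2 * (aps[1:] - aps[0])
    b = (dists[0] ** 2 - dists[1:] ** 2) + (aps[1:] ** 2).sum(1) - (aps[0] ** 2).sum()
    return np.linalg.lstsq(A, b, rcond=None)[0]

def integrity_check(aps, dists, subset_size=3, tol=2.0):
    """Positions from every AP subset; subsets whose ranges fit their own estimate
    are 'proper', and APs absent from all proper subsets are flagged as rogue."""
    aps = np.asarray(aps, float)
    proper, positions = [], []
    for s in combinations(range(len(aps)), subset_size):
        idx = list(s)
        p = locate(aps[idx], dists[idx])
        resid = np.max(np.abs(np.linalg.norm(aps[idx] - p, axis=1) - dists[idx]))
        if resid < tol:                       # self-consistent subset
            proper.append(s); positions.append(p)
    rogue = sorted(set(range(len(aps))) - set().union(*proper)) if proper else []
    return np.mean(positions, axis=0), rogue

# Four APs; the reading from AP 3 is corrupted (e.g. the AP moved or is faulty).
aps = [(0, 0), (10, 0), (0, 10), (10, 10)]
true_pos = np.array([3.0, 4.0])
dists = np.array([np.hypot(*(true_pos - np.array(a))) for a in aps])
dists[3] += 15.0                              # rogue measurement
pos, rogue = integrity_check(aps, dists)
```

Only the subset excluding the faulty AP is self-consistent, so the consensus position is accurate and the rogue AP is identified.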
Traditional iris recognition algorithms can work well for the frontal iris images.
However, when the gaze of an eye changes with respect to the camera lens, many times the size,
shape, and detail of iris patterns will change as well and cannot be matched to enrolled images using
traditional methods. Additionally, the transformation of off-angle eyes to polar coordinates becomes
much more challenging and noncooperative iris algorithms will require a different approach. In this
paper, we propose a new approach for iris recognition. This new method does not require polar
transformation, affine transformation or highly accurate segmentation to perform iris recognition.
Our research results using a remote non-cooperative iris image database show that the proposed method works well on both frontal and off-angle images.
The first step in a facial recognition system is to find and extract human faces in a static image or video frame. Most face
detection methods are based on statistical models that can be trained and then used to classify faces. These methods are
effective but the main drawback is speed because a massive number of sub-windows at different image scales are
considered in the detection procedure. A robust face detection technique based on an encoded image known as an
"integral image" has been proposed by Viola and Jones. The use of an integral image helps to reduce the number of
operations to access a sub-image to a relatively small and fixed number. Additional speedup is achieved by incorporating
a cascade of simple classifiers to quickly eliminate non-face sub-windows. Even with the reduced number of accesses to
image data to extract features in the Viola-Jones algorithm, the number of memory accesses is still too high to support real-time operations for high-resolution images or video frames. The proposed hardware design in this research work
employs a modular approach to represent the "integral image" for this memory-intensive application. An efficient memory management strategy is also proposed to aggressively utilize embedded memory modules to reduce interaction with
external memory chips. The proposed design is targeted for a low-cost FPGA prototype board for a cost-effective face
detection/recognition system.
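The integral image that makes this possible can be sketched directly: with a one-row/one-column zero pad, any rectangle sum costs exactly four array accesses regardless of rectangle size, which is the "relatively small and fixed number" of operations cited above.

```python
import numpy as np

def integral_image(img):
    """Integral image with zero padding: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def region_sum(ii, top, left, h, w):
    """Any rectangle sum in four array accesses, independent of rectangle size."""
    return (ii[top + h, left + w] - ii[top, left + w]
            - ii[top + h, left] + ii[top, left])

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
s = region_sum(ii, 1, 1, 2, 2)   # sum over rows 1-2, cols 1-2
```

Haar-like features in the Viola-Jones cascade are differences of such rectangle sums, so the fixed four-access cost is what the hardware memory layout is designed around.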
This paper presents a low-cost method for providing biometric verification for applications that do not require
large database sizes. Existing portable iris recognition systems are typically self-contained and expensive. For
some applications, low cost is more important than extremely discerning matching ability. In these instances,
the proposed system could be implemented at low cost, with adequate matching performance for verification.
Additionally, the proposed system could be used in conjunction with any image based biometric identification
system. A prototype system was developed and tested on a small database, with promising preliminary results.
Face recognition has been widely used to automatically identify and verify a person. In this paper, we propose a new approach to the identification of human faces based on orthogonal subspace projection (OSP). In a linear mixture model of the face images, the OSP faces of the training images are calculated using orthogonal subspace projection and signal-to-noise ratio maximization, and the weight parameters of the input image are then obtained to perform face recognition.
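The core OSP operation is standard linear algebra and can be sketched on toy 4-pixel "faces" (the vectors below are illustrative, not real face data): the projector P = I - U(U^T U)^{-1} U^T annihilates the undesired training faces, and a matched filter on the projected input scores the target face.

```python
import numpy as np

def osp_projector(U):
    """Orthogonal subspace projector P = I - U (U^T U)^{-1} U^T, which annihilates
    everything in the column space of U (the 'undesired' training faces)."""
    U = np.asarray(U, float)
    return np.eye(U.shape[0]) - U @ np.linalg.pinv(U)

def osp_score(d, U, x):
    """Matched-filter score of input x for target face d after suppressing U."""
    return float(d @ osp_projector(U) @ x)

# Toy 4-pixel "faces": target d, one undesired face u, and a mixed input.
d = np.array([1.0, 0.0, 1.0, 0.0])
u = np.array([0.0, 1.0, 0.0, 1.0])
x = 0.7 * d + 0.3 * u                 # input is mostly the target face
score_target = osp_score(d, u.reshape(-1, 1), x)
score_clutter = osp_score(d, u.reshape(-1, 1), u)
```

The undesired component is suppressed exactly, so the score depends only on the target's weight in the mixture; in the linear mixture model this recovered weight is the basis for the recognition decision.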
This paper introduces a new recursive sequence called the truncated P-Fibonacci sequence, its corresponding binary code called the truncated Fibonacci p-code, and a new bit-plane decomposition method using the truncated Fibonacci p-code.
In addition, a new lossless image encryption algorithm is presented that can encrypt a selected object using this
new decomposition method for privacy protection. The user has the flexibility (1) to define the object to be protected as
an object in an image or in a specific part of the image, a selected region of an image, or an entire image, (2) to utilize
any new or existing method for edge detection or segmentation to extract the selected object from an image or a specific
part/region of the image, (3) to select any new or existing method for the shuffling process. The algorithm can be used in
many different areas such as wireless networking, mobile phone services and applications in homeland security and
medical imaging. Simulation results and analysis verify that the algorithm shows good performance in object/image
encryption and can withstand plaintext attacks.
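For orientation, a common definition of the underlying Fibonacci p-sequence is F(n) = F(n-1) + F(n-p-1), with the first p+1 terms equal to 1; p = 1 gives the classical Fibonacci numbers. The sketch below also includes a simple greedy encoder, which is an illustrative stand-in and not necessarily the truncated p-code the paper defines.

```python
def fibonacci_p_sequence(p, length):
    """Fibonacci p-sequence: F(n) = F(n-1) + F(n-p-1), first p+1 terms are 1."""
    seq = [1] * (p + 1)
    while len(seq) < length:
        seq.append(seq[-1] + seq[-p - 1])
    return seq[:length]

def to_p_code(n, p, ndigits):
    """Greedy binary code of n over the largest `ndigits` sequence terms (a sketch)."""
    basis = fibonacci_p_sequence(p, ndigits)[::-1]
    bits = []
    for f in basis:
        if f <= n:
            bits.append(1); n -= f
        else:
            bits.append(0)
    return bits

fib1 = fibonacci_p_sequence(1, 8)   # classical Fibonacci numbers
fib2 = fibonacci_p_sequence(2, 8)
code = to_p_code(10, 1, 8)
```

Decomposing pixel values over such a basis instead of powers of two yields the new bit planes that the encryption algorithm then shuffles.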
We present a statistical footprint-based method to characterize several symmetric cryptographic primitives as they are
used in lightweight digital image encryption. In particular, using spatial-domain histogram and frequency-domain image
analysis techniques, we identify a number of metrics from the encrypted images and use them to contrast the security
performance of different cryptographic primitives. For each of the metrics, the best performing cryptographic primitive
is identified. Complementary primitives are then combined to result in a product cipher with better cryptographic
performance.
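One representative spatial-domain footprint metric is the Shannon entropy of the intensity histogram (the specific metrics used in the paper are not reproduced here; this is an illustrative example): a well-diffusing cipher should drive an 8-bit image's histogram entropy toward 8 bits/pixel.

```python
import numpy as np

def histogram_entropy(img, bins=256):
    """Shannon entropy of the intensity histogram, in bits per pixel."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                         # drop empty bins (0 log 0 = 0)
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(3)
cipher_like = rng.integers(0, 256, size=(64, 64))   # uniform, cipher-like image
structured = np.full((64, 64), 128)                 # constant image: no diffusion
e_cipher = histogram_entropy(cipher_like)
e_plain = histogram_entropy(structured)
```

Ranking candidate primitives per metric this way, and then composing complementary ones into a product cipher, is the comparison strategy the abstract describes.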
This paper presents a novel approach to compose discrete unitary transforms that are induced by input signals
which are considered to be generators of the transforms. Properties and examples of such transforms, which we
call the discrete heap transforms are given. The transforms are fast, because of a simple form of decomposition
of their matrices, and they can be applied for signals of any length. Fast algorithms of calculation of the direct
and inverse heap transforms do not depend on the length of the processed signals. In this paper, we demonstrate
the applications of the heap transforms for transformation and reconstruction of one-dimensional signals and
two-dimensional images. The heap transforms can be used in cryptography, since the generators can be selected
in different ways to make the information invisible; these generators are keys for recovering information. Different
examples of generating and applying heap transformations over signals and images are considered.
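The paper's construction is not reproduced in full here; the sketch below assumes one common formulation of a signal-induced unitary transform, in which the generator determines a sequence of plane (Givens) rotations that fold all of its energy into the first sample. The rotation angles then serve as the key: the same rotations transform any signal, and applying them in reverse with negated angles inverts the transform exactly.

```python
import numpy as np

def heap_angles(generator):
    """Rotation angles that successively fold each generator component into the
    first one; these angles are the 'key' derived from the generator signal."""
    x = np.array(generator, float)
    angles = []
    for k in range(1, len(x)):
        t = np.arctan2(x[k], x[0])
        c, s = np.cos(t), np.sin(t)
        x[0], x[k] = c * x[0] + s * x[k], 0.0
        angles.append(t)
    return angles

def heap_transform(signal, angles, inverse=False):
    """Apply the unitary transform defined by the angle key (or its exact inverse)."""
    y = np.array(signal, float)
    pairs = list(enumerate(angles, start=1))
    if inverse:
        pairs = [(k, -t) for k, t in reversed(pairs)]
    for k, t in pairs:
        c, s = np.cos(t), np.sin(t)
        y[0], y[k] = c * y[0] + s * y[k], -s * y[0] + c * y[k]
    return y

gen = [1.0, 2.0, 3.0, 4.0]
key = heap_angles(gen)
transformed_gen = heap_transform(gen, key)   # energy concentrates in sample 0
sig = [4.0, -1.0, 0.5, 2.0]
enc = heap_transform(sig, key)
dec = heap_transform(enc, key, inverse=True)
```

Each step is a 2x2 rotation, so the composed matrix is unitary by construction, the transform is fast, and recovering `sig` from `enc` requires knowing the generator-derived key, which is the cryptographic angle mentioned above.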
Fingerprint recognition applications, as a means of identity authentication that must deal with accuracy and security, are becoming more acceptable in areas such as financial transactions, access to secured buildings, commercial driver licensing, and identity checks at border entry, to mention a few. This paper presents a new approach that intelligently fuses two fingerprint patterns of the same person to form a new, unique pattern. The Laplacian pyramid (LP) level-7 image fusion approach and the logical "OR" and logical "AND" operators for the decision fusion approach were tested with respect to the accuracy, security, and processing speed of the recognition system. The receiver operating characteristic (ROC) curve was used to indicate any improvement in the accuracy and security of the process.
Finally, an overall comparison and analysis of performance between traditional systems that use a single pattern and our proposed system that uses two fused fingerprint patterns is presented.
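The decision-fusion half of the comparison reduces to two logical rules, sketched below with hypothetical scores and a hypothetical threshold: "OR" accepts if either finger matches (favoring low false rejects), while "AND" requires both (favoring low false accepts).

```python
def decide(score, threshold=0.5):
    """Single-matcher accept/reject decision from a similarity score."""
    return score >= threshold

def fuse_decisions(match1, match2, rule="OR"):
    """Decision-level fusion of two fingerprint matchers via logical OR / AND."""
    return (match1 or match2) if rule == "OR" else (match1 and match2)

# Genuine user with one poor-quality finger, and an impostor with one lucky score.
genuine = (0.9, 0.3)
impostor = (0.6, 0.2)
genuine_or = fuse_decisions(decide(genuine[0]), decide(genuine[1]), "OR")
impostor_and = fuse_decisions(decide(impostor[0]), decide(impostor[1]), "AND")
```

Sweeping the threshold and plotting accept rates for each rule is exactly what produces the ROC curves used in the comparison above.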
Edge detection is an important preprocessing task which has been used extensively in image processing. As many
applications heavily rely on edge detection, effective and objective edge detection evaluation is crucial. Objective edge
map evaluation measures are an important means of assessing the performance of edge detectors under various
circumstances and in determining the most suitable edge detector or edge detector parameters. Quantifiable criteria for
objective edge map evaluation are established relative to a ground truth, and the weaknesses and limitations of Pratt's Figure of Merit (FOM), the standard reference-based measure for objective edge map evaluation, are discussed. Based on the
established criteria, a new reference-based measure for objective edge map evaluation is presented. Experimental results
using synthetic images and their ground truths show that the new measure for objective edge map evaluation
outperforms Pratt's FOM visually as it takes into account more features in its evaluation.
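For reference, the baseline being improved upon is Pratt's FOM: FOM = 1/max(N_I, N_D) * sum over detected edge pixels of 1/(1 + alpha * d^2), where d is the distance to the nearest ideal edge pixel and alpha (commonly 1/9) penalizes displacement. A brute-force sketch:

```python
import numpy as np

def pratt_fom(detected, ideal, alpha=1.0 / 9.0):
    """Pratt's Figure of Merit between a detected edge map and ground truth."""
    det = np.argwhere(detected)
    ref = np.argwhere(ideal)
    if len(det) == 0 or len(ref) == 0:
        return 0.0
    total = 0.0
    for p in det:
        d2 = ((ref - p) ** 2).sum(axis=1).min()   # squared distance to nearest ideal pixel
        total += 1.0 / (1.0 + alpha * d2)
    return total / max(len(det), len(ref))

ideal = np.zeros((8, 8), bool); ideal[4, :] = True       # horizontal ground-truth edge
shifted = np.zeros((8, 8), bool); shifted[5, :] = True   # detector output off by one row
perfect = pratt_fom(ideal, ideal)
off_by_one = pratt_fom(shifted, ideal)
```

Note that FOM collapses localization, missing edges, and spurious edges into a single displacement-weighted score, which is one of the limitations that motivates the richer measure proposed above.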
A robust three-dimensional scheme using fuzzy and directional techniques for denoising color video images contaminated by random impulsive noise is presented. The scheme estimates noise and motion levels in a local area, detecting edges and fine details in the video sequence. The proposed approach preserves the chromaticity properties of multidimensional and multichannel images. The algorithm was specially designed to reduce computational load, and its performance is quantified using objective criteria such as Peak Signal-to-Noise Ratio, Mean Absolute Error, and Normalized Color Difference, as well as subjective visual assessment. The novel filter shows superior rendering compared with other well-known algorithms found in the literature. Real-time analysis is performed on a Digital Signal Processor to evaluate processing capability. The DSP, designed by Texas Instruments for multichannel processing in a multitasking environment, improves the performance of several tasks while enhancing processing time and reducing computational load on such dedicated hardware.
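The two scalar criteria are standard and easily stated: PSNR = 10 log10(peak^2 / MSE) in dB, and MAE is the mean absolute pixel difference. A minimal sketch (toy frames, not the paper's test sequences):

```python
import numpy as np

def psnr(original, processed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10 log10(peak^2 / MSE)."""
    mse = np.mean((original.astype(float) - processed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def mae(original, processed):
    """Mean Absolute Error between original and filtered frames."""
    return float(np.mean(np.abs(original.astype(float) - processed.astype(float))))

orig = np.full((8, 8), 100.0)
noisy = orig.copy(); noisy[0, 0] += 16.0   # a single impulse-corrupted pixel
```

Because impulsive noise corrupts few pixels severely, MAE and PSNR respond quite differently to it, which is why both are reported alongside the Normalized Color Difference.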
Recent advances in biometric technology have pushed towards more robust and reliable systems. We aim to build
systems that have low recognition errors and are less affected by variation in recording conditions. Recognition errors are
often attributed to the usage of low quality biometric samples. Hence, there is a need to develop new intelligent
techniques and strategies to automatically measure/quantify the quality of biometric image samples and if necessary
restore image quality according to the need of the intended application. In this paper, we present no-reference image
quality measures in the spatial domain that have impact on face recognition. The first is called symmetrical adaptive
local quality index (SALQI) and the second is called middle halve (MH). Also, an adaptive strategy has been developed
to select the best way to restore the image quality, called symmetrical adaptive histogram equalization (SAHE). The
main benefits of using quality measures for adaptive strategy are: (1) avoidance of excessive unnecessary enhancement
procedures that may cause undesired artifacts, and (2) reduced computational complexity, which is essential for real-time applications. We test the success of the proposed measures and adaptive approach on a wavelet-based face recognition
system that uses the nearest neighborhood classifier. We shall demonstrate noticeable improvements in the performance
of adaptive face recognition system over the corresponding non-adaptive scheme.
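The quality-gated strategy can be sketched generically (SALQI, MH, and SAHE themselves are not reproduced here; the quality proxy and the global histogram equalization below are illustrative stand-ins): measure quality first, and enhance only when it falls below a threshold, which is what avoids unnecessary processing and its artifacts.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]

def quality_score(img):
    """Toy no-reference quality proxy: fraction of the intensity range in use."""
    return (int(img.max()) - int(img.min())) / 255.0

def adaptive_enhance(img, threshold=0.5):
    """Enhance only when measured quality is poor; otherwise leave the image alone."""
    return hist_equalize(img) if quality_score(img) < threshold else img

rng = np.random.default_rng(4)
low_contrast = rng.integers(100, 140, size=(32, 32)).astype(np.uint8)
good = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
enhanced = adaptive_enhance(low_contrast)
untouched = adaptive_enhance(good)
```

Feeding the gated output (rather than always-enhanced images) to the recognizer is what yields the adaptive system compared against the non-adaptive scheme above.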
We present a novel algorithm for direct image registration, based on a generic
description of the geometric transformation. The direct image registration algorithm
consists of minimizing the intensity discrepancy between images. We propose the
Gauss-Newton algorithm for the solution of this minimization problem. The method
solves the optical flow equation iteratively to reduce the cost function. Registration
was successfully performed on images taken from both aerial and ground platforms.
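A one-parameter sketch of the Gauss-Newton scheme on a synthetic 1-D "image" (a pure translation, far simpler than the generic transformation the paper handles): minimize the intensity discrepancy sum(f(x + p) - target)^2 by iterating p <- p - (J^T J)^{-1} J^T r, where the residual r is the optical-flow discrepancy and J is the derivative of the warped signal with respect to p.

```python
import numpy as np

def gauss_newton_shift(f, df, target, x, p0=0.0, n_iter=50):
    """Estimate the translation p minimizing sum (f(x + p) - target)^2 by Gauss-Newton."""
    p = p0
    for _ in range(n_iter):
        r = f(x + p) - target        # intensity discrepancy (optical-flow residual)
        J = df(x + p)                # Jacobian of the warped signal w.r.t. p
        p -= (J @ r) / (J @ J)       # Gauss-Newton step: (J^T J)^{-1} J^T r
    return p

# Synthetic 1-D "image": a Gaussian blob, observed shifted by 0.7 units.
f = lambda x: np.exp(-x ** 2)
df = lambda x: -2 * x * np.exp(-x ** 2)
x = np.linspace(-3, 3, 61)
target = f(x - 0.7)
p_hat = gauss_newton_shift(f, df, target, x)
```

With more transformation parameters, J becomes a matrix with one column per parameter and the same normal-equation step applies; that is the structure behind the generic geometric transformation mentioned above.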
Privacy and security are vital concerns for practical biometric systems. The concept of cancelable or revocable
biometrics has been proposed as a solution for biometric template security. Revocable biometric means that biometric
templates are no longer fixed over time and could be revoked in the same way as lost or stolen credit cards are. In this
paper, we describe a novel and efficient approach to biometric template protection that meets the revocability property. This scheme can be incorporated into any biometric verification scheme while maintaining, if not improving, the accuracy of the original biometric system. We demonstrate the result of applying such transforms on
face biometric templates and compare the efficiency of our approach with that of the well-known random projection
techniques. We shall also present the results of experimental work on recognition accuracy before and after applying the
proposed transform on feature vectors that are generated by wavelet transforms. These results are based on experiments
conducted on a number of well-known face image databases, e.g. Yale and ORL databases.
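The random-projection baseline the paper compares against can be sketched in a few lines (the key, dimensions, and feature vector below are illustrative): project the feature vector with a matrix seeded by a user-specific key, so revocation amounts to issuing a new key.

```python
import numpy as np

def cancelable_template(features, user_key, dim=16):
    """Revocable template: random projection seeded by a user-specific key.
    Revoking the template = issuing a new key (hence a new projection matrix)."""
    rng = np.random.default_rng(user_key)
    R = rng.standard_normal((dim, len(features))) / np.sqrt(dim)
    return R @ features

features = np.array([0.2, 1.1, -0.4, 0.8, 0.0, -1.3, 0.5, 0.9])
t_old = cancelable_template(features, user_key=1234)
t_new = cancelable_template(features, user_key=9999)   # revoked and re-issued
t_same = cancelable_template(features, user_key=1234)  # same key reproduces template
```

The same biometric yields a completely different template under a new key, while matching remains possible in the projected domain because random projections approximately preserve distances.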
One of the basic challenges to robust iris recognition is iris segmentation. This paper proposes the
use of an artificial neural network and a feature saliency algorithm to better localize boundary pixels of the iris.
No circular boundary assumption is made. A neural network is used to near-optimally combine current iris segmentation methods to more accurately localize the iris boundary. A feature saliency technique is performed to
determine which features contain the greatest discriminatory information. Both visual inspection and
automated testing showed greater than 98 percent accuracy in determining which pixels in an image of the eye
were iris pixels when compared to human determined boundaries.
The H.264 video compression standard, also known as MPEG-4 Part 10 or Advanced Video Coding (AVC), allows new flexibility in the use of video in the battlefield. This standard necessitates encoder chips to effectively utilize the increased capabilities. Such chips are designed to cover the full range of the standard, with designers of individual products given the capability of selecting the parameters that differentiate a broadcast system from a video conferencing system. The SmartCapture commercial product and the Universal Video Stick (UVS) military version are about the size of a thumb drive, with analog video input and USB (Universal Serial Bus) output, and allow the user to select the imaging parameters. The user can thereby select video bandwidth (and video quality) along four dimensions of quality, on the fly, without stopping video transmission. The four dimensions are: 1) spatial, changing from 720 x 480 pixels to 320 x 360 pixels to 160 x 180 pixels; 2) temporal, changing from 30 frames/sec to 5 frames/sec; 3) transform quality, with a 5-to-1 range; and 4) Group of Pictures (GOP) size, which affects noise immunity. The host processor simply wraps the H.264 network abstraction layer packets into the appropriate network packets. We also discuss the recently adopted scalable amendment to H.264, which will allow limiting the rate at any point in the communication chain by throwing away preselected packets.
Wireless and mobile communications technologies are among the most important areas today, expanding rapidly in both horizontal and vertical directions. WiMAX is trying to compete with WiFi in coverage and data rate, while the inexpensive WiFi remains very popular in both personal and business use. Efficient bandwidth usage, multi-standard convergence, and Wireless Mesh Networks (WMN) are the main vertical trends in the wireless world. WiMAX-WiFi convergence is an ideal technology that provides the best of both worlds: the new features of WiMAX and the low cost of WiFi. In order to create a heterogeneous network environment, the differences between the two technologies have been investigated and resolved. In multi-carrier WiMAX-WiFi convergence, the mismatch between fixed-WiMAX OFDM (Nfft=256) and WiFi OFDM (Nfft=64) has been confirmed to be a physical-layer issue that cannot be solved as a MAC-layer problem; therefore the current proposal is to build what we call the "Convergence-Bridge". This bridge is an extra thin layer responsible for harmonizing the mismatch. For the WiFi-OFDM physical layer, the paper selects the IEEE 802.11n OFDM standard while it is still being developed. The proposal does not suggest changing the standard itself but making some of its functions configurable. The IEEE 802.11 standard fixes these configurations for WiFi mode only, while our proposal is to set up these functions for both WiFi and WiMAX modes.
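The Nfft mismatch at the heart of the abstract can be made concrete with a toy OFDM modulator whose FFT size is configurable, in the spirit of the proposal. This is a minimal sketch, not the 802.11n or 802.16 PHY: the function name, the naive subcarrier mapping, and the QPSK payload are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch of the Nfft mismatch the paper addresses; the
# "configurable" modulator below is a toy model, not a real PHY.
def ofdm_modulate(symbols: np.ndarray, nfft: int) -> np.ndarray:
    """Map frequency-domain symbols onto nfft subcarriers via an IFFT."""
    assert len(symbols) <= nfft
    grid = np.zeros(nfft, dtype=complex)
    grid[:len(symbols)] = symbols      # naive mapping: first subcarriers
    return np.fft.ifft(grid)

qpsk = (np.array([1, -1, 1, 1]) + 1j * np.array([1, 1, -1, 1])) / np.sqrt(2)
wifi_wave = ofdm_modulate(qpsk, nfft=64)    # WiFi-OFDM grid (Nfft=64)
wimax_wave = ofdm_modulate(qpsk, nfft=256)  # fixed-WiMAX-OFDM grid (Nfft=256)
print(len(wifi_wave), len(wimax_wave))      # different time-domain symbol lengths
```

Because the two time-domain symbols have different lengths (and, at a common sampling rate, different subcarrier spacings), no amount of MAC-layer scheduling can reconcile them; a bridge layer must reconfigure the FFT size itself, which is the role of the proposed "Convergence-Bridge".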
In this first part of the latest latency-information theory (LIT) and applications paper series, a powerful and fast 'knowledge-unaided' power-centroid (F-KUPC) radar is revealed. More specifically, it is found that for real-world airborne moving-target-indicator radar subjected to severely taxing environmental conditions, F-KUPC radar approximates the signal-to-interference-plus-noise ratio (SINR) performance derived with the more complex knowledge-aided power-centroid (KAPC) radar. KAPC radar was discovered earlier as part of DARPA's 2001-2005 knowledge-aided sensor signal processing expert reasoning (KASSPER) Program and outperforms standard prior-knowledge radar schemes by several orders of magnitude in both the compression of the sourced intelligence-space of prior knowledge, in the form of SAR imagery, and the compression of the processing intelligence-time of the associated clutter covariance processor, while also yielding an average SINR performance that is approximately 1 dB away from the optimum. In this paper, it is shown that the average SINR performance of the significantly simpler F-KUPC radar emulates that of KAPC radar and, like KAPC radar, outperforms a conventional knowledge-unaided sample covariance matrix inverse radar algorithm by several dB. The MATLAB simulation programs used to derive these results will become available on the author's website.
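The knowledge-unaided sample-covariance-matrix-inverse (SMI) baseline that both KAPC and F-KUPC are compared against can be sketched numerically. This is a toy example under stated assumptions: the array size, random clutter data, and broadside steering vector are invented for illustration and have nothing to do with the paper's actual KASSPER scenarios.

```python
import numpy as np

# Toy sketch of the knowledge-unaided sample-covariance-matrix-inverse (SMI)
# baseline mentioned above; dimensions and data are illustrative only.
rng = np.random.default_rng(0)
N, K = 8, 64                                  # antenna elements, training snapshots
s = np.exp(1j * np.pi * np.arange(N) * 0.0)   # steering vector (broadside target)
clutter = rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))
R_hat = clutter @ clutter.conj().T / K        # sample interference covariance
w = np.linalg.solve(R_hat, s)                 # SMI weights: w = R_hat^{-1} s
sinr = np.abs(w.conj() @ s) ** 2 / np.real(w.conj() @ R_hat @ w)
print(sinr)                                   # output SINR of the SMI beamformer
```

Knowledge-aided schemes improve on this baseline by replacing the purely data-driven `R_hat` with a covariance informed by prior knowledge (e.g. SAR imagery); the paper's point is that the much simpler F-KUPC approach approximates that knowledge-aided SINR.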
Since its introduction more than six decades ago by Claude E. Shannon, information theory has guided, with two performance bounds, namely source entropy H and channel capacity C, the design of sourced intelligence-space compressors for communication systems, where the units of intelligence-space are 'mathematical' binary digit (bit) units of a passing-of-time uncertainty nature. Recently, motivated by both a real-world radar problem treated in the first part of the present paper series and the author's previous uncertainty/certainty duality studies of digital-communication and quantized-control problems, information theory was discovered to have a 'certainty' time-dual that was named latency theory. Latency theory guides, with two performance bounds, i.e. processor-ectropy K and sensor consciousness F, the design of processing intelligence-time compressors for recognition systems, where the units of intelligence-time are 'mathematical' binary operator (bor) units of a configuration-of-space certainty nature. Furthermore, these two theories have been unified to form a mathematical latency-information theory (M-LIT) for the guidance of intelligence-system designs, which has been successfully applied to real-world radar. Also recently, M-LIT has been found to have a physical LIT (P-LIT) dual that guides life-system designs. This novel physical theory addresses the design of motion life-time and retention life-space compressors for physical signals and also has four performance bounds. Two of these bounds are mover-ectropy A and channel-stay T, for the design of motion life-time compressors for communication systems. An example of a motion life-time compressor is a laser system, inclusive of a network router, for a certainty, or multi-path, life-time channel. The other two bounds are retainer-entropy N and sensor scope I, for the design of retention life-space compressors for recognition systems. An example of a retention life-space compressor is a silicon semiconductor crystal, inclusive of a leadless chip carrier, for an uncertainty, or noisy, life-space sensor. The eight performance bounds of our guidance theory for intelligence and life system designs will be illustrated with practical examples. Moreover, a four-quadrant LIT revolution (quadrants I and III for the two physical theories and quadrants II and IV for the two mathematical ones) is advanced that highlights both the discovered dualities and the fundamental properties of signal compressors, leading to a unifying communication embedded recognition (CER) system architecture.
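Of the eight bounds named above, the two classical information-theoretic ones, source entropy H and channel capacity C, admit a compact worked example; the LIT-specific bounds (K, F, A, T, N, I) are the paper's own constructs and are not reproduced here. The sketch below is standard Shannon theory, assuming a Bernoulli source and a binary symmetric channel purely for illustration.

```python
import math

# Worked example of the two classical bounds named above: source entropy H
# of a binary source and capacity C of a binary symmetric channel (BSC).
def h2(p: float) -> float:
    """Binary entropy function, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

H = h2(0.1)      # entropy of a Bernoulli(0.1) source: ~0.469 bits/symbol
C = 1 - h2(0.1)  # capacity of a BSC with crossover probability 0.1
print(round(H, 3), round(C, 3))
```

H lower-bounds the rate of any lossless source compressor and C upper-bounds reliable transmission rate; the paper's latency-theory duals K and F are presented as the corresponding bounds for intelligence-time compression in recognition systems.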
Iris recognition algorithms depend on image processing techniques for proper segmentation of the iris. In the Ridge Energy Direction (RED) iris recognition algorithm, the initial step in the segmentation process searches for the pupil by thresholding, then uses binary morphology functions to rectify artifacts obfuscating the pupil. These functions take substantial processing time in software, on the order of a few hundred million operations. Alternatively, a hardware version of the binary morphology functions is implemented to assist in the segmentation process. The hardware binary morphology functions have a negligible hardware footprint and power consumption while achieving a speedup of roughly 200x over the original software functions.
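The threshold-plus-morphology front end described above can be sketched in a few lines. This is a pure-software stand-in under invented data, not the RED algorithm or its hardware unit: the 4-connected structuring element, the threshold value, and the toy image are all assumptions for the example.

```python
import numpy as np

# Sketch of the pupil-segmentation front end described above: threshold,
# then binary closing (dilation followed by erosion) to rectify artifacts
# such as a specular highlight inside the dark pupil region.
def dilate(img: np.ndarray) -> np.ndarray:
    """Binary dilation with a 4-connected structuring element."""
    out = img.copy()
    out[1:, :] |= img[:-1, :]; out[:-1, :] |= img[1:, :]
    out[:, 1:] |= img[:, :-1]; out[:, :-1] |= img[:, 1:]
    return out

def erode(img: np.ndarray) -> np.ndarray:
    """Binary erosion, defined by duality with dilation."""
    return ~dilate(~img)

gray = np.full((7, 7), 200, dtype=np.uint8)
gray[2:5, 2:5] = 20            # dark pupil region
gray[3, 3] = 250               # specular highlight inside the pupil
mask = gray < 60               # threshold: dark pixels are pupil candidates
closed = erode(dilate(mask))   # closing fills the highlight hole
print(bool(closed[3, 3]))      # the hole is now part of the pupil mask
```

Each dilation or erosion touches every pixel several times per pass, which is why repeated morphology passes over full-resolution imagery cost hundreds of millions of software operations, and why a parallel hardware implementation pays off so dramatically.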
This paper proposes a steganographic scheme that operates within and/or after the fractal encoding of images for data security. Fractal generation exploits iterated function systems (IFS), which consist of a collection of contractive transformations. Fractal image coding uses partitioned iterated function systems (PIFS) to find self-similarity within an image and approximate the original uncompressed image. The transform coefficients are stored in a fractal code table, so that the image can be decoded without storing or transmitting the pixel values directly. The proposed steganographic algorithm conceals secret information in the contrast/scaling and brightness/shifting coefficients of the code table, resulting in a stego fractal code table. Using the fractal transform as a means of steganography provides an embedding domain beyond those of existing steganography tools. The advantages of using fractal compression for securing information within images are: no current fractal detection methods exist; the hidden information is disseminated throughout the image in the spatial domain; the capacity of the image can be increased; and decoding the stego table yields a visually undistorted image.
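The embedding step can be illustrated with a toy code table. This is a minimal sketch, not the paper's scheme: the table layout (scaling, offset) pairs, the integer quantization of the offsets, and the LSB embedding rule are all assumptions chosen for the example.

```python
# Toy sketch of the embedding step described above: hide message bits in the
# least significant bit of the quantized brightness/shifting coefficients of
# a PIFS code table. Table layout and quantization are illustrative only.
def embed(code_table, bits):
    """Return a stego code table with bits hidden in the offset LSBs."""
    stego = []
    for (scale, offset), bit in zip(code_table, bits):
        stego.append((scale, (offset & ~1) | bit))  # set offset LSB to bit
    return stego + code_table[len(bits):]           # untouched remainder

def extract(stego_table, n):
    """Recover the first n hidden bits from the offset LSBs."""
    return [offset & 1 for _, offset in stego_table[:n]]

table = [(0.5, 120), (0.7, 88), (0.3, 201), (0.6, 54)]
secret = [1, 0, 1]
stego = embed(table, secret)
print(extract(stego, 3))  # [1, 0, 1]
```

Because the decoder iterates the contractive transforms from the stego table rather than from stored pixels, a one-LSB perturbation of an offset coefficient diffuses across its whole range block, which is why the decoded image remains visually undistorted.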