This PDF file contains the front matter associated with SPIE Proceedings Volume 8755 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Most existing image encryption algorithms transform the original image into a noise-like image, and this appearance is an apparent visual sign of the presence of encryption. Motivated by data hiding technologies, this paper proposes a novel concept of image encryption: transforming the encrypted original image into another meaningful image that is visually the same as a chosen cover image, which then serves as the final encrypted image and overcomes the problem above. Using this concept, we introduce a new image encryption algorithm based on wavelet decomposition. Simulations and security analysis are given to show the excellent performance of the proposed concept and algorithm.
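The concept can be illustrated with a minimal sketch (our own assumption, not the authors' exact algorithm): a one-level Haar decomposition of the cover image, with the encrypted data scaled down by a factor `alpha` (a hypothetical parameter) and written into the high-frequency HH subband, so the reconstructed stego image remains visually close to the cover.

```python
import numpy as np

def haar2d(x):
    """One-level orthonormal 2-D Haar decomposition (even-sized float array)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (a + b - c - d) / 2
    hl = (a - b + c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    x = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll + lh - hl - hh) / 2
    x[1::2, 0::2] = (ll - lh + hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

def embed(cover, encrypted, alpha=0.05):
    """Hide `encrypted` (same size as the HH subband) inside `cover`."""
    ll, lh, hl, _ = haar2d(cover.astype(float))
    # Replacing only the scaled HH subband barely changes the visual appearance.
    return ihaar2d(ll, lh, hl, alpha * encrypted)

def extract(stego, alpha=0.05):
    """Receiver side: pull the hidden data back out of the HH subband."""
    _, _, _, hh = haar2d(stego)
    return hh / alpha
```

With the key, the receiver recovers the encrypted data from the HH subband and decrypts it; without it, the stego image simply looks like the cover.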
This paper is concerned with robust steganographic techniques to hide and communicate biometric data in mobile media
objects, such as images, over open networks. More specifically, the aim is to embed, as a secret message in an image,
binarised features extracted from face images using discrete wavelet transforms and local binary patterns. The need for such
techniques can arise in law enforcement, forensics, counter terrorism, internet/mobile banking and border control. What
differentiates this problem from normal information hiding techniques is the added requirement that there should be
minimal effect on face recognition accuracy.
We propose an LSB-Witness embedding technique in which the secret message is already present in the LSB plane;
instead of changing the cover image LSB values, the second LSB plane is changed to stand as a witness/informer to
the receiver during message recovery. Although this approach may affect stego quality, it eliminates the weakness
of traditional LSB schemes that is exploited by LSB steganalysis techniques, such as PoV and RS steganalysis, to
detect the existence of a secret message.
Experimental results show that the proposed method is robust against PoV and RS attacks compared with other LSB
variants. We also discuss variants of this approach and determine the capacity requirements for embedding face biometric
feature vectors while maintaining face recognition accuracy.
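One possible reading of the witness idea can be sketched as follows (our interpretation, not necessarily the authors' exact scheme): the cover's LSB plane is left untouched, and the second LSB is rewritten to record whether the existing LSB agrees with the corresponding secret bit, so the receiver can reconstruct the message from the two planes.

```python
import numpy as np

def embed_witness(cover, bits):
    """Embed `bits` without touching the LSB plane: the 2nd LSB of each
    pixel witnesses whether that pixel's LSB equals the secret bit."""
    stego = cover.copy()
    flat = stego.ravel()
    for i, m in enumerate(bits):
        lsb = flat[i] & 1
        witness = 1 if lsb == m else 0
        flat[i] = (flat[i] & 0xFD) | (witness << 1)  # rewrite 2nd LSB only
    return stego

def extract_witness(stego, n):
    """Recover n message bits from the LSB and witness (2nd LSB) planes."""
    flat = stego.ravel()
    out = []
    for i in range(n):
        lsb = int(flat[i] & 1)
        witness = int(flat[i] >> 1) & 1
        out.append(lsb if witness else 1 - lsb)
    return out
```

Because the LSB plane is statistically unchanged, pairs-of-values style tests on it see only the original cover statistics.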
Transformative Apps (TransApps) is a Defense Advanced Research Projects Agency (DARPA) funded program whose
goal is to develop a range of militarily-relevant software applications (“apps”) to enhance the operational-effectiveness
of military personnel on (and off) the battlefield. TransApps is also developing a military apps marketplace to facilitate
rapid development and dissemination of applications to address user needs by connecting engaged communities of end-users
with development groups. The National Institute of Standards and Technology’s (NIST) role in the TransApps
program is to design and implement evaluation procedures to assess the performance of: 1) the various software
applications, 2) software-hardware interactions, and 3) the supporting online application marketplace. Specifically, NIST
is responsible for evaluating 50+ tactically-relevant applications operating on numerous Android™-powered platforms.
NIST efforts include functional regression testing and quantitative performance testing. This paper discusses the
evaluation methodologies employed to assess the performance of three key program elements: 1) handheld-based
applications and their integration with various hardware platforms, 2) client-based applications and 3) network
technologies operating on both the handheld and client systems along with their integration into the application
marketplace. Handheld-based applications are assessed using a combination of utility and usability-based checklists and
quantitative performance tests. Client-based applications are assessed to replicate current overseas disconnected
operations (i.e. no network connectivity between handhelds) and to assess the connected operations envisioned for later use. Finally,
networked applications are assessed on handhelds to establish performance baselines for when connectivity becomes
commonplace.
H.264/AVC coded video quality is crucial for evaluating the performance of consumer-level video camcorders and mobile phones. In this paper, a DCT-based video quality prediction model (DVQPM) is proposed to blindly predict the quality of compressed natural videos. The model is frame-based and composed of three steps. First, each decoded frame of the video sequence is decomposed into six feature maps based on the DCT coefficients. Then five efficient frame-level features (kurtosis, smoothness, sharpness, mean Jensen Shannon divergence, and blockiness) are extracted to quantify the distortion of natural scenes due to lossy compression. In the last step, each frame-level feature is averaged across all frames (temporal pooling); a trained multilayer neural network takes the five features as inputs and outputs a single number as the predicted video quality. The DVQPM model was trained and tested on the H.264 videos in the LIVE Video Database. Results show that the objective assessment of the proposed model has a strong correlation with the subjective assessment.
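The three-step pipeline can be sketched as follows; the two frame-level features below (kurtosis and a gradient-based smoothness proxy) are simplified stand-ins for the paper's five features, and the trained multilayer neural network is omitted.

```python
import numpy as np

def frame_features(frame):
    """Two illustrative frame-level features (stand-ins for the paper's five)."""
    x = frame.astype(float).ravel()
    mu, sigma = x.mean(), x.std()
    kurtosis = np.mean((x - mu) ** 4) / sigma ** 4 - 3.0
    # Smoothness proxy: mean absolute horizontal gradient of the frame.
    smoothness = np.abs(np.diff(frame.astype(float), axis=1)).mean()
    return np.array([kurtosis, smoothness])

def temporal_pool(frames):
    """Step 3: average each frame-level feature across all frames."""
    feats = np.array([frame_features(f) for f in frames])
    return feats.mean(axis=0)  # this pooled vector would feed the trained MLP
```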
This paper introduces a new redundant number system, the adjunctive numerical relation (ANR) codes, which offers
improvements over other well-known systems such as the Fibonacci, Lucas, and prime number systems when used in
multimedia data hiding applications. It will be shown that this new redundant number system has potential applications
in digital communications, signal processing, and image processing. The paper also offers two illustrative applications of
the new coding system: first, an enhanced bit-plane decomposition of image-formatted files with data embedding
(steganography and watermarking); second, an expanded bit-line decomposition of audio-formatted files with data
embedding and index-based retrieval capability. Computer simulations detail the statistical stability required for
effective data encoding techniques and demonstrate the improvement in embedding capacity in multimedia carriers.
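As a baseline for the enhanced decomposition mentioned above, a standard binary bit-plane decomposition of an 8-bit image looks like the following (the ANR variant itself is not specified here):

```python
import numpy as np

def bit_planes(img):
    """Split an 8-bit image into its 8 binary bit-planes (LSB first)."""
    return [(img >> k) & 1 for k in range(8)]

def recombine(planes):
    """Rebuild the image from its bit-planes."""
    img = np.zeros_like(planes[0], dtype=np.uint8)
    for k, p in enumerate(planes):
        img |= (p.astype(np.uint8) << k)
    return img
```

Redundant systems such as ANR replace the powers of two above with an over-complete basis, which is what creates the extra embedding room.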
This paper discusses the problem of testing the degree of randomness within an image, particularly a shuffled or encrypted image. Its key contributions are: 1) a mathematical model of perfectly shuffled images; 2) the derivation of the theoretical distribution of pixel differences; 3) a new hypothesis-test-based approach to determine whether or not a test image is perfectly shuffled; and 4) a randomized algorithm to evaluate the degree of image randomness without bias. Simulation results show that the proposed method is robust and effective in evaluating the degree of image randomness, and may often be more suitable for image applications than commonly used testing schemes designed for binary data, such as the NIST 800-22 test suite. The developed method may also be useful as a first step in determining whether an image shuffling or encryption scheme is suitable for a particular cryptographic application.
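A simplified version of the idea can be sketched as follows: for a perfectly shuffled 8-bit image with (approximately) independent uniform pixels, the difference d of two adjacent pixels follows the triangular law P(d) = (256 - |d|)/256², and a chi-square-style statistic against this law separates shuffled from structured images. This is an illustration of the approach, not the paper's exact tests.

```python
import numpy as np

def diff_chi2_stat(img):
    """Chi-square-style statistic of horizontal pixel differences against
    the triangular law expected for a perfectly shuffled 8-bit image."""
    d = (img[:, 1:].astype(int) - img[:, :-1].astype(int)).ravel()
    n = d.size
    stat = 0.0
    for k in range(-255, 256):
        expected = n * (256 - abs(k)) / 256**2   # triangular law
        observed = np.count_nonzero(d == k)
        stat += (observed - expected) ** 2 / expected
    return stat
```

A structured image concentrates its differences near zero and yields a far larger statistic than a well-shuffled one.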
Performance indicators characterizing modern steganographic techniques include capacity (i.e. the quantity
of data that can be hidden in the cover medium), stego quality (i.e. artifact visibility), security (i.e.
undetectability), and strength or robustness (the resistance against active attacks aimed at
destroying the secret message). Fibonacci-based embedding techniques have been researched and proposed in
the literature to achieve efficient steganography in terms of capacity with respect to stego quality. In this
paper, we investigate an innovative idea that extends Fibonacci-like steganography by bit-plane(s)
mapping instead of bit-plane(s) replacement. Our proposed algorithm increases embedding capacity by using
bit-plane mapping to embed two bits of the secret message in three bits of a cover pixel, at the
expense of a marginal loss in stego quality. While existing Fibonacci embedding algorithms leave
certain cover intensities unused for embedding because of the limitation imposed by Zeckendorf's theorem,
our proposal solves this problem and makes all intensity values candidates for embedding. Experimental
results demonstrate that the proposed technique doubles the embedding capacity compared with existing
Fibonacci methods, and it is secure against statistical attacks such as RS, PoV, and difference image
histogram (DIH).
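Zeckendorf's theorem, which the limitation above refers to, states that every positive integer has a unique representation as a sum of non-consecutive Fibonacci numbers. A short sketch makes the constraint concrete (illustrative only, not the proposed mapping algorithm):

```python
def zeckendorf(n):
    """Greedy Zeckendorf decomposition: indices of the non-consecutive
    Fibonacci numbers (1, 2, 3, 5, 8, ...) that sum to n."""
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    indices = []
    for i in range(len(fibs) - 1, -1, -1):  # largest Fibonacci number first
        if fibs[i] <= n:
            indices.append(i)
            n -= fibs[i]
    return indices, fibs

indices, fibs = zeckendorf(100)   # 100 = 89 + 8 + 3
```

The "no two consecutive terms" rule is exactly what forbids certain Fibonacci bit-plane patterns, and hence certain intensities, in plain Fibonacci embedding.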
Game theory provides a useful tool for studying security problems in mobile ad hoc networks (MANETs). Most existing
work on applying game theory to security considers only two players in the security game model: an attacker and a
defender. While this assumption is valid for a network with centralized administration, it may not be realistic in MANETs,
where centralized administration is not available. Consequently, each individual node in a MANET should be treated
separately in the security game model. In this paper, using recent advances in mean field game theory, we propose a novel
game theoretic approach for security in MANETs. Mean field game theory provides a powerful mathematical tool for
problems with a large number of players. Since security defence mechanisms consume precious system resources (e.g.,
energy), the proposed scheme considers not only the security requirement of MANETs but also the system resources.
In addition, each node only needs to know its own state information and the aggregate effect of the other nodes in the
MANET. Therefore, the proposed scheme is a fully distributed scheme. Simulation results are presented to illustrate the
effectiveness of the proposed scheme.
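The mean-field structure can be illustrated with a toy fixed-point iteration (a drastic simplification of the paper's formulation, with all constants assumed): each identical node best-responds to the population's average defence effort, trading energy cost against residual risk, and the average is updated until it stabilises.

```python
import numpy as np

ENERGY_COST = 0.5   # price of defence effort (assumed)
THREAT = 2.0        # attack intensity (assumed)
COUPLING = 0.8      # how much the population's average defence helps each node

def best_response(mean_effort, grid=np.linspace(0, 2, 201)):
    """Effort minimising energy cost plus residual risk, given the mean field."""
    cost = ENERGY_COST * grid + THREAT / (1 + grid + COUPLING * mean_effort)
    return grid[np.argmin(cost)]

def mean_field_equilibrium(iters=50):
    """Identical nodes: the mean effort is the common best response."""
    m = 0.0
    for _ in range(iters):
        m = best_response(m)
    return m
```

In this toy model the best response is a = 1 - 0.8 m, so the iteration converges to the fixed point m* = 5/9; each node only ever needs its own state and the aggregate m, mirroring the distributed property claimed above.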
With the added security provided by LTE, geographical location has become an important factor for authentication to
enhance the security of remote client authentication during mCommerce applications using Smartphones. Tight
combination of geographical location with classic authentication factors like PINs/Biometrics in a real-time, remote
verification scheme over the LTE layer connection assures the authenticator about the client itself (via PIN/biometric) as
well as the client’s current location, thus defining the important “who”, “when”, and “where” aspects of the
authentication attempt while resisting eavesdropping and man-in-the-middle attacks. To securely integrate location as an
authentication factor into the remote authentication scheme, client’s location must be verified independently, i.e. the
authenticator should not solely rely on the location determined on and reported by the client’s Smartphone. The latest
wireless data communication technology for mobile phones (4G LTE, Long-Term Evolution), recently being rolled out
in various networks, can be employed to enhance this location-factor requirement of independent location verification.
LTE’s Control Plane LBS provisions, when integrated with user-based authentication and an independent source of
localisation factors, ensure secure, efficient, continuous location tracking of the Smartphone. This tracking can be
performed during normal operation of the LTE-based communication between client and network operator, enabling
the authenticator to verify the client’s claimed location more securely and accurately. Trials and experiments
show that such an implementation is viable for today’s Smartphone-based banking via LTE communication.
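Server-side, the combined check might look like the following sketch (function names and the distance threshold are our assumptions): the authenticator compares the client's PIN digest and checks that the client-claimed position agrees with the network-derived (Control Plane LBS) position within a tolerance.

```python
import hashlib
import math

MAX_DISTANCE_KM = 1.0  # assumed tolerance between claimed and network fix

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in km."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def authenticate(pin, stored_digest, claimed_pos, network_pos):
    """'Who' (PIN) and 'where' (independently verified location) must both pass."""
    pin_ok = hashlib.sha256(pin.encode()).hexdigest() == stored_digest
    loc_ok = haversine_km(*claimed_pos, *network_pos) <= MAX_DISTANCE_KM
    return pin_ok and loc_ok
```

A real deployment would salt and strengthen the PIN digest; plain SHA-256 keeps the sketch short.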
Some features of Mobile Ad hoc Networks (MANETs), including dynamic membership, topology, and open
wireless medium, introduce a variety of security risks. Malicious nodes can drop or modify packets that are
received from other nodes. These malicious activities may seriously affect the availability of services in MANETs.
Therefore, secure routing in MANETs has emerged as an important MANET research area. In this paper, we
propose a scheme that enhances the security of Optimal Link State Routing version 2 (OLSRv2) in MANETs
based on trust. In the proposed scheme, more accurate trust can be obtained by considering different types of
packets and other important factors that may cause packet drops at friendly nodes, such as buffer overflows
and unreliable wireless connections. Simulation results are presented to demonstrate the effectiveness of the
proposed scheme.
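A toy version of such a packet-type-aware trust metric (illustrative; the weights and the congestion model are our assumptions, not the paper's exact formulation): forwarding ratios per packet type are combined, and drops that coincide with reported congestion are partially excused.

```python
# Per-packet-type weights: control traffic matters more than data (assumed values).
WEIGHTS = {"control": 0.6, "data": 0.4}

def trust(stats, congestion=0.0):
    """stats: {ptype: (forwarded, received)}.  congestion in [0, 1] partially
    excuses drops caused by buffer overflow or unreliable links."""
    score = 0.0
    for ptype, w in WEIGHTS.items():
        fwd, rcv = stats.get(ptype, (0, 0))
        if rcv == 0:
            ratio = 1.0                      # no evidence: assume benign
        else:
            dropped = rcv - fwd
            excused = congestion * dropped   # drops attributed to congestion
            ratio = min(1.0, (fwd + excused) / rcv)
        score += w * ratio
    return score
```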
The curse of dimensionality often hinders the process of data mining. The data collected and analyzed generally contain a
huge number of dimensions or attributes, and it may be the case that not all of the attributes are necessary for the data
mining task to be performed on the data. Traditionally data dimensionality reduction techniques like Principal
Component Analysis or Linear Discriminant analysis have been used to address this problem. But, these methods move
the original data to a transformed space. However, the need might be to remain in the original attribute space and
identify the key attributes for data analysis. This need has given rise to the research area of feature subset selection. In
this paper we have used solid angle measure to tackle the problem of dimension reduction in OCT retinal data.
Optical Coherence Tomography (OCT) is a frequently used and established medical imaging technique. It is
widely used, among other applications, to obtain high-resolution images of the retina and the anterior segment of the eye.
Solid angle measure is used to characterize and select features obtained from OCT retinal images. The application of
solid angle in feature selection, as proposed in this paper, is a unique approach to OCT image data mining. The
experimental results with real-life datasets presented in this paper demonstrate the effectiveness of the proposed
method.
Fusion techniques have proven to be very useful for many signal and image processing applications including image
recognition, image registration, and biometric matching. Along with standard fusion techniques, hypercomplex image
processing techniques have been developed recently. These techniques represent a form of image fusion in which several
image components are combined to form a multi-channel image. The multi-channel imagery may be processed using
hypercomplex transforms, such as the hypercomplex Fourier transform, for image matching and registration. In this
paper we investigate performance of multi-channel image fusion for face image matching. We use 3-D color face
imagery and investigate fusion of various combinations of grayscale intensity, color, and range information. We conduct
a theoretical investigation to identify conditions under which matchers using image channel fusion provide superior
matching performance relative to matchers fusing single channel image matching results. We present numerical
performance results in the form of Receiver Operating Characteristic (ROC) curves quantifying matching performance for
verification hypothesis testing problems.
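Channel-level fusion for matching can be sketched with a generic FFT-based correlation (not necessarily the hypercomplex transform used in the paper): the channels are correlated individually in the Fourier domain and the correlation surfaces are summed before taking the peak, so every channel contributes to a single match score.

```python
import numpy as np

def fused_match_score(img_a, img_b):
    """Sum per-channel normalised FFT cross-correlations, then take the peak.
    img_a, img_b: float arrays of shape (H, W, C)."""
    score_surface = np.zeros(img_a.shape[:2])
    for c in range(img_a.shape[2]):
        a = img_a[:, :, c] - img_a[:, :, c].mean()
        b = img_b[:, :, c] - img_b[:, :, c].mean()
        # Circular cross-correlation via the FFT correlation theorem.
        corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
        score_surface += corr / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return score_surface.max()
```

Fusing the surfaces before the peak search is what distinguishes channel fusion from fusing single-channel matching results afterwards.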
Face detection at a distance is very challenging because image quality becomes low. This paper discusses a face
detection method at long distance with AdaBoost filtering and a false alarm reduction scheme. The false alarm
reduction scheme is based on skin-color testing and variable edge mask filtering. The skin-color test involves the average
RGB components of the window, followed by the binary cluster image generation. The binary cluster is composed of the
alternative and null pixels according to color. The size of the edge mask is determined by the ellipse covering the binary
cluster. The edge mask filters out false alarms by evaluating the contour shape of the object in the window. In the
experiments, the false alarm reduction scheme is shown to be effective for face detection in images captured at a distance.
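The skin-colour test can be illustrated with a widely used RGB heuristic (a common rule from the literature, with thresholds that are not necessarily the authors' exact values):

```python
def is_skin_rgb(r, g, b):
    """Classic daylight RGB skin-colour rule (assumed thresholds)."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15   # enough colour spread
            and abs(r - g) > 15 and r > g and r > b)
```

A detection window whose average colour fails this test can be rejected before the more expensive edge-mask filtering.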
Iris recognition has been shown to be among the most accurate biometrics when using high-resolution near-infrared images.
However, it does not work well under visible-wavelength illumination. Sclera recognition, by contrast, has been
shown to achieve reasonable recognition accuracy under visible wavelengths. Combining iris and sclera
recognition can therefore achieve better recognition accuracy. However, image quality can significantly affect
recognition accuracy, and in unconstrained situations the acquired eye images may not be frontally
facing. In this research, we propose a feature quality-based multimodal unconstrained eye recognition
method that combines the respective strengths of iris recognition and sclera recognition for human
identification and can work with frontal and off-angle eye images. The research results show that the
proposed method is very promising.
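The quality-based combination can be sketched as a quality-weighted score fusion (our simplification of the proposed method): each modality's match score is weighted by its estimated feature quality, so a blurred or off-angle iris contributes less than a clean sclera region.

```python
def fuse_scores(iris_score, iris_quality, sclera_score, sclera_quality):
    """Quality-weighted combination of the two modality scores (inputs in [0, 1])."""
    total_q = iris_quality + sclera_quality
    if total_q == 0:
        return 0.0   # no usable features from either modality
    return (iris_quality * iris_score + sclera_quality * sclera_score) / total_q
```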
We introduce a technique for covertly embedding data throughout an audio file using redundant number system
decomposition across non-standard digital bit-lines. This bit-line implementation integrates an index recoverable
embedded algorithm with an extended bit level representation that achieves a high capacity data channel within an audio
multimedia file. It will be shown that this new steganography method has minimal aural distortion while preserving
both first- and second-order cover statistics, making it less susceptible to most steganalysis attacks. Our research
approach begins by reviewing the numerical methods used in common binary-based algorithms. We then
describe basic concepts and challenges when attempting to implement complex embedding algorithms that are based on
redundant number systems. Finally, we introduce a novel class of numerical based multiple bit-line decomposition
systems, which we define as Adjunctive Numerical Representations. The system is primarily described using basic PCM
techniques in uncompressed audio files; extended applications to other multimedia formats are also addressed. This new
embedding system not only provides the statistical stability required for effective steganography but also improves
the embedding capacity in this class of multimedia carrier files. The novelty of our approach is
demonstrated by an ability to embed high capacity covert data while simultaneously providing a means for rapid,
indexed data recovery.
Analysing a text or part of it is key to handwriting identification. Generally, handwriting is learnt over time and people
develop habits in the style of writing. These habits are embedded in special parts of handwritten text. In Arabic each
word consists of one or more sub-word(s). The end of each sub-word is considered to be a connect stroke. The main
hypothesis in this paper is that sub-words are an essential reflection of an Arabic writer's habits that can be exploited for
writer identification. Testing this hypothesis is based on experiments that evaluate writer identification, mainly
using K-nearest-neighbour classification on groups of sub-words extracted from longer text. The experimental results show that a
group of sub-words can identify the writer with a success rate between 52.94% and 82.35% when the top-1 match is
used, rising to 100% when the top-5 matches are used. The results also show that the majority of writers
are identified using 7 sub-words with a reliability confidence of about 90% (i.e. 90% of the rejected templates have
significantly larger distances to the tested example than the distance from the correctly identified template). By contrast,
previous work using complete words reports a success rate of at most 90% in the top 10.
Wireless sensor networks used in military applications may be deployed in hostile environments, where privacy and security are
of primary concern. This can lead to the formation of a trust-based sub-network among mutually-trusting nodes. However,
designing a TDMA MAC protocol is very challenging in situations where such multiple sub-networks coexist, since TDMA
protocols require node identity information for slot assignments. This paper introduces a novel distributed TDMA MAC
protocol, ZEA-TDMA (Zero Exposure Anonymous TDMA), for anonymous wireless networks. ZEA-TDMA achieves slot
allocation with strict anonymity constraints, i.e. without nodes having to exchange any identity revealing information. By using
just the relative time of arrival of packets and a novel technique of wireless collision detection and resolution for fixed
packet sizes, ZEA-TDMA achieves MAC slot allocation as follows. Initially, a newly joined node listens to
its one-hop neighborhood channel usage and creates a slot allocation table based on its own relative time, and finally, selects a
slot that is collision free within its one-hop neighborhood. The selected slot can however cause hidden collisions with a two-hop
neighbor of the node. These collisions are resolved by a common neighbor of the colliding nodes, which first detects the
collision and then resolves it using an interrupt packet. ZEA-TDMA provides the following features: a) it is a TDMA
protocol ideally suited for highly secure or strictly anonymous environments; b) it can be used in heterogeneous environments
where devices use different packet structures; c) it does not require network time-synchronization; and d) it is insensitive to
channel errors. We have implemented ZEA-TDMA on the MICA2 hardware platform running TinyOS and evaluated the
protocol functionality and performance on a MICA2 test-bed.
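The first phase of the allocation (listen for one frame, build a usage table from relative packet arrival times, pick a collision-free slot) can be sketched as follows; the collision-detection/interrupt phase for two-hop conflicts is omitted, and the names are our own:

```python
def choose_slot(overheard, frame_size):
    """Pick the first slot not used in the one-hop neighbourhood.
    `overheard`: slot indices (relative to the node's own clock) of packets
    heard during one full frame of listening."""
    occupied = set(overheard)
    for slot in range(frame_size):
        if slot not in occupied:
            return slot
    return None  # frame saturated: the node must wait and retry
```

Note that no node identities appear anywhere: the table is built purely from when packets arrive, which is what preserves the anonymity constraint.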
This paper is concerned with face recognition under uncontrolled conditions, e.g. surveillance-at-a-distance scenarios and
post-riot forensics, where captured face images are severely degraded/blurred and of low resolution. This is a tough
challenge due to many factors, including capturing conditions. We present the results of our investigations into the recently
developed Compressive Sensing (CS) theory to develop scalable face recognition schemes using a variety of over-complete
dictionaries that construct super-resolved face images from any input low-resolution degraded face image. We
shall demonstrate that deterministic as well as non-deterministic dictionaries that do not involve the use of face image
information but satisfy some form of the Restricted Isometry Property used in CS can achieve face recognition accuracy
levels as good as, if not better than, those achieved by dictionaries proposed in the literature that are learned from face
image databases using elaborate procedures. We shall also elaborate on how this approach helps in fighting crime and
terrorism.
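A minimal sparse-recovery routine over a random Gaussian dictionary (which satisfies the RIP with high probability) illustrates the kind of reconstruction involved; this is generic Orthogonal Matching Pursuit, not the paper's full super-resolution pipeline:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x with y ≈ D x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef          # re-fit, then update residual
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

In the super-resolution setting, y would be the observed low-resolution patch and the sparse code x would be applied to a paired high-resolution dictionary.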
The aim of this article is to give a practical overview of forensic investigation of social networks cases using certain
commercial software packages in a university forensics lab environment. Students have to learn the usefulness of
forensic procedures to ensure evidence collection, evidence preservation, forensic analysis, and reporting. It is
demonstrated how to investigate important data from social network users. Different investigation scenarios are
presented that are well-suited for forensics lab work in a university setting. In particular, we focus on the new version of Belkasoft
Evidence Center and compare it with other well-known tools regarding functionality, usability and capabilities.
This paper presents a new algorithm for human gait recognition based on spatio-temporal body biometric features
using wavelet transforms. The proposed algorithm extracts the gait cycle, based on the width of the bounding box,
from a sequence of silhouette images. Gait recognition relies on feature-level fusion of three feature vectors: a gait
spatio-temporal feature represented by the distances between feet, knees, hands, and shoulders, together with the
height; the binary difference between consecutive silhouette frames for each leg, detected separately using the
Hamming distance; and a vector of statistical parameters captured from the wavelet low-frequency domain. The fused
feature vector is reduced in dimension using linear discriminant analysis, and a nearest-neighbour classifier with a
threshold is used for classification. The threshold is determined experimentally from a set of data captured from the
CASIA database. We demonstrate that our method provides non-traditional identification, using the threshold to
classify outsiders as non-classified members.
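The rejection step can be sketched in Python: a nearest-neighbour lookup over fused feature vectors that reports subjects whose best match exceeds the learned threshold as outsiders. The gallery, feature values, and threshold below are illustrative, not taken from the paper.

```python
import math

def fuse(*vectors):
    """Concatenate feature vectors (spatio-temporal, Hamming, wavelet) into one."""
    fused = []
    for v in vectors:
        fused.extend(v)
    return fused

def classify(probe, gallery, threshold):
    """Nearest neighbour with a rejection threshold: probes whose best
    match is farther than the threshold are reported as outsiders (None)."""
    best_label, best_dist = None, float("inf")
    for label, template in gallery.items():
        d = math.dist(probe, template)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else None

gallery = {"subjectA": [1.0, 2.0, 3.0], "subjectB": [9.0, 9.0, 9.0]}
print(classify([1.1, 2.0, 2.9], gallery, threshold=0.5))  # subjectA
print(classify([5.0, 5.0, 5.0], gallery, threshold=0.5))  # None (outsider)
```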
This paper proposes to integrate biometric-based key generation into an obfuscated interpretation algorithm to protect
authentication application software from illegitimate use or reverse engineering. This is especially necessary for
mCommerce because application programmes on mobile devices, such as smartphones and tablet PCs, are typically
open to misuse by hackers. The scheme proposed in this paper therefore ensures that correct interpretation / execution
of the obfuscated program code of the authentication application requires a valid biometrically generated key of the
actual person to be authenticated, in real time. Without this key, the real semantics of the program cannot be understood
by an attacker even if he/she gains access to the application code. Furthermore, the security provided by this scheme can
be a vital aspect in protecting any application running on mobile devices, which are increasingly used to perform
business/financial or other security-related tasks but are easily lost or stolen. The scheme starts by creating a
personalised copy of any application based on the biometric key generated during an enrolment process with the
authenticator, as well as a nonce created at the time of communication between the client and the authenticator. The
obfuscated code is then shipped to the client's mobile device and combined with biometric data extracted from the
client in real time to form the unlocking key during execution. The novelty of this scheme lies in the close binding of
the application program to the biometric key of the client, making the application unusable by others. Trials and
experimental results on biometric key generation, based on clients' faces, and an implemented scheme prototype, based
on the Android emulator, prove the concept and novelty of the proposed scheme.
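The locking idea can be illustrated with a minimal Python sketch, assuming a hash-based key derivation from the biometric template and nonce, and a simple XOR stream as a stand-in for the paper's obfuscated interpreter. All names and template values below are hypothetical.

```python
import hashlib

def derive_key(biometric_features: bytes, nonce: bytes) -> bytes:
    # Hypothetical key derivation: hash of the binarised biometric features
    # concatenated with the per-session nonce from the authenticator.
    return hashlib.sha256(biometric_features + nonce).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher standing in for the obfuscated interpretation step.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Enrolment: the authenticator "locks" the personalised code with the key.
code = b"AUTHENTICATE(user)"
nonce = b"session-nonce-01"
key = derive_key(b"binarised-face-template", nonce)
locked = xor_bytes(code, key)

# Execution: only the same live biometric reproduces the key and unlocks it.
assert xor_bytes(locked, derive_key(b"binarised-face-template", nonce)) == code
assert xor_bytes(locked, derive_key(b"impostor-template", nonce)) != code
```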
This article deals with an embedded SIP communication server that integrates easily into a computer network, is based
on open-source solutions, and offers an effective defence against the most frequent attack of the present day: Denial of
Service. The article gives a brief introduction to the Bright Embedded Solution for IP Telephony (BESIP) and describes
the most common types of DoS attacks applied to SIP elements of the VoIP infrastructure, including the results of the
defensive mechanism that has been designed.
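As one hedged illustration of such a defensive mechanism (not BESIP's actual implementation, which is not detailed here), a sliding-window rate limiter can drop INVITE floods arriving from a single source:

```python
from collections import defaultdict

class InviteRateLimiter:
    """Sliding-window limiter: drop SIP INVITEs from a source that exceeds
    max_requests within window_s seconds (a common flood signature)."""
    def __init__(self, max_requests=5, window_s=1.0):
        self.max_requests = max_requests
        self.window_s = window_s
        self.history = defaultdict(list)

    def allow(self, src_ip, now):
        # Keep only timestamps still inside the window, then record this one.
        recent = [t for t in self.history[src_ip] if now - t < self.window_s]
        recent.append(now)
        self.history[src_ip] = recent
        return len(recent) <= self.max_requests

limiter = InviteRateLimiter(max_requests=3, window_s=1.0)
verdicts = [limiter.allow("10.0.0.9", t) for t in (0.0, 0.1, 0.2, 0.3)]
print(verdicts)  # [True, True, True, False] -- fourth INVITE in 1 s is dropped
```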
This article discusses a danger alert system created as part of a research project at the Department of
Telecommunications of the Technical University of Ostrava. The aim of the system is to distribute pre-recorded voice
messages in order to alert called parties in danger. This article describes the individual technologies the application
uses for its operation, as well as issues relating to hardware requirements and transfer-line bandwidth load. The article
also describes new algorithms which had to be developed to ensure the reliability of the system. Our focus is on
disaster management: the message, which should be delivered within a specified time span, is typed into the
application and a text-to-speech module transforms it into speech; then a particular scenario or warning area is
selected and the target group is loaded automatically. For this purpose, we have defined an XML format for the
delivery of the phone numbers located in the target area; these numbers are obtained from mobile BTSs (base
transceiver stations). The benefit of this form of communication compared to others is that it uses a phone call, so it is
possible to get feedback on who accepted the message and thereby improve the efficiency of the alert system. Finally,
the list of unanswered calls is exported, and these users can be informed via SMS.
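The XML delivery format itself is not specified in the abstract; a hypothetical sketch of such a document, with invented element names, parsed using Python's standard library, might look like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical schema: the paper's actual element names are not given here.
doc = """<targetGroup area="flood-zone-3">
  <number msisdn="+420601111222"/>
  <number msisdn="+420602333444"/>
</targetGroup>"""

root = ET.fromstring(doc)
numbers = [n.get("msisdn") for n in root.findall("number")]
print(root.get("area"), numbers)
# flood-zone-3 ['+420601111222', '+420602333444']
```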
Various embedded systems, such as unattended ground sensors (UGS), are deployed in dangerous areas where they are
subject to compromise. Since numerous systems contain a network of devices that communicate with each other (often
with commercial off-the-shelf [COTS] radios), an adversary is able to intercept messages between system devices,
which jeopardizes sensitive information transmitted by the system (e.g. the location of system devices). Secret-key
algorithms such as AES are a very common means of encrypting all system messages to a sufficient security level, and
lightweight implementations exist even for very resource-constrained devices. However, all system devices must use
the appropriate key to encrypt and decrypt each other's messages. While traditional public key algorithms (PKAs),
such as RSA and Elliptic Curve Cryptography (ECC), provide a sufficiently secure means of authentication and key
exchange, these traditional PKAs are not suitable for very resource-constrained embedded systems or systems that
contain low-reliability communication links (e.g. mesh networks), especially as the size of the network increases.
Therefore, most UGS and other embedded systems resort to pre-placed keys (PPKs) or other naïve schemes which
greatly reduce the security and effectiveness of the overall cryptographic approach. McQ has teamed with the
Cryptographic Engineering Research Group (CERG) at George Mason University (GMU) to develop an approach
using revolutionary cryptographic techniques that provides both authentication and encryption on resource-constrained
embedded devices, without the burden of large amounts of key distribution or storage.
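As a minimal illustration of key agreement without pre-placed keys (a toy sketch of the general idea, not the CERG/McQ technique, whose details are not given here), two devices can derive shared AES key material with finite-field Diffie-Hellman:

```python
import secrets

# Toy finite-field Diffie-Hellman for illustration only; a real deployment
# would use vetted large parameters or an ECC implementation.
P = 0xFFFFFFFFFFFFFFC5  # 2**64 - 59, prime, but far too small for security
G = 5

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

a_priv, a_pub = keypair()   # device A; only public values cross the radio link
b_priv, b_pub = keypair()   # device B
shared_a = pow(b_pub, a_priv, P)  # A's view of the shared secret
shared_b = pow(a_pub, b_priv, P)  # B's view
assert shared_a == shared_b       # both sides now hold the same key material
```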
Based on current trends in both military and commercial applications, the use of mobile devices (e.g. smartphones and
tablets) is greatly increasing. Several military applications consist of secure peer-to-peer file sharing without a
centralized authority. For these applications, if one or more mobile devices are lost or compromised, sensitive files can
be exposed to adversaries, since COTS devices and operating systems are used. Complete system files cannot be stored
on a single device: after compromising the device, an adversary can attack the data at rest and eventually obtain the
original file. Moreover, after a device is compromised, the remaining peer-to-peer system devices must still be able to
access all system files.
McQ has teamed with the Cryptographic Engineering Research Group at George Mason University to develop a custom
distributed file sharing system to provide a complete solution to the data at rest problem for resource constrained
embedded systems and mobile devices. This innovative approach scales very well to a large number of network devices,
without a single point of failure. We have implemented the approach on representative mobile devices as well as
developed an extensive system simulator to benchmark expected system performance based on detailed modeling of the
network/radio characteristics, CONOPS, and secure distributed file system functionality. The simulator is highly
customizable for the purpose of determining expected system performance for other network topologies and CONOPS.
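One classical way to avoid both a single point of failure and complete files at rest is threshold secret sharing; the toy Shamir sketch below illustrates that general idea (it is not the McQ/CERG system itself, whose construction is not given here):

```python
import random

PRIME = 2**127 - 1  # Mersenne prime field for the toy example

def split(secret, n, k):
    """Shamir (k, n) threshold split: any k of n shares recover the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split(123456789, n=5, k=3)
assert recover(shares[:3]) == 123456789   # any 3 of 5 devices suffice
assert recover(shares[2:5]) == 123456789  # losing two devices loses nothing
```

Fewer than k shares reveal nothing about the secret, which is why compromising one device does not expose the data at rest.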
There are many ways of getting real data about malicious activity in a network. One of them relies on masquerading
monitoring servers as production ones. These servers are called honeypots, and data about attacks on them bring
valuable information about actual attacks and the techniques used by hackers. The article describes a distributed
topology of honeypots, developed with a strong orientation towards monitoring IP telephony traffic. IP telephony
servers can easily be exposed to various types of attacks, and without protection this situation can lead to loss of money
and other unpleasant consequences. Using a distributed topology with honeypots placed in different geographical
locations and networks provides more valuable and independent results. With an automatic system for gathering
information from all honeypots, it is possible to work with all the information at one centralized point. Communication
between the honeypots and the centralized data store uses secure SSH tunnels, and the server communicates only with
authorized honeypots. The centralized server also automatically analyses the data from each honeypot. The results of
this analysis, along with other statistical data about malicious activity, are easily accessible through a built-in web
server. All statistical and analysis reports serve as the information basis for an algorithm which classifies the different
types of VoIP attacks used. The web interface then provides a tool for quick comparison and evaluation of actual
attacks across all monitored networks. The article describes both the honeypot nodes in the distributed architecture,
which monitor suspicious activity, and the methods and algorithms used on the server side to analyse the gathered data.
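A centralized analysis step of this kind might, in much-simplified form, classify aggregated honeypot records by SIP method and volume. The log format and classification rules below are hypothetical, invented for illustration:

```python
from collections import Counter

# Hypothetical record lines, as the centralized server might aggregate them.
log = [
    "honeypot=prague  src=203.0.113.7  method=REGISTER attempts=4120",
    "honeypot=ostrava src=203.0.113.7  method=INVITE   attempts=15",
    "honeypot=prague  src=198.51.100.2 method=OPTIONS  attempts=890",
]

def classify(line):
    """Rough VoIP attack classifier: scanners probe with OPTIONS, password
    guessing hammers REGISTER, toll-fraud probes place INVITEs."""
    fields = dict(f.split("=") for f in line.split())
    method, attempts = fields["method"], int(fields["attempts"])
    if method == "OPTIONS":
        return "scanning"
    if method == "REGISTER" and attempts > 100:
        return "registration brute force"
    return "call attempt / toll-fraud probe"

report = Counter(classify(line) for line in log)
print(report)
```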
Recently, there have been a number of small-scale and hobbyist successes in employing commodity CMOS-based camera sensors for radiation detection. For example, several smartphone applications initially developed for use in areas near the Fukushima nuclear disaster are capable of detecting radiation using a cell phone camera, provided opaque tape is placed over the lens. In all current useful implementations, it is required that the sensor not be exposed to visible light. We seek to build a system that does not have this restriction. While building such a system would require sophisticated signal processing, it would nevertheless provide great benefits. In addition to fulfilling their primary function of image capture, cameras would also be able to detect unknown radiation sources even when the danger is considered to be low or non-existent. By experimentally profiling the image artifacts generated by gamma ray and β particle impacts, algorithms are developed to identify the unique features of radiation exposure, while discarding optical interaction and thermal noise effects. Preliminary results focus on achieving this goal in a laboratory setting, without regard to integration time or computational complexity. However, future work will seek to address these additional issues.
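A first-cut version of the artifact detection can be sketched as simple thresholding against the sensor's dark-frame statistics. This toy deliberately ignores the hard part the abstract describes, namely separating radiation features from optical interaction and thermal noise; the frame values and threshold are illustrative.

```python
def radiation_hits(frame, dark_mean, threshold=50):
    """Flag pixels far above the sensor's dark-frame mean as candidate
    particle impacts. Real pipelines must additionally reject thermal
    noise and clusters caused by visible light."""
    return [(r, c)
            for r, row in enumerate(frame)
            for c, value in enumerate(row)
            if value - dark_mean > threshold]

frame = [[3, 2, 4],
         [2, 200, 3],   # one saturated pixel: a candidate gamma/beta impact
         [5, 3, 2]]
print(radiation_hits(frame, dark_mean=3))  # [(1, 1)]
```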
In this paper, we introduce a new spatial domain color contrast enhancement algorithm based on the three dimensional
alpha weighted quadratic filter (3DAWQF). The goal of this work is to utilize the characteristics of the nonlinear filter to
enhance image contrast while recovering the color information. For images with less than desirable illumination, a
modified Naka-Rushton function is proposed to adjust the underexposed or overexposed intensities in the image. We
also present a new image contrast measure called the Root Mean Enhancement (RME) to model Root Mean Square
(RMS) contrast in image sub-blocks. A color RME contrast measure CRME is also proposed based on the RME contrast
in the RGB color sub-cubes. The new measures help choose the optimal operating parameters for enhancement
algorithms, thus improving the practicality for using quadratic filters in image processing applications. We demonstrate
the effectiveness of the proposed methods on a variety of images. Experimental results show that the 3DAWQF can
enhance image contrast and color efficiently and effectively. Comparisons with existing state-of-the-art algorithms
are also presented.
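The abstract does not give the RME formula; one plausible reading, block-wise RMS contrast averaged over the image, can be sketched as follows (block size and image values are illustrative assumptions):

```python
import math

def block_rms_contrast(block):
    # RMS contrast of one sub-block: standard deviation over mean intensity.
    mean = sum(block) / len(block)
    if mean == 0:
        return 0.0
    return math.sqrt(sum((p - mean) ** 2 for p in block) / len(block)) / mean

def rme(image, block_size=2):
    """Average RMS contrast over non-overlapping sub-blocks -- a hypothetical
    reading of the Root Mean Enhancement measure described above."""
    h, w = len(image), len(image[0])
    scores = []
    for r in range(0, h - block_size + 1, block_size):
        for c in range(0, w - block_size + 1, block_size):
            block = [image[r + i][c + j]
                     for i in range(block_size) for j in range(block_size)]
            scores.append(block_rms_contrast(block))
    return sum(scores) / len(scores)

flat = [[100] * 4 for _ in range(4)]
textured = [[50, 150, 50, 150], [150, 50, 150, 50]] * 2
assert rme(flat) == 0.0
assert rme(textured) > rme(flat)  # higher contrast scores higher
```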
The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated
images. To utilize the information in these images on site, image retrieval systems are becoming more and more popular
for searching for similar objects in one's own image database. As the computational performance and memory capacity
of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for
example, if the images are represented with global image features or if the search is done using EXIF or textual
metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database,
or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval
backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture
working with local features. On the server side, the scalability to large image databases is addressed with the popular
bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user
interface that presents the most similar images in the database, highlighting the visual information they share with the
query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for
mobile content-based image retrieval.
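The server-side bag-of-words step can be sketched in miniature: local descriptors are quantized against a visual vocabulary, and images are ranked by histogram similarity. The two-word codebook, descriptors, and database histograms below are toy values, not from the paper.

```python
import math

def bow_histogram(descriptors, vocabulary):
    """Assign each local descriptor to its nearest visual word and count."""
    hist = [0] * len(vocabulary)
    for d in descriptors:
        nearest = min(range(len(vocabulary)),
                      key=lambda k: math.dist(d, vocabulary[k]))
        hist[nearest] += 1
    return hist

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

vocabulary = [[0.0, 0.0], [1.0, 1.0]]            # toy 2-word codebook
query = bow_histogram([[0.1, 0.0], [0.9, 1.1]], vocabulary)
database = {"img1": [1, 1], "img2": [5, 0]}      # precomputed histograms
best = max(database, key=lambda name: cosine(query, database[name]))
print(best)  # img1
```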
In this paper, a new, simple algorithm for the enhancement of thermal and infrared images is introduced. Optimized
stretching, filtering, and color transformation are the major parts of the proposed algorithm. The performance
evaluation shows that the presented enhancement algorithm yields better results, with a more natural appearance, than
the traditional methods.
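The stretching stage can be illustrated with a common percentile contrast stretch, used here as a generic stand-in since the paper's optimized variant is not described; the frame values and percentiles are illustrative.

```python
def stretch(image, low_pct=2, high_pct=98):
    """Percentile contrast stretch: clip the coldest/hottest tails and map
    the remaining range to 0..255, a common first step for thermal frames."""
    flat = sorted(p for row in image for p in row)
    lo = flat[int(len(flat) * low_pct / 100)]
    hi = flat[min(int(len(flat) * high_pct / 100), len(flat) - 1)]
    span = max(hi - lo, 1)
    return [[min(255, max(0, round(255 * (p - lo) / span))) for p in row]
            for row in image]

frame = [[30, 31, 32], [33, 34, 120]]   # low-contrast thermal readings
out = stretch(frame, low_pct=0, high_pct=100)
assert min(min(r) for r in out) == 0    # full output range is used
assert max(max(r) for r in out) == 255
```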
The goal of this paper is to introduce a new fuzzy local iterative algorithm that matches the local color statistics of a
reference image to the distribution of the input image. Reference images are considered to have a desirable color
distribution for a specific application. The proposed algorithm consists of three stages: (1) image clustering by fuzzy
c-means, (2) cluster matching, and (3) color distribution transfer between the matching clusters. First, a color similarity
measure is used to segment image regions in the reference and input images. Second, we match the most similar
clusters in order to avoid the appearance of undesirable artifacts due to differences in the color dynamic range. Third,
the color characteristics of the reference clusters are transferred to the equivalent clusters in the input image by applying
an iterative process. The new image normalization tool has several advantages: it is computationally efficient, and it has
the potential to substantially increase the accuracy of segmentation and classification systems based on the analysis of
color features. Computer simulations indicate that the iterative and gradual color matching procedure is able to
standardize the appearance of color images according to a desirable color distribution and reduce the number of
artifacts appearing in the resulting image.
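Stage (3) can be illustrated per channel and per matched cluster pair by aligning means and standard deviations. The sketch below shows a single non-iterative step of such a transfer, with toy channel values; the paper's actual iterative procedure is not reproduced here.

```python
import statistics

def transfer_channel(input_vals, reference_vals):
    """Shift and scale one color channel of an input cluster so its mean and
    standard deviation match the corresponding reference cluster (a single
    step of the stage-3 transfer, shown non-iteratively for clarity)."""
    mu_in, mu_ref = statistics.mean(input_vals), statistics.mean(reference_vals)
    sd_in = statistics.pstdev(input_vals) or 1.0   # guard against flat clusters
    sd_ref = statistics.pstdev(reference_vals)
    return [(v - mu_in) * sd_ref / sd_in + mu_ref for v in input_vals]

input_reds = [10, 20, 30]         # dull cluster in the input image
reference_reds = [100, 150, 200]  # matching cluster in the reference image
out = transfer_channel(input_reds, reference_reds)
assert round(statistics.mean(out)) == 150   # mean now matches the reference
```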
The aim of this paper is to show the usefulness of modern forensic software tools for processing large-scale digital
investigations. In particular, we focus on the new version of Nuix 4.2 and compare it with AccessData FTK 4.2, X-Ways
Forensics 16.9, and Guidance EnCase Forensic 7 regarding performance, functionality, usability, and capability. We
show how these software tools work with large forensic images and how capable they are of examining complex and
big-data scenarios.