We propose an approach for the automated diagnosis of celiac disease (CD) and colonic polyps (CP) based on applying Fisher encoding to the activations of convolutional layers. In our experiments, three different convolutional neural network (CNN) architectures (AlexNet, VGG-f, and VGG-16) are applied to three endoscopic image databases (one CD database and two CP databases). For each network architecture, we perform experiments using a version of the net that is pretrained on the ImageNet database, as well as a version of the net that is trained on a specific endoscopic image database. The Fisher representations of convolutional layer activations are classified using support vector machines. Additionally, experiments are performed by concatenating the Fisher representations of several layers to combine the information of these layers. We will show that our proposed CNN-Fisher approach clearly outperforms other CNN- and non-CNN-based approaches and that our approach requires no training on the target dataset, which results in substantial time savings compared with other CNN-based approaches.
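As a rough illustration of the encoding pipeline described above, the following sketch (not the authors' implementation; the GMM size K and the layer shapes are assumptions) treats the spatial positions of a convolutional feature map as local descriptors, computes an improved Fisher vector from a diagonal-covariance GMM, and feeds the result to a linear SVM.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

def fisher_vector(local_feats, gmm):
    """Improved Fisher vector: gradients w.r.t. GMM means and variances,
    followed by power and L2 normalisation."""
    X = np.atleast_2d(local_feats)                    # T x D local descriptors
    T, D = X.shape
    post = gmm.predict_proba(X)                       # T x K posteriors
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
    diff = (X[:, None, :] - mu[None, :, :]) / np.sqrt(var)[None, :, :]   # T x K x D
    g_mu = (post[:, :, None] * diff).sum(axis=0) / (T * np.sqrt(w)[:, None])
    g_var = (post[:, :, None] * (diff ** 2 - 1)).sum(axis=0) / (T * np.sqrt(2 * w)[:, None])
    fv = np.hstack([g_mu.ravel(), g_var.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))            # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)          # L2 normalisation

def encode_dataset(conv_maps, K=16):
    """conv_maps: list of (H, W, C) activation tensors, one per image (assumed given).
    Each spatial position of a feature map is treated as one local descriptor."""
    descs = [m.reshape(-1, m.shape[-1]) for m in conv_maps]
    gmm = GaussianMixture(n_components=K, covariance_type='diag', random_state=0)
    gmm.fit(np.vstack(descs))
    return np.array([fisher_vector(d, gmm) for d in descs]), gmm

# Usage sketch: X, gmm = encode_dataset(train_maps); clf = LinearSVC(C=1.0).fit(X, y_train)
# Combining several layers amounts to np.hstack of the per-layer Fisher vectors.
```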
Current algorithms for automated processing of Vickers hardness testing images are unsuitable for a broad range of images taken in industrial environments, because such images show great variations in the Vickers indentation as well as in the specimen surface. The authors present a three-stage multiresolution template matching algorithm that shows excellent results even for such challenging images. The algorithm is compared to known algorithms from the literature on two significant indentation image databases with 150 and 216 highly varying images, respectively. The applicability of the proposed algorithm is further illustrated by its competitive runtime performance.
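A generic coarse-to-fine template matching sketch in the spirit of a multiresolution search is given below; it is not the authors' three-stage algorithm, and the pyramid depth and search margin are illustrative assumptions.

```python
import cv2
import numpy as np

def pyramid_match(image, template, levels=3):
    """Locate `template` in `image` by matching at the coarsest pyramid
    level and refining the position at each finer level."""
    imgs, tpls = [image], [template]
    for _ in range(levels - 1):
        imgs.append(cv2.pyrDown(imgs[-1]))
        tpls.append(cv2.pyrDown(tpls[-1]))

    # full search at the coarsest level
    res = cv2.matchTemplate(imgs[-1], tpls[-1], cv2.TM_CCOEFF_NORMED)
    _, _, _, loc = cv2.minMaxLoc(res)                 # loc = (x, y) of best match

    # refine in a small window around the upscaled position
    for lvl in range(levels - 2, -1, -1):
        x, y = loc[0] * 2, loc[1] * 2
        th, tw = tpls[lvl].shape[:2]
        pad = 8                                       # search margin (assumption)
        y0, x0 = max(0, y - pad), max(0, x - pad)
        window = imgs[lvl][y0:y0 + th + 2 * pad, x0:x0 + tw + 2 * pad]
        if window.shape[0] < th or window.shape[1] < tw:
            window, x0, y0 = imgs[lvl], 0, 0          # fall back to a full search
        res = cv2.matchTemplate(window, tpls[lvl], cv2.TM_CCOEFF_NORMED)
        _, _, _, wloc = cv2.minMaxLoc(res)
        loc = (x0 + wloc[0], y0 + wloc[1])
    return loc  # top-left corner of the detected indentation template
```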
A large variety of computationally lightweight functions used for assessing image sharpness in the spatial domain is evaluated for application in a passive autofocus system in the context of microindentation-based Vickers hardness testing. Alternatively, the file size of compressed JPEG images is proposed to determine image sharpness. The functions are evaluated on a significant dataset of microindentation images with respect to their properties required for focus search and sharpness assessment, their robustness to downsampling the image data, and their computational demand. Experiments suggest that of all spatial domain techniques considered, the simple Brenner autofocus function is the best compromise between accuracy and computational effort, while JPEG file size is a versatile solution when the application context allows.
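The two measures highlighted above are simple to state; the sketch below shows the Brenner focus function and the JPEG-file-size proxy (the grayscale input and the quality setting are assumptions, not parameters from the paper).

```python
import io
import numpy as np
from PIL import Image

def brenner(gray):
    """Brenner focus measure: sum of squared differences between pixels
    two positions apart along the horizontal axis."""
    g = gray.astype(np.float64)
    d = g[:, 2:] - g[:, :-2]
    return float(np.sum(d * d))

def jpeg_size(gray, quality=75):
    """File size of the JPEG-compressed image as a sharpness proxy:
    sharper images compress less well and yield larger files."""
    buf = io.BytesIO()
    Image.fromarray(gray.astype(np.uint8)).save(buf, format='JPEG', quality=quality)
    return len(buf.getvalue())

# Usage sketch: pick the focus position whose image maximises either measure, e.g.
# best = max(focus_stack, key=brenner)
```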
Current algorithms for automated indentation measurement in the context of Vickers microindentation hardness testing suffer from a lack of robustness with respect to entirely missed indentation corner points when applied to real-world data sets. Four original algorithms are proposed, discussed, and evaluated on a significant data set of indentation images. Three of the four exhibit accuracy close to human-operated hardness testing, which was conducted as a reference technique.
A large variety of autofocus functions used for assessing image sharpness is evaluated for the application in
a passive autofocus system in the context of microindentation-based Vickers hardness testing. The functions
are evaluated on a significant dataset of microindentation images with respect to the accuracy of sharpness
assessment, their robustness to downsampling the image data, and their computational demand. Experiments
suggest that the simple Brenner autofocus function is the best compromise between accuracy and computational
effort in the considered application context.
The impact of using different wavelet packet subband structures in JPEG2000 on the matching accuracy of a
fingerprint recognition system is investigated. In particular, we relate rate-distortion performance as measured
in PSNR to the matching scores as obtained by the recognition system. Employing wavelet packets instead of
the dyadic wavelet transform turns out to be advantageous only in the case of low bitrates (i.e., high compression ratios). For such settings, the good performance of the WSQ structure is also confirmed in JPEG2000; in particular, we obtain better recognition accuracy with the WSQ structure than with a rate-distortion optimising subband selection approach.
KEYWORDS: 3D modeling, Visualization, Computer security, Optical spheres, Image encryption, 3D acquisition, 3D image processing, Distance measurement, Data modeling, Glasses
Computationally efficient encryption techniques for polygonal mesh data are proposed which exploit the prioritization
of data in progressive meshes. Significant reduction of computational demand can be achieved as
compared to full encryption, but it turns out that different techniques are required to support both privacy-focussed
applications and try-and-buy scenarios.
The impact of using different lossy compression algorithms on the recognition accuracy of iris recognition systems
is investigated. In particular, we consider the general purpose still image compression algorithms JPEG,
JPEG2000, SPIHT, and PRVQ, and assess their impact on the ROC of two different iris recognition systems when
applying compression to iris sample data.
KEYWORDS: Digital watermarking, Video, Scalable video coding, Sensors, Video coding, Temporal resolution, Multimedia, Distortion, Denoising, 3D modeling
This paper pulls together recent advances in scalable video coding and protection and investigates the impact on watermarking. After surveying the literature on the protection of scalable video via cryptographic and watermarking means, the robustness of a simple wavelet-based video watermarking scheme against combined bit stream adaptations performed on JSVM (the H.264/MPEG-4 AVC scalable video coding extension) and MC-EZBC scalable video bit streams is examined.
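For orientation, the following sketch implements a generic additive spread-spectrum watermark in the wavelet detail coefficients with correlation-based detection; it illustrates the kind of simple wavelet-based scheme referred to above rather than the exact scheme that was evaluated (embedding strength, level, wavelet, and the integer-seed key are assumptions).

```python
import numpy as np
import pywt

def embed(frame, key, alpha=5.0, level=2, wavelet='haar'):
    """frame: 2-D grayscale array (one video frame); key: integer seed."""
    coeffs = pywt.wavedec2(frame.astype(np.float64), wavelet, level=level)
    cH, cV, cD = coeffs[1]                            # detail bands at the coarsest level
    rng = np.random.default_rng(key)
    wm = rng.choice([-1.0, 1.0], size=cH.shape)       # pseudo-random +/-1 pattern
    coeffs[1] = (cH + alpha * wm, cV, cD)
    return pywt.waverec2(coeffs, wavelet), wm

def detect(frame, wm, level=2, wavelet='haar'):
    coeffs = pywt.wavedec2(frame.astype(np.float64), wavelet, level=level)
    cH = coeffs[1][0]
    cH = cH[:wm.shape[0], :wm.shape[1]]               # guard against size round-off
    return float(np.corrcoef(cH.ravel(), wm.ravel())[0, 1])

# A detection correlation clearly above the value obtained with a wrong key
# indicates that the watermark survived the bit stream adaptation.
```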
We investigate the potential of foot biometric features
based on geometry, shape, and texture and present algorithms for a
prototype rotation invariant verification system. An introduction to
origins and fields of application for footprint-based personal recognition
is accompanied by a comparison with traditional hand biometry
systems. Image enhancement and feature extraction steps emphasizing
specific characteristics of foot geometry and their
permanence and distinctiveness properties, respectively, are discussed.
Collectability and universality issues are considered as well.
A visualization of various test results comparing the discriminative power of foot shape and texture is given.
scenarios is pointed out, and a summary of results is presented.
In this paper we evaluate a lightweight encryption scheme for JPEG2000 which relies on a secret transform
domain constructed with anisotropic wavelet packets. The pseudo-random selection of the bases used for transformation
takes compression performance into account, and discards a number of possible bases which lead to
poor compression performance. Our main focus in this paper is to answer the important question of how many
bases remain to construct the keyspace. In order to determine the trade-off between compression performance
and keyspace size, we compare the approach to a method that selects bases from the whole set of anisotropic
wavelet packet bases following a pseudo-random uniform distribution. The compression performance of both
approaches is compared to get an estimate of the range of compression quality in the set of all bases. We then
analytically investigate the number of bases that are discarded for the sake of retaining compression performance
in the compression-oriented approach as compared to selection by uniform distribution. Finally, the question of
keyspace quality is addressed, i.e. how much similarity between the basis used for analysis and the basis used for
synthesis is tolerable from a security point of view and how this affects the lightweight encryption scheme.
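To give a flavour of the counting involved, the sketch below evaluates the standard recursion for the number of isotropic 2-D wavelet packet bases of bounded depth; the anisotropic case analysed in the paper additionally has to handle horizontal/vertical splits and equivalent tilings, so this only illustrates the principle and is not the paper's formula.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def isotropic_bases(depth):
    """Number of bases of a 2-D wavelet packet quadtree with maximum depth `depth`:
    each subband is either kept as a leaf or split into four children."""
    if depth == 0:
        return 1
    return 1 + isotropic_bases(depth - 1) ** 4

for d in range(1, 6):
    # bit_length() gives a rough "bits of key material" reading of the count
    print(d, isotropic_bases(d).bit_length(), "bits of choice")
```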
KEYWORDS: Digital watermarking, Detection and tracking algorithms, Signal detection, Discrete wavelet transforms, Composites, Wavelets, Image visualization, Multimedia, Information security, Quantization
Watermark interference is a threat to reliable detection in multiple re-watermarking scenarios. The impact of
using disjoint frequency bands and/or different embedding domains in limiting those interferences is evaluated
and compared. Employing disjoint frequency bands for embedding different watermarks turns out to be more
effective and is capable of maintaining reasonable detection correlation in multiple embedding applications.
KEYWORDS: Video, Video compression, Digital signal processing, Video coding, Signal processing, Video processing, Video surveillance, Data processing, Wavelets, Computer programming
In this paper, we discuss a hardware-based low-complexity JPEG 2000 video coding system. The hardware system is based on a software simulation system in which temporal redundancy is exploited by coding differential frames arranged in an adaptive GOP structure; the GOP structure itself is determined by a statistical analysis of the differential frames. We present a hardware video coding architecture that implements this inter-frame coding system on a digital signal processor (DSP). The system consists mainly of a microprocessor (ADSP-BF533 Blackfin processor) and a JPEG 2000 chip (ADV202).
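A purely software-level sketch of the inter-frame idea is given below: code each frame either as an intra frame or as a (shifted) differential frame, and start a new GOP when the differential frame carries too much energy. The threshold heuristic and the use of Pillow's JPEG 2000 encoder are assumptions for illustration only.

```python
import io
import numpy as np
from PIL import Image

def encode_j2k(arr):
    """Encode one uint8 grayscale frame with Pillow's JPEG 2000 plugin."""
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format='JPEG2000')
    return buf.getvalue()

def code_sequence(frames, intra_threshold=20.0):
    """frames: iterable of uint8 grayscale arrays.
    Returns a list of (frame_type, codestream) pairs."""
    stream, prev = [], None
    for f in frames:
        if prev is None:
            stream.append(('I', encode_j2k(f)))               # start of a GOP
        else:
            diff = f.astype(np.int16) - prev.astype(np.int16)
            if diff.std() > intra_threshold:                  # scene-change heuristic
                stream.append(('I', encode_j2k(f)))
            else:
                # shift the signed difference into the uint8 range for coding
                stream.append(('D', encode_j2k(((diff // 2) + 128).astype(np.uint8))))
        prev = f
    return stream
```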
The impact of using different lossy compression algorithms on the matching accuracy of fingerprint and face
recognition systems is investigated. In particular, we relate rate-distortion performance as measured in PSNR
to the matching scores as obtained by the recognition systems. JPEG2000 and SPIHT are correctly predicted
by PSNR to be the most suited compression algorithms to be used in fingerprint and face recognition systems.
Fractal compression is identified as least suited for use in the investigated recognition systems, although PSNR suggests that JPEG should deliver worse recognition results in the case of face imagery. JPEG compression performs surprisingly well at high bitrates in face recognition systems, given the low PSNR performance observed.
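Rate-distortion points of the kind related to the matching scores above can be produced with a loop like the following sketch (JPEG via Pillow, illustrative quality settings); the same loop, with the codec swapped out, would feed the decompressed samples to the recognition systems.

```python
import io
import numpy as np
from PIL import Image

def psnr(a, b):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def rate_distortion(gray, qualities=(90, 70, 50, 30, 10)):
    """gray: 8-bit grayscale sample image as a numpy array."""
    points = []
    for q in qualities:
        buf = io.BytesIO()
        Image.fromarray(gray).save(buf, format='JPEG', quality=q)
        rec = np.asarray(Image.open(io.BytesIO(buf.getvalue())).convert('L'))
        bpp = 8.0 * len(buf.getvalue()) / gray.size       # bits per pixel
        points.append((q, bpp, psnr(gray, rec)))
    return points
```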
We investigate and compare combined compression-encryption schemes. We assess the respective security, and we show how encryption affects the image coding efficiency. The techniques employ the wavelet-based compression algorithms JPEG2000 and SPIHT, and we randomly permute and rotate blocks of wavelet coefficients in different wavelet subbands to encrypt image data within the compression pipeline. We identify weak points of the proposed encryption techniques and highlight possible attacks. The investigated methods make it possible to trade off security against compression performance. The results also have interesting implications with respect to the significance of the zerotree hypothesis as stated in the context of compression schemes.
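The basic mechanism can be sketched as follows: blocks of detail coefficients are permuted within their subband using a keyed PRNG before entropy coding. Block size, wavelet, and decomposition depth are assumptions, and the rotation component as well as the integration into the JPEG2000/SPIHT pipelines are omitted.

```python
import numpy as np
import pywt

def permute_subband(band, key, block=8, inverse=False):
    """Permute non-overlapping `block` x `block` tiles of one subband with a
    keyed pseudo-random permutation; `inverse=True` undoes the permutation."""
    h, w = (band.shape[0] // block) * block, (band.shape[1] // block) * block
    tiles = [band[y:y + block, x:x + block].copy()
             for y in range(0, h, block) for x in range(0, w, block)]
    perm = np.random.default_rng(key).permutation(len(tiles))
    out = band.copy()
    cols = w // block
    for i, p in enumerate(perm):
        src, dst = (p, i) if inverse else (i, p)
        y, x = divmod(int(dst), cols)
        out[y * block:(y + 1) * block, x * block:(x + 1) * block] = tiles[src]
    return out

def encrypt_coefficients(img, key, wavelet='db2', level=3):
    """Permute all detail subbands; the resulting coefficients would then be
    handed to the entropy coder of the compression pipeline."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    coeffs[1:] = [tuple(permute_subband(b, key) for b in detail) for detail in coeffs[1:]]
    return coeffs
```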
This paper presents a quantitative evaluation of a wavelet-based, robust authentication hashing algorithm. Based on the results of a series of robustness and tampering sensitivity tests, we describe possible shortcomings and propose various modifications to the algorithm to improve its performance. The second part of the paper describes an attack against the scheme: it allows an attacker to modify a tampered image such that its hash value closely matches the hash value of the original.
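For context, a generic wavelet-based robust hash (not the specific algorithm analysed in the paper) can be sketched as follows: hash bits are derived from coarse approximation coefficients, so moderate compression or filtering leaves most bits intact while genuine tampering flips many of them.

```python
import numpy as np
import pywt

def robust_hash(gray, wavelet='haar', level=4):
    """One bit per coarse approximation coefficient: above or below the median."""
    approx = pywt.wavedec2(gray.astype(np.float64), wavelet, level=level)[0]
    bits = (approx > np.median(approx)).ravel()
    return np.packbits(bits.astype(np.uint8))

def hash_distance(h1, h2):
    """Normalised Hamming distance between two hashes of equal length."""
    diff = np.unpackbits(np.bitwise_xor(h1, h2))
    return diff.sum() / diff.size

# Images whose distance falls below a threshold are declared authentic; the
# attack described above aims to push a tampered image below that threshold.
```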
In this paper, we discuss how to enhance the performance of the MPEG-4 Visual Texture Coding (VTC) algorithm. Runtime analysis reveals the major coding stages and exposes a weak point within the vertical filtering stage. A suitable cache-access strategy is introduced, which resolves this problem almost entirely. Additionally, we perform the DWT and the zerotree coding stage in parallel using OpenMP. The improved sequential version of the vertical filtering also improves the parallel efficiency significantly. We present results from two different multiprocessor platforms (SGI Power Challenge: 20 IP25 RISC CPUs running at 195 MHz; SGI Origin3800: 128 MIPS RISC R12000 CPUs running at 400 MHz).
In this paper we compare the coding performance of the JPEG2000 still image coding standard with the INTRA coding method used in the H.26L project. We discuss the basic techniques of both coding schemes and show the effect of improved I-frame coding on the overall performance of an H.26L-based system. Both coding efficiency and runtime behaviour are considered in our comparison.
KEYWORDS: Digital watermarking, Wavelets, JPEG2000, Image filtering, Image compression, Image quality, Information security, Resistance, Signal to noise ratio, Internet
Wavelet filters can be parametrized to create an entire family of different wavelet filters. We discuss wavelet filter parametrization as a means to add security to wavelet-based watermarking schemes. We analyze the influence of the number of filter parameters and use non-stationary multi-resolution decomposition where different wavelet filters are used at different levels of the decomposition. Using JPEG and JPEG2000 compression, we assess the normalized correlation and Peak Signal to Noise Ratio (PSNR) behavior of the watermarks. The security against unauthorized detection is also investigated. We conclude that the proposed systems show good robustness against compression and that, depending on the resolution we choose for the parameters, we get between 2^99 and 2^185 possible keys.
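As an illustration of filter parametrization, the sketch below generates the well-known one-parameter (Pollen-type) family of orthonormal 4-tap filters; it demonstrates the principle of deriving secret filters from key parameters, not the exact parametrization used in the paper.

```python
import numpy as np

def lowpass_filter(theta):
    """One-parameter family of orthonormal 4-tap low-pass filters."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([1 + c - s, 1 + c + s, 1 - c + s, 1 - c - s]) / (2 * np.sqrt(2))

def highpass_filter(theta):
    """Quadrature mirror filter: reversed low-pass with alternating signs."""
    h = lowpass_filter(theta)
    return h[::-1] * np.array([1.0, -1.0, 1.0, -1.0])

# Sanity checks: unit norm, orthogonality to its double shift, DC gain sqrt(2).
h = lowpass_filter(0.7)
assert np.isclose(h @ h, 1.0)
assert np.isclose(h[0] * h[2] + h[1] * h[3], 0.0)
assert np.isclose(h.sum(), np.sqrt(2))
# theta = pi/3 recovers the Daubechies-4 coefficients (up to reversal).
```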
In this paper, we take a close look at the runtime performance of the intra-component transform employed in the reference implementations of the JPEG2000 image coding standard. Typically, wavelet lifting is used to obtain a wavelet decomposition of the source image in a computationally efficient way. However, so far no attention has been paid to the impact of the CPU's memory cache on the overall performance. We propose two simple techniques that dramatically reduce the number of cache misses and cut column filtering runtime by a factor of 10. Theoretical estimates as well as experimental results on a number of hardware platforms show the effectiveness of our approach.
One approach to transformation-based compression is the Matching Pursuit Projection (MPP). MPP or variants of it have been suggested for designing image compression and video compression algorithms and have been among the top performing submissions within the MPEG-4 standardization process. In the case of still image coding, the MPP approach comes at the price of enormous computational complexity. In this work we discuss sequential as well as parallel speedup techniques for an MPP image coder that is competitive in terms of rate-distortion performance.
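The core greedy loop, and the reason for the computational burden, can be sketched as follows (the dictionary and the number of selected atoms are illustrative assumptions): every iteration correlates the residual with all atoms.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=50):
    """dictionary: (n_atoms_total, signal_len) array of unit-norm atoms.
    Returns the selected (index, coefficient) pairs and the final residual."""
    residual = signal.astype(np.float64).copy()
    selection = []
    for _ in range(n_atoms):
        corr = dictionary @ residual               # inner product with every atom
        k = int(np.argmax(np.abs(corr)))           # best matching atom
        coef = float(corr[k])
        residual -= coef * dictionary[k]           # project out the chosen atom
        selection.append((k, coef))
    return selection, residual

# In an image coder the "signal" is an image block and the (index, coefficient)
# pairs are quantised and entropy coded; parallel speedup comes from splitting
# the atom search (and the blocks) across processors.
```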
In this paper, we will provide an overview of the wavelet-based watermarking techniques available today. We will see how previously proposed methods such as spread-spectrum watermarking have been applied to the wavelet transform domain in a variety of ways, and how new concepts such as the multi-resolution property of the wavelet image decomposition can be exploited. One of the main advantages of watermarking in the wavelet domain is its compatibility with the upcoming image coding standard, JPEG2000. Although many wavelet-domain watermarking techniques have been proposed, only a few fit the independent block coding approach of JPEG2000. We will illustrate how different watermarking techniques relate to image compression and examine the robustness of selected watermarking algorithms against image compression.
This paper deals with different aspects of wavelet packet (WP) based video coding. In introductory experiments we show that WP decomposition, and specifically WP decomposition in conjunction with the best basis algorithm, is superior in terms of quality to the standard discrete wavelet transform but shows prohibitive computational demands (especially for real-time applications). The main contribution of our work is therefore the examination of three parallelization methods for WP based video coding. The two inter-frame parallelization methods (group-of-pictures parallelization and frame-by-frame parallelization) exploit the properties of a video stream (full independence between GOPs and rather high independence between single frames) better than intra-frame parallelization, but show a higher demand in terms of memory and do not respect the frame order defined by the input video stream. We highlight the advantages and drawbacks of all three methods and show experimental results obtained on a Siemens hpcLine cluster and a Cray T3E.
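In modern Python terms, the GOP-level variant can be sketched as below: GOPs are fully independent, so they are simply farmed out to worker processes. `encode_gop` stands in for the expensive wavelet packet/best basis coder and is an assumption of this sketch.

```python
from multiprocessing import Pool

def encode_gop(gop_frames):
    # placeholder for the wavelet packet / best basis coding of one GOP
    return b''.join(frame.tobytes() for frame in gop_frames)

def parallel_encode(frames, gop_size=8, workers=4):
    """Split the frame list into GOPs and encode them in parallel."""
    gops = [frames[i:i + gop_size] for i in range(0, len(frames), gop_size)]
    with Pool(processes=workers) as pool:
        coded = pool.map(encode_gop, gops)         # one GOP per task, order preserved
    return coded
```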
KEYWORDS: Linear filtering, Wavelets, Image filtering, Data communications, Fast wavelet transforms, Image analysis, Algorithm development, Convolution, Wavelet transforms, Digital filtering
The 'à trous' algorithm represents a discrete approach to the classical continuous wavelet transform. Similar to the fast wavelet transform, the input signal is analyzed using the coefficients of a properly chosen low-pass filter, but in contrast to the latter, no concluding decimation step follows. Examples of practical applications can be found in the field of cosmology for studying the formation of large-scale structures of the Universe. In this paper we develop parallel algorithms on different MIMD architectures for the 2D 'à trous' decomposition. We implement the algorithm on several distributed memory architectures using the PVM paradigm and on an SGI POWERChallenge using a parallel version of the C programming language. Finally, we discuss experimental results obtained on both platforms.
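A sketch of a sequential 2-D 'à trous' decomposition with the B3-spline kernel common in the astronomy applications mentioned above: at each scale the low-pass filter is dilated by inserting zeros (the "holes") and no decimation takes place, so every scale retains the size of the input. The kernel choice and the number of scales are assumptions; the parallel MIMD versions are not shown.

```python
import numpy as np
from scipy.ndimage import convolve1d

B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def a_trous(image, scales=4):
    """Returns [w_1, ..., w_scales, c_scales]: detail planes plus the final
    smooth plane; their sum reproduces the input exactly."""
    c = image.astype(np.float64)
    planes = []
    for j in range(scales):
        kernel = np.zeros(4 * 2 ** j + 1)
        kernel[:: 2 ** j] = B3                      # dilate the filter by 2^j
        smooth = convolve1d(convolve1d(c, kernel, axis=0, mode='mirror'),
                            kernel, axis=1, mode='mirror')
        planes.append(c - smooth)                   # wavelet plane at scale j
        c = smooth
    planes.append(c)
    return planes
```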
In this work we discuss a technique denoted localized domain-pools in the context of parallelization of the encoding phase of fractal image compression. Performance problems occurring on distributed memory MIMD architectures may be resolved using this technique.
In this work we discuss several approaches for designing fractal quantizers in the context of hybrid wavelet-fractal image compression algorithms. Moreover, different subband structures are compared concerning their suitability for subsequent fractal quantization.
In this work we apply techniques from classical fractal still-image coding to block-matching motion compensation algorithms for digital video compression. In particular, the method of adapting the gray values in image blocks of the current frame to those in blocks of the reference frame shows promising performance.
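The gray-value adaptation amounts to a per-block least-squares scale/offset fit before the matching error is evaluated, as in fractal coding; the sketch below shows this inside an exhaustive block-matching search (block size and search radius are assumptions).

```python
import numpy as np

def adapted_error(cur, ref):
    """Best scale s and offset o for approximating cur by s*ref + o,
    plus the resulting squared error."""
    r, c = ref.ravel().astype(np.float64), cur.ravel().astype(np.float64)
    var = r.var()
    s = 0.0 if var == 0 else float(np.cov(r, c, bias=True)[0, 1] / var)
    o = float(c.mean() - s * r.mean())
    return float(np.sum((s * r + o - c) ** 2)), s, o

def motion_search(cur_block, ref_frame, y, x, radius=7):
    """Full search around (y, x); returns (error, dy, dx, scale, offset)."""
    best = (np.inf, 0, 0, 1.0, 0.0)
    bh, bw = cur_block.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and 0 <= xx and yy + bh <= ref_frame.shape[0] and xx + bw <= ref_frame.shape[1]:
                err, s, o = adapted_error(cur_block, ref_frame[yy:yy + bh, xx:xx + bw])
                if err < best[0]:
                    best = (err, dy, dx, s, o)
    return best
```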
Fractal image compression is computationally expensive; therefore, speedup techniques are required to achieve time demands comparable to other compression techniques. In this paper we combine sequential and parallel techniques suitable for MIMD architectures, which moves this compression scheme closer to real-time processing. The algorithms introduced are especially designed for memory-critical environments.
Although block-based image compression techniques seem to be straightforward to implement on parallel MIMD architectures, problems might arise due to architectural restrictions on such parallel machines. In this paper we discuss possible solutions to such problems occurring in different image compression techniques. Experimental results are included for adaptive wavelet block coding and fractal compression.
In this paper a new approach for adaptive wavelet image coding is introduced. Based on an adaptive quadtree image-block partition (similar to fractal coding) the different image blocks may be compressed independently in an adaptive manner. In order to adapt to local image statistics and features we present several possibilities of how to optimize the transform part of a wavelet image block-coder. Additionally we present a parallel algorithm suitable for MIMD architectures for efficient implementation of the proposed method. Finally experimental results concerning coding efficiency and execution time on a cluster of workstations are described.
Unlike in the classical wavelet decomposition scheme, it is possible to have different scaling and wavelet functions at every scale by using non-stationary multiresolution analyses. For the bidimensional case, inhomogeneous multiresolution analyses using different scaling and wavelet functions for the two variables are introduced. Beyond that, these two methods are combined. All this freedom is used for compact image coding: the idea is to build, out of the functions in a library, the particular non-stationary and/or inhomogeneous multiresolution analysis that is best suited for a given image in the context of compact coding (in the sense of optimizing certain cost functions).