One of the most important practical problems in blind digital watermarking is resistance against desynchronization attacks, such as the Stirmark random bending attack in the case of image watermarking. Recently, new blind digital watermarking schemes have been proposed that do not suffer from host-signal interference. One of these quantization-based watermarking schemes is the scalar Costa scheme (SCS). We present an attack channel for SCS that models typical artefacts of local desynchronization. Within the given channel model, the maximum achievable watermark rate for imperfectly synchronized watermark detection is computed. We show that imperfect synchronization leads to inter-sample interference, independently of the watermarking technology considered. We observe that the characteristics of the host signal play a major role in the performance of imperfectly synchronized watermark detection. Applying these results, we propose a resynchronization method based on a securely embedded pilot signal. The watermark receiver exploits the embedded pilot watermark to estimate the transformation of the sampling grid. This estimate is used to invert the desynchronization attack before standard SCS watermark detection is applied. Experimental results for the bit error rate of SCS watermark detection confirm the usefulness of the proposed resynchronization algorithm.
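As a simplified illustration of pilot-based resynchronization, the sketch below estimates a single global integer shift: for each candidate shift, it checks how well the samples at the shifted pilot positions fall onto a secret quantizer lattice. All names, parameter values, and the restriction to a global shift are illustrative assumptions; the attack channel described above covers more general local desynchronization.

```python
import numpy as np

def estimate_shift(y, pilot_idx, delta=8.0, key_dither=3.0, max_shift=10):
    """Estimate an integer desynchronization shift (illustrative sketch).

    For each candidate shift t, check how well the samples at the shifted
    pilot positions fall onto the secret quantizer lattice; the true shift
    minimizes the mean squared lattice residual.
    """
    best_t, best_cost = 0, np.inf
    for t in range(-max_shift, max_shift + 1):
        idx = pilot_idx + t
        if idx.min() < 0 or idx.max() >= len(y):
            continue                              # candidate shift runs off the signal
        z = y[idx] - key_dither
        r = z - delta * np.round(z / delta)       # residual to nearest lattice point
        cost = np.mean(r ** 2)
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t
```

Irregular pilot positions are assumed; with a regularly spaced pilot grid, several candidate shifts could align with pilot samples and create ambiguity.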
KEYWORDS: Steganography, Associative arrays, Distortion, Digital watermarking, Image compression, Computer security, Data communications, Data hiding, Data modeling, Image quality
Steganography is the art of communicating a message by embedding it into multimedia data. It is desired to maximize the amount of hidden information (embedding rate) while preserving security against detection by unauthorized parties. An appropriate information-theoretic model for steganography has been proposed by Cachin. A steganographic system is perfectly secure when the statistics of the cover data and the stego data are identical, which means that the relative entropy between the cover data and the stego data is zero. For image data, another constraint is that the stego data must look like a typical image. A tractable objective measure for this property is the (weighted) mean squared error between the cover image and the stego image (embedding distortion). Two different schemes are investigated. The first one is derived from a blind watermarking scheme. The second scheme is designed specifically for steganography such that perfect security is achieved, which means that the relative entropy between cover data and stego data tends to zero. In this case, a noiseless communication channel is assumed. Both schemes store the stego image in the popular JPEG format. The performance of the schemes is compared with respect to security, embedding distortion and embedding rate.
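Cachin's criterion can be evaluated empirically. The sketch below estimates the relative entropy D(P_cover || P_stego) from sample histograms; the function name, binning choices, and histogram-based estimation are assumptions for illustration, not the procedure used in the paper.

```python
import numpy as np

def relative_entropy(cover, stego, bins=64):
    """Empirical KL divergence D(P_cover || P_stego) from sample histograms.

    Cachin's criterion: a stegosystem is epsilon-secure if this divergence
    is at most epsilon, and perfectly secure if it is zero.
    """
    lo = min(cover.min(), stego.min())
    hi = max(cover.max(), stego.max())
    p, _ = np.histogram(cover, bins=bins, range=(lo, hi))
    q, _ = np.histogram(stego, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    mask = p > 0                      # convention: 0 * log(0/q) = 0
    if np.any(q[mask] == 0):
        return np.inf                 # divergence is infinite if supp(P) exceeds supp(Q)
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))
```

Identical cover and stego statistics give a divergence of zero, matching the perfect-security condition stated above.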
New blind digital watermarking schemes that are optimized for additive white Gaussian noise (AWGN) attacks have been developed by several research groups within the last two years. Currently, the most efficient schemes, e.g., the scalar Costa scheme (SCS), involve scalar quantization of the host signal during watermark embedding and watermark reception. Reliable watermark reception for these schemes is vulnerable to amplitude modification of the attacked host signal. In this paper, a method for estimating possible amplitude modifications before SCS watermark detection is proposed. The estimation is based on a securely embedded SCS pilot watermark. We focus on linear amplitude modifications, but also investigate the extension to nonlinear amplitude modifications. Further, the superiority of our proposal over an estimation method based on a spread-spectrum pilot watermark is demonstrated.
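A hypothetical sketch of such an estimator for the linear case: if the pilot samples were quantized onto a secret, key-dithered lattice at the embedder, the correct inverse gain moves the received pilot samples back near that lattice, so a grid search over candidate gains can minimize the lattice residual. All names, parameters, and the grid-search approach are illustrative assumptions; the paper's actual estimator may differ.

```python
import numpy as np

def estimate_gain(y_pilot, delta=8.0, key_dither=0.0, gains=None):
    """Estimate a linear amplitude scaling g (y = g * s) from SCS pilot samples.

    Pick the candidate gain whose inverse brings the pilot samples closest
    to the secret quantizer lattice (minimum mean squared residual).
    """
    if gains is None:
        gains = np.linspace(0.5, 2.0, 301)
    best_g, best_cost = None, np.inf
    for g in gains:
        z = y_pilot / g - key_dither
        r = z - delta * np.round(z / delta)   # residual to nearest lattice point
        cost = np.mean(r ** 2)
        if cost < best_cost:
            best_g, best_cost = g, cost
    return best_g
```

A nonzero key dither is assumed to break the ambiguity between a gain g and its sub-multiples (e.g., g/2 would also map an undithered lattice onto itself).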
In many blind watermarking proposals, the unwatermarked host data is viewed as unavoidable interference. Recently, however, it has been shown that blind watermarking corresponds to communication with side information (i.e., the host data) at the encoder. For Gaussian host data and a Gaussian channel, Costa showed that blind watermarking can theoretically eliminate all interference from the host data. Our previous work presented a practical blind watermarking scheme based on Costa's idea, called the 'scalar Costa scheme' (SCS). SCS watermarking was analyzed theoretically and initial experimental results were presented. This paper discusses further practical implications of implementing SCS. We focus on three topics: (A) high-rate watermarking, (B) low-rate watermarking, and (C) restrictions due to finite codeword lengths. For (A), coded modulation is applied for a rate of 1 watermark bit per host-data element, which is interesting for information-hiding applications. For (B), low rates can be achieved either by repeating watermark bits or by projecting them onto a random direction in signal space (spread-transform SCS). We show that spread-transform SCS watermarking performs better than SCS watermarking with repetition coding. For (C), Gallager's random-coding exponent is used to analyze the influence of codeword length on SCS performance.
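For reference, the basic per-sample SCS operations can be sketched as follows: embed bit b by moving the host sample a fraction α toward the nearest point of a bit-dependent, key-dithered quantizer lattice, and detect by checking which sub-lattice the received sample is closest to. The parameter values below are illustrative, not those used in the paper.

```python
import numpy as np

def scs_embed(x, bits, delta=8.0, alpha=0.6, key_dither=None):
    """Scalar Costa scheme embedding, one bit per host sample.

    Each bit b shifts the quantizer lattice by delta*b/2 plus a secret key
    dither; the host moves only the fraction alpha toward the nearest
    lattice point (Costa's scaling of the quantization error).
    """
    x = np.asarray(x, dtype=float)
    if key_dither is None:
        key_dither = np.zeros_like(x)
    d = delta * bits / 2.0 + key_dither           # bit-dependent dither
    q = delta * np.round((x - d) / delta) + d     # nearest shifted lattice point
    return x + alpha * (q - x)                    # partial move (alpha < 1)

def scs_detect(y, delta=8.0, key_dither=None):
    """Hard-decision detection: which sub-lattice is each sample closest to?"""
    y = np.asarray(y, dtype=float)
    if key_dither is None:
        key_dither = np.zeros_like(y)
    r = np.mod(y - key_dither, delta)             # residual in [0, delta)
    d0 = np.minimum(r, delta - r)                 # distance to bit-0 sub-lattice
    d1 = np.abs(r - delta / 2.0)                  # distance to bit-1 sub-lattice
    return (d1 < d0).astype(int)
```

In the noiseless case, any α above 1/2 keeps each embedded sample closer to its own sub-lattice than to the opposite one, so hard-decision detection recovers the bits exactly.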
A watermarking scheme for distinguishing different copies of the same multimedia document (fingerprinting) is investigated. Efficient transmission of natural data requires lossy compression, which might impair the embedded watermark. We investigate whether the quantization step in compression schemes can be replaced by dithered quantization to combine fingerprinting and compression. Dithered quantization offers the possibility of producing perceptually equivalent signals that are not exactly equal. The non-subtractive quantization error can be used as the watermark. We denote the proposed watermarking scheme as 'quantization watermarking.' Such a scheme is only practical for watermarking applications where the original signal is available to the detector. We analyze the influence of the dither signal on the perceptual quality of the watermarked document and the watermark detection robustness. Further, the cross-talk between the non-subtractive quantization errors for two different dither realizations is investigated. An analytical description for fine quantization and experimental results for coarse quantization show how the cross-talk depends on the characteristics of the dither signal. The derived properties of quantization watermarking are verified for combined JPEG compression and fingerprinting. The detection robustness for the proposed quantization error watermark is compared with that of an independent additive watermark.
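A minimal sketch of the idea, under illustrative names and parameters: the stego signal is the non-subtractively dithered quantization Q(x + d), its quantization error serves as the copy-specific watermark, and non-blind detection correlates the received-minus-original signal with a candidate watermark.

```python
import numpy as np

def quantization_watermark(x, dither, delta=8.0):
    """Non-subtractive dithered quantization (illustrative sketch).

    The compressed/fingerprinted signal is Q(x + d); the dither is NOT
    subtracted afterwards, so the non-subtractive quantization error
    s - x becomes the copy-specific watermark.
    """
    x = np.asarray(x, dtype=float)
    s = delta * np.round((x + dither) / delta)
    return s, s - x                      # (fingerprinted signal, embedded watermark)

def correlation_score(y, x, w_candidate):
    """Non-blind detection: normalized correlation of (received - original)
    with the candidate copy's watermark."""
    e = y - x
    return float(np.dot(e, w_candidate)
                 / (np.linalg.norm(e) * np.linalg.norm(w_candidate) + 1e-12))
```

Comparing the scores for two dither realizations illustrates the cross-talk discussed above: the wrong copy's watermark still correlates somewhat with the received error, but much less than the correct one.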