Some theoretical aspects of watermarking detection

A Constructive and Unifying Framework for Zero-Bit Watermarking

IEEE Transactions on Information Forensics and Security, 2007

In the watermark detection scenario, also known as zero-bit watermarking, a watermark carrying no hidden message is inserted into a piece of content. The watermark detector checks for the presence of this particular weak signal in received contents. The article looks at this problem from a classical detection-theory point of view, but with side information available at the embedding side: the watermark signal is a function of the host content. Our study is twofold: we first derive the best embedding function for a given detection function, and then the best detection function for a given embedding function. This yields two conditions, which are combined into one 'fundamental' partial differential equation. It appears that many famous watermarking schemes are indeed solutions to this 'fundamental' equation. The study thus gives rise to a constructive framework that unifies schemes so far perceived as very different.
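
To make the detection scenario concrete, the following sketch pairs a simple normalized-correlation detector with an informed embedder that spends its distortion budget along a secret carrier. The carrier, the distortion budget, and the threshold are illustrative assumptions; the paper's framework covers far more general embedding/detection pairs derived from its fundamental equation.

import numpy as np

rng = np.random.default_rng(0)
n = 1024
carrier = rng.standard_normal(n)
carrier /= np.linalg.norm(carrier)               # secret carrier, unit norm

def embed_informed(host, distortion):
    # Informed embedding in its simplest form: spend the whole per-sample
    # distortion budget along the carrier direction.
    return host + distortion * np.sqrt(len(host)) * carrier

def detect(x, threshold=0.05):
    # Zero-bit test: decide "watermarked" when the normalized correlation
    # with the secret carrier exceeds a threshold.
    score = np.dot(x, carrier) / np.linalg.norm(x)
    return score > threshold, score

host = rng.standard_normal(n) * 10.0
print(detect(host))                              # H0: unwatermarked content
print(detect(embed_informed(host, 1.0)))         # H1: watermarked content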

Quantization index modulation: a class of provably good methods for digital watermarking and information embedding

IEEE Transactions on Information Theory, 2001

We consider the problem of embedding one signal (e.g., a digital watermark), within another "host" signal to form a third, "composite" signal. The embedding is designed to achieve efficient tradeoffs among the three conflicting goals of maximizing information-embedding rate, minimizing distortion between the host signal and composite signal, and maximizing the robustness of the embedding. We introduce new classes of embedding methods, termed quantization index modulation (QIM) and distortion-compensated QIM (DC-QIM), and develop convenient realizations in the form of what we refer to as dither modulation. Using deterministic models to evaluate digital watermarking methods, we show that QIM is "provably good" against arbitrary bounded and fully informed attacks, which arise in several copyright applications, and in particular, it achieves provably better rate-distortion-robustness tradeoffs than currently popular spread-spectrum and low-bit(s) modulation methods. Furthermore, we show that for some important classes of probabilistic models, DC-QIM is optimal (capacity-achieving) and regular QIM is near-optimal. These include both additive white Gaussian noise (AWGN) channels, which may be good models for hybrid transmission applications such as digital audio broadcasting, and mean-square-error-constrained attack channels that model private-key watermarking applications.
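
A minimal scalar dither-modulation (QIM) sketch follows: one bit per sample, embedded by quantizing the host onto one of two interleaved lattices selected by the bit, and decoded by a minimum-distance rule. The step size, dither range, and noise level are illustrative values, not the paper's settings.

import numpy as np

def dm_embed(host, bits, step, dither):
    # Shift the quantizer grid by dither + bit * step/2, then quantize onto it.
    offset = dither + bits * step / 2.0
    return step * np.round((host - offset) / step) + offset

def dm_decode(received, step, dither):
    # Minimum-distance decoding: pick the lattice (bit value) whose nearest
    # reconstruction point is closest to each received sample.
    d0 = np.abs(received - dm_embed(received, np.zeros(received.shape, dtype=int), step, dither))
    d1 = np.abs(received - dm_embed(received, np.ones(received.shape, dtype=int), step, dither))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(1)
step, n = 4.0, 64
dither = rng.uniform(0, step, n)                 # secret, key-derived dither
bits = rng.integers(0, 2, n)
host = rng.standard_normal(n) * 20.0
marked = dm_embed(host, bits, step, dither)
noisy = marked + rng.standard_normal(n) * 0.5    # mild AWGN attack
print(np.mean(dm_decode(noisy, step, dither) == bits))   # fraction of bits recovered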

Optimal Watermark Detection Under Quantization in the Transform Domain

IEEE Transactions on Circuits and Systems for Video Technology, 2004

The widespread use of digital multimedia data has increased the need for effective means of copyright protection. Watermarking has attracted much attention, as it allows the embedding of a signature in a digital document in an imperceptible manner. In practice, watermarking is subject to various attacks, intentional or unintentional, which degrade the embedded information, rendering it more difficult to detect. A very common, but not malicious, attack is quantization, which is unavoidable for the compression and transmission of digital data. The effect of quantization attacks on the nearly optimal Cauchy watermark detector is examined in this paper. The quantization effects on this detection scheme are examined theoretically, by treating the watermark as a dither signal. The theory of dithered quantizers is used in order to correctly analyze the effect of quantization attacks on the Cauchy watermark detector. The theoretical results are verified by experiments that demonstrate the quantization effects on the detection and error probabilities of the Cauchy detection scheme.
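
As a point of reference for the detector being analyzed, the sketch below computes a log-likelihood-ratio statistic for an additive watermark in Cauchy-distributed transform coefficients. The scale parameter gamma, the bipolar watermark, and the additive embedding model are assumptions made here for illustration, and the quantization attack itself is not simulated.

import numpy as np

def cauchy_llr(x, w, gamma):
    # Log-likelihood ratio for H1 "x carries the additive watermark w"
    # against H0 "x is unwatermarked", under a Cauchy(0, gamma) model.
    return np.sum(np.log((gamma**2 + x**2) / (gamma**2 + (x - w)**2)))

rng = np.random.default_rng(2)
n, gamma = 2000, 5.0
w = rng.choice([-1.0, 1.0], n)                                # bipolar watermark pattern
coeffs = gamma * np.tan(np.pi * (rng.uniform(size=n) - 0.5))  # Cauchy(0, gamma) host samples
print(cauchy_llr(coeffs, w, gamma))                           # H0: statistic stays low
print(cauchy_llr(coeffs + w, w, gamma))                       # H1: statistic clearly positive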

Optimal watermark embedding combining spread spectrum and quantization

EURASIP Journal on Advances in Signal Processing

This paper presents an optimal watermark embedding method combining spread spectrum and quantization. In this method, the host signal vector is quantized to embed a multiple-bit watermark while the quantized signal is simultaneously constrained to lie in the detectable region defined in the context of spread-spectrum watermarking. Under these two constraints, the optimal watermarked signal is derived in the sense of minimizing the embedding distortion. The proposed method is further implemented in the wavelet transform domain, where insensitive wavelet coefficients are selected for watermark embedding according to a modified human visual model. Simulations on real images using the wavelet-based implementation demonstrate that the proposed method performs well in terms of both watermark imperceptibility and robustness, and that it is more robust to typical signal processing operations, e.g., additive noise and JPEG compression, than state-of-the-art watermarking methods.
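
One simplified reading of the two constraints is sketched below: the host vector is first quantized to carry the bits, and a minimum-norm correction along the spreading sequence is then added so the spread-spectrum correlation reaches a detection threshold. This only illustrates the geometry; the correction step ignores re-quantization, and the paper derives the jointly optimal (minimum-distortion) solution instead. The step size, threshold, and spreading sequence are illustrative.

import numpy as np

def embed_ss_qim(host, bits, step, spread, corr_min):
    # Constraint 1: quantize each sample onto the lattice selected by its bit.
    offset = bits * step / 2.0
    quantized = step * np.round((host - offset) / step) + offset
    # Constraint 2: make the correlation with the spreading sequence detectable.
    corr = np.dot(quantized, spread) / len(spread)
    if corr >= corr_min:
        return quantized
    # Smallest-norm move along the spreading direction that restores detectability.
    return quantized + (corr_min - corr) * len(spread) * spread / np.dot(spread, spread)

rng = np.random.default_rng(3)
n, step = 256, 2.0
spread = rng.choice([-1.0, 1.0], n)
bits = rng.integers(0, 2, n)
host = rng.standard_normal(n) * 10.0
marked = embed_ss_qim(host, bits, step, spread, corr_min=0.5)
print(np.dot(marked, spread) / n)                # correlation now at least 0.5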

Design and evaluation of sparse quantization index modulation watermarking schemes

Applications of Digital Image Processing XXXI, 2008

In the past decade the use of digital data has increased significantly. The advantages of digital data include easy editing; fast, cheap, and cross-platform distribution; and compact storage. The most crucial disadvantages are unauthorized copying and copyright infringement, through which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, reproduction in the digital case is simple and robust. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images based on selective quantization of the coefficients of a wavelet-transformed image, i.e., sparse quantization-index-modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield slightly better performance than block-based grouping schemes. Additionally, the impact of deploying error-correcting codes on the most promising configurations is examined. The use of BCH codes (Bose, Ray-Chaudhuri, Hocquenghem) results in improved robustness as long as the correction capacity of the codes is not exceeded (cliff effect).
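
The sketch below gives the flavor of sparse QIM on wavelet coefficients, assuming PyWavelets is available: only the largest-magnitude coefficients of one detail subband are quantized. The wavelet, decomposition level, subband choice, and magnitude-based selection are stand-ins for the paper's block- and tree-based grouping schemes, and no error-correcting code is applied.

import numpy as np
import pywt

def sparse_qim_embed(image, bits, step=8.0, wavelet="haar"):
    coeffs = pywt.wavedec2(image, wavelet, level=2)
    cH, cV, cD = coeffs[1]                         # one detail subband triple
    flat = cH.flatten()
    idx = np.argsort(np.abs(flat))[-len(bits):]    # "sparse" selection: strongest coefficients
    offset = np.asarray(bits) * step / 2.0
    flat[idx] = step * np.round((flat[idx] - offset) / step) + offset
    coeffs[1] = (flat.reshape(cH.shape), cV, cD)
    return pywt.waverec2(coeffs, wavelet), idx

rng = np.random.default_rng(4)
image = rng.uniform(0, 255, (64, 64))
bits = rng.integers(0, 2, 32)
marked, used = sparse_qim_embed(image, bits)
print(np.max(np.abs(marked - image)))              # embedding distortion stays bounded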

Provably robust digital watermarking

SPIE Proceedings, 1999

Copyright notification and enforcement, authentication, covert communication, and hybrid transmission are examples of emerging multimedia applications for digital watermarking methods, i.e., methods for embedding one signal (e.g., the digital watermark) within another "host" signal to form a third, "composite" signal. The embedding is designed to achieve efficient trade-offs among the three conflicting goals of maximizing information-embedding rate, minimizing distortion between the host signal and composite signal, and maximizing the robustness of the embedding. Quantization index modulation (QIM) methods are a class of watermarking methods that achieve provably good rate-distortion-robustness performance. Indeed, QIM methods exist that achieve performance within a few dB of capacity in the case of a (possibly colored) Gaussian host signal and an additive (possibly colored) Gaussian noise channel. Also, QIM methods can achieve capacity with a type of postprocessing called distortion compensation. This capacity is independent of host signal statistics, and thus, contrary to popular belief, the information-embedding capacity when the host signal is not available at the decoder is the same as when the host signal is available at the decoder. A low-complexity realization of QIM called dither modulation has previously been proven to be better than both linear methods of spread spectrum and nonlinear methods of low-bit(s) modulation against square-error distortion-constrained intentional attacks. We introduce a new form of dither modulation called spread-transform dither modulation that retains these favorable performance characteristics while achieving better performance against other attacks such as JPEG compression.
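
The sketch below shows the core of spread-transform dither modulation for a single bit: a block of host samples is projected onto a spreading vector, scalar dither modulation is applied to that one projection, and the correction is added back along the spreading direction only. Block length, step size, and dither value are illustrative.

import numpy as np

def stdm_embed_block(block, bit, spread, step, dither):
    u = spread / np.linalg.norm(spread)
    proj = np.dot(block, u)
    offset = dither + bit * step / 2.0
    proj_q = step * np.round((proj - offset) / step) + offset
    return block + (proj_q - proj) * u             # move only along the spread direction

def stdm_decode_block(block, spread, step, dither):
    u = spread / np.linalg.norm(spread)
    proj = np.dot(block, u)
    d0 = np.abs(proj - (step * np.round((proj - dither) / step) + dither))
    d1 = np.abs(proj - (step * np.round((proj - dither - step / 2) / step) + dither + step / 2))
    return int(d1 < d0)

rng = np.random.default_rng(5)
spread, step, dither = rng.standard_normal(16), 6.0, 1.3
block = rng.standard_normal(16) * 10.0
marked = stdm_embed_block(block, 1, spread, step, dither)
noisy = marked + rng.standard_normal(16) * 0.3     # mild additive attack
print(stdm_decode_block(noisy, spread, step, dither))   # recovers the embedded bit (1)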

Optimum decoding and detection of multiplicative watermarks

IEEE Transactions on Signal Processing, 2003

This work addresses the problem of optimum decoding and detection of a multibit, multiplicative watermark hosted by Weibull-distributed features: a situation which is classically encountered for image watermarking in the magnitude-of-DFT domain. As such, this work can be seen as an extension of the system described in a previous paper, where the same problem is addressed for the case of 1-bit watermarking. The theoretical analysis is validated through Monte Carlo simulations. Although the structure of the optimum decoder/detector is derived in the absence of attacks, some experimental results are also presented, giving a measure of the overall robustness of the watermark when attacks are present.
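
For reference, the sketch below implements a likelihood-ratio statistic for a multiplicative watermark in Weibull-distributed magnitude-of-DFT coefficients, y_i = x_i (1 + g m_i) with m_i = +/-1. The shape and scale parameters and the watermark strength g are illustrative choices, and only the 1-bit detection case (not multibit decoding) is shown.

import numpy as np

def weibull_logpdf(x, shape, scale):
    return np.log(shape / scale) + (shape - 1) * np.log(x / scale) - (x / scale) ** shape

def llr_multiplicative(y, m, g, shape, scale):
    # H1: y = x * (1 + g*m) with Weibull host x;  H0: y is the unwatermarked host.
    f = 1.0 + g * m
    h1 = weibull_logpdf(y / f, shape, scale) - np.log(f)   # change of variables under H1
    h0 = weibull_logpdf(y, shape, scale)
    return np.sum(h1 - h0)

rng = np.random.default_rng(6)
n, g, shape, scale = 4000, 0.08, 1.3, 10.0
m = rng.choice([-1.0, 1.0], n)
x = scale * rng.weibull(shape, n)                  # host DFT magnitudes
print(llr_multiplicative(x, m, g, shape, scale))               # H0: around or below zero
print(llr_multiplicative(x * (1 + g * m), m, g, shape, scale)) # H1: clearly positive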

Quantization-based watermarking performance improvement using host statistics

Proceedings of the 2004 Multimedia and Security Workshop (MM&Sec '04), 2004

In this paper, we consider the problem of improving the performance of known-host-state (quantization-based) watermarking methods under an additive white Gaussian noise (AWGN) attack. The motivation for our research is twofold. The first reason is the common belief that taking knowledge of the host image into account when designing quantization-based watermarking algorithms cannot improve their performance. The second reason is the poor practical performance of this class of methods in the low watermark-to-noise ratio (WNR) regime, in comparison with known-host-statistics techniques, when an AWGN attack is applied. We demonstrate in this paper that the bit error probability of dither modulation (DM) and distortion-compensated dither modulation (DC-DM) under an AWGN attack can be significantly reduced when the quantizers are designed using the statistics of the host data. For the case where the host data follow an i.i.d. Laplacian distribution and a uniform deadzone quantizer (UDQ) is used, we develop closed-form analytical models for the bit error probability of DM and DC-DM. Experimental results demonstrate that a significant improvement of classical DM and DC-DM in terms of bit error probability can be achieved with only a minor increase in design complexity.
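
For orientation, the sketch below is the plain DC-DM baseline that the paper improves upon: the compensation factor alpha feeds part of the quantization error back toward the host before transmission. The Laplacian host, step size, and alpha value are illustrative, and the paper's host-statistics-aware uniform-deadzone quantizer design is not reproduced here.

import numpy as np

def dcdm_embed(host, bits, step, dither, alpha=0.7):
    offset = dither + bits * step / 2.0
    quantized = step * np.round((host - offset) / step) + offset
    return host + alpha * (quantized - host)       # compensate part of the quantization error

def dcdm_decode(received, step, dither):
    d0 = np.abs(received - (step * np.round((received - dither) / step) + dither))
    d1 = np.abs(received - (step * np.round((received - dither - step / 2) / step) + dither + step / 2))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(7)
n, step = 512, 5.0
dither = rng.uniform(0, step, n)
bits = rng.integers(0, 2, n)
host = rng.laplace(scale=4.0, size=n)              # Laplacian host, as in the paper's model
noisy = dcdm_embed(host, bits, step, dither) + rng.standard_normal(n) * 0.4   # AWGN attack
print(np.mean(dcdm_decode(noisy, step, dither) == bits))     # empirical bit accuracy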

Watermark detectors based on Nth order statistics

Proceedings of SPIE, 2002

This paper deals with detection issues for watermark signals. We propose an easy way to implement an informed watermarking embedder for any given detection function. This method shows that a linear detection function is not suitable for exploiting side information, which is why we build a family of non-linear detection functions based on nth-order statistics. Used with a side-informed embedder, their performance is much better than that of the classical direct-sequence spread-spectrum method.
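
As a hypothetical instance in the spirit of this approach, the sketch below uses a non-linear detection statistic (the normalized projection onto a secret carrier raised to the power n) together with a generic side-informed embedder that pushes the content along the numerical gradient of that detection function under a distortion budget. The specific statistic, the carrier, and the budget are assumptions for illustration, not the exact family of nth-order detectors proposed in the paper.

import numpy as np

def detect_stat(x, carrier, n=3):
    # Non-linear detection statistic: nth power of the normalized correlation.
    return (np.dot(x, carrier) / np.linalg.norm(x)) ** n

def informed_embed(host, carrier, distortion, n=3, eps=1e-3):
    # Generic informed embedder: spend the distortion budget along the
    # (numerically estimated) gradient of the detection function.
    grad = np.array([(detect_stat(host + eps * e, carrier, n) - detect_stat(host, carrier, n)) / eps
                     for e in np.eye(len(host))])
    grad /= np.linalg.norm(grad)
    return host + distortion * np.sqrt(len(host)) * grad

rng = np.random.default_rng(8)
dim = 64
carrier = rng.standard_normal(dim)
carrier /= np.linalg.norm(carrier)
host = rng.standard_normal(dim) * 2.0
print(detect_stat(host, carrier))                                # unwatermarked: close to zero
print(detect_stat(informed_embed(host, carrier, 1.0), carrier))  # watermarked: pushed well above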

On the Design of a Watermarking System: Considerations and Rationales

Lecture Notes in Computer Science, 2000

This paper summarizes considerations and rationales for the design of a watermark detector. In particular, we relate watermark detection to the problem of signal detection in the presence of (structured) noise. The paper builds on the mathematical results from several previously published papers (by our own research group or by others) to verify and support our discussion. In an attempt to unify the theoretical analysis of watermarking schemes, we propose several extensions which clarify the interrelations between the various schemes. New results include the matched filter with whitening, where we consider the effect of the image and watermark spectra and imperfect setting of filter coefficients. The paper reflects our practical experience in developing watermarking systems for DVD copy protection and broadcast monitoring. The aim of this paper is to further develop the insight into the performance of watermark detectors, to discuss appropriate models for their analysis, and to provide an intuitive rationale for making design choices for a watermark detector.
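
The matched filter with whitening can be sketched as follows: the received signal is divided by an estimate of the host power spectrum before being correlated with the watermark, so that the structured (non-white) host contributes less to the detection statistic. The AR(1) host model and its model-based spectrum are illustrative assumptions, not the image spectra analyzed in the paper.

import numpy as np

def whitened_correlation(x, watermark, psd):
    # Correlate in the frequency domain, weighting each bin by 1 / host PSD.
    X, W = np.fft.rfft(x), np.fft.rfft(watermark)
    return np.sum((X * np.conj(W)).real / psd) / len(x)

rng = np.random.default_rng(9)
n, a, sigma = 4096, 0.95, 5.0
watermark = rng.choice([-1.0, 1.0], n)
# Strongly correlated (low-pass) AR(1) host signal standing in for image content.
innovations = rng.standard_normal(n) * sigma
host = np.empty(n)
host[0] = innovations[0]
for i in range(1, n):
    host[i] = a * host[i - 1] + innovations[i]
# Model-based PSD of the AR(1) host, evaluated at the rfft frequencies.
omega = 2.0 * np.pi * np.arange(n // 2 + 1) / n
psd = sigma**2 / np.abs(1.0 - a * np.exp(-1j * omega)) ** 2
print(whitened_correlation(host, watermark, psd))               # unwatermarked
print(whitened_correlation(host + watermark, watermark, psd))   # watermarked: larger score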