Introducing Capacity Surface to Estimate Watermarking Capacity

Estimating Watermarking Capacity in Gray Scale Images Based on Image Complexity

EURASIP Journal on Advances in Signal Processing, 2010

Capacity is one of the most important parameters in image watermarking. Different works have addressed this subject under different assumptions about the image and the communication channel; however, there is no generally agreed way to estimate watermarking capacity. In this paper, we suggest a method to find the capacity of images based on their complexity. We propose a new method to estimate image complexity based on the concept of Region Of Interest (ROI). Our experiments on 2000 images showed that the proposed measure correlates best with watermarking capacity in comparison with other complexity measures. In addition, we propose a new method to calculate capacity using the proposed image complexity measure. Our capacity estimation method shows better robustness and image quality in comparison with recent works in this field.

Introducing a New Method for Estimation Image Complexity According To Calculate Watermark Capacity

2008 International Conference on Intelligent Information Hiding and Multimedia Signal Processing, 2008

One of the most important parameters of a watermarking algorithm is its capacity. In fact, capacity has a paradoxical relation with two other important parameters: image quality and robustness. Some works have been done on watermarking capacity and a few on image complexity. Most work on watermarking capacity is based on information theory, and the capacity values calculated by these methods are very loose. In this paper we propose a new method for calculating image complexity based on the Region Of Interest (ROI) concept. We then analyze three complexity measures, namely image compositional complexity (ICC), quad-tree, and the ROI method, with three watermarking algorithms in the spatial, DCT, and wavelet domains to find a relation between complexity and capacity. Our experiments on 200 images show that the proposed ROI measure correlates best with watermarking capacity, as measured by the visual quality degradation calculated with MSSIM. In addition, we found an approximately linear relation between watermarking capacity in bits per pixel and image complexity. This opens a new trend of calculating the capacity of images based on their content.
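
The MSSIM score used above to quantify visual quality degradation can be approximated with a short sketch. This is a simplification, not the paper's implementation: it averages SSIM over non-overlapping blocks with a uniform window, whereas the standard MSSIM uses an 11x11 Gaussian-weighted sliding window.

```python
import numpy as np

def ssim_block(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """SSIM for a single window, with the standard stabilising constants."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

def mssim(img, ref, block=8):
    """Mean SSIM over non-overlapping blocks (uniform window; a
    simplification of the Gaussian sliding window of standard MSSIM)."""
    h, w = img.shape
    scores = [
        ssim_block(img[i:i + block, j:j + block].astype(np.float64),
                   ref[i:i + block, j:j + block].astype(np.float64))
        for i in range(0, h - block + 1, block)
        for j in range(0, w - block + 1, block)
    ]
    return float(np.mean(scores))
```

A score of 1.0 means the watermarked image is indistinguishable from the original under this measure; embedding more watermark bits drives the score down, which is how a capacity-vs-quality curve can be traced.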

A model for the assessment of watermark quality with regard to fidelity

Journal of Visual Communication and Image Representation, 2005

This paper presents a model for assessing watermark quality with regard to fidelity. The model can be applied to assess the quality of watermarks as well as to investigate the effect of attacks on watermarked images. The proposed model is based on image properties as perceived by the human eye and attempts to provide algorithmic measurements that correspond to human-perceived watermark fidelity. Emphasis has been placed on the analysis of the model and the determination of the weights used to derive the final result. Experiments are presented to illustrate the applicability of the model to still images, using examples of watermarked images as well as examples of attacks on watermarked images.

Watermarking Robustness Evaluation Using Enhanced Performance Metrics

International Journal of Engineering Research and Technology (IJERT), 2013

https://www.ijert.org/watermarking-robustness-evaluation-using-enhanced-performance-metrics
https://www.ijert.org/research/watermarking-robustness-evaluation-using-enhanced-performance-metrics-IJERTV2IS2591.pdf

The main idea of this paper is to propose an innovative benchmarking tool to evaluate the robustness of any digital image watermarking technique. Image fidelity metrics such as the signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and weighted peak signal-to-noise ratio (WPSNR) are used. Researchers in the field of image processing use MSE (mean square error) based fidelity metrics to validate their results. However, when large quantities of data are to be assessed, subjective metrics such as the mean opinion score (MOS) are not pragmatic, since they require experts and an inordinate amount of time. PSNR and WPSNR are independent of human visual system (HVS) parameters and hence are inappropriate scales for measuring potential research results. This motivates a new image fidelity metric called enhanced weighted peak signal-to-noise ratio (EWPSNR), which is experimentally shown to be better than PSNR and WPSNR.
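
The baseline metrics the paper builds on can be sketched briefly. PSNR follows the standard definition; the WPSNR variant below attenuates squared errors in textured blocks with a noise-visibility-style weight 1/(1 + local variance), which is an assumption on my part — the paper's EWPSNR incorporates HVS parameters not reproduced here.

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB; inf for identical images."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def wpsnr(ref, dist, peak=255.0, block=8):
    """Weighted PSNR: squared errors count less in textured (high-variance)
    blocks, where the HVS masks distortion. The weight 1/(1 + variance)
    is an illustrative choice, not the paper's definition."""
    ref_f = ref.astype(np.float64)
    err2 = (ref_f - dist.astype(np.float64)) ** 2
    h, w = ref_f.shape
    weighted = np.empty_like(err2)
    for i in range(0, h, block):
        for j in range(0, w, block):
            var = ref_f[i:i + block, j:j + block].var()
            weighted[i:i + block, j:j + block] = (
                err2[i:i + block, j:j + block] / (1.0 + var)
            )
    wmse = weighted.mean()
    return float("inf") if wmse == 0 else 10 * np.log10(peak ** 2 / wmse)
```

On a flat image the two metrics coincide (every block weight is 1); on textured images WPSNR reports a higher, perceptually more forgiving score for the same MSE.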

Power: A Metric for Evaluating Watermarking Algorithms

An important parameter in evaluating data hiding methods is hiding capacity [3] [11] [12] [16], i.e., the amount of data that a certain algorithm can "hide" before reaching allowable distortion limits. One fundamental difference between watermarking [4] [5] [7] [8] [9] [13] [14] [17] [18] [19] [20] [21] [22] [23] and generic data hiding lies in the main applicability and descriptions of the two domains. Data hiding aims at enabling Alice and Bob ...

Hierarchical watermarking framework based on analysis of local complexity variations

Multimedia Tools and Applications

The growing production and exchange of multimedia content has increased the need for better copyright protection by means of watermarking. Different methods have been proposed to satisfy the tradeoff between imperceptibility and robustness, two important characteristics of watermarking, while maintaining proper data-embedding capacity. Many watermarking methods use an image-independent set of parameters, yet different images possess different potentials for robust and transparent hosting of watermark data. To overcome this deficiency, in this paper we propose a new hierarchical adaptive watermarking framework. At the higher level of the hierarchy, the complexity of an image is ranked against the complexities of the images in a dataset; for a typical dataset of images, the statistical distribution of block complexities is found. At the lower level of the hierarchy, the complexities of the blocks of the single cover image to be watermarked are found. The local complexity variation (LCV) between a block and its neighbors is used to adaptively control the watermark strength factor of each block. Such local complexity analysis creates an adaptive embedding scheme, which results in higher transparency by reducing blockiness effects. This two-level hierarchy enables our method to use all image blocks, elevating the embedding capacity while preserving imperceptibility. To test the effectiveness of the proposed framework, the contourlet transform (CT) in conjunction with the discrete cosine transform (DCT) is used to embed pseudorandom binary sequences as the watermark. Experimental results show that the proposed framework elevates the performance of the watermarking routine in terms of both robustness and transparency.

Achieving Higher Stability in Watermarking According to Image Complexity

One of the main objectives of all watermarking algorithms is to provide a secure method for detecting all o r part of the watermark pattern in case of the usual attacks on a watermarked image. In this paper, a method is introduced that is suitable for any spatial domain watermarking algorithm, so that it can provide a measure for the level of robustness when a given watermark is supposed to be embedded in a known host image. In order to increase the robustness of the watermarked image, for a watermark of M bits, it was embedded N = s M times, where s is a small integer. Doing this, the entire image is divided into 16 equal size blocks. For each block, the complexity of the sub-image in that block is measured. The amount of repetition of the watermark bits saved in each block is determined, according to the complexity level of that block. The complexity of a sub-image is measured using its quad tree representation. This approach not only secures the watermarked image with respect to ...

Embedding capacity estimation of reversible watermarking schemes

Sadhana, 2014

Estimation of the embedding capacity is an important problem specifically in reversible multi-pass watermarking and is required for analysis before any image can be watermarked. In this paper, we propose an efficient method for estimating the embedding capacity of a given cover image under multi-pass embedding, without actually embedding the watermark. We demonstrate this for a class of reversible watermarking schemes which operate on a disjoint group of pixels, specifically for pixel pairs. The proposed algorithm iteratively updates the co-occurrence matrix at every stage to estimate the multi-pass embedding capacity, and is much more efficient visa -vis actual watermarking. We also suggest an extremely efficient, pre-computable tree based implementation which is conceptually similar to the cooccurrence based method, but provides the estimates in a single iteration, requiring a complexity akin to that of single pass capacity estimation. We also provide upper bounds on the embedding capacity. We finally evaluate performance of our algorithms on recent watermarking algorithms.

Estimation of the Embedding Capacity in Pixel-pair based Watermarking Schemes

arXiv preprint arXiv:1111.5653, 2011

Abstract: Estimation of the Embedding capacity is an important problem specifically in reversible multi-pass watermarking and is required for analysis before any image can be watermarked. In this paper, we propose an efficient method for estimating the embedding capacity of a given cover image under multi-pass embedding, without actually embedding the watermark. We demonstrate this for a class of reversible watermarking schemes which operate on a disjoint group of pixels, specifically for pixel pairs. The proposed algorithm ...

Performance measures for image watermarking schemes

1999

In this paper we propose performance measures and provide a description of the application of digital watermarking for use in copyright protection of digital images. The watermarking process we consider involves embedding visually imperceptible data in a digital image in such a way that it is difficult to detect or remove, unless one possesses specific secret information. The major processes involved in the watermarking processes are considered, including insertion, attack, and detection/extraction. Generic features of these processes are discussed, along with open issues, and areas where experimentation is likely to prove useful.