Sviatoslav Voloshynovskiy - Academia.edu
Papers by Sviatoslav Voloshynovskiy
International Journal of Image and Graphics, 2005
In this paper we introduce and develop a framework for visual data-hiding technologies that aim at resolving emerging problems of modern multimedia networking. First, we introduce the main open issues of public network security, quality-of-service control and secure communications. Secondly, we formulate digital data-hiding into visual content as communications with side information and advocate an appropriate information-theoretic framework for the analysis of different data-hiding methods in various applications. In particular,…
In this paper, we analyze the reversibility of data hiding techniques based on random binning as a by-product of pure message communications. We demonstrate the capabilities of unauthorized users to perform hidden data removal using solely a signal processing approach based on optimal estimation, as well as consider reversibility on the side of authorized users who have knowledge…
Media Forensics and Security II, 2010
In this paper, we consider a low-complexity identification system for highly distorted images. The performance of the proposed identification system is analyzed based on the average probability of error. An expected improvement of the performance is obtained by combining a random projection transform with the concept of bit reliability. Simulations based on synthetic and real data confirm the efficiency of the proposed approach.
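A minimal sketch of this kind of identification pipeline, assuming binary fingerprints obtained by random projections and a reliability weight derived from the projection margin; the function names and parameter values are illustrative, not the paper's:

```python
# Sketch: images are mapped to binary fingerprints through a fixed random
# projection, and matching down-weights unreliable bits, i.e. projections
# whose magnitude is close to the binarization threshold.
import numpy as np

def random_projection_fingerprint(image, proj, threshold=0.0):
    """Project a flattened, normalized image onto random directions and binarize.

    Returns the binary fingerprint and a per-bit reliability score
    (absolute distance of each projection from the threshold).
    """
    x = image.ravel().astype(np.float64)
    x = (x - x.mean()) / (x.std() + 1e-12)          # normalize to remove global offsets
    y = proj @ x                                     # random projection
    bits = (y > threshold).astype(np.uint8)
    reliability = np.abs(y - threshold)              # large margin -> reliable bit
    return bits, reliability

def identify(query, database_bits, proj):
    """Index of the database entry with the smallest reliability-weighted Hamming distance."""
    q_bits, q_rel = random_projection_fingerprint(query, proj)
    w = q_rel / (q_rel.sum() + 1e-12)                # weight bits by their reliability
    dists = [(w * (q_bits ^ db_bits)).sum() for db_bits in database_bits]
    return int(np.argmin(dists))

# Usage with toy data (64x64 images, 256-bit fingerprints); should identify entry 3.
rng = np.random.default_rng(0)
proj = rng.standard_normal((256, 64 * 64))
originals = [rng.standard_normal((64, 64)) for _ in range(10)]
db = [random_projection_fingerprint(img, proj)[0] for img in originals]
distorted = originals[3] + 0.5 * rng.standard_normal((64, 64))  # heavy distortion
print(identify(distorted, db, proj))
```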
Proceedings 2001 International Conference on Image Processing (Cat. No.01CH37205), 2001
This paper presents an efficient method for the estimation of, and recovery from, nonlinear or local geometrical distortions, such as the random bending attack and restricted projective transforms. The distortions are modeled as a set of local affine transforms, the watermark being repeatedly allocated into small blocks in order to ensure its locality. The estimation of the affine transform parameters is…
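A minimal sketch of the block-wise estimation step, assuming point correspondences within a block are available (e.g. from matching a known reference pattern); the least-squares affine fit and its inversion below are illustrative, not the paper's estimator:

```python
# Sketch: each local distortion is approximated by a 2x3 affine transform
# estimated by least squares from point correspondences in that block.
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares estimate of A (2x2) and t (2,) such that dst ~= src @ A.T + t."""
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    X = np.hstack([src, np.ones((src.shape[0], 1))])        # N x 3 design matrix
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)         # 3 x 2 solution
    return params[:2].T, params[2]

def invert_affine(A, t, pts):
    """Map distorted coordinates back to the original grid."""
    return (np.asarray(pts) - t) @ np.linalg.inv(A).T

# Usage: recover a synthetic local affine distortion on one block.
rng = np.random.default_rng(1)
A_true = np.array([[1.02, 0.05], [-0.03, 0.98]])
t_true = np.array([1.5, -0.7])
src = rng.uniform(0, 32, size=(20, 2))                       # block coordinates
dst = src @ A_true.T + t_true + 0.1 * rng.standard_normal((20, 2))
A_est, t_est = estimate_affine(src, dst)
print(np.allclose(A_est, A_true, atol=0.05), np.allclose(t_est, t_true, atol=0.3))
```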
Security and Watermarking of Multimedia Contents II, 2000
Digital image watermarking has become a popular technique for authentication and copyright protection. To verify the security and robustness of watermarking algorithms, specific attacks have to be applied to test them. In contrast to the known Stirmark attack, which degrades the quality of the image while destroying the watermark, this paper presents a new approach based on the estimation of the watermark and the exploitation of the properties of the Human Visual System (HVS). The new attack satisfies two important requirements. First, image quality after the attack as perceived by the HVS is not worse than the quality of the stego image. Secondly, the attack uses all available prior information about the watermark and cover image statistics to perform the best watermark removal or damage. The proposed attack is based on a stochastic formulation of the watermark removal problem, considering the embedded watermark as additive noise with some probability distribution. The attack scheme consists of two main stages: (a) watermark estimation and partial removal by filtering based on a Maximum a Posteriori (MAP) approach; (b) watermark alteration and hiding through addition of noise to the filtered image, taking into account the statistics of the embedded watermark and exploiting HVS characteristics. Experiments on a number of real-world and computer-generated images show the high efficiency of the proposed attack against known academic and commercial methods: the watermark is completely destroyed in all tested images without altering the image quality. The approach can be used against watermark embedding schemes that operate either in the coordinate domain or in transform domains such as Fourier, DCT or wavelet.
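A minimal sketch of the two-stage attack under simplified assumptions (an additive watermark of known variance and a locally Gaussian cover model): a Wiener-like MAP estimate of the watermark is subtracted, then noise shaped by a crude local-activity mask is re-added. The function name, window size and texture mask are illustrative, not the paper's exact scheme:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def map_watermark_attack(stego, sigma_w2=4.0, win=7, noise_scale=0.5, seed=0):
    """Estimate and remove an additive watermark, then re-add masked noise."""
    stego = stego.astype(np.float64)
    local_mean = uniform_filter(stego, win)
    local_var = np.maximum(uniform_filter(stego**2, win) - local_mean**2, 1e-6)
    cover_var = np.maximum(local_var - sigma_w2, 1e-6)          # estimated cover variance
    # MAP (Wiener-like) estimate of the additive watermark under Gaussian assumptions
    w_hat = sigma_w2 / (sigma_w2 + cover_var) * (stego - local_mean)
    filtered = stego - w_hat                                     # watermark removal step
    # Re-add noise matched to the watermark variance, masked by local activity
    rng = np.random.default_rng(seed)
    mask = np.sqrt(cover_var / (cover_var + sigma_w2))           # crude HVS-like texture mask
    attacked = filtered + noise_scale * np.sqrt(sigma_w2) * mask * rng.standard_normal(stego.shape)
    return np.clip(attacked, 0, 255)

# Usage on a synthetic stego image with a toy additive watermark:
rng = np.random.default_rng(2)
cover = rng.uniform(0, 255, size=(128, 128))
stego = cover + 2.0 * rng.choice([-1.0, 1.0], size=cover.shape)
attacked = map_watermark_attack(stego)
```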
Proceeding of the 8th workshop on Multimedia and security - MM&Sec '06, 2006
In this paper we consider the problem of performance improvement of non-blind statistical steganalysis of additive steganography in real images. The proposed approach differs from the existing solutions in two main aspects: (a) a locally non-stationary Gaussian model is introduced via source splitting to represent the statistics of the cover image, and (b) the detection of the hidden information is performed not from all channels but only from those that allow it to be performed with the required accuracy. We analyze the theoretically attainable bounds in such a framework and compare them to the corresponding limits of the existing state-of-the-art frameworks. The performed analysis demonstrates the superiority of the proposed approach.
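A minimal sketch of the channel-selection idea, assuming a reference estimate of the cover is available (non-blind setting): pixels are split into variance classes via a local Gaussian model, and a detection statistic is computed only from channels whose variance is low enough. The thresholds and the statistic are illustrative, not the paper's:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def select_channels_and_detect(image, cover_ref, n_classes=8, max_var=50.0):
    """Per-channel detection statistics, using only reliable (low-variance) channels."""
    img = image.astype(np.float64)
    ref = cover_ref.astype(np.float64)
    local_mean = uniform_filter(ref, 5)
    local_var = np.maximum(uniform_filter(ref**2, 5) - local_mean**2, 1e-6)
    # Split pixels into channels by local-variance quantiles (source splitting)
    edges = np.quantile(local_var, np.linspace(0, 1, n_classes + 1))
    stats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (local_var >= lo) & (local_var < hi)
        if mask.sum() == 0 or local_var[mask].mean() > max_var:
            continue                                  # skip unreliable channels
        residual = img[mask] - ref[mask]
        stats.append(residual.mean() / (residual.std() + 1e-12))
    return stats

# Usage with a toy stego/reference pair:
rng = np.random.default_rng(6)
cover = rng.normal(128, 10, size=(128, 128))
stego = cover + rng.choice([0.0, 1.0], size=cover.shape)       # toy additive embedding
print(select_channels_and_detect(stego, cover))
```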
Storage and Retrieval Methods and Applications for Multimedia 2004, 2003
In this paper a novel "Smart Media" concept for semantic-based multimedia security and management is proposed. This concept is based on interactive object segmentation (considered as side information in a visual human-computer interface) with hidden object-based annotations. An information-theoretic formalism is introduced that considers the human-computer interface as a multiple access channel. We do not consider an image as a set of pixels but rather as a set of annotated regions that correspond to objects or their parts, where these objects are associated with some hidden descriptive text about their features. The presented approach to "semantic" segmentation is addressed by means of the human-computer interface, which allows a user to easily incorporate information related to image objects and to store it in a secure way. Since each selected image object carries its own embedded description, this makes it self-contained and formally independent from the particular image format used for storage in image databases. The proposed object-based hidden descriptors are invariant to changes of the image filename and/or image headers, and are resistant to object cropping/insertion operations, which are usual in multimedia processing and management. This is well harmonized with the "Smart Media" concept, where the image contains additional information about itself, and where this information is securely integrated inside the image while remaining perceptually invisible.
Lecture Notes in Computer Science, 2001
… based on Watson's metric [25] was proposed for determining the visual quality of a watermarked image … This paper falls within the scope of the current Certimark European project whose central aim is … The aim of this benchmark is not to invalidate the benchmark already proposed by …
Lecture Notes in Computer Science, 2005
The main goal of this tutorial is to review the theory and design the worst case additive attack (WCAA) for |M|-ary quantization-based data-hiding methods, using as performance criteria the error probability and the maximum achievable rate of reliable communications. Our analysis focuses on the practical scheme known as distortion-compensated dither modulation (DC-DM). From the mathematical point of view, the problem of the worst case attack (WCA) design using probability of error as a cost function is formulated as the maximization of the average probability of error subject to the introduced distortion for a given decoding rule. When mutual information is selected as a cost function, a solution to the minimization problem should provide an attacking noise probability density function (pdf) that maximally decreases the rate of reliable communications for an arbitrary decoder structure. The obtained results demonstrate that, within the class of additive attacks, the developed attack leads to a stronger performance decrease for the considered class of embedding techniques than the additive white Gaussian or uniform noise attacks.
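For reference, a minimal sketch of scalar DC-DM embedding and minimum-distance decoding, the embedding scheme whose worst case additive attack is analyzed; the step size, compensation parameter and noise level below are illustrative:

```python
# Sketch: each host sample is quantized to a message-dependent dithered lattice
# and only a fraction alpha of the quantization error is added back (distortion
# compensation). Decoding picks the message whose lattice is closest.
import numpy as np

def dcdm_embed(host, message_bits, delta=8.0, alpha=0.6, dither=0.0):
    """Embed one bit per host sample using binary scalar DC-DM."""
    host = np.asarray(host, dtype=np.float64)
    d = dither + np.asarray(message_bits) * delta / 2.0          # message-dependent dither
    quantized = np.round((host - d) / delta) * delta + d
    return host + alpha * (quantized - host)

def dcdm_decode(received, delta=8.0, dither=0.0):
    """Minimum-distance decoding of the embedded bit for each sample."""
    received = np.asarray(received, dtype=np.float64)
    dists = []
    for bit in (0, 1):
        d = dither + bit * delta / 2.0
        q = np.round((received - d) / delta) * delta + d
        dists.append(np.abs(received - q))
    return np.argmin(np.stack(dists), axis=0)

# Usage: embed, apply an additive attack, decode.
rng = np.random.default_rng(3)
host = rng.normal(0, 32, size=1000)
bits = rng.integers(0, 2, size=1000)
stego = dcdm_embed(host, bits)
attacked = stego + rng.normal(0, 1.5, size=1000)                 # additive noise attack
ber = np.mean(dcdm_decode(attacked) != bits)
print(f"bit error rate: {ber:.3f}")
```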
Lecture Notes in Computer Science, 2008
The main goal of this study consists in the development of the worst case additive attack (WCAA) for |M|-ary quantization-based data-hiding methods, using as design criteria the error probability and the maximum achievable rate of reliable communications. Our analysis focuses on the practical scheme known as distortion-compensated dither modulation (DC-DM). From the mathematical point of view, the problem of the worst case attack (WCA) design using probability of error as a cost function is formulated as the maximization of the average probability of error subject to the introduced distortion for a given decoding rule. When mutual information is selected as a cost function, a solution of the minimization problem should provide an attacking noise probability density function (pdf) that maximally decreases the rate of reliable communications for an arbitrary decoder structure. The obtained results demonstrate that, within the class of additive attacks, the developed attack leads to a stronger performance decrease for the considered class of embedding techniques than the additive white Gaussian or uniform noise attacks.
Visual Communications and Image Processing 2004, 2004
In this paper we advocate an image compression technique within the scope of the distributed source coding framework. The novelty of the proposed approach is twofold: classical image compression is considered from the position of source coding with side information and, contrary to existing scenarios where side information is given explicitly, the side information is created based on a deterministic approximation of local image features. We consider an image in the transform domain as a realization of a source with a bounded codebook of symbols where each symbol represents a particular edge shape. The codebook is image-independent and plays the role of an auxiliary source. Due to the partial availability of side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate a possible gain over the solutions where side information is either unavailable or available only at the decoder. Finally, we present a practical compression algorithm for passport photo images based on our concept that demonstrates superior performance in the very low bit rate regime.
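A minimal sketch of how such side information could be generated, under simplified assumptions: each block is matched against a small image-independent codebook of canonical edge shapes (plain pixel blocks are used here instead of a transform domain for brevity), and the best match plays the role of the auxiliary source. The codebook construction is illustrative, not the paper's:

```python
import numpy as np

def build_edge_codebook(block=8):
    """Codebook of step edges at a few orientations plus a flat block."""
    yy, xx = np.mgrid[0:block, 0:block]
    shapes = [np.ones((block, block)) * 0.5]                    # flat block
    for theta in np.linspace(0, np.pi, 8, endpoint=False):
        n = np.cos(theta) * (xx - block / 2) + np.sin(theta) * (yy - block / 2)
        shapes.append((n > 0).astype(np.float64))               # oriented step edge
    return np.stack(shapes)

def side_information(block, codebook):
    """Index and scaled codeword best approximating the block (least-squares gain)."""
    b = block - block.mean()
    best_idx, best_err, best_approx = 0, np.inf, np.zeros_like(b)
    for i, c in enumerate(codebook):
        c0 = c - c.mean()
        scale = (b * c0).sum() / ((c0 * c0).sum() + 1e-12)
        approx = scale * c0
        err = ((b - approx) ** 2).sum()
        if err < best_err:
            best_idx, best_err, best_approx = i, err, approx
    return best_idx, best_approx + block.mean()

# Usage: the residual (block minus approximation) is what would actually be coded.
rng = np.random.default_rng(4)
codebook = build_edge_codebook()
test_block = 100 * (np.mgrid[0:8, 0:8][1] > 3) + rng.normal(0, 2, (8, 8))
idx, approx = side_information(test_block, codebook)
residual = test_block - approx
```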
Proceeding of the 8th workshop on Multimedia and security - MM&Sec '06, 2006
Reversibility of data-hiding refers to the reconstruction of the original host data at the decoder from the stego data. Previous works on the subject have concentrated on the reversibility of data-hiding techniques from multimedia perspectives. However, from the security point of view, which to our knowledge has not been exploited in existing studies, reversibility could be used by an attacker to remove the complete trace of watermark data from the stego data, in the sense of designing the worst case attack. Thus, the aim of this paper is to analyze the reversibility of data-hiding techniques based on random binning from the security perspective.
IEEE 6th Workshop on Multimedia Signal Processing, 2004., 2004
In the scope of quantization-based watermarking techniques and additive attacks, there exists a common belief that the worst case attack (WCA) is given by additive white Gaussian noise (AWGN). Nevertheless, it has not been proved that AWGN is indeed the WCA within the class of additive attacks against quantization-based watermarking. In this paper, the analysis of the WCA is theoretically developed with probability of error as a cost function. The adopted approach includes the possibility of masking the attack by a target probability density function (PDF) in order to trick smart decoding. The developed attack upper bounds the probability of error for quantization-based embedding schemes within the class of additive attacks.
IEEE 6th Workshop on Multimedia Signal Processing, 2004., 2004
By abandoning the assumption of an infinite document-to-watermark ratio, we recompute the achievable rates for Eggers's Scalar Costa Scheme (SCS, also known as Scalar Distortion-Compensated Dither Modulation) and show, as opposed to the results reported by Eggers, that the achievable rates of SCS are always larger than those of spread spectrum (SS). Moreover, we show that for small watermark-to-noise ratios, SCS becomes equivalent to a two-centroid problem, thus revealing interesting relations with SS and with Malvar's Improved Spread Spectrum (ISS). We also show an interesting behavior of the optimal distortion compensation parameter. All these results aim at filling an existing gap in watermarking theory and have important consequences for the design of efficient decoders for data-hiding problems.
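A minimal sketch of how the achievable rate of binary SCS can be estimated numerically: a histogram-based Monte Carlo estimate of the mutual information between the embedded bit and the modulo-reduced received value, under the usual flat-host approximation. Parameter choices are illustrative:

```python
import numpy as np

def scs_rate(alpha, delta=1.0, wnr_db=0.0, n=200_000, bins=200, seed=0):
    """Histogram estimate of I(d; y mod delta) for binary SCS at a given WNR."""
    rng = np.random.default_rng(seed)
    sigma_w2 = (alpha ** 2) * (delta ** 2) / 12.0               # embedding distortion
    sigma_n2 = sigma_w2 / (10 ** (wnr_db / 10.0))               # noise power from WNR
    d = rng.integers(0, 2, size=n)                               # embedded bits
    x = rng.uniform(0, 100 * delta, size=n)                      # flat host
    dither = d * delta / 2.0
    q = np.round((x - dither) / delta) * delta + dither
    y = x + alpha * (q - x) + rng.normal(0, np.sqrt(sigma_n2), size=n)
    r = np.mod(y, delta)                                         # extracted statistic
    # Mutual information I(d; r) = H(r) - H(r | d) from histograms
    edges = np.linspace(0, delta, bins + 1)

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    p_r, _ = np.histogram(r, bins=edges)
    h_r = entropy(p_r / n)
    h_r_given_d = 0.0
    for bit in (0, 1):
        sel = d == bit
        p_rb, _ = np.histogram(r[sel], bins=edges)
        h_r_given_d += sel.mean() * entropy(p_rb / sel.sum())
    return max(h_r - h_r_given_d, 0.0)

# Usage: compare two distortion compensation parameters at WNR = 0 dB.
print(scs_rate(alpha=0.5), scs_rate(alpha=0.7))
```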
Security, Steganography, and Watermarking of Multimedia Contents VIII, 2006
In this paper we consider the problem of document authentication in electronic and printed forms. We formulate this problem from an information-theoretic perspective and present joint source-channel coding theorems showing the performance limits in such protocols. We analyze the security of document authentication methods and present the optimal attacking strategies with corresponding complexity estimates that, contrary to existing studies, crucially rely on the information leaked by the authentication protocol. Finally, we present the results of an experimental validation of the developed concept that justifies the practical efficiency of the elaborated framework.
Security, Steganography, and Watermarking of Multimedia Contents VIII, 2006
In this paper, we propose a new theoretical framework for the data-hiding problem of digital and printed text documents. We explain how this problem can be seen as an instance of the well-known Gel'fand-Pinsker problem. The main idea behind this interpretation is to consider a text character as a data structure consisting of multiple quantifiable features such as shape, position, orientation, size, color, etc. We also introduce color quantization, a new semi-fragile text data-hiding method that is fully automatable, has a high information embedding rate, and can be applied to both digital and printed text documents. The main idea of this method is to quantize the color or luminance intensity of each character in such a manner that the human visual system is not able to distinguish between the original and quantized characters, while this can easily be done by a specialized reader machine. We also describe halftone quantization, a related method that applies mainly to printed text documents. Since these methods may not be completely robust to printing and scanning, an outer coding layer is proposed to solve this issue. Finally, we describe a practical implementation of the color quantization method and present experimental results for comparison with other existing methods.
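A minimal sketch of the luminance-quantization idea under simplified assumptions: each character's ink is rendered at one of a few nearly indistinguishable gray levels, and the embedded symbol is recovered by quantizing the measured mean ink luminance. The level spacing and rendering model are illustrative, not the paper's calibrated values:

```python
import numpy as np

LEVELS = np.array([0, 4, 8, 12])        # near-black gray levels, one per 2-bit symbol

def embed_symbol(glyph_mask, symbol):
    """Render a character whose ink luminance encodes a 2-bit symbol.

    glyph_mask: boolean array, True where the character's ink is.
    Returns an 8-bit grayscale patch (255 = white background).
    """
    patch = np.full(glyph_mask.shape, 255, dtype=np.uint8)
    patch[glyph_mask] = LEVELS[symbol]
    return patch

def decode_symbol(patch, glyph_mask):
    """Recover the symbol as the quantization level nearest to the mean ink luminance."""
    ink = patch[glyph_mask].astype(np.float64)
    return int(np.argmin(np.abs(LEVELS - ink.mean())))

# Usage with a toy rectangular "glyph" and mild sensor noise; should print 2.
mask = np.zeros((12, 8), dtype=bool)
mask[2:10, 2:6] = True
noisy = embed_symbol(mask, 2).astype(np.float64) + np.random.default_rng(5).normal(0, 1.5, mask.shape)
print(decode_symbol(np.clip(noisy, 0, 255).astype(np.uint8), mask))
```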
Lecture Notes in Computer Science, 2000
Transfer modulation function of the HVS: J. Ruanaidh, T. Pun (Signal Processing, No. 3, 1998) - CUI
Luminance sensitivity: M. Kutter (Proc. SPIE, 1998) - EPFL
Luminance and texture masking (L&T): M. Kankanhalli, R. Ramakrishnan (ACM Multimedia, 1998); F. Bartolini, M. Barni, V. Cappellini, A. Piva (ICIP, 1998), using results of perceptual image compression (N. Jayant, J. Johnston, R. Safranek, Proc. IEEE, 1993)
Lecture Notes in Computer Science, 2005
In this work, we consider the text data-hiding problem as a particular instance of the well-known Gel'fand-Pinsker problem. The text, where some message m ∈ M is to be hidden, is represented by x and called the cover text. Each component x_i, i = 1, 2, ..., N, of x represents one character from this text. Here, we define a character as an element from a given language alphabet (e.g. the Latin alphabet {A, B, ..., Z}). To be more precise, we conceive each character x_i as a data structure consisting of multiple component fields (features): name, shape, position, orientation, size, color, etc.
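A minimal sketch of this character-as-data-structure view; the field names mirror the features listed above, and the values are purely illustrative:

```python
# Sketch: each cover-text character x_i carries several quantifiable features,
# any subset of which can serve as the host signal for Gel'fand-Pinsker-style
# embedding within perceptually acceptable bounds.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Character:
    name: str                      # which letter of the alphabet, e.g. "A"
    shape: int                     # index of the glyph/shape variant
    position: Tuple[float, float]  # (x, y) placement on the page
    orientation: float             # rotation in degrees
    size: float                    # font size in points
    color: Tuple[int, int, int]    # RGB color of the glyph

# A cover text is then a sequence x = (x_1, ..., x_N) of such structures.
cover_text = [
    Character("H", 0, (72.0, 700.0), 0.0, 11.0, (0, 0, 0)),
    Character("i", 0, (80.5, 700.0), 0.0, 11.0, (0, 0, 0)),
]
```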