Colour Constancy for Images of Non-Uniformly Lit Scenes
Related papers
Color Constancy Algorithm for Mixed-Illuminant Scene Images
IEEE Access
The intrinsic properties of the ambient illuminant significantly alter the true colors of objects within an image. Most existing color constancy algorithms assume a uniformly lit scene across the image. The performance of these algorithms deteriorates considerably in the presence of mixed illuminants. Hence, a potential solution to this problem is the consideration of a combination of image regional color constancy weighting factors (CCWFs) in determining the CCWF for each pixel. This paper presents a color constancy algorithm for mixed-illuminant scene images. The proposed algorithm splits the input image into multiple segments and uses the normalized average absolute difference of each segment as a measure for determining whether the segment's pixels contain reliable color constancy information. The Max-RGB principle is then used to calculate the initial weighting factors for each selected segment. The CCWF for each image pixel is then calculated by combining the weighting factors of the selected segments, adjusted by the normalized Euclidean distances of the pixel from the centers of the selected segments. Experimental results on images from five benchmark data sets show that the proposed algorithm subjectively outperforms the state-of-the-art techniques, while its objective performance is comparable with those of the state-of-the-art techniques.
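The Max-RGB step of the pipeline described above can be sketched in isolation (a simplified illustration; the function name, array shapes, and toy data are ours, not the paper's — segmentation and the per-pixel fusion of weighting factors are omitted):

```python
import numpy as np

def max_rgb_gains(segment):
    """Per-channel correction gains for one image segment using the
    Max-RGB principle: the brightest response in each channel is
    assumed to reflect the illuminant colour."""
    # segment: H x W x 3 float array with values in [0, 1]
    illuminant = segment.reshape(-1, 3).max(axis=0)  # per-channel maxima
    illuminant = np.clip(illuminant, 1e-6, None)     # avoid division by zero
    return illuminant.max() / illuminant             # gains that equalise channels

# Toy example: a warm illuminant attenuates the G and B channels.
seg = np.random.default_rng(0).random((8, 8, 3)) * np.array([1.0, 0.7, 0.5])
gains = max_rgb_gains(seg)
corrected = np.clip(seg * gains, 0.0, 1.0)
```

In the full algorithm these per-segment gains would then be blended per pixel according to the pixel's distance from each segment centre.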
Evaluation of Color Constancy Algorithms
This paper presents a review of various color constancy (CC) techniques. CC is the task of accounting for the effect of the light source on a digital image. The scene recorded by a camera depends on three factors: the physical properties of the object, the illumination incident on the scene, and the characteristics of the camera. The objective of CC is to discount the effect of the illuminant. Many existing methods, such as the Grey-world method, physics-based CC, and edge-based methods, have been used to achieve CC for objects affected by different light sources. All these methods share the obvious limitation of assuming that the light source across the scene is spectrally uniform. This assumption is often violated, as there might be more than one light source illuminating the scene. The overall objective of this paper is to find the gaps in earlier work on CC.
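Of the statistics-based methods listed above, the Grey-world method is the simplest to illustrate: it assumes the average scene reflectance is achromatic, so the per-channel means of the image estimate the illuminant. A minimal sketch (function name and toy data are illustrative):

```python
import numpy as np

def grey_world(image):
    """Grey-world illuminant estimate: under the assumption that the
    average scene reflectance is grey, the per-channel means of the
    image are proportional to the illuminant colour."""
    means = image.reshape(-1, 3).mean(axis=0)
    means = np.clip(means, 1e-6, None)
    return means / np.linalg.norm(means)  # unit-length illuminant estimate

# A scene lit by a warm (reddish) light yields a red-shifted estimate.
img = np.full((4, 4, 3), [0.8, 0.5, 0.3])
est = grey_world(img)
```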
Improved Color Constancy Algorithms Using Fuzzy Technique
2019
Color constancy is the ability to recognize the color of objects independent of the light source luminance. Color processing, and especially color constancy, plays an important role in computer vision and image processing applications such as image retrieval, image classification, color object recognition, and object tracking. Color constancy is usually understood as the task of finding descriptors which are invariant to illumination changes in surfaces of a scene, while correcting the colors in an image is considered a separate phase. This paper proposes a new combinational method based on fuzzy logic and clustering to estimate the chromaticity of the light source, the major step of color constancy. In this algorithm, after fuzzification of different image features, a clustering algorithm estimates the light-source illuminant. To verify the proposed method, four well-known algorithms were selected for comparison; in selecting these methods, those with better reported performance than other methods were chosen. It is shown in this article that the proposed approach performs better than the other methods for color constancy most of the time.
A Novel Colour-Constancy Algorithm: A Mixture of Existing Algorithms
2012
Colour constancy algorithms attempt to provide an accurate colour representation of images independent of the illuminant colour used for scene illumination. In this paper we investigate well-known and state-of-the-art colour-constancy algorithms. We then select a few of these algorithms and combine them using a weighted-sum approach. Four methods are involved in the weights estimation. The first method uniformly distributes the weights among the algorithms. The second one uses a learning set of images to train the weights based on errors. The third method searches for a linear combination of all methods' outcomes that minimises the error. The fourth one trains a continuous perceptron in order to find the optimum combination of the methods. In all four approaches, we used a set of 60 images. Each of these images was taken with a GretagMacbeth colour checker card in the scene, in order to make a quantitative evaluation of colour-constancy algorithms possible. The results obtained show that our proposed method outperforms the individual algorithms. The best results were obtained using the weights for the linear combination and the trained continuous perceptron to combine the algorithms.
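The third weighting method, a linear combination of the algorithms' outcomes that minimises the error, can be sketched as a least-squares fit. The data here are synthetic stand-ins for the paper's 60-image training set, and all names are ours:

```python
import numpy as np

# Hypothetical per-image illuminant estimates from three algorithms,
# modelled as ground truth plus noise (stand-ins for real estimates).
rng = np.random.default_rng(1)
truth = rng.random((60, 3))                                   # ground-truth illuminants
estimates = [truth + 0.05 * rng.standard_normal((60, 3)) for _ in range(3)]

# Find the weights of the linear combination of the algorithms'
# outcomes that minimises the squared error against the ground truth.
A = np.stack([e.ravel() for e in estimates], axis=1)          # (60*3) x n_algorithms
w, *_ = np.linalg.lstsq(A, truth.ravel(), rcond=None)

# Combined estimate: weighted sum of the individual estimates.
combined = sum(wi * e for wi, e in zip(w, estimates))
```

Because each individual algorithm corresponds to a one-hot weight vector inside the search space, the fitted combination can never do worse than the best single algorithm on the training data.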
Color Constancy for Uniform and Non-Uniform Illuminant Using Image Texture
IEEE Access
Color constancy is the capability to observe the true color of a scene from its image regardless of the scene's illuminant. It is a significant part of the digital image processing pipeline and is utilized when the true color of an object is required. Most existing color constancy methods assume a uniform illuminant across the whole scene of the image, which is not always the case. Hence, their performances are influenced by the presence of multiple light sources. This paper presents a color constancy adjustment technique that uses the texture of the image pixels to select pixels with sufficient color variation to be used for image color correction. The proposed technique applies a histogram-based algorithm to determine the appropriate number of segments to efficiently split the image into its key color variation areas. The K-means++ algorithm is then used to divide the input image into the predetermined number of segments. The proposed algorithm identifies pixels with sufficient color variation in each segment using the entropies of the pixels, which represent the segment's texture. Then, the algorithm calculates the initial color constancy adjustment factors for each segment by applying an existing statistics-based color constancy algorithm on the selected pixels. Finally, the proposed method computes color adjustment factors per pixel within the image by fusing the initial color adjustment factors of all segments, which are regulated by the Euclidean distances of each pixel from the centers of gravity of the segments. The experimental results on benchmark single- and multiple-illuminant image datasets show that the images that are obtained using the proposed algorithm have significantly higher subjective and very competitive objective qualities compared to those that are obtained with the state-of-the-art techniques.
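The entropy-based texture measure used for pixel selection can be illustrated in isolation. The window size, histogram binning, and function name below are our assumptions for the sketch, not values taken from the paper:

```python
import numpy as np

def patch_entropy(gray, i, j, half=2):
    """Shannon entropy of the grey-level histogram in a small window
    around pixel (i, j), used here as a simple texture measure:
    flat regions score low, textured regions score high."""
    patch = gray[max(i - half, 0):i + half + 1, max(j - half, 0):j + half + 1]
    hist, _ = np.histogram(patch, bins=16, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                          # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(2)
gray = np.full((16, 16), 0.5)             # flat upper half
gray[8:, :] = rng.random((8, 16))         # textured lower half

flat = patch_entropy(gray, 4, 8)          # entropy 0: single grey level
textured = patch_entropy(gray, 12, 8)     # high entropy: many grey levels
```

Thresholding such an entropy map would keep only pixels in textured areas, mirroring the selection of pixels with sufficient colour variation described above.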
A comparison of computational color constancy Algorithms. II. Experiments with image data
IEEE Transactions on Image Processing, 2002
We introduce a context for testing computational color constancy, specify our approach to the implementation of a number of the leading algorithms, and report the results of three experiments using synthesized data. Experiments using synthesized data are important because the ground truth is known, possible confounds due to camera characterization and pre-processing are absent, and various factors affecting color constancy can be efficiently investigated because they can be manipulated individually and precisely.
Automatic color constancy algorithm selection and combination
Pattern Recognition, 2010
In this work, we investigate how illuminant estimation techniques can be improved taking into account intrinsic, low level properties of the images. We show how these properties can be used to drive, given a set of illuminant estimation algorithms, the selection of the best algorithm for a given image. The algorithm selection is made by a decision forest composed of several trees on the basis of the values of a set of heterogeneous features. The features represent the image content in terms of low-level visual properties. The trees are trained to select the algorithm that minimizes the expected error in illuminant estimation. We also designed a combination strategy that estimates the illuminant as a weighted sum of the different algorithms' estimations. Experimental results on the widely used Ciurea and Funt dataset demonstrate the effectiveness of our approach.
A Perceptual Comparison of Distance Measures for Color Constancy Algorithms
Lecture Notes in Computer Science, 2008
Color constancy is the ability to measure image features independent of the color of the scene illuminant and is an important topic in color and computer vision. As many color constancy algorithms exist, different distance measures are used to compute their accuracy. In general, these distance measures are based on mathematical principles such as the angular error and the Euclidean distance. However, it is unknown to what extent these distance measures correlate with human vision. Therefore, in this paper, a taxonomy of different distance measures for color constancy algorithms is presented. The main goal is to analyze the correlation between the observed quality of the output images and the different distance measures for illuminant estimates. The output images are the color-corrected images obtained using the illuminant estimates of the color constancy algorithms, and the quality of these images is determined by human observers. The distance measures are analyzed with respect to how well they mimic differences in the color naturalness of images as perceived by humans.
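The angular error mentioned above is the most widely used of these measures: it is the angle between the estimated and ground-truth illuminant vectors, so it ignores the overall brightness of the estimate. A minimal sketch (function name is ours):

```python
import numpy as np

def angular_error_degrees(est, truth):
    """Angle in degrees between the estimated and ground-truth
    illuminant vectors; invariant to the scale of either vector."""
    est = np.asarray(est, dtype=float)
    truth = np.asarray(truth, dtype=float)
    cos = np.dot(est, truth) / (np.linalg.norm(est) * np.linalg.norm(truth))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Scaling an estimate does not change the error; only direction matters.
perfect = angular_error_degrees([2.0, 2.0, 2.0], [1.0, 1.0, 1.0])
orthogonal = angular_error_degrees([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```

The paper's question is precisely whether such mathematically convenient measures track the perceived naturalness differences reported by human observers.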