Color Constancy Algorithm for Mixed-Illuminant Scene Images

Evaluation of Color Constancy Algorithms

This paper presents a review of various color constancy (CC) techniques. CC methods counteract the effect of the light source on the colors recorded in a digital image. The scene recorded by a camera depends on three factors: the physical properties of the objects, the illumination incident on the scene, and the characteristics of the camera. The objective of CC is to account for the effect of the illuminant. Many existing methods, such as the Grey-World method, physics-based CC, and edge-based methods, have been used to estimate and correct the colors of objects affected by different light sources. All of these methods share an obvious limitation: they assume that the light source across the scene is spectrally uniform. This assumption is often violated, as more than one light source may illuminate the scene. The overall objective of this paper is to identify the gaps in earlier work on CC.
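As a concrete illustration of the Grey-World method mentioned above, a minimal sketch might look as follows; it assumes an 8-bit RGB image stored as a NumPy array and uses simple von Kries-style channel gains:

```python
import numpy as np

def grey_world(image):
    """Correct an RGB image under the grey-world assumption: the
    average reflectance of a scene is achromatic, so each channel
    is scaled to match the global mean intensity."""
    img = image.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)   # per-channel mean
    grey = channel_means.mean()                        # target grey level
    gains = grey / channel_means                       # von Kries-style gains
    corrected = img * gains                            # apply gains per channel
    return np.clip(corrected, 0, 255).astype(np.uint8)
```

After correction, the three channel means coincide, which removes a global colour cast but, as the review notes, fails when the scene is lit by more than one illuminant.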

Colour Constancy for Image of Non-Uniformly Lit Scenes

Sensors

Digital camera sensors are designed to record all incident light from a captured scene, but they are unable to distinguish between the colour of the light source and the true colour of objects. The resulting image therefore exhibits a colour cast toward the colour of the light source. This paper presents a colour constancy algorithm for images of scenes lit by non-uniform light sources. The proposed algorithm uses a histogram-based method to determine the number of colour regions, then applies the K-means++ algorithm to divide the input image into segments. It computes the Normalized Average Absolute Difference (NAAD) of each segment and uses it as a measure of whether the segment has sufficient colour variation. Initial colour constancy adjustment factors are then calculated for each segment with sufficient colour variation. The Colour Constancy Adjustment Weighting Factors (CCAWF) for each pixel of the image are determined by fusing the C...
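The NAAD test described above can be sketched as follows; the per-channel formulation and the threshold value are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def naad(segment_channel, eps=1e-8):
    """Normalized Average Absolute Difference of one channel of a
    segment: mean |x - mean(x)| divided by mean(x). A low NAAD marks
    a near-uniform segment with too little colour variation to trust
    for illuminant estimation."""
    x = np.asarray(segment_channel, dtype=np.float64)
    mu = x.mean()
    return np.abs(x - mu).mean() / (mu + eps)

def has_colour_variation(segment_rgb, threshold=0.05):
    """Flag a segment as usable if every channel's NAAD exceeds an
    assumed, tunable threshold."""
    return all(naad(segment_rgb[..., c]) > threshold for c in range(3))
```

A perfectly uniform segment yields a NAAD of zero and is excluded, so the adjustment factors are estimated only from segments that carry genuine colour variation.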

Automatic color constancy algorithm selection and combination

Pattern Recognition, 2010

In this work, we investigate how illuminant estimation techniques can be improved by taking into account intrinsic, low-level properties of the images. We show how these properties can be used to drive, given a set of illuminant estimation algorithms, the selection of the best algorithm for a given image. The algorithm selection is made by a decision forest composed of several trees, on the basis of the values of a set of heterogeneous features. The features represent the image content in terms of low-level visual properties. The trees are trained to select the algorithm that minimizes the expected error in illuminant estimation. We also designed a combination strategy that estimates the illuminant as a weighted sum of the different algorithms' estimates. Experimental results on the widely used Ciurea and Funt dataset demonstrate the effectiveness of our approach.
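The weighted-sum combination step can be sketched as below; here the weights are simply supplied as inputs, whereas in the paper they are produced by the trained decision forest:

```python
import numpy as np

def combine_estimates(estimates, weights):
    """Fuse several illuminant estimates (RGB vectors, one per
    algorithm) into a single estimate by a weighted sum, then
    renormalize to unit length."""
    e = np.asarray(estimates, dtype=np.float64)   # shape: (n_algorithms, 3)
    w = np.asarray(weights, dtype=np.float64)     # shape: (n_algorithms,)
    fused = (w[:, None] * e).sum(axis=0)          # weighted sum per channel
    return fused / np.linalg.norm(fused)          # unit-norm illuminant colour
```

Working with unit-norm illuminant vectors is the usual convention, since angular error between the estimate and the ground truth is the standard evaluation metric in this literature.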

A Novel Colour-Constancy Algorithm: A Mixture of Existing Algorithms

2012

Colour constancy algorithms attempt to provide an accurate colour representation of images independent of the illuminant colour used for scene illumination. In this paper we investigate well-known and state-of-the-art colour-constancy algorithms. We then select a few of these algorithms and combine them using a weighted-sum approach. Four methods are involved in the weight estimation. The first method distributes the weights uniformly among the algorithms. The second uses a learning set of images to train the weights based on their errors. The third method searches for the linear combination of all methods' outcomes that minimises the error. The fourth trains a continuous perceptron in order to find an optimum combination of the methods. In all four approaches, we used a set of 60 images. Each of these images was taken with a GretagMacbeth colour checker card in the scene, in order to allow quantitative evaluation of colour-constancy algorithms. The results obtained show that our proposed method outperforms the individual algorithms. The best results were obtained using the weights of the linear combination and the trained continuous perceptron to combine the algorithms.
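The third weight-estimation method, a linear combination that minimises the error over a training set, can be illustrated with ordinary least squares; the array shapes and the use of `numpy.linalg.lstsq` are assumptions for this sketch:

```python
import numpy as np

def fit_combination_weights(estimates, ground_truth):
    """Fit linear-combination weights that minimise the squared error
    between combined illuminant estimates and ground truth over a
    training set.

    estimates:    (n_images, n_algorithms, 3) per-algorithm estimates
    ground_truth: (n_images, 3) measured illuminant colours"""
    E = np.asarray(estimates, dtype=np.float64)
    g = np.asarray(ground_truth, dtype=np.float64)
    n_img, n_alg, _ = E.shape
    # One row per (image, channel) pair; one column per algorithm.
    A = E.transpose(0, 2, 1).reshape(n_img * 3, n_alg)
    b = g.reshape(n_img * 3)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w
```

When the ground truth is an exact linear combination of the individual estimates, the fitted weights recover it; in practice the solution is the least-squares compromise across the learning set.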

Color Constancy for Uniform and Non-Uniform Illuminant Using Image Texture

IEEE Access

Color constancy is the capability to observe the true color of a scene from its image regardless of the scene's illuminant. It is a significant part of the digital image processing pipeline and is utilized when the true color of an object is required. Most existing color constancy methods assume a uniform illuminant across the whole scene of the image, which is not always the case. Hence, their performance is influenced by the presence of multiple light sources. This paper presents a color constancy adjustment technique that uses the texture of the image pixels to select pixels with sufficient color variation to be used for image color correction. The proposed technique applies a histogram-based algorithm to determine the appropriate number of segments to efficiently split the image into its key color variation areas. The K-means++ algorithm is then used to divide the input image into the predetermined number of segments. The proposed algorithm identifies pixels with sufficient color variation in each segment using the entropies of the pixels, which represent the segment's texture. Then, the algorithm calculates the initial color constancy adjustment factors for each segment by applying an existing statistics-based color constancy algorithm on the selected pixels. Finally, the proposed method computes color adjustment factors per pixel within the image by fusing the initial color adjustment factors of all segments, regulated by the Euclidean distances of each pixel from the centers of gravity of the segments. The experimental results on benchmark single- and multiple-illuminant image datasets show that the images obtained using the proposed algorithm have significantly higher subjective and very competitive objective qualities compared to those obtained with state-of-the-art techniques.
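The distance-regulated fusion step can be sketched as follows; inverse-distance weighting is an assumed stand-in for the paper's exact regulation scheme:

```python
import numpy as np

def fuse_segment_gains(pixel_xy, centers, segment_gains, eps=1e-8):
    """Per-pixel colour adjustment factors, fused from each segment's
    initial gains and weighted by inverse Euclidean distance from the
    pixel to the segment centres of gravity.

    pixel_xy:      (2,) pixel coordinates
    centers:       (n_segments, 2) segment centres of gravity
    segment_gains: (n_segments, 3) initial per-segment RGB gains"""
    d = np.linalg.norm(centers - np.asarray(pixel_xy, dtype=np.float64), axis=1)
    w = 1.0 / (d + eps)          # nearer segments get larger weights
    w /= w.sum()                 # normalize to a convex combination
    return (w[:, None] * np.asarray(segment_gains, dtype=np.float64)).sum(axis=0)
```

A pixel sitting at a segment's centre of gravity inherits essentially that segment's gains, while pixels between segments receive a smooth blend, which avoids visible seams at segment boundaries.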

A comparison of computational color constancy Algorithms. II. Experiments with image data

IEEE Transactions on Image Processing, 2002


Color Constancy for Scenes with Varying Illumination

Computer Vision and Image Understanding, 1997

We present an algorithm which uses information from both surface reflectance and illumination variation to solve for color constancy. Most color constancy algorithms assume that the illumination across a scene is constant, but this is very often not valid for real images. The method presented in this work identifies and removes the illumination variation, and in addition uses the variation to constrain the solution. The constraint is applied conjunctively to constraints found from surface reflectances. Thus the algorithm can provide good color constancy when there is sufficient variation in surface reflectances, or sufficient illumination variation, or a combination of both. We present the results of running the algorithm on several real scenes, and the results are very encouraging.

Weight-based colour constancy using contrast stretching

IET Image Processing, 2021

One of the main issues in colour image processing is that objects' colours change with the colour of the illumination source. Colour constancy methods modify the overall image colour as if it had been captured under natural-light illumination. Without colour constancy, colour would be an unreliable cue to object identity. Many colour constancy methods have been proposed to date, falling into two categories: statistical methods and learning-based methods. This paper presents a new statistical weighted algorithm for illuminant estimation. Weights are adjusted to highlight two key factors in the image for illuminant estimation, namely contrast and brightness; the focus is on the convex part of the contrast-stretching function used to create the weights. Moreover, a novel partitioning mechanism in the colour domain that improves efficiency is proposed. The proposed algorithm is evaluated on two benchmark linear image databases according to two evaluation metrics. The experimental results show that it is competitive with state-of-the-art statistical methods. In addition to its low computational cost, it has the advantage of improving the efficiency of statistics-based algorithms for dark images and images with low brightness contrast. Moreover, it is robust to changes of camera type.
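A weighted statistical estimate of this kind can be sketched as below; the power-curve weighting and the gamma value are illustrative assumptions rather than the paper's exact contrast-stretching function:

```python
import numpy as np

def weighted_illuminant_estimate(image, gamma=2.0):
    """Statistical illuminant estimate in which each pixel's vote is
    weighted by a convex curve of its brightness, emphasising bright,
    high-contrast pixels over dark, flat regions."""
    img = image.astype(np.float64) / 255.0
    brightness = img.mean(axis=2)              # per-pixel brightness
    w = brightness ** gamma                    # convex weighting curve
    w_sum = w.sum() + 1e-8
    est = (img * w[..., None]).reshape(-1, 3).sum(axis=0) / w_sum
    return est / np.linalg.norm(est)           # unit-norm illuminant colour
```

With gamma = 0 this reduces to plain grey-world; increasing gamma shifts the estimate toward the brightest pixels, in the spirit of combining grey-world and max-RGB style cues.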

Color constancy and non-uniform illumination: Can existing algorithms work?

2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), 2011

The color and distribution of illuminants can significantly alter the appearance of a scene. The goal of color constancy (CC) is to remove the color bias introduced by the illuminants. Most existing CC algorithms assume a uniformly illuminated scene. However, more often than not, this assumption is an insufficient approximation of real-world illumination conditions (multiple light sources, shadows, interreflections, etc.). Thus, illumination should be determined locally, taking into consideration that multiple illuminants may be present. In this paper we investigate the suitability of adapting five state-of-the-art color constancy methods so that they can be used for local illuminant estimation. Given an arbitrary image, we segment it into superpixels of approximately similar color. Each of the methods is applied independently on every superpixel. For improved accuracy, these independent estimates are combined into a single illuminant-color value per superpixel. We evaluated different fusion methodologies. Our experiments indicate that the best performance is obtained by fusion strategies that combine the outputs of the estimators using regression.
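The per-superpixel estimation setup can be sketched as follows; grey-world stands in for any of the five adapted methods, and precomputed integer segment labels (e.g. from a superpixel algorithm such as SLIC) are assumed as input:

```python
import numpy as np

def local_illuminants(image, labels, n_segments):
    """Apply a simple grey-world estimator independently within each
    superpixel, yielding one unit-norm illuminant-colour estimate per
    segment.

    image:  (H, W, 3) RGB image
    labels: (H, W) integer superpixel labels in [0, n_segments)"""
    img = image.reshape(-1, 3).astype(np.float64)
    lab = np.asarray(labels).reshape(-1)
    ests = np.zeros((n_segments, 3))
    for s in range(n_segments):
        mean_rgb = img[lab == s].mean(axis=0)      # grey-world per segment
        ests[s] = mean_rgb / np.linalg.norm(mean_rgb)
    return ests
```

Each segment then carries its own illuminant estimate, which the fusion strategies described above combine into the final per-superpixel value.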

A comparison of computational color constancy algorithms. I. Methodology and experiments with synthesized data

IEEE Transactions on Image Processing, 2002

We introduce a context for testing computational color constancy, specify our approach to the implementation of a number of the leading algorithms, and report the results of three experiments using synthesized data. Experiments using synthesized data are important because the ground truth is known, possible confounds due to camera characterization and pre-processing are absent, and various factors affecting color constancy can be efficiently investigated because they can be manipulated individually and precisely.