A comparison of computational color constancy algorithms--part I: methodology and experiments with synthesized data
Related papers
A comparison of computational color constancy Algorithms. II. Experiments with image data
IEEE Transactions on Image Processing, 2002
We introduce a context for testing computational color constancy, specify our approach to the implementation of a number of the leading algorithms, and report the results of three experiments using synthesized data. Experiments using synthesized data are important because the ground truth is known, possible confounds due to camera characterization and pre-processing are absent, and various factors affecting color constancy can be efficiently investigated because they can be manipulated individually and precisely.
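One reason synthesized data provides exact ground truth is that a sensor response can be computed directly from an illuminant spectrum, a surface reflectance, and the camera sensitivities. The sketch below is illustrative only, assuming all spectra are sampled on a common wavelength grid held in NumPy arrays; the array names and the 10 nm spacing are assumptions, not details from the paper.

```python
import numpy as np

def synthesize_rgb(illuminant, reflectance, sensitivities, d_lambda=10.0):
    """Compute a camera RGB response from sampled spectra.

    illuminant:    (N,) spectral power distribution E(lambda)
    reflectance:   (N,) surface reflectance S(lambda), in [0, 1]
    sensitivities: (N, 3) camera sensor sensitivities R_k(lambda)
    d_lambda:      wavelength step of the sampling grid, in nm
    """
    # rho_k = sum over lambda of E(lambda) * S(lambda) * R_k(lambda) * d_lambda
    radiance = illuminant * reflectance           # light reflected toward the camera
    return radiance @ sensitivities * d_lambda    # (3,) RGB response

# Toy example on a coarse 31-sample grid (e.g. 400-700 nm); the values are made up.
rng = np.random.default_rng(0)
E = rng.uniform(0.5, 1.5, size=31)        # stand-in illuminant SPD
S = rng.uniform(0.0, 1.0, size=31)        # stand-in surface reflectance
R = rng.uniform(0.0, 1.0, size=(31, 3))   # stand-in camera sensitivities
print(synthesize_rgb(E, S, R))
```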
Evaluation of Color Constancy Algorithms
This paper presents a review of various color constancy (CC) techniques. CC methods aim to compensate for the effect of different light sources on a digital image. The scene recorded by a camera depends on three factors: the physical properties of the objects, the illumination incident on the scene, and the characteristics of the camera. The objective of CC is to account for the effect of the illuminant. Many existing methods, such as the Grey-world method, physics-based CC, and edge-based methods, have been used to estimate the illuminant of scenes lit by different light sources. All of these methods share an obvious limitation: they assume that the light source across the scene is spectrally uniform. This assumption is often violated, as there may be more than one light source illuminating the scene. The overall objective of this paper is to identify the gaps in earlier work on CC.
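As an illustration of the uniform-illuminant assumption these methods share, the Grey-world estimate below is a minimal sketch, assuming a linear RGB image stored as an (H, W, 3) NumPy array; it is not code from the paper.

```python
import numpy as np

def grey_world_illuminant(image):
    """Grey-world estimate: the average scene reflectance is assumed achromatic,
    so the mean RGB of the image is taken as the illuminant colour.

    image: (H, W, 3) linear RGB array.
    Returns a unit-norm illuminant estimate of shape (3,).
    """
    est = image.reshape(-1, 3).mean(axis=0)
    return est / np.linalg.norm(est)

def correct_image(image, illuminant):
    """Von Kries style diagonal correction using the estimated illuminant."""
    gains = illuminant.mean() / illuminant   # scale each channel toward grey
    return image * gains                     # broadcasts over (H, W, 3)
```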
Computational Color Constancy: Survey and Experiments
IEEE Transactions on Image Processing, 2011
Computational color constancy is a fundamental prerequisite for many computer vision applications. This paper presents a survey of many recent developments and state-of-the-art methods. Several criteria are proposed that are used to assess the approaches. A taxonomy of existing algorithms is proposed and methods are separated into three groups: static methods, gamut-based methods, and learning-based methods. Further, the experimental setup is discussed, including an overview of publicly available data sets. Finally, various freely available methods, some of which are considered to be state-of-the-art, are evaluated on two data sets.
Color by correlation: a simple, unifying framework for color constancy
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001
This paper considers the problem of illuminant estimation: how, given an image of a scene recorded under an unknown light, we can recover an estimate of that light. Obtaining such an estimate is a central part of solving the color constancy problem, that is, of recovering an illuminant-independent representation of the reflectances in a scene. Thus, the work presented here will have applications in fields such as color-based object recognition and digital photography, where solving the color constancy problem is important. The work in this paper differs from much previous work in that, rather than attempting to recover a single estimate of the illuminant as many previous authors have done, we instead set out to recover a measure of the likelihood that each of a set of possible illuminants was the scene illuminant. We begin by determining which image colors can occur (and how these colors are distributed) under each of a set of possible lights. We discuss in the paper how, for a given camera, we can obtain this knowledge. We then correlate this information with the colors in a particular image to obtain a measure of the likelihood that each of the possible lights was the scene illuminant. Finally, we use this likelihood information to choose a single light as an estimate of the scene illuminant. Computation is expressed and performed in a generic correlation framework which we develop in this paper. We propose a new probabilistic instantiation of this correlation framework and we show that it delivers very good color constancy on both synthetic and real images. We further show that the proposed framework is rich enough to allow many existing algorithms to be expressed within it: the gray-world and gamut-mapping algorithms are presented in this framework and we also explore the relationship of these algorithms to other probabilistic and neural network approaches to color constancy.
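A rough sketch of the correlation idea follows: pre-compute, for each candidate light, the distribution of chromaticities observable under that light, then score each light by how much of the image's chromaticity content it explains. The histogram binning, the scoring by overlap, and all data structures below are assumptions made for illustration; the paper's actual instantiation differs in detail.

```python
import numpy as np

N_BINS = 32  # chromaticity histogram resolution (assumed)

def chromaticity_histogram(rgb):
    """Binary histogram of (r, g) chromaticities present in an image or gamut.
    rgb: (N, 3) array of sensor responses."""
    s = rgb.sum(axis=1, keepdims=True)
    rg = rgb[:, :2] / np.clip(s, 1e-6, None)      # r = R/(R+G+B), g = G/(R+G+B)
    idx = np.clip((rg * N_BINS).astype(int), 0, N_BINS - 1)
    hist = np.zeros((N_BINS, N_BINS))
    hist[idx[:, 0], idx[:, 1]] = 1.0              # this colour occurs in the set
    return hist

def most_likely_illuminant(image_rgb, gamuts_per_light):
    """Score each candidate illuminant by correlating its gamut with the image.

    gamuts_per_light: list of (N_BINS, N_BINS) histograms, one per candidate light,
    built offline from the colours observable under that light for a given camera.
    Returns the index of the highest-scoring illuminant.
    """
    img_hist = chromaticity_histogram(image_rgb)
    scores = [np.sum(img_hist * g) for g in gamuts_per_light]
    return int(np.argmax(scores))
```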
A Novel Colour-Constancy Algorithm: A Mixture of Existing Algorithms
2012
Colour constancy algorithms attempt to provide an accurate colour representation of images independent of the illuminant colour used for scene illumination. In this paper we investigate well-known and state-of-the-art colour-constancy algorithms. We then select a few of these algorithms and combine them using a weighted-sum approach. Four methods are involved in estimating the weights. The first method distributes the weights uniformly among the algorithms. The second uses a learning set of images to train the weights based on errors. The third method searches for the linear combination of all methods' outcomes that minimises the error. The fourth trains a continuous perceptron in order to find the optimum combination of the methods. In all four approaches, we used a set of 60 images. Each of these images was taken with a Gretag Macbeth colour checker card in the scene, in order to allow quantitative evaluation of colour-constancy algorithms. The results obtained show that our proposed method outperforms the individual algorithms. The best results were obtained using the fitted linear-combination weights and the trained continuous perceptron to combine the algorithms.
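As an illustration of the linear-combination idea, the sketch below fits fixed weights by least squares over a training set of per-algorithm estimates. The array shapes, the least-squares formulation, and the function names are assumptions for illustration; the paper also considers uniform weights and a trained continuous perceptron, which are not shown here.

```python
import numpy as np

def fit_combination_weights(estimates, ground_truth):
    """Fit fixed weights for a weighted-sum combination of illuminant estimators.

    estimates:    (n_images, n_algorithms, 3) per-algorithm illuminant estimates
    ground_truth: (n_images, 3) measured illuminants (e.g. from a colour checker)
    Returns (n_algorithms,) weights minimising squared error over the training set.
    """
    n_img, n_alg, _ = estimates.shape
    # Stack the three colour channels of every image into one least-squares system.
    A = estimates.transpose(0, 2, 1).reshape(n_img * 3, n_alg)
    b = ground_truth.reshape(n_img * 3)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def combined_estimate(per_algorithm_estimates, weights):
    """Weighted sum of the individual estimates for a single image.
    per_algorithm_estimates: (n_algorithms, 3)."""
    return per_algorithm_estimates.T @ weights   # (3, n_alg) @ (n_alg,) -> (3,)
```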
Color Constancy Algorithms: Psychophysical Evaluation on a New Dataset
Journal of Imaging Science and Technology, 2009
The estimation of the illuminant of a scene from a digital image has been the goal of a large amount of research in computer vision. Color constancy algorithms have dealt with this problem by defining different heuristics to select a unique solution from within the feasible set. The performance of these algorithms shows that there is still a long way to go before this problem is solved as a preliminary step in computer vision. In general, performance evaluation has been done by comparing the angular error between the estimated chromaticity and the chromaticity of a canonical illuminant, which is highly dependent on the image dataset. Recently, some researchers have used high-level constraints to estimate illuminants; in this case the selection is based on improving the performance of the subsequent steps of the system. In this paper the authors propose a new performance measure, the perceptual angular error. It evaluates the performance of a color constancy algorithm according to the perceptual preferences of humans (naturalness), rather than proximity to the actual optimal solution, and is independent of the visual task. We show the results of a new psychophysical experiment comparing solutions from three different color constancy algorithms. Our results show that in more than half of the judgments the preferred solution is not the one closest to the optimal solution. Our experiments were performed on a new dataset of images acquired with a calibrated camera with an attached neutral gray sphere, which better copes with the illuminant variations of the scene.
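For reference, the standard (non-perceptual) angular error that the paper contrasts with is simply the angle between the estimated and ground-truth illuminant vectors in sensor RGB. The helper below is a generic sketch of that baseline measure, not the authors' perceptual variant.

```python
import numpy as np

def angular_error_degrees(estimate, ground_truth):
    """Angle, in degrees, between estimated and true illuminant colour vectors."""
    e = np.asarray(estimate, dtype=float)
    t = np.asarray(ground_truth, dtype=float)
    cos = np.dot(e, t) / (np.linalg.norm(e) * np.linalg.norm(t))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Example: a small chromatic shift between estimate and ground truth.
print(angular_error_degrees([0.9, 1.0, 1.1], [1.0, 1.0, 1.0]))
```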
Yaccd: Yet another color constancy database
Proc. of SPIE Vol, 2003
Different image databases have been developed so far to test color constancy algorithms. Each differs in its image characteristics, according to the features to be tested. In this paper we present a new image database, created at the University of Milano. Since a database cannot contain all possible types of images, choices must be made to limit their number, and these choices should be as neutral as possible. For example, a database whose images always contain a white area favours algorithms based on the White Patch approach; conversely, the complete absence of white areas favours algorithms with alternative approaches. Thus, the first image detail we have addressed is the background: which background is the most suitable for a color constancy test database? This choice can be affected by the goal of the color correction algorithms. In developing this database we tried to consider a large number of possible approaches, considering color constancy in a broader sense. Images under standard illuminants are presented together with images under particular non-standard light sources. In particular, we collected two groups of lamps: a group of characterized neon lamps with a weak color cast, suitable for testing the precision of a color correction algorithm, and a group of tungsten bulbs with a colored coating and strong color casts, very difficult to remove, suitable for testing robustness and efficacy. Another interesting feature is the presence of shadows. The presence of different lightness levels in the same image makes it possible to test the local effects of the color correction algorithms. The proposed database can be used to test algorithms that recover the corresponding colors under standard reference illuminants (e.g., D65) or, alternatively, assuming a visual appearance approach, to test algorithms for their capability to minimize color variations across the different illuminants, thereby performing perceptual color constancy. This second approach is used to present preliminary tests. The image database will be made available on the web.
Automatic color constancy algorithm selection and combination
Pattern Recognition, 2010
In this work, we investigate how illuminant estimation techniques can be improved by taking into account intrinsic, low-level properties of the images. We show how these properties can be used to drive, given a set of illuminant estimation algorithms, the selection of the best algorithm for a given image. The algorithm selection is made by a decision forest composed of several trees on the basis of the values of a set of heterogeneous features. The features represent the image content in terms of low-level visual properties. The trees are trained to select the algorithm that minimizes the expected error in illuminant estimation. We also designed a combination strategy that estimates the illuminant as a weighted sum of the different algorithms' estimates. Experimental results on the widely used Ciurea and Funt dataset demonstrate the effectiveness of our approach.
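A rough sketch of the selection idea is given below, using scikit-learn's random forest as a stand-in for the decision forest described in the paper. The feature extraction is a placeholder and the actual low-level features used by the authors are not reproduced here; the labels are simply the index of the best-performing algorithm per training image.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def low_level_features(image):
    """Placeholder for low-level image features (simple colour statistics here);
    the feature set in the paper is richer."""
    flat = image.reshape(-1, 3)
    return np.concatenate([flat.mean(axis=0), flat.std(axis=0)])

def train_selector(images, errors_per_algorithm):
    """Train a forest to pick, per image, the algorithm with the lowest error.

    images:               list of (H, W, 3) arrays
    errors_per_algorithm: (n_images, n_algorithms) illuminant-estimation errors
    """
    X = np.stack([low_level_features(im) for im in images])
    y = errors_per_algorithm.argmin(axis=1)   # index of the best algorithm per image
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X, y)
    return forest

def select_algorithm(forest, image):
    """Predict which illuminant-estimation algorithm to run on a new image."""
    return int(forest.predict(low_level_features(image)[None, :])[0])
```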
Color Constancy for Scenes with Varying Illumination
Computer Vision and Image Understanding, 1997
We present an algorithm that uses information from both surface reflectance and illumination variation to solve for color constancy. Most color constancy algorithms assume that the illumination across a scene is constant, but this assumption very often does not hold for real images. The method presented in this work identifies and removes the illumination variation and, in addition, uses that variation to constrain the solution. This constraint is applied conjunctively with constraints derived from surface reflectances. Thus the algorithm can provide good color constancy when there is sufficient variation in surface reflectances, sufficient illumination variation, or a combination of both. We present results from running the algorithm on several real scenes, and they are very encouraging.