Perceiving illumination inconsistencies in scenes

Measuring the perception of light inconsistencies

Proceedings of the 7th Symposium on Applied Perception in Graphics and Visualization - APGV '10, 2010

In this paper we explore the ability of the human visual system to detect inconsistencies in the illumination of objects in images. We specifically focus on objects lit from different angles than the rest of the image. We present the results of three different tests, two with synthetic objects and a third with digitally manipulated real images. Our results agree with previous publications exploring the topic, but we extend them by providing quantifiable data that in turn suggest approximate perceptual thresholds. Given that light detection in single images is an ill-posed problem, these thresholds can provide valid error limits for related algorithms in different contexts, such as compositing or augmented reality.
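
The abstract frames these thresholds as error limits for light-estimation algorithms. A minimal sketch of how such a limit might be applied as an acceptance test in a compositing or augmented-reality pipeline is given below; the 30-degree tolerance and the direction vectors are illustrative placeholders, not values reported in the paper.

```python
import numpy as np

def angular_difference_deg(d1, d2):
    """Angle in degrees between two 3D light-direction vectors."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    return np.degrees(np.arccos(np.clip(np.dot(d1, d2), -1.0, 1.0)))

def lighting_is_consistent(scene_dir, object_dir, tolerance_deg=30.0):
    """True if an inserted object's light direction stays within an assumed
    perceptual tolerance of the scene's dominant light direction."""
    return angular_difference_deg(scene_dir, object_dir) <= tolerance_deg

# Example: a composited object lit slightly off the scene's estimated light.
scene_light = np.array([0.0, 0.5, 1.0])
object_light = np.array([0.3, 0.5, 1.0])
print(lighting_is_consistent(scene_light, object_light))  # True (~15 deg error)
```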

Context-dependent judgments of color that might allow color constancy in scenes with multiple regions of illumination

Journal of the Optical Society of America A, 2012

For a color-constant observer, a change in the spectral composition of the illumination is accompanied by a corresponding change in the chromaticity associated with an achromatic percept. However, maintaining color constancy for different regions of illumination within a scene implies the maintenance of multiple perceptual references. We investigated the features of a scene that enable the maintenance of separate perceptual references for two displaced but overlapping chromaticity distributions. The time-averaged, retinotopically localized stimulus was the primary determinant of color appearance judgments. However, spatial separation of test samples additionally served as a symbolic cue that allowed observers to maintain two separate perceptual references.

Misidentifying illuminant changes in natural scenes due to failures in relational colour constancy

Proceedings of the Royal Society B, 2023

The colours of surfaces in a scene may not appear constant with a change in the colour of the illumination. Yet even when colour constancy fails, human observers can usually discriminate changes in lighting from changes in surface reflecting properties. This operational ability has been attributed to the constancy of perceived colour relations between surfaces under illuminant changes, in turn based on approximately invariant spatial ratios of cone photoreceptor excitations. Natural deviations in these ratios may, however, lead to illuminant changes being misidentified. The aim of this work was to test whether such misidentifications occur with natural scenes and whether they are due to failures in relational colour constancy. Pairs of scene images from hyperspectral data were presented side-by-side on a computer-controlled display. On one side, the scene underwent illuminant changes and on the other side, it underwent the same changes but with images corrected for any residual deviations in spatial ratios. Observers systematically misidentified the corrected images as being due to illuminant changes. The frequency of errors increased with the size of the deviations, which were closely correlated with the estimated failures in relational colour constancy.
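
The notion of approximately invariant spatial ratios of cone excitations can be made concrete with a short sketch. The (L, M, S) values below are made up for illustration, assuming cone excitations have already been computed for two surfaces under two illuminants; a pure illuminant change leaves the log of the spatial ratios nearly unchanged, and residual deviations are the quantity the corrected images in this study removed.

```python
import numpy as np

def spatial_ratios(cones_a, cones_b):
    """Per-cone-class excitation ratios between two surfaces in one scene."""
    return cones_a / cones_b

# Surfaces A and B under illuminant 1 ...
a_ill1, b_ill1 = np.array([12.0, 9.0, 3.0]), np.array([6.0, 4.5, 1.5])
# ... and under illuminant 2 (illustrative values).
a_ill2, b_ill2 = np.array([15.0, 10.8, 2.4]), np.array([7.6, 5.5, 1.2])

r1 = spatial_ratios(a_ill1, b_ill1)
r2 = spatial_ratios(a_ill2, b_ill2)

# Deviation in log ratios: near zero for a pure illuminant change; larger
# deviations are what observers can misread as surface (reflectance) changes.
deviation = np.abs(np.log(r2) - np.log(r1))
print(r1, r2, deviation)
```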

Suggesting that the illumination differs between two scenes does not enhance color constancy

2012

Color constancy involves correctly attributing a bias in the color of the light reaching your eyes to the illumination, and therefore compensating for it when judging surface reflectance. But not all biases are caused by the illumination, and surface colors will be misjudged if a bias is incorrectly attributed to the illumination. Evidence from within a scene (highlights, shadows, gradients, mutual reflections, etc.) could help determine whether a bias is likely to be due to the illumination. To examine whether the human visual system considers such evidence we asked subjects to match two surfaces on differently colored textured backgrounds. When the backgrounds were visibly rendered on screens in an otherwise dark room, the influence of the difference in background color was modest, indicating that subjects did not attribute much of the difference in color to the illumination. When the simulation of a change in illumination was more realistic, the results were very similar. We conc...

Asymmetric perceptual confounds between canonical lightings and materials

Journal of Vision, 2018

To better understand the interactions between material perception and light perception, we further developed our material probe MatMix 1.0 into MixIM 1.0, which allows optical mixing of canonical lighting modes. We selected three canonical lighting modes (ambient, focus, and brilliance) and created scenes to represent the three illuminations. Together with four canonical material modes (matte, velvety, specular, glittery), this resulted in 12 basis images (the "bird set"). These images were optically mixed in our probing method. Three experiments were conducted with different groups of observers. In Experiment 1, observers were instructed to manipulate MixIM 1.0 and match optically mixed lighting modes while discounting the materials. In Experiment 2, observers were shown a pair of stimuli and instructed to simultaneously judge whether the materials and lightings were the same or different in a four-category discrimination task. In Experiment 3, observers performed both th...
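
"Optical mixing" here amounts to a weighted superposition of the basis images. A hedged sketch of such a mixing step is shown below, assuming the basis renderings are same-sized RGB arrays; the placeholder arrays and weights stand in for the actual "bird set" images.

```python
import numpy as np

def optically_mix(basis_images, weights):
    """Convex combination of basis images (all with identical shape)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()              # normalize to a convex mix
    return np.tensordot(weights, np.stack(basis_images), axes=1)

# Placeholder renderings of one material under three canonical lighting modes.
ambient    = np.full((64, 64, 3), 0.5)
focus      = np.full((64, 64, 3), 0.7)
brilliance = np.full((64, 64, 3), 0.9)

mixed = optically_mix([ambient, focus, brilliance], weights=[0.2, 0.5, 0.3])
print(mixed.shape, round(float(mixed.mean()), 3))  # (64, 64, 3) 0.72
```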

The luminance misattribution in lightness perception

Psihologija, 2010

Simultaneous lightness contrast is the condition whereby a grey patch on a dark background appears lighter than a physically identical patch on a light background. This is probably the most studied phenomenon in lightness perception. Although this phenomenon has been explained in terms of low-level mechanisms, convincing evidence supporting a high-level interpretation has been presented over the last decades. There are two main high-level interpretations. On one side, the layer approach claims that the visual system splits the luminance into separate overlapping layers corresponding to separate physical contributions; on the other side, the framework approach maintains that the visual system groups the luminance within a set of contiguous frameworks. One of the biggest weaknesses of the layer approach is that it cannot properly account for errors in lightness perception (Gilchrist, 2005, Current Biology, 15(9), 330-332). To extend the multiple-layers interpretation to errors in lightness perception, in this study we show that the perceptual lightness difference among equal patches on different backgrounds increases even when the luminance contrast with their backgrounds shrinks. Specifically, it is shown that the perceptual lightness difference among equal patches on different backgrounds intensifies when a small semi-transparent surface is interposed between the patches and the backgrounds. This result indicates that, in these conditions, the visual system not only decomposes the luminance into separate layers but also commits a luminance misattribution. It is proposed that the photometric and geometric relationships among the luminance edges in the image might account for this misattribution.
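
The layer account treats the luminance reaching the eye as arising from separable physical contributions, and an interposed semi-transparent surface can be sketched as a partially transmitting filter. The sketch below, using purely illustrative luminances and an episcotister-style transparency equation (an assumption, not the study's stimuli), shows how interposing such a filter shrinks the luminance contrast between each patch and its background, the manipulation under which the study reports a larger perceived lightness difference.

```python
def through_layer(luminance, alpha=0.6, layer_luminance=20.0):
    """Episcotister-style transparency: transmit a fraction alpha of the
    underlying luminance and add the layer's own contribution."""
    return alpha * luminance + (1.0 - alpha) * layer_luminance

def michelson(a, b):
    """Michelson contrast between two luminances."""
    return (a - b) / (a + b)

patch = 40.0                                   # identical grey patches (cd/m^2)
backgrounds = {"light": 90.0, "dark": 10.0}

for name, bg in backgrounds.items():
    plain = michelson(patch, bg)
    filtered = michelson(through_layer(patch), through_layer(bg))
    print(f"{name} background: contrast {plain:+.3f} -> {filtered:+.3f} behind the layer")
```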

On the concept of error in visual perception: An example from simultaneous lightness contrast

Teorie & modelli

This work deals with the concepts of “error” and “veridicality” in visual perception studies by considering the matching paradigm often employed in empirical research to study simultaneous lightness contrast (SLC). Matching paradigms often employ Neutral Value Munsell scales, and there is a strong tendency in the field to consider these scales as a ruler capable of showing veridical lightness values, that is, perfect transformations of reflectance values into perceptual values. If this were the case, then Munsell scales should show high constancy under critical changes inside the visual scene. We performed an experiment in which three groups of observers were asked to perform a matching task for a classic SLC display. Each group used the same Munsell scale (MS) but saw it against one of three different backgrounds: white, black, or white-black chequered. Results showed that the background against which the MS is seen heavily influences matches for the target on the black background of the SLC di...

Color constancy in variegated scenes: role of low-level mechanisms in discounting illumination changes

Journal of the Optical Society of America A: Optics, Image Science, and Vision, 1997

For a visual system to possess color constancy across varying illumination, chromatic signals from a scene must remain constant at some neural stage. We found that photoreceptor and opponent-color signals from a large sample of natural and man-made objects under one kind of natural daylight were almost perfectly correlated with the signals from those objects under every other spectrally different phase of daylight. Consequently, in scenes consisting of many objects, the effect of illumination changes on specific color mechanisms can be simulated by shifting all chromaticities by an additive or multiplicative constant along a theoretical axis. When the effect of the illuminant change was restricted to specific color mechanisms, thresholds for detecting a change in the colors in a scene were significantly elevated in the presence of spatial variations along the same chromatic axis as the simulated chromaticity shift. In a variegated scene, correlations between spatially local chromatic signals across illuminants, and the desensitization caused by eye movements across spatial variations, help the visual system to attenuate the perceptual effects that are due to changes in illumination.
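
The claim that an illuminant change acts like a single additive or multiplicative shift can be illustrated with a brief sketch. The per-cone gains below are arbitrary stand-ins for a change of daylight phase, not fitted values from the paper; in log-chromaticity coordinates the multiplicative gains become one additive offset shared by every surface in the scene.

```python
import numpy as np

rng = np.random.default_rng(0)

# Cone excitations (L, M, S) for a variegated scene of 100 surfaces.
scene = rng.uniform(0.1, 1.0, size=(100, 3))

# Approximate the illuminant change as one multiplicative gain per cone class.
gains = np.array([1.15, 1.00, 0.80])
scene_new = scene * gains

# In log L/M chromaticity, every surface shifts by the same additive constant,
# so the whole distribution slides along one axis rather than scattering.
log_lm_before = np.log(scene[:, 0] / scene[:, 1])
log_lm_after = np.log(scene_new[:, 0] / scene_new[:, 1])
shift = log_lm_after - log_lm_before
print(np.allclose(shift, np.log(gains[0] / gains[1])))  # True
```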

Human observers compensate for secondary illumination originating in nearby chromatic surfaces

Journal of Vision, 2004

In complex scenes, the light absorbed and re-emitted by one surface can serve as a source of illumination for a second. We examine whether observers systematically discount this secondary illumination when estimating surface color. We asked six naïve observers to make achromatic settings of a small test patch adjacent to a brightly colored orange cube in rendered scenes. The orientation of the test patch with respect to the cube was varied from trial to trial, altering the amount of secondary illumination reaching the test patch. Observers systematically took orientation into account in making their settings, discounting the added secondary illumination more at orientations where it was more intense. Overall, they tended to under-compensate for the added secondary illumination.
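
The amount of secondary illumination a test patch receives from the cube falls off with the angle between the patch normal and the direction toward the cube. The sketch below captures only that geometric dependence; the Lambertian cosine model, colors, and intensities are assumptions for illustration, not the rendering model used in the study.

```python
import numpy as np

def secondary_illumination(patch_normal, to_cube_dir, cube_radiance):
    """Crude Lambertian estimate: received secondary light scales with the
    cosine of the angle between the patch normal and the direction to the cube."""
    n = patch_normal / np.linalg.norm(patch_normal)
    d = to_cube_dir / np.linalg.norm(to_cube_dir)
    return cube_radiance * max(0.0, float(np.dot(n, d)))

orange_radiance = np.array([1.0, 0.4, 0.05])   # placeholder RGB for the cube

# As the patch rotates away from the cube, the orange contribution shrinks.
for angle_deg in (0, 30, 60, 90):
    a = np.radians(angle_deg)
    normal = np.array([np.cos(a), np.sin(a), 0.0])
    print(angle_deg, secondary_illumination(normal, np.array([1.0, 0.0, 0.0]),
                                            orange_radiance))
```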

A cortical edge-integration model of object-based lightness computation that explains effects of spatial context and individual differences

Frontiers in Human Neuroscience, 2014

Previous work has demonstrated that perceived surface reflectance (lightness) can be modeled in simple contexts in a quantitatively exact way by assuming that the visual system first extracts information about local, directed steps in log luminance, then spatially integrates these steps along paths through the image to compute lightness (Rudd and Zemach, 2004, 2005, 2007). This method of computing lightness is called edge integration. Recent evidence (Rudd, 2013) suggests that human vision employs a default strategy to integrate luminance steps only along paths from a common background region to the targets whose lightness is computed. This implies a role for gestalt grouping in edge-based lightness computation. Rudd (2010) further showed that the perceptual weights applied to edges in lightness computation can be influenced by the observer's interpretation of luminance steps as resulting from either spatial variation in surface reflectance or illumination. This implies a role for to...
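
The edge-integration computation described here can be summarized in a few lines: take directed steps in log luminance along a path from the common background to the target, weight each step, and sum. The luminances, path, and weights below are illustrative only, not parameters from the model fits.

```python
import numpy as np

def edge_integrated_lightness(path_luminances, edge_weights):
    """Sum weighted, directed steps in log luminance along a path of regions
    (common background -> ... -> target); returns a relative lightness value."""
    log_lum = np.log(np.asarray(path_luminances, dtype=float))
    steps = np.diff(log_lum)                 # directed edge steps along the path
    return float(np.sum(np.asarray(edge_weights) * steps))

# Path: common background (100 cd/m^2) -> surround (40) -> target (70), with
# the edge adjacent to the target weighted more heavily than the remote edge.
print(edge_integrated_lightness([100.0, 40.0, 70.0], edge_weights=[0.4, 1.0]))
```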