Jean-Baptiste Thomas - Academia.edu
Papers by Jean-Baptiste Thomas
Journal of the Optical Society of America A
We propose a series of modifications to the Barten contrast sensitivity function model for peripheral vision based on anatomical and psychophysical studies. These modifications result in a luminance pattern detection model that could quantitatively describe the extent of veridical pattern resolution and the aliasing zone. We evaluated our model against psychophysical measurements in peripheral vision. Our numerical assessment shows that the modified Barten model leads to lower estimation errors than the original version.
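For context only, a commonly quoted simplified approximation of Barten's foveal CSF is sketched below; this is not the authors' modified peripheral model, and the closed-form and parameter values are assumptions taken from the simplified version usually cited in the literature.

```python
import numpy as np

def barten_csf(u, L=100.0, w=10.0):
    """Simplified approximation of Barten's contrast sensitivity function.

    u : spatial frequency in cycles/degree
    L : mean luminance in cd/m^2
    w : angular field size in degrees

    This is the widely quoted simplified foveal form, not the modified
    peripheral model of the paper; parameter values are the usual ones.
    """
    a = 540.0 * (1.0 + 0.7 / L) ** -0.2 / (1.0 + 12.0 / (w * (1.0 + u / 3.0) ** 2))
    b = 0.3 * (1.0 + 100.0 / L) ** 0.15
    c = 0.06
    return a * u * np.exp(-b * u) * np.sqrt(1.0 + c * np.exp(b * u))

# Example: sensitivity peaks at a few cycles/degree under photopic luminance.
freqs = np.linspace(0.5, 40.0, 200)
print(freqs[np.argmax(barten_csf(freqs))])
```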
Remote Sensing
Documenting the inter-annual variability and the long-term trend of the glacier snow line altitude is highly relevant for tracking the evolution of glacier mass changes. Automatically identifying the snow line on glaciers is challenging; recent developments in machine learning approaches show promise to tackle this issue. This manuscript presents a proof of concept of machine learning approaches applied to multi-spectral images to detect the snow line and quantify its average altitude. The tested approaches combine different image processing and classification methods, and take cast shadows into account. The efficiency of these approaches is evaluated on mountain glaciers in the European Alps by comparing the results with manually annotated data. Solutions provided by the different approaches are robust when compared to the ground truth's snow lines, with a Pearson's correlation ranging from 79% to 96% depending on the method. However, the tested approaches may fa...
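As a rough illustration of the kind of post-processing involved, a snow mask from any classifier can be combined with a digital elevation model to estimate an average snow line altitude. The sketch below is not the paper's machine learning pipeline: the NDSI threshold stands in for the classifier, and the band names and percentile rule are hypothetical.

```python
import numpy as np

def snow_line_altitude(green, swir, dem, glacier_mask, ndsi_threshold=0.4):
    """Estimate the average snow line altitude on a glacier.

    green, swir  : multispectral bands as 2D arrays (hypothetical band choice)
    dem          : digital elevation model co-registered with the bands
    glacier_mask : boolean mask of glacier pixels
    The NDSI threshold is a simple stand-in for the classifiers of the paper.
    """
    ndsi = (green - swir) / np.maximum(green + swir, 1e-6)
    snow = (ndsi > ndsi_threshold) & glacier_mask
    bare = (~snow) & glacier_mask
    if not snow.any() or not bare.any():
        return None
    # Approximate the snow line as the transition between the highest
    # snow-free elevations and the lowest snow-covered elevations.
    return 0.5 * (np.percentile(dem[snow], 5) + np.percentile(dem[bare], 95))
```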
Fifteenth International Conference on Quality Control by Artificial Vision
This paper aims to evaluate the visual quality of the dynamic relighting of manufactured surfaces from Reflectance Transformation Imaging acquisitions. The first part of the study defines the optimum acquisition parameters of the RTI system: exposure time, gain, and sampling density. The second part is a psychometric experiment using the Design of Experiments approach. The results of this study help determine the influence of the parameters associated with the acquisition of Reflectance Transformation Imaging data, the models associated with relighting, and the dynamic perception of the resulting videos.
Results with D65, including noise. Originally published in JOSA A on 01 July 2017 (josaa-34-7-1085)
Results with D65, without noise. Originally published in JOSA A on 01 July 2017 (josaa-34-7-1085)
2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), 2017
Snapshot multispectral cameras that are equipped with filter arrays acquire a raw image that represents the radiance of a scene over the electromagnetic spectrum at video rate. These cameras require a demosaicing procedure to estimate a multispectral image with full spatio-spectral definition. Such a procedure is based on spectral correlation properties that are sensitive to illumination. In this paper, we first highlight the influence of illumination on demosaicing performance. Then we propose camera-, illumination-, and raw image-based normalisations that make demosaicing robust to illumination. Experimental results on state-of-the-art demosaicing algorithms show that such normalisations improve the quality of multispectral images estimated from raw images acquired under various illuminations.
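A minimal sketch of a per-band normalisation that makes demosaicing less sensitive to illumination is given below. Dividing each band of the raw mosaic by an estimate of the illumination energy reaching that band is an assumption about how such a normalisation could be implemented, not necessarily the exact formulation of the paper; the function and argument names are hypothetical.

```python
import numpy as np

def normalise_raw_msfa(raw, pattern, band_gains):
    """Normalise a raw spectral-filter-array image before demosaicing.

    raw        : 2D raw mosaic image
    pattern    : 2D array of band indices with the same periodic layout
    band_gains : per-band factors, e.g. camera response integrated against
                 an illuminant estimate (assumed available from calibration)
    """
    out = raw.astype(np.float64).copy()
    for band, gain in enumerate(band_gains):
        mask = pattern == band
        out[mask] /= max(gain, 1e-9)   # equalise energy across bands
    return out
```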
2018 Colour and Visual Computing Symposium (CVCS), 2018
Spectral Filter Arrays allow snapshot multispectral acquisition within a compact camera. While the Bayer filter mosaic is a widely accepted standard for color filter arrays, no single mosaic pattern is considered dominant for spectral filter arrays. We compare different patterns for 8-band mosaics and their overall performance in terms of spectral reconstruction as well as color and structure reproduction. We demonstrate that mosaics overrepresenting certain filters perform better than those overrepresenting other filters, while arrays in which all filters are equally represented perform better than arrays with overrepresentation.
2018 Colour and Visual Computing Symposium (CVCS), 2018
We compare a recent dehazing method based on deep learning, Dehazenet, with traditional state-of-the-art approaches on benchmark data with reference. Dehazenet estimates the transmission map, related to scene depth, from a single color image, which is then used to invert the Koschmieder model of imaging in the presence of haze. In this sense, the solution is still attached to the Koschmieder model. We demonstrate that the transmission is very well estimated by the network, but also that this method exhibits the same limitations as others because it relies on the same imaging model.
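For reference, the Koschmieder imaging model that both Dehazenet and the traditional methods invert can be written as

\[ I(x) = J(x)\, t(x) + A\,\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)}, \]

and the corresponding dehazed estimate as

\[ \hat{J}(x) = \frac{I(x) - A}{\max\bigl(t(x),\, t_0\bigr)} + A, \]

where I is the observed hazy image, J the scene radiance, A the atmospheric light, t the transmission, beta the scattering coefficient, d the scene depth, and t_0 a small lower bound that keeps the inversion numerically stable.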
Inspired by the concept of the colour filter array (CFA), the research community has shown much interest in adapting the idea of the CFA to the multispectral domain, producing multispectral filter arrays (MSFAs). In addition to newly devised methods of MSFA demosaicking, there exists a wide spectrum of methods developed for the CFA. Among others, some vector-based operations can be adapted naturally for multispectral purposes. In this paper, we focus on studying two vector-based median filtering methods in the context of MSFA demosaicking. One solves the demosaicking problem by means of vector median filters, and the other applies median filtering to the demosaicked image in spherical space as a subsequent refinement process to reduce artefacts introduced by demosaicking. To evaluate the performance of these measures, a tool kit was constructed with the capability of mosaicking, demosaicking and quality assessment. The experimental results demonstrate that the vector median filtering performed less well on natural images, except black-and-white ones; however, the refinement step reduced the reproduction error numerically in most cases. This demonstrates the feasibility of extending CFA demosaicking to the MSFA domain.
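A minimal sketch of the vector median operation referred to above is shown below: the vector median of a neighbourhood is the member whose summed distance to all other members is minimal. Window size, the L2 norm and the example data are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def vector_median(window):
    """Return the vector median of a set of spectral vectors.

    window : array of shape (n, bands), the spectral vectors of a local
             neighbourhood (e.g. a 3x3 window flattened to n = 9 vectors).
    The vector median minimises the summed L2 distance to all other vectors
    and is always one of the input vectors, so no new spectra are created.
    """
    dists = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=2)
    return window[np.argmin(dists.sum(axis=1))]

# Example with five 3-band vectors; the outlier is never selected.
w = np.array([[10, 10, 10], [11, 9, 10], [9, 11, 10], [10, 10, 11], [200, 0, 0]])
print(vector_median(w))   # -> one of the four similar vectors
```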
Journal of Imaging Science and Technology, 2018
In this article, the authors introduce a new color image database, CHIC (Color Hazy Images for Comparison), devoted to haze model assessment and dehazing method evaluation. For three real scenes, they provide two illumination conditions and several densities of real fog. The main interest lies in the availability of several metadata parameters such as the distance from the camera to the objects in the scene, the image radiance and the fog density through fog transmittance. For each scene, the fog-free (ground-truth) image is also available, which allows an objective comparison of the resulting image enhancement and of potential shortcomings of the model. Five different dehazing methods are benchmarked on three intermediate levels of fog using existing image quality assessment (IQA) metrics with reference to the provided fog-free image. This provides a basis for the evaluation of dehazing methods across fog densities as well as of the effectiveness of existing dehazing-dedicated IQA metrics. The results indicate that more attention should be given to dehazing methods and to the evaluation of metrics to meet an optimal level of image quality. This database and its description are freely available at http://chic.u-bourgogne.fr.
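A minimal sketch of the full-reference evaluation such a database enables, comparing a dehazed result against the fog-free ground truth, is given below. It uses generic PSNR and SSIM from scikit-image rather than the dehazing-dedicated metrics discussed in the article, and assumes colour images with the channel as the last axis.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_dehazing(dehazed, fog_free):
    """Full-reference quality of a dehazed image against its fog-free image.

    Both inputs are arrays of the same shape, channels last.
    """
    data_range = float(fog_free.max() - fog_free.min())
    psnr = peak_signal_noise_ratio(fog_free, dehazed, data_range=data_range)
    ssim = structural_similarity(fog_free, dehazed, channel_axis=-1,
                                 data_range=data_range)
    return {"PSNR": psnr, "SSIM": ssim}
```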
Lecture Notes in Computer Science, 2018
The haze model, which describes the degradation of atmospheric visibility, is a good approximation for a wide range of weather conditions and situations. However, it misrepresents the perceived scene and therefore causes undesirable results in dehazed images at high densities of fog. In this paper, using data from the CHIC database, we investigate the possibility of screening the regions of a hazy image where the haze model inversion is likely to fail to provide perceptually recognizable colors. This study is based on the perceived correlation between the atmospheric light color and the objects' colors at various fog densities. Accordingly, at high densities of fog, the colors are badly recovered and do not match the original fog-free image. At low fog densities, the haze model inversion provides acceptable results for a large panel of colors.
Lecture Notes in Computer Science, 2018
We review physics-based illuminant estimation methods, which extract information from highlights in images. Such highlights are caused by specular reflection from the surface of dielectric materials and, according to the dichromatic reflection model, provide cues about the illumination. This paper analyzes different categories of highlight-based illuminant estimation techniques for color images from the point of view of their extension to multispectral imaging. We find that the use of chromaticity space for multispectral imaging is not straightforward, and that imposing constraints on illuminants in the multispectral imaging domain may not be efficient either. We identify some methods that are feasible for extension to multispectral imaging, and discuss the advantage of using highlight information for illuminant estimation.
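For reference, the dichromatic reflection model on which these highlight-based methods rely expresses a measured pixel as the sum of a body and an interface (specular) term,

\[ I(\lambda, x) = m_b(x)\, S(\lambda)\, E(\lambda) + m_s(x)\, E(\lambda), \]

where E(λ) is the illuminant spectrum, S(λ) the body reflectance, and m_b, m_s geometry-dependent scale factors. Because the specular term is proportional to E(λ), pixels inside and around a highlight span a plane in sensor space that contains the illuminant direction, which is the cue these estimation methods exploit.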
Image Analysis, 2017
Multispectral single-shot imaging systems can benefit computer vision applications in need of a compact and affordable imaging system. Spectral filter array technology meets this requirement, but can lead to artifacts caused by inhomogeneous intensity levels between spectral channels, which arise from filter manufacturing constraints, illumination and object properties. One solution to this problem is to use high dynamic range imaging techniques on these sensors. We define a spectral imaging pipeline that incorporates high dynamic range imaging, demosaicing and color image visualization. Qualitative evaluation is based on real images captured with a prototype spectral filter array sensor in the visible and near infrared.
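A minimal sketch of the multi-exposure fusion step such a pipeline relies on is given below. It assumes a linear sensor response and a hat-shaped weighting of pixel values; the function name, saturation threshold and weighting are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def fuse_exposures(raws, exposure_times, saturation=0.95):
    """Estimate per-pixel radiance from several raw exposures of an SFA sensor.

    raws           : list of raw mosaic images normalised to [0, 1]
    exposure_times : matching list of exposure times in seconds
    Assumes a linear sensor; clipped and near-dark pixels are down-weighted.
    """
    num = np.zeros_like(raws[0], dtype=np.float64)
    den = np.zeros_like(raws[0], dtype=np.float64)
    for raw, t in zip(raws, exposure_times):
        w = np.clip(1.0 - np.abs(2.0 * raw - 1.0), 1e-3, None)  # hat weighting
        w[raw > saturation] = 0.0                                # drop clipped pixels
        num += w * raw / t
        den += w
    return num / np.maximum(den, 1e-9)   # radiance-like HDR mosaic
```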
Journal of Vision, 2021
Translucency is an optical and a perceptual phenomenon that characterizes subsurface light transport through objects and materials. Translucency as an optical property of a material relates to the radiative transfer inside and through this medium, while translucency as a perceptual phenomenon describes the visual sensation experienced by humans when observing a given material under given conditions. Knowledge about the visual mechanisms of translucency perception remains limited. Accurate prediction of the appearance of translucent objects can have a significant commercial impact in fields such as three-dimensional printing. However, little is known about how the optical properties of a material relate to the perception they evoke in humans. This article reviews the state of knowledge on the visual perception of translucency and highlights the applications of translucency perception research. Furthermore, it summarizes current knowledge gaps, fundamental challenges and existing ambiguities with the goal of facilitating translucency perception research in the future.
Lecture Notes in Computer Science, 2020
We introduce a new database to promote visibility enhancement techniques intended for spectral image dehazing. SHIA (Spectral Hazy Image database for Assessment) is composed of two real indoor scenes, M1 and M2, with 10 levels of fog each and their corresponding fog-free (ground-truth) images, taken in the visible and near-infrared ranges every 10 nm from 450 to 1000 nm. SHIA contains 1540 images of 1312 × 1082 pixels. All images are captured under the same illumination conditions. Three well-known image dehazing methods based on different approaches were adapted and applied to the spectral foggy images. This study confirms once again a strong dependency between dehazing methods and fog densities. It calls for the design of spectral image dehazing methods able to handle simultaneously the accurate estimation of the visibility degradation model parameters and the limitation of artifacts and post-dehazing noise. The database can be downloaded freely at http://chic.u-bourgogne.fr.
Spectral filter arrays are emerging as a multispectral imaging technology that could benefit several applications. Although several instantiations have been prototyped and commercialized, little raw data is yet available to serve research and to help evaluate and design adequate image processing algorithms. This document presents a freely available database of spectral filter array images that combine visible and near-infrared information.
We investigated the impact of simulated individual observer colour matching functions (CMFs) on computational texture features. We hypothesised that most humans perceive texture in a similar manner; hence, a texture indicator that is the least dependent on the individual physiology of human vision would be the most likely candidate to serve as a quantified measure of visually perceived texture. To this end, the following strategy was implemented: hyperspectral image textures were converted into XYZ images for individual observer CMFs, and contrast sensitivity function (CSF) filtering was subsequently applied to the XYZ images for visual simulation. Two types of texture features were extracted from the filtered images. Finally, the differences between the texture features were analysed for observers with disparities in their CMFs.
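The conversion from a hyperspectral image to XYZ for a given observer's CMFs amounts to a weighted spectral integration. A minimal sketch, under the assumption of a reflectance image and a known illuminant spectrum sampled at the same wavelengths, is:

```python
import numpy as np

def hyperspectral_to_xyz(reflectance, illuminant, cmfs):
    """Convert a hyperspectral reflectance image to XYZ for one observer.

    reflectance : array (H, W, N) sampled at N wavelengths
    illuminant  : array (N,) illuminant power at the same wavelengths
    cmfs        : array (N, 3) colour matching functions of this observer
    """
    radiance = reflectance * illuminant                    # (H, W, N)
    xyz = np.tensordot(radiance, cmfs, axes=([2], [0]))    # (H, W, 3)
    k = 100.0 / np.dot(illuminant, cmfs[:, 1])             # so that Y(white) = 100
    return k * xyz
```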
Visualization of the color content of a painting can help to better understand its style, compositional structure and material content. There are several ways to visualize colorimetric data from a color image. One option consists of using 3D Virtual Reality to view colorimetric data in arbitrary orientations in a standard color space. In this paper we propose a new colorimetric visualization method. Its originality is that we include the spatial organization of colors inside the painting. We can thus visualize information on color gradients that may appear in the painting using simple 3D primitives. We demonstrate the efficiency of our method on a colorimetrically calibrated image of an Italian Renaissance painting.
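As a baseline for comparison, the non-spatial starting point such a method improves upon is a plain 3D scatter of a painting's colors in a standard color space. The sketch below uses standard matplotlib and scikit-image calls and assumes a float RGB image with values in [0, 1]; it is illustrative only and not the method proposed in the paper.

```python
import matplotlib.pyplot as plt
import numpy as np
from skimage.color import rgb2lab

def scatter_colors_lab(rgb_image, step=8):
    """Plot subsampled colors of an image as a 3D point cloud in CIELAB.

    rgb_image : float RGB image with values in [0, 1]
    step      : subsampling step, to keep the point cloud manageable
    """
    sub = rgb_image[::step, ::step].reshape(-1, 3)
    lab = rgb2lab(sub.reshape(1, -1, 3)).reshape(-1, 3)
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.scatter(lab[:, 1], lab[:, 2], lab[:, 0], c=np.clip(sub, 0, 1), s=4)
    ax.set_xlabel("a*"); ax.set_ylabel("b*"); ax.set_zlabel("L*")
    plt.show()
```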
Bad environmental conditions, such as foggy or hazy weather and smoke-filled monitored closed areas, cause a degradation and a loss of contrast and color information in images. Unlike outdoor scenes imaged on a foggy day, an indoor artificial hazy scene can be acquired in controlled conditions, and the clear image is always available once the smoke has dispersed. This can help to investigate models of haze and evaluate dehazing algorithms. Thus, an artificial indoor scene was set up in a closed area with a means to control the amount of haze within the scene. While a convergence model correctly simulates a small amount of haze, it fails to reproduce the perceived hazy colors of the real image when haze density is high. This difference becomes obvious when the same dehazing method is applied to both images. Unlike in simulated images, colors in real hazy images result from the interference of environmental illuminants.
Perception of the appearance of different materials and objects is a complex psychophysical phenomenon, and its neurophysiological and behavioral mechanisms are far from being fully understood. The various appearance attributes are usually studied separately. In addition, no comprehensive and functional total appearance modelling has been done to date. We have conducted experiments using physical objects, asking observers to describe the objects and carry out visual tasks. The process has been videotaped and analysed qualitatively using Grounded Theory Analysis, a qualitative research methodology from social science. In this work, we construct a qualitative model of this data and compare it to material appearance models. The model highlights the impact of the conditions of observation, and the necessity of a reference and comparison for adequate assessment of material appearance. Then we formulate a set of research hypotheses. While our model only describes our data, the hypotheses...