Vlad Cardei - Academia.edu
Papers by Vlad Cardei
Connection Science, 2001
A Hebbian-inspired, competitive network is presented which learns to predict the typical semantic features of denoting terms in simple and moderately complex sentences. In addition, the network learns to predict the appearance of syntactically key words, such as ...
We show how to achieve better illumination estimates for color constancy by combining the results of several existing algorithms. We consider committee methods based on both linear and non-linear ways of combining the illumination estimates from the original set of color constancy algorithms. Committees of grayworld, white patch and neural net methods are tested. The committee results are always more accurate than the estimates of any of the other algorithms taken in isolation.
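As a rough illustration of the linear committee idea, the combination can be cast as a least-squares fit from stacked per-algorithm estimates to ground-truth illuminants. The function names, array shapes, and fitting procedure below are assumptions for the sketch, not the paper's exact scheme:

```python
import numpy as np

def fit_linear_committee(estimates, truth):
    """Fit least-squares weights that map stacked per-algorithm
    illuminant chromaticity estimates onto ground-truth chromaticities.

    estimates: (n_images, n_algorithms * 2) array; each row holds the
               (r, g) estimates of e.g. grayworld, white patch, neural net
    truth:     (n_images, 2) array of true illuminant chromaticities
    """
    weights, *_ = np.linalg.lstsq(estimates, truth, rcond=None)
    return weights

def committee_estimate(weights, estimates):
    """Combine per-algorithm estimates for new images into one estimate."""
    return estimates @ weights
```

A non-linear committee could replace the matrix product with, say, a small regression network; the linear variant above is simply the smallest instance of the idea.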
Color correcting images of unknown origin (e.g. downloaded from the Internet) adds additional challenges to the already difficult problem of color correction, because neither the pre-processing the image was subjected to, nor the camera sensors or camera balance, are known. In this paper, we propose a framework for dealing with some aspects of this type of image. In particular, we discuss the issue of color correction of images where an unknown 'gamma' non-linearity may be present. We show that the diagonal model, used for color correcting linear images, also works in the case of gamma corrected images. We also discuss the influence that unknown sensors and unknown camera balance have on color constancy algorithms.
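The key observation, that a diagonal (von Kries style) correction survives an unknown gamma, follows from the per-channel identity (k·v)^γ = k^γ·v^γ. A minimal numeric sketch (the function name and random test image are my own, not taken from the paper):

```python
import numpy as np

def diagonal_correct(image, illum_rgb, target_rgb=(1.0, 1.0, 1.0)):
    """Von Kries-style diagonal correction: independently scale each
    channel by target / estimated illuminant."""
    scale = np.asarray(target_rgb, float) / np.asarray(illum_rgb, float)
    return image * scale  # broadcasts over an (H, W, 3) array

# Scaling in linear space and then gamma-encoding equals gamma-encoding
# first and scaling by the exponentiated coefficients: the correction
# stays diagonal, so the same model applies to gamma-corrected images.
gamma = 2.2
linear = np.random.default_rng(1).random((4, 4, 3))
d = np.array([0.8, 1.0, 1.2])
lhs = (linear * d) ** (1 / gamma)
rhs = (linear ** (1 / gamma)) * d ** (1 / gamma)
assert np.allclose(lhs, rhs)
```

The exponents of the diagonal coefficients change (d becomes d^(1/γ)), but the correction's diagonal form, which is what the model relies on, does not.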
Perspectives in Neural Computing, 1998
Human Vision and Electronic Imaging IV, 1999
Why do we have colour? What use is it to us? Some of the obvious answers are that we see colour so that we can recognise objects, spot objects more quickly, tell when fruit is ripe or rotten. These reasons make a lot of sense, but are there others? In this paper, we explore the things that colour makes easier for computational vision systems. In particular, we examine the role of colour in understanding specularities, processing interreflections, identifying metals from plastics and wet surfaces from dry ones, choosing foveation points, disambiguating stereo matches, discriminating textures and identifying objects. Of course, what is easier for a computational vision system is not necessarily the same for the human visual system, but it can perhaps help us create some hypotheses about the role of colour in human perception. We also consider the role of colour constancy in terms of whether or not it is required for colour to be useful to a computer vision system.
Journal of the Optical Society of America A, 2002
Neural Networks, 1999
A connectionist-inspired, parallel processing network is presented which learns, on the basis of (relevantly) sparse input, to assign meaning interpretations to novel test sentences in both active and passive voice. Training and test sentences are generated from a simple recursive grammar, but once trained, the network successfully processes thousands of sentences containing deeply embedded clauses. All training is unsupervised with regard to error feedback - only Hebbian and self-organizing forms of training are employed. In addition, the active-passive distinction is acquired without any supervised provision of cues or flags (in the output layer) that indicate whether the input sentence is in active or passive voice. In more detail: (1) The model learns on the basis of a corpus of about 1000 sentences while the set of potential test sentences contains over 100 million sentences. (2) The model generalizes its capacity to interpret active and passive sentences to substantially deeper levels of clausal embedding. (3) After training, the model satisfies criteria for strong syntactic and strong semantic systematicity that humans also satisfy. (4) Symbolic message passing occurs within the model's output layer. This symbolic aspect reflects certain prior language acquisition assumptions.
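The network itself is not reproduced in the abstract; as a generic illustration of the kind of error-free, Hebbian-style update such unsupervised training relies on, here is Oja's rule, a standard stabilized Hebbian variant (not the paper's actual learning rule):

```python
import numpy as np

def oja_step(w, x, lr=0.1):
    """One Oja's-rule update: Hebbian growth (lr * y * x) plus a decay
    term (-lr * y**2 * w) that keeps the weight vector bounded."""
    y = w @ x                      # post-synaptic activation
    return w + lr * y * (x - y * w)

# Repeated presentation of one input direction drives w toward it,
# with no error signal or teacher involved.
w = np.array([0.5, 0.5])
x = np.array([1.0, 0.0])
for _ in range(500):
    w = oja_step(w, x)
```

The decay term is what distinguishes Oja's rule from plain Hebbian learning, whose weights would otherwise grow without bound.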
IEEE Transactions on Image Processing, 2002
In this paper, we will assume that the chromaticity of the scene illumination is constant throughout the image, although its intensity may vary. The goal of a machine color constancy system will be taken to be the accurate estimation of the chromaticity of the scene illumination from a three-band, RGB digital color image of the scene. To achieve this goal, we developed a system based on a multilayer neural network. The network works with the chromaticity histogram of the input image and computes an estimate of the scene's illumination.
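The abstract leaves out the histogram details; a plausible minimal sketch of a binarized rg-chromaticity histogram of the kind such a network could take as input (the bin count and the 0/1 binarization are assumptions):

```python
import numpy as np

def chromaticity_histogram(image, bins=32):
    """Binarized rg-chromaticity histogram of an RGB image.

    image: (H, W, 3) array. Returns a flat 0/1 vector of length
    bins * bins marking which chromaticity cells are occupied;
    dividing by R+G+B removes intensity, so only chromaticity remains.
    """
    rgb = image.reshape(-1, 3).astype(float)
    s = rgb.sum(axis=1)
    rgb = rgb[s > 0]                      # drop pure-black pixels
    r = rgb[:, 0] / rgb.sum(axis=1)
    g = rgb[:, 1] / rgb.sum(axis=1)
    hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    return (hist > 0).astype(float).ravel()
```

Discarding the counts and keeping only cell occupancy makes the input invariant to how much of the image a given surface covers, which is one motivation for binarizing.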