Juan Domingo - Academia.edu
Papers by Juan Domingo
Pattern Recognition, 2006
This paper presents a new algorithm for image retrieval in content-based image retrieval systems. The objective of these systems is to retrieve the images that are as similar as possible to a user query from those contained in the global image database, without using textual annotations attached to the images. The main problem in obtaining a robust and effective retrieval is the gap between the low-level descriptors that can be automatically extracted from the images and the user's intention. The algorithm proposed here to address this problem is based on modeling user preferences as a probability distribution on the image space. Following a Bayesian methodology, this distribution acts as the prior, and its parameters are modified based on the information provided by the user. This yields the posterior distribution, from which the predictive distribution is calculated and used to show the user a new set of images, until he/she is satisfied or the target image has been found. Experimental results are shown to evaluate the method on a large image database in terms of precision and recall.
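The abstract describes an iterative Bayesian relevance-feedback loop: maintain a belief over the user's target, rank the database by the predictive distribution, collect feedback, and update. Below is a minimal sketch of that loop under illustrative assumptions (a Gaussian belief over feature vectors with a conjugate mean update, and a hypothetical `get_user_feedback` callback); the paper's actual prior, likelihood, and features are not specified here.

```python
import numpy as np

def bayesian_feedback_search(features, get_user_feedback, prior_mean, prior_cov,
                             obs_cov, n_show=10, n_iters=5):
    """Illustrative Bayesian relevance-feedback loop (not the paper's exact model).

    features : (N, d) array of low-level descriptors, one row per database image.
    The user's target is modelled as a Gaussian over feature space; images the
    user marks as relevant are treated as noisy observations of that target.
    """
    mean, cov = prior_mean.copy(), prior_cov.copy()
    shown = np.array([], dtype=int)
    for _ in range(n_iters):
        # Predictive density of each image under the current belief
        # (Gaussian with covariance = belief covariance + observation noise).
        pred_inv = np.linalg.inv(cov + obs_cov)
        diff = features - mean
        scores = -np.einsum('ij,jk,ik->i', diff, pred_inv, diff)  # log-density up to a constant
        shown = np.argsort(scores)[::-1][:n_show]

        # The user marks some of the shown images as relevant (indices into `features`).
        relevant = get_user_feedback(shown)
        if len(relevant) == 0:
            break

        # Conjugate Gaussian update of the belief using the relevant examples.
        x_bar = features[relevant].mean(axis=0)
        n = len(relevant)
        cov_inv, obs_inv = np.linalg.inv(cov), np.linalg.inv(obs_cov)
        post_cov = np.linalg.inv(cov_inv + n * obs_inv)
        mean = post_cov @ (cov_inv @ mean + n * obs_inv @ x_bar)
        cov = post_cov
    return shown
```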
Pattern Recognition Letters, 1997
... This induces poor behaviour in low contrast areas. To minimize this problem we have experimented with locally increasing the image contrast using histogram equalization (Gonzalez and Wintz, 1987), applied not to the whole image at once but to different subparts separately. ...
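The excerpt mentions applying histogram equalization to subparts of the image rather than to the whole image at once. Here is a minimal NumPy sketch of that tile-wise idea; the tile size, 8-bit range, and equalization details are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def equalize_tile(tile, n_bins=256):
    """Standard histogram equalization of a single 8-bit grayscale tile."""
    hist, bin_edges = np.histogram(tile.ravel(), bins=n_bins, range=(0, 255))
    cdf = hist.cumsum().astype(np.float64)
    span = cdf.max() - cdf.min()
    if span == 0:                      # flat tile: nothing to equalize
        return tile.astype(np.float64)
    cdf = (cdf - cdf.min()) / span     # normalize the CDF to [0, 1]
    return np.interp(tile.ravel(), bin_edges[:-1], cdf * 255).reshape(tile.shape)

def equalize_by_subparts(image, tile=(64, 64)):
    """Apply histogram equalization independently to rectangular subparts."""
    out = np.zeros_like(image, dtype=np.float64)
    for r in range(0, image.shape[0], tile[0]):
        for c in range(0, image.shape[1], tile[1]):
            block = image[r:r + tile[0], c:c + tile[1]]
            out[r:r + tile[0], c:c + tile[1]] = equalize_tile(block)
    return out.astype(image.dtype)
```

Note that equalizing tiles independently can leave visible seams at tile borders; adaptive variants blend neighbouring tiles to avoid this, but that refinement is beyond this sketch.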
Pattern Recognition, 2007
This paper deals with the problem of image retrieval from large image databases. A particularly interesting problem is the retrieval of all images which are similar to the one in the user's mind, taking into account his/her feedback, which is expressed as positive or negative preferences for the images that the system progressively shows during the search. Here we present a novel algorithm for the incorporation of user preferences in an image retrieval system based exclusively on the visual content of the image, which is stored as a vector of low-level features. The algorithm considers the probability of an image belonging to the set of those sought by the user, and models the logit of this probability as the output of a generalized linear model whose inputs are the low-level image features. The image database is ranked by the output of the model and shown to the user, who selects a few positive and negative samples, repeating the process iteratively until he/she is satisfied. The problem of the small sample size with respect to the number of features is solved by fitting several partial generalized linear models and combining their relevance probabilities by means of an ordered weighted averaging operator. Experiments were carried out with 40 users, who exhibited good performance in finding a target image (4 iterations on average) in a database of about 4,700 images. The mean numbers of positive and negative examples are 4 and 6 per iteration, respectively. A clustering of the users into sets also shows consistent patterns of behavior.
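A rough sketch of the ranking step described above: fit small logistic (GLM) models on disjoint blocks of the feature vector using the user's positive and negative examples, then fuse the per-block relevance probabilities with an ordered weighted averaging (OWA) operator. The feature split, OWA weights, and use of scikit-learn's logistic regression are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def owa(values, weights):
    """Ordered weighted averaging: sort values in descending order, then take a weighted mean."""
    return np.sort(values)[::-1] @ weights

def rank_images(features, pos_idx, neg_idx, n_groups=4, owa_weights=None):
    """Rank a database by estimated relevance probability (illustrative sketch).

    features : (N, d) matrix of low-level features.
    pos_idx, neg_idx : indices of the user's positive / negative examples.
    The feature vector is split into `n_groups` disjoint blocks, one small
    logistic model is fitted per block, and the per-block probabilities are
    fused with an OWA operator to mitigate the small-sample problem.
    """
    train_idx = np.concatenate([pos_idx, neg_idx])
    y = np.concatenate([np.ones(len(pos_idx)), np.zeros(len(neg_idx))])
    blocks = np.array_split(np.arange(features.shape[1]), n_groups)
    if owa_weights is None:
        owa_weights = np.ones(n_groups) / n_groups  # plain average as a default

    probs = np.empty((features.shape[0], n_groups))
    for g, cols in enumerate(blocks):
        model = LogisticRegression(max_iter=1000)
        model.fit(features[train_idx][:, cols], y)
        probs[:, g] = model.predict_proba(features[:, cols])[:, 1]

    relevance = np.array([owa(p, owa_weights) for p in probs])
    return np.argsort(relevance)[::-1]  # most relevant first
```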
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001
This paper proposes new descriptors for binary and gray-scale images based on the newly defined spatial size distributions (SSD). The main idea consists of combining a granulometric analysis of the image with a comparison between the geometric covariograms (for binary images) or the autocorrelation functions (for gray-scale images) of the original image and its granulometric transformation; the usual granulometric size distribution then arises as a particular case of this formulation. Examples are given to show that, in those cases in which a finer description of the image is required, the more complex descriptors generated from the SSD can be advantageously used. It is also shown that the new descriptors are probability distributions, so their intuitive interpretation and properties can be studied from a probabilistic point of view. The usefulness of these descriptors in shape analysis is illustrated with some synthetic examples, and their use in texture analysis is studied through a texture-classification experiment on a standard texture database. A comparison is performed among various cases of the SSD and several previous methods for texture classification in terms of the percentage of correct classification and the number of features used.
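The SSD descriptors build on a granulometric analysis. As background, here is a minimal sketch of the classical granulometric size distribution (pattern spectrum) for a binary image, computed with morphological openings from scipy.ndimage; the additional spatial/covariogram comparison that defines the SSD itself is not reproduced here, and the square structuring element is an assumption.

```python
import numpy as np
from scipy import ndimage

def granulometric_size_distribution(binary_image, max_size=10):
    """Classical granulometry: fraction of foreground removed by openings of increasing size.

    Returns a discrete distribution over structuring-element sizes,
    i.e. the normalized pattern spectrum of the binary image.
    """
    total = binary_image.sum()
    if total == 0:
        return np.zeros(max_size)
    remaining = [total]
    for size in range(1, max_size + 1):
        # An opening with a (2*size+1)-square structuring element removes
        # foreground details narrower than the element; larger sizes remove more.
        opened = ndimage.binary_opening(
            binary_image, structure=np.ones((2 * size + 1, 2 * size + 1)))
        remaining.append(opened.sum())
    remaining = np.array(remaining, dtype=np.float64)
    spectrum = -np.diff(remaining)   # foreground area removed at each size step
    return spectrum / total          # normalized so the entries sum to at most 1
```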
We present a model where it can be optimal for rational informed speculators/arbitrageurs to ride the bubble instead of using their information for stabilising purposes. This result stems from the interaction of speculators with behavioural traders, who in each period either discover the true fundamental value of the asset or follow a positive-feedback strategy. We study the equilibrium strategy profiles of speculators in the cases of short and long horizons and derive the resulting average expected excess deviation of the asset price. Furthermore, we consider the possibility of market manipulation and its consequences for market efficiency.
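To make the behavioural-trader ingredient concrete, here is a toy simulation of the price dynamics produced by traders who either discover the fundamental value or chase recent price changes. This is only an illustration of the positive-feedback mechanism mentioned in the abstract; all parameters are invented, and it does not reproduce the paper's equilibrium model with speculators.

```python
import numpy as np

def simulate_positive_feedback(fundamental=100.0, n_periods=50, discover_prob=0.2,
                               feedback_strength=0.6, noise_std=0.5, seed=0):
    """Toy price path driven by behavioural traders only (not the paper's model).

    In each period the behavioural traders either "discover" the fundamental value
    (with probability `discover_prob`) and push the price towards it, or follow a
    positive-feedback rule that extrapolates the most recent price change.
    """
    rng = np.random.default_rng(seed)
    prices = [fundamental, fundamental + 1.0]   # an initial shock away from fundamentals
    for _ in range(n_periods):
        if rng.random() < discover_prob:
            change = 0.5 * (fundamental - prices[-1])          # revert towards the true value
        else:
            change = feedback_strength * (prices[-1] - prices[-2])  # chase the last movement
        prices.append(prices[-1] + change + rng.normal(0.0, noise_std))
    prices = np.array(prices)
    return prices, np.abs(prices - fundamental).mean()  # path and average absolute deviation
```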