Opportunistic sensing for object recognition — A unified formulation for dynamic sensor selection and feature extraction
Related papers
Information theoretic sensor data selection for active object recognition and state estimation
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002
We introduce a formalism for optimal sensor parameter selection for iterative state estimation in static systems. Our optimality criterion is the reduction of uncertainty in the state estimation process, rather than an estimator-specific metric (e.g., minimum mean squared estimate error). The claim is that state estimation becomes more reliable if the uncertainty and ambiguity in the estimation process can be reduced. We use Shannon's information theory to select information-gathering actions that maximize mutual information, thus optimizing the information that the data conveys about the true state of the system. The technique explicitly takes into account the a priori probabilities governing the computation of the mutual information. Thus, a sequential decision process can be formed by treating the a priori probability at a certain time step in the decision process as the a posteriori probability of the previous time step. We demonstrate the benefits of our approach in an object recognition application using an active camera for sequential gaze control and viewpoint selection. We describe experiments with discrete and continuous density representations that suggest the effectiveness of the approach.
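The selection loop described above, score each candidate sensing action by the mutual information between the state and the observation it would produce, act, then feed the posterior back as the next step's prior, can be sketched for discrete densities as follows (a minimal illustration under simplified assumptions, not the paper's implementation; all function names are hypothetical):

```python
import numpy as np

def mutual_information(prior, likelihood):
    """I(X; O) in nats for a discrete state prior p(x) and likelihood p(o|x).

    prior:      shape (n_states,)
    likelihood: shape (n_states, n_obs), rows sum to 1
    """
    joint = prior[:, None] * likelihood          # p(x, o)
    p_obs = joint.sum(axis=0)                    # p(o)
    indep = prior[:, None] * p_obs               # p(x) p(o)
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / indep[mask])).sum())

def select_action(prior, likelihoods):
    """Pick the sensing action whose observation is most informative."""
    scores = [mutual_information(prior, L) for L in likelihoods]
    return int(np.argmax(scores)), scores

def bayes_update(prior, likelihood, obs):
    """Posterior after observing `obs`; it becomes the next step's prior."""
    post = prior * likelihood[:, obs]
    return post / post.sum()
```

Note how the a priori probabilities enter the score directly through `joint` and `indep`, which is the paper's point that the criterion explicitly accounts for the current belief rather than an estimator-specific error metric.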
Fast object detection with entropy-driven evaluation
Cascade-style approaches to implementing ensemble classifiers can deliver significant speed-ups at test time. While highly effective, they remain challenging to tune and their overall performance depends on the availability of large validation sets to estimate rejection thresholds. These characteristics are often prohibitive and thus limit their applicability.
On feature extraction by mutual information maximization
IEEE International Conference on Acoustics Speech and Signal Processing, 2002
In order to learn discriminative feature transforms, we discuss mutual information between class labels and transformed features as a criterion. Instead of Shannon's definition we use measures based on Renyi entropy, which lends itself to an efficient implementation and an interpretation in terms of "information potentials" and "information forces" induced by samples of data. This paper presents two routes towards practical usability of the method, aimed especially at large databases: the first is an on-line stochastic gradient algorithm, and the second is based on approximating class densities in the output space by Gaussian mixture models.
Information Gain in Object Recognition Via Sensor Fusion
We have been studying information theoretic measures, entropy and mutual information, as performance metrics on the information gain given a standard suite of sensors. Object pose is described by a single angle of rotation using a Lie group parameterization; observations are generated using CAD models for the targets of interest and simulators. Variability in the data due to the sensor by which the scene is remotely observed is statistically characterized via the data likelihood function. Given observations from multiple sensors, data fusion is automatic in the posterior density. We consider the mutual information between the target pose and remote observation as a performance measure in the pose estimation context. We have quantitatively examined the additional information gains due to sensor fusion. Furthermore, we relate the information theoretic performance measures with probability of error in the pose estimation problem via Fano's classic inequality.
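The last step of the abstract, relating an information-theoretic quantity to probability of error via Fano's inequality, can be illustrated with a small discrete sketch (hypothetical helper names; the weak form of the bound is used for simplicity):

```python
import numpy as np

def conditional_entropy(joint):
    """H(X|Y) in bits from a discrete joint distribution p(x, y)."""
    p_y = joint.sum(axis=0)
    h = 0.0
    for j, py in enumerate(p_y):
        if py > 0:
            p_x_given_y = joint[:, j] / py
            nz = p_x_given_y > 0
            h -= py * (p_x_given_y[nz] * np.log2(p_x_given_y[nz])).sum()
    return h

def fano_error_lower_bound(joint):
    """Weak form of Fano's inequality: P_e >= (H(X|Y) - 1) / log2(|X|).

    The tighter classic form is H_b(P_e) + P_e * log2(|X| - 1) >= H(X|Y).
    """
    n_states = joint.shape[0]
    return max(0.0, (conditional_entropy(joint) - 1.0) / np.log2(n_states))
```

Fusing a second sensor reduces H(pose | observations), which lowers this bound, matching the abstract's use of mutual information gain as a proxy for achievable pose-estimation error.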
Objective priors from maximum entropy in data classification
Information Fusion, 2013
Lack of knowledge of the prior distribution in classification problems that operate on small data sets may make the application of Bayes’ rule questionable. Uniform or arbitrary priors may provide classification answers that, even in simple examples, may end up contradicting our common sense about the problem. Entropic priors (EPs), via application of the maximum entropy (ME) principle, seem to provide good objective answers in practical cases, leading to more conservative Bayesian inferences. EPs are derived and applied to classification tasks when only the likelihood functions are available. In this paper, when inference is based on a single sample, we review the use of EPs in comparison to priors obtained by maximizing the mutual information between observations and classes. This last criterion coincides with maximizing the KL divergence between posteriors and priors, which for large sample sets leads to the well-known reference (or Bernardo’s) priors. Our single-sample comparison puts both approaches in perspective and clarifies their differences and potential. A combinatorial justification for EPs, inspired by Wallis’ combinatorial argument for the definition of entropy, is also included. We also consider applying EPs to sequences (multiple samples), which may suffer from excessive domination by the maximum-entropy class, and present a solution that guarantees posterior consistency. An explicit iterative algorithm is proposed for EP determination solely from knowledge of the likelihood functions. Simulations comparing EPs with uniform priors on short sequences are also included.
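For discrete likelihoods, maximizing the joint entropy H(X, C) = H(C) + Σ_c p(c) H(X|c) over the prior yields p(c) ∝ exp(H(X|c)), so an entropic prior can be computed from the likelihoods alone. A minimal sketch under that discrete-case assumption (function names are hypothetical; the paper's iterative algorithm handles more general settings):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return float(-(p[nz] * np.log(p[nz])).sum())

def entropic_prior(likelihoods):
    """pi(c) proportional to exp(H(x|c)), from the class likelihoods alone.

    likelihoods: shape (n_classes, n_obs); each row p(x|c) sums to 1.
    """
    h = np.array([entropy(row) for row in likelihoods])
    w = np.exp(h)
    return w / w.sum()
```

Note the conservative behavior the abstract alludes to: a class with a flat (high-entropy) likelihood receives more prior mass than a sharply peaked one, which is also why long sequences can be dominated by the maximum-entropy class.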
Vehicle Classification on Multi-Sensor Smart Cameras Using Feature- and Decision-Fusion
2007 First ACM/IEEE International Conference on Distributed Smart Cameras, 2007
In the proposed project we work towards multi-sensor smart cameras, i.e., we augment vision-based cameras with additional sensors such as infrared and audio, thus transforming a single smart camera into an embedded multi-sensor node. We present our software framework for embedded online data fusion, called I-SENSE, which supports data fusion at different levels of data abstraction. We then present our fusion model, focusing on four main parts: (i) acoustic and visual feature extraction, (ii) feature-based data fusion and the feature selection algorithm, (iii) feature-based decision modeling using Support Vector Machines (SVM), and (iv) decision modeling based on a modified Dempster-Shafer approach. Finally, we demonstrate the feasibility of our multilevel data fusion approach with experimental results from our vehicle classification case study.
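The decision-level fusion in part (iv) builds on Dempster's rule of combination, which merges two sensors' belief mass functions while renormalizing away conflicting mass. A minimal sketch of the classic (unmodified) rule, not the paper's modified variant, with hypothetical names:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset hypotheses -> mass (each sums to 1).
    Mass assigned to incompatible pairs is treated as conflict and
    renormalized out.
    """
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {h: w / (1.0 - conflict) for h, w in combined.items()}
```

For example, an audio sensor's masses over {car} and {car, truck} can be fused with a visual sensor's masses over the same frame of discernment, with mass on the composite set {car, truck} expressing each sensor's residual uncertainty.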
EURASIP Journal on Image and Video Processing
Human activity monitoring in video sequences is an active computer vision domain with numerous applications, e.g., surveillance systems, human-computer interaction, and traffic control systems. In this research, our primary focus is on proposing a hybrid strategy for efficient classification of human activities from a given video sequence. The proposed method integrates four major steps: (a) segment the moving objects by fusing novel uniform segmentation and expectation maximization, (b) extract a new set of fused features using local binary patterns with histograms of oriented gradients (HOG) and Haralick features, (c) select features by a novel Euclidean-distance and joint entropy-PCA-based method, and (d) classify features using a multi-class support vector machine. Three benchmark datasets (MIT, CAVIAR, and BMW-10) are used for training the classifier for human classification; for testing, we use multi-camera pedestrian videos along with the MSR Action, INRIA, and CASIA datasets. Additionally, the results are validated on a dataset recorded by our research group. For action recognition, four publicly available datasets are selected, namely Weizmann, KTH, UIUC, and Muhavi, achieving recognition rates of 95.80, 99.30, 99, and 99.40%, respectively, which confirms the effectiveness of the proposed method. Promising results are achieved in terms of greater precision compared to existing techniques.
Feature Extraction by Non-Parametric Mutual Information Maximization
The Journal of Machine Learning Research, 2003
We present a method for learning discriminative feature transforms using as criterion the mutual information between class labels and transformed features. Instead of a commonly used mutual information measure based on Kullback-Leibler divergence, we use a quadratic divergence measure, which allows us to make an efficient non-parametric implementation and requires no prior assumptions about class densities. In addition to linear transforms, we also discuss nonlinear transforms that are implemented as radial basis function networks. Extensions to reduce the computational complexity are also presented, and a comparison to greedy feature selection is made.
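The quadratic divergence criterion admits a closed-form Parzen-window estimate: with a Gaussian kernel, every density integral reduces to pairwise kernel evaluations, giving I_T = V_in + V_all - 2 V_btw. A minimal sketch of that estimator (hypothetical names; the paper additionally optimizes a transform against this quantity, which is omitted here):

```python
import numpy as np

def gaussian_kernel_matrix(Y, sigma2):
    """K[i, j] = G(y_i - y_j, 2*sigma2): convolution of two Parzen kernels."""
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    d = Y.shape[1]
    return np.exp(-d2 / (4 * sigma2)) / (4 * np.pi * sigma2) ** (d / 2)

def quadratic_mi(Y, labels, sigma2=1.0):
    """Quadratic mutual information between labels and samples Y (n, d)."""
    N = len(Y)
    K = gaussian_kernel_matrix(Y, sigma2)
    v_in = v_btw = p2 = 0.0
    for c in np.unique(labels):
        idx = labels == c
        pc = idx.sum() / N
        p2 += pc ** 2
        v_in += K[np.ix_(idx, idx)].sum()     # within-class potential
        v_btw += pc * K[idx, :].sum()         # class-vs-all potential
    v_in /= N ** 2
    v_all = p2 * K.sum() / N ** 2             # all-pairs potential
    v_btw /= N ** 2
    return v_in + v_all - 2 * v_btw
```

Because the estimate is the integral of a squared difference of plug-in densities, it is non-negative and vanishes when the class-conditional densities in the transformed space coincide, which is what makes it usable as a training criterion without prior assumptions about class densities.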
Object Recognition Using Local Information Content
2004
Object identification from local information has recently been investigated with respect to its potential for robust recognition, e.g., in case of partial object occlusions, scale variation, noise, and background clutter in detection tasks. This work contributes to this research by a thorough analysis of the discriminative power of local appearance patterns and by proposing to exploit local information content for object representation and recognition. In a first processing stage, we localize discriminative regions in the object views from a posterior entropy measure, and then derive object models from selected discriminative local patterns. Object recognition is then applied to test patterns with associated low entropy using an efficient voting process. The method is evaluated by various degrees of partial occlusion and Gaussian image noise, resulting in highly robust recognition even in the presence of severe occlusion effects.
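The first processing stage, keeping only local patterns whose posterior over object hypotheses has low entropy, can be sketched as follows (a toy illustration with hypothetical names; the paper estimates these posteriors from local appearance patterns):

```python
import numpy as np

def posterior_entropy(posterior):
    """Entropy in bits of a discrete posterior over object hypotheses."""
    p = np.asarray(posterior, dtype=float)
    nz = p > 0
    return float(-(p[nz] * np.log2(p[nz])).sum())

def discriminative_regions(posteriors, threshold):
    """Indices of regions whose posterior entropy is below `threshold` bits.

    Low entropy means the local pattern points strongly at one object,
    so the region is kept for the object model and the voting stage.
    """
    return [i for i, p in enumerate(posteriors) if posterior_entropy(p) < threshold]
```

Regions discarded here are exactly the ambiguous ones (background clutter, repeated texture), which is why the subsequent voting over low-entropy test patterns remains robust under occlusion and noise.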