Vehicle Classification on Multi-Sensor Smart Cameras Using Feature- and Decision-Fusion
Related papers
Audio-Visual Feature Fusion for Vehicles Classification in a Surveillance System
2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2013
In this paper we tackle the challenging problem of multimodal feature selection and fusion for vehicle categorization. Our proposed framework utilizes a boosting-based feature learning technique to learn the optimal combinations of feature modalities. New multimodal features are learned from the existing unimodal features, which are initially extracted from data acquired by a novel audiovisual sensing system under different sensing conditions (long range, moving vehicles, and various environments). Experiments on a challenging dataset collected with our long-range sensing system demonstrate that the proposed technique is robust to noise and, in terms of classification performance, finds better feature-modality combinations than sequential selection, which tends to become stuck at a local maximum.
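The boosting-based modality selection described in this abstract can be illustrated with a short sketch: each round, an AdaBoost-style loop keeps the modality whose simple stump classifier has the lowest weighted error, then reweights the samples toward the mistakes. The `boost_select` function and the synthetic "audio"/"visual" score streams below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def boost_select(modalities, y, rounds=3):
    """AdaBoost-style greedy selection: each round keeps the modality whose fixed
    decision stump (threshold at 0) has the lowest weighted error on y in {-1, +1}."""
    w = np.full(len(y), 1.0 / len(y))
    chosen = []
    for _ in range(rounds):
        best = None
        for name, scores in modalities.items():
            pred = np.where(scores > 0, 1, -1)
            err = w[pred != y].sum()
            if best is None or err < best[1]:
                best = (name, err, pred)
        name, err, pred = best
        err = float(np.clip(err, 1e-9, 1 - 1e-9))
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)   # upweight misclassified samples
        w /= w.sum()
        chosen.append(name)
    return chosen

# Synthetic demo: an informative "audio" score stream and a pure-noise "visual" one.
rng = np.random.default_rng(0)
y = np.where(rng.random(200) > 0.5, 1, -1)
modalities = {
    "audio": y * 2.0 + rng.normal(scale=0.5, size=200),  # strongly correlated with y
    "visual": rng.normal(size=200),                      # uninformative
}
selected = boost_select(modalities, y)
```

With these synthetic streams the first round should pick the informative modality, which is exactly the behaviour the paper contrasts with sequential selection.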
Vehicle recognition and tracking using a generic multisensor and multialgorithm fusion approach
International Journal of Vehicle Autonomous Systems, 2008
This paper tackles the problem of improving the robustness of vehicle detection for Adaptive Cruise Control (ACC) applications. Our approach is based on multisensor, multialgorithm data fusion for vehicle detection and recognition. The architecture combines two sensors: a frontal camera and a laser scanner. The improvement in robustness stems from two aspects. First, we address vision-based detection with an original approach based on fine gradient analysis, enhanced with a genetic AdaBoost-based algorithm for vehicle recognition. Then, we use the theory of evidence as a fusion framework to combine the confidence levels delivered by the algorithms in order to improve the 'vehicle versus non-vehicle' classification. The final architecture of the system is modular, generic, and flexible, in that it could serve other detection applications or use other sensors or algorithms providing the same outputs. The system was implemented on a prototype vehicle and evaluated under real conditions over various multisensor databases and test scenarios, demonstrating very good performance.
Proceedings of the Sixth International Conference on Information Fusion, 2003
The US Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate has a dynamic applied research program in sensor fusion for a wide variety of defense and defense-related applications. This paper provides an overview of the ongoing research at NVESD related to fusing a mixture of active and passive sensors for countermine, dismounted and mounted soldier, aviation, and unattended ground sensor applications. Highlighted are new techniques in image registration and sensor fusion that enable the detection of moving vehicles with a network of image and acoustic sensors. A set of experiments was designed and conducted using a variety of vehicles and scenarios. The incremental value (in terms of error probability) added to the acoustic information by the image sensors (visible- and infrared-wavelength cameras) is assessed in combination with the fusion techniques themselves. The approach specifically accounts for the effects of location, speed, weather, and background (acoustic, visible, and infrared). Sensor fusion for detection and classification is performed at both the sensor level and the feature level, providing a basis for trade-offs between the performance desired and the resources required. Several classifier types are examined (parametric, nonparametric, learning), and the combination of their decisions is used to make the final decision.
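Decision-level fusion of the kind described above, where the individual classifiers' decisions are combined into a final decision, can be as simple as a (weighted) vote. The `fuse_decisions` helper and the example labels below are a hypothetical sketch, not the study's actual combiner.

```python
from collections import Counter

def fuse_decisions(decisions, weights=None):
    """Weighted plurality vote over per-sensor class decisions
    (a simple stand-in for decision-level fusion)."""
    weights = weights or [1.0] * len(decisions)
    tally = Counter()
    for label, w in zip(decisions, weights):
        tally[label] += w
    return tally.most_common(1)[0][0]

# e.g. the acoustic sensor says "truck" while the visible and IR cameras say "car"
fused = fuse_decisions(["truck", "car", "car"])
```

Weights would typically reflect each sensor's estimated error probability, which is the quantity the experiments above measure.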
Vehicle Detection Based on Multi-feature Clues and Dempster-Shafer Fusion Theory
Pacific-Rim Symposium on Image and Video Technology, Lecture Notes in Computer Science, 2014
On-road vehicle detection and rear-end crash prevention are demanding subjects in both academia and the automotive industry. This paper focuses on monocular vision-based vehicle detection under challenging lighting conditions, which is still an open topic in the area of driver assistance systems. It proposes an effective vehicle detection method based on multiple-feature analysis and Dempster-Shafer fusion theory. We also utilize the new idea of Adaptive Global Haar-like (AGHaar) features as a promising method for feature classification and vehicle detection in both daylight and night conditions. Validation tests and experimental results show superior detection results for day, night, rainy, and other challenging conditions compared to state-of-the-art solutions.
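Dempster-Shafer fusion of multiple feature cues, as used in this paper, can be sketched with Dempster's rule of combination over the two-class frame {vehicle, non-vehicle}. The mass values below are hypothetical; the paper's actual basic belief assignments come from its multi-feature cues.

```python
def ds_combine(m1, m2):
    """Dempster's rule for two mass functions given as dicts mapping
    frozenset hypotheses to masses (generic sketch, not the paper's exact cues)."""
    fused, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to the empty set
    return {h: m / (1.0 - conflict) for h, m in fused.items()}

V = frozenset({"vehicle"})
THETA = frozenset({"vehicle", "non-vehicle"})  # total ignorance
# Two feature cues, each partially confident the region is a vehicle:
m = ds_combine({V: 0.7, THETA: 0.3}, {V: 0.6, THETA: 0.4})
```

Combining two agreeing but uncertain cues concentrates mass on the vehicle hypothesis (here 0.88), which is the behaviour such multi-cue fusion relies on.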
Object perception for intelligent vehicle applications: A multi-sensor fusion approach
2014 IEEE Intelligent Vehicles Symposium Proceedings, 2014
The paper addresses the problem of object perception for intelligent vehicle applications, with the main tasks of detection, tracking, and classification of obstacles, where multiple sensors (lidar, camera, and radar) are used. New algorithms for raw sensor data processing and sensor data fusion are introduced, making the most of the information from all sensors in order to provide more reliable and accurate information about objects in the vehicle environment. The proposed object perception module is implemented and tested on a demonstrator car in real-life traffic, and evaluation results are presented.
Vehicle type recognition using multiple-feature combinations
IS&T International Symposium on Electronic Imaging, Video Surveillance and Transportation Imaging Applications, 2016
This paper proposes a real-time vehicle tracking and type recognition system. An object tracker is employed to detect vehicles within CCTV video footage. Subsequently, the vehicle regions-of-interest within each frame are analysed using a set of features consisting of Region Features, Histogram of Oriented Gradients (HOG), and Local Binary Pattern (LBP) histogram features. Finally, a Support Vector Machine (SVM) is used as the classification tool to categorize vehicles into two classes: cars and vans. The proposed technique was tested on a dataset of 60 vehicles comprising a mix of frontal/rear and angular views. Experimental results show that the proposed technique offers a very high level of accuracy, promising applicability in real-life situations.
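The feature set above (region features, HOG, and LBP histograms, concatenated and fed to an SVM) can be illustrated with a minimal pure-NumPy LBP histogram; the 8-neighbour encoding below is a simplified stand-in for the paper's LBP variant, not its exact formulation.

```python
import numpy as np

def lbp_histogram(gray, bins=256):
    """Basic 8-neighbour LBP histogram over a grayscale patch:
    each pixel's code sets one bit per neighbour that is >= the centre."""
    g = np.asarray(gray, dtype=np.int32)
    center = g[1:-1, 1:-1]
    code = np.zeros_like(center)
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(neighbours):
        shifted = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (shifted >= center).astype(np.int32) << bit
    hist, _ = np.histogram(code, bins=bins, range=(0, bins))
    return hist / hist.sum()

# The combined descriptor is then the concatenation of the feature groups,
# e.g. np.concatenate([region_feats, hog_feats, lbp_histogram(roi)]),
# which is what an SVM would consume for the car/van decision.
patch = np.arange(64).reshape(8, 8)
h = lbp_histogram(patch)
```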
Multimodal Fusion Object Detection System
2018
In order for autonomous vehicles to safely navigate roadways, accurate object detection must take place before safe path planning can occur. Currently, general-purpose object detection CNN models have the highest detection accuracies of any method. However, there is a gap in the proposed detection frameworks: those that provide the high detection accuracy necessary for deployment do not perform inference in real time, and those that perform inference in real time have low detection accuracy. We propose the Multimodal Fusion Detection System (MDFS), a sensor fusion system that combines the speed of a fast image detection CNN model with the accuracy of light detection and ranging (LiDAR) point cloud data through a decision-tree approach. The primary objective is to bridge the trade-off between performance and accuracy. The motivation for MDFS is to reduce the computational complexity associated with using a CNN model to extract features from an image. To improve eff...
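The decision-tree late fusion described above, trading a fast image CNN's confidence off against LiDAR evidence, can be caricatured as a few hand-written rules. The `accept_detection` helper and its thresholds are hypothetical, purely to show the shape of such a fusion stage.

```python
def accept_detection(cnn_conf, lidar_points):
    """Decision-tree-style late fusion of a camera CNN confidence and a
    LiDAR point count inside the candidate box; thresholds are hypothetical."""
    if cnn_conf >= 0.8:                         # image model alone is confident
        return True
    if cnn_conf >= 0.4 and lidar_points >= 50:  # weak image evidence, strong 3-D support
        return True
    return False
```

A learned decision tree would pick such thresholds from data; the point is that the LiDAR branch rescues low-confidence CNN detections without rerunning the expensive image model.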
Fusion of Support Vector Machines for Classification of Multisensor Data
IEEE Transactions on Geoscience and Remote Sensing, 2007
The classification of multisensor data sets, consisting of multitemporal synthetic aperture radar data and optical imagery, is addressed. The concept is based on the decision fusion of different outputs. Each data source is treated separately and classified by a support vector machine (SVM). Instead of fusing the final classification outputs (i.e., land cover classes), the original outputs of each SVM discriminant function are used in the subsequent fusion process. This fusion is performed by another SVM, which is trained on the a priori outputs. In addition, two voting schemes are applied to create the final classification results. The results are compared with well-known parametric and nonparametric classifier methods, i.e., decision trees, the maximum-likelihood classifier, and classifier ensembles. The proposed SVM-based fusion approach outperforms all other approaches and significantly improves the results of a single SVM trained on the whole multisensor data set.
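The stacking scheme described above, feeding each source SVM's raw discriminant values rather than its hard labels into a second-stage combiner, can be sketched as follows. A least-squares linear combiner and two synthetic score streams stand in for the fusion SVM and the SAR/optical classifiers; none of this is the paper's actual pipeline.

```python
import numpy as np

# Per-source discriminant values (assumed already produced by one SVM per source);
# here two synthetic streams that each carry partial information about the label y.
rng = np.random.default_rng(0)
y = np.where(rng.random(500) > 0.5, 1.0, -1.0)
d_sar = 0.8 * y + rng.normal(size=500)       # noisy "SAR" discriminant values
d_opt = 0.6 * y + rng.normal(size=500)       # noisy "optical" discriminant values

# Stack the raw decision values (not hard labels) as meta-features,
# then fit a second-stage linear combiner on them.
F = np.column_stack([d_sar, d_opt])
w, *_ = np.linalg.lstsq(F, y, rcond=None)
fused_acc = float(np.mean(np.sign(F @ w) == y))
single_acc = float(np.mean(np.sign(d_sar) == y))
```

Keeping the continuous discriminant values preserves each source's confidence, which is why this kind of fusion typically beats voting on hard labels.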
An Embedded Multi-Sensor Data Fusion Design for Vehicle Perception Tasks
Journal of Communications
Nowadays, multi-sensor architectures are popular for providing a better understanding of environment perception for intelligent vehicles. Using multiple sensors to deal with perception tasks in a rich environment is a natural solution. Most research work has focused on PC-based implementations of perception tasks, and very little attention has been paid to customized embedded designs. In this paper, we propose a Multi-Sensor Data Fusion (MSDF) embedded design for vehicle perception tasks using stereo camera and Light Detection and Ranging (LIDAR) sensors. A modular and scalable architecture based on the Zynq-7000 SoC was designed.