SAFER vehicle inspection: a multimodal robotic sensing platform
Related papers
SAFER vehicle inspection: a multimodal robotic sensing platform
Unmanned Ground Vehicle Technology VI, 2004
The current threats to U.S. security, both military and civilian, have led to an increased interest in the development of technologies to safeguard national facilities such as military bases, federal buildings, nuclear power plants, and national laboratories. As a result, the Imaging, Robotics, and Intelligent Systems (IRIS) Laboratory at The University of Tennessee (UT) has established a research consortium, known as SAFER (Security Automation and Future Electromotive Robotics), to develop, test, and deploy sensing and imaging systems for unmanned ground vehicles (UGVs). The targeted missions for these UGV systems include, but are not limited to, under-vehicle threat assessment, stand-off checkpoint inspections, scout surveillance, intruder detection, obstacle-breach situations, and render-safe scenarios. This paper presents a general overview of the SAFER project. Beyond this overview, we focus on a specific problem in which we collect 3D range scans of vehicle undercarriages. These scans require appropriate segmentation and representation algorithms to facilitate the vehicle inspection process. We discuss the theory behind these algorithms and present results from applying them to actual vehicle scans.
Under Vehicle Inspection with 3d Imaging
Computational Imaging and Vision
This research is motivated toward the deployment of intelligent robots for under vehicle inspection at checkpoints, gate-entry terminals, and parking lots. Using multi-modality measurements of temperature, range, color, and radioactivity, with future potential for chemical and biological sensors, our approach is based on a modular robotic "sensor brick" architecture that integrates multisensor data into scene intelligence in 3D virtual reality environments. The remote 3D scene visualization capability reduces the risk to close-range inspection personnel, transforming the inspection task into an unmanned robotic mission. Our goal in this chapter is to focus on the 3D range "sensor brick" as a vital component in this multi-sensor robotics framework and to demonstrate the potential of automatic threat detection using the geometric information from the 3D sensors. With the 3D data alone, we propose two different approaches for the detection of anomalous objects as potential threats. The first approach is to perform scene verification using a 3D registration algorithm for quickly and efficiently finding potential changes to the undercarriage by comparing against previously archived scans of the same vehicle. The second approach, based on 3D shape analysis, assumes availability of CAD models of the undercarriage that can be matched with the scanned real data using a novel perceptual curvature variation measure (CVM). The CVM, which can be understood as the entropy of surface curvature, describes the under vehicle scene as a graph network of smooth surface patches that readily lends itself to matching with the graph description of the a priori CAD data. By presenting results of real-time acquisition, visualization, scene verification, and description, we emphasize the advantages of 3D imaging over present-day inspection systems using mirrors and 2D cameras, which suffer from several drawbacks.
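The scene-verification idea above — flag whatever in a new scan has no counterpart in the archived scan of the same vehicle — can be sketched as a nearest-neighbor change test. This is a minimal illustration, not the authors' registration algorithm; it assumes the two scans are already aligned in a common frame, and the threshold value is arbitrary.

```python
import numpy as np

def flag_changes(archived, current, threshold=0.01):
    """Flag points in the current scan with no nearby counterpart in the
    archived scan (a stand-in for the scene-verification step; assumes
    the scans are already registered in a common coordinate frame)."""
    # Distance from each current point to its nearest archived point
    # (brute force; a k-d tree would be used for realistic scan sizes).
    d = np.linalg.norm(current[:, None, :] - archived[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return nearest > threshold

# Archived undercarriage scan, and a new scan with one extra "object"
archived = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
current = np.vstack([archived, [[0.5, 0.5, 0.3]]])  # anomalous point added
print(flag_changes(archived, current))  # only the last point is flagged
```

In a real pipeline the alignment itself (e.g., ICP) would precede this test, and the threshold would reflect sensor noise rather than a fixed constant.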
Robotic three-dimensional imaging system for under-vehicle inspection
Journal of Electronic Imaging, 2006
We present our research efforts toward the deployment of 3-D sensing technology to an under-vehicle inspection robot. The 3-D sensing modality provides flexibility with ambient lighting and illumination in addition to ease of visualization, mobility, and increased confidence toward inspection. We leverage laser-based range-imaging techniques to reconstruct the scene of interest and address various design challenges in the scene modeling pipeline. On these 3-D mesh models, we propose a curvature-based surface feature toward the interpretation of the reconstructed 3-D geometry. The curvature variation measure (CVM), which we define as the entropic measure of curvature, quantifies surface complexity indicative of the information present in the surface. We are able to segment the digitized mesh models into smooth patches and represent the automotive scene as a graph network of patches. The CVM at the nodes of the graph describes the surface patch. We demonstrate the descriptiveness of the CVM on manufacturer CAD and laser-scanned models.
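The CVM defined above is an entropic measure of curvature. A simplified reading of that idea — entropy of the curvature distribution over a patch — can be sketched as follows; the histogram binning and estimator here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def curvature_variation_measure(curvatures, bins=16):
    """Entropy of the curvature distribution over a surface patch --
    a simplified sketch of the CVM idea (binning and estimator are
    assumptions, not the published implementation)."""
    hist, _ = np.histogram(curvatures, bins=bins)
    p = hist / hist.sum()             # empirical probability per bin
    p = p[p > 0]                      # ignore empty bins
    return float(-(p * np.log2(p)).sum())

# A flat patch (constant curvature) carries no shape information...
flat = np.zeros(100)
# ...while a patch with varied curvature scores higher.
complex_patch = np.random.default_rng(0).normal(size=100)
print(curvature_variation_measure(flat) == 0.0)        # True
print(curvature_variation_measure(complex_patch) > 0)  # True
```

The appeal of an entropy formulation is exactly what the toy case shows: geometrically simple patches score near zero, so high-CVM nodes mark the informative parts of the scene graph.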
The Use of Terrestrial and Maritime Autonomous Vehicles in Nonintrusive Object Inspection
Sensors
Traditional nonintrusive object inspection methods are complex or extremely expensive to apply in certain cases, such as inspection of enormous objects, underwater or maritime inspection, unobtrusive inspection of a crowded place, etc. With the latest advances in robotics, autonomous self-driving vehicles could be applied to this task. The present study is devoted to a review of the existing and novel technologies and methods of using autonomous self-driving vehicles for nonintrusive object inspection. Both terrestrial and maritime self-driving vehicles, their typical construction, sets of sensors, and software algorithms used for implementing self-driving motion were analyzed. The standard types of sensors used for nonintrusive object inspection in security checks at control points, which could be successfully implemented on self-driving vehicles, along with typical areas of implementation of such vehicles, were reviewed, analyzed, and classified.
SAFER under vehicle inspection through video mosaic building
Industrial Robot: An International Journal, 2004
The current threats to US security, both military and civilian, have led to an increased interest in the development of technologies to safeguard national facilities such as military bases, federal buildings, nuclear power plants, and national laboratories. As a result, the imaging, robotics, and intelligent systems (IRIS) laboratory at the University of Tennessee has established a research consortium, known as security automation and future electromotive robotics (SAFER), to develop, test, and deploy sensing and imaging systems. In this paper, we describe efforts made to build multi-perspective mosaics of infrared and color video data for the purpose of under vehicle inspection. It is desired to create a large, high-resolution mosaic that may be used to quickly visualize the entire scene shot by a camera making a single pass underneath the vehicle. Several constraints are placed on the video data in order to facilitate the assumption that the entire scene in the sequence exists on a single plane. Therefore, a single mosaic is used to represent a single video sequence.
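The single-plane assumption described above is what makes a single mosaic per video sequence feasible: if the undercarriage is treated as one plane and the camera makes one straight pass, consecutive frames relate (to first order) by a translation. A toy version of pasting aligned frames into one canvas might look like this; the offset values are assumed to come from a prior frame-alignment step, which is not shown.

```python
import numpy as np

def build_mosaic(frames, offsets):
    """Paste frames into one canvas at given x-offsets -- a toy version
    of the single-plane mosaic: frames from one camera pass under a
    planar scene relate (roughly) by a translation. Offset estimation
    (frame alignment) is assumed to have been done already."""
    h = frames[0].shape[0]
    width = max(off + f.shape[1] for f, off in zip(frames, offsets))
    canvas = np.zeros((h, width), dtype=frames[0].dtype)
    for f, off in zip(frames, offsets):
        canvas[:, off:off + f.shape[1]] = f  # later frames overwrite overlap
    return canvas

# Two overlapping 4x6 "frames", the second shifted by 4 pixels
f1 = np.ones((4, 6), dtype=np.uint8)
f2 = np.full((4, 6), 2, dtype=np.uint8)
mosaic = build_mosaic([f1, f2], [0, 4])
print(mosaic.shape)  # (4, 10)
```

A production mosaic would blend the overlap region and estimate a full planar homography per frame rather than a pure translation, but the canvas-compositing structure is the same.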
Sensors, 2022
One of the primary tasks undertaken by autonomous vehicles (AVs) is object detection, which comes ahead of object tracking, trajectory estimation, and collision avoidance. Vulnerable road objects (e.g., pedestrians, cyclists, etc.) pose a greater challenge to the reliability of object detection operations due to their continuously changing behavior. The majority of commercially available AVs, and research into them, depends on employing expensive sensors. However, this hinders the development of further research on the operations of AVs. In this paper, therefore, we focus on the use of a lower-cost single-beam LiDAR in addition to a monocular camera to achieve multiple 3D vulnerable object detection in real driving scenarios, all the while maintaining real-time performance. This research also addresses the problems faced during object detection, such as the complex interaction between objects where occlusion and truncation occur, and the dynamic changes in the perspective and scale ...
Surface shape description of 3D data from under vehicle inspection robot
Unmanned Ground Vehicle Technology VII, 2005
Our research efforts focus on the deployment of 3D sensing capabilities to a multi-modal under vehicle inspection robot. In this paper, we outline the various design challenges toward the automation of the 3D scene modeling task. We employ laser-based range imaging techniques to extract the geometry of a vehicle's undercarriage and present our results after range integration. We perform shape analysis on the digitized triangle mesh models by segmenting them into smooth surface patches based on the curvedness of the surface. Using a region-growing procedure, we then obtain the patch adjacency. On each of these patches, we apply our definition of the curvature variation measure (CVM) as a descriptor of surface shape complexity. We base the information-theoretic CVM on shape curvature, and extract shape information as the entropic measure of curvature to represent a component as a graph network of patches. The CVM at the nodes of the graph describes the surface patch. We then demonstrate our algorithm with results on automotive components. With a priori manufacturer information about the CAD models in the undercarriage, we approach the technical challenge of threat detection with our surface shape description algorithm on the laser-scanned geometry.
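The segmentation step above — growing smooth patches over the mesh based on curvedness — can be sketched as a breadth-first region-growing pass over the vertex adjacency graph. This is an illustrative simplification: the similarity criterion and tolerance here are assumptions, not the paper's actual thresholds.

```python
from collections import deque

def grow_patches(adjacency, curvedness, tol=0.1):
    """Group vertices into smooth patches by region growing: a vertex
    joins its neighbor's patch when their curvedness values differ by
    less than `tol`. A simplified sketch of the segmentation step."""
    n = len(curvedness)
    labels = [-1] * n
    patch = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue                      # vertex already assigned
        labels[seed] = patch
        queue = deque([seed])
        while queue:                      # BFS over similar neighbors
            v = queue.popleft()
            for u in adjacency[v]:
                if labels[u] == -1 and abs(curvedness[u] - curvedness[v]) < tol:
                    labels[u] = patch
                    queue.append(u)
        patch += 1
    return labels

# A chain of 5 vertices: the curvedness jump between 2 and 3 splits it
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
curvedness = [0.01, 0.02, 0.03, 0.50, 0.51]
print(grow_patches(adjacency, curvedness))  # [0, 0, 0, 1, 1]
```

The resulting labels induce the patch adjacency graph on which a descriptor such as the CVM can then be evaluated per patch.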
ISARC Proceedings, 2003
Equipping engineering machines and trajectory vehicles with automatic and remote control systems is necessary whenever the environment poses a hazard to the operator, e.g., extreme environmental conditions or exposure on a battlefield. It is particularly crucial when no person can be inside the machine or in its close surroundings. This applies both to extreme environmental conditions (high temperature, pressure, or contamination) and to direct threats to human health and life (e.g., removal, disposal, or neutralization of hazardous materials, or area demining, particularly when the enemy is active on the battlefield). In all of these cases, the machines must be operated remotely, without the possibility of direct sensory feedback for the operators. It is therefore necessary to develop a vision system capable of determining the machine's location in a geodesic reference system and the configuration of its operating accessories. This paper gives an overall characterization of the steering unit of an unmanned ground vehicle in relation to the tasks to be performed, considering the required structure of the drive and steering systems. The current state of visual systems used in unmanned ground vehicles is reviewed, and a visual system for remotely controlled machines and ground vehicles is presented, including its structure and the limitations in depth estimation that result from that structure. Furthermore, a method for determining the cameras' location in an external reference system is described.
3D range imaging for urban search and rescue robotics research
IEEE International Workshop on Safety, Security, and Rescue Robotics, 2005
Urban search and rescue (USAR) operations can be extremely dangerous for human rescuers during disaster response. Human task forces, carrying the necessary tools and equipment and having the required skills and techniques, are deployed for the rescue of victims of structural collapse. Instead of sending human rescuers into such dangerous structures, it is hoped that robots will one day meet the requirements to perform such tasks so that rescuers are not put at risk. Recently, the National Institute of Standards and Technology (NIST), sponsored by the Defense Advanced Research Projects Agency, created reference test arenas that simulate collapsed structures for evaluating the performance of autonomous mobile robots performing USAR tasks. At the same time, the NIST Industrial Autonomous Vehicles Project has been studying advanced 3D range sensors for improved robot safety in warehouses and manufacturing environments. This paper discusses combined applications in which advanced 3D range sensors also show promise during USAR operations, toward improved robot performance in collapsed-structure navigation and rescue operations.