Cluster versus GPU implementation of an Orthogonal Target Detection Algorithm for Remotely Sensed Hyperspectral Images

Clusters versus GPUs for Parallel Target and Anomaly Detection in Hyperspectral Images

EURASIP Journal on Advances in Signal Processing, 2010

Remotely sensed hyperspectral sensors provide image data containing rich information in both the spatial and the spectral domain, and this information can be used to address detection tasks in many applications. In many surveillance applications, the size of the objects (targets) searched for constitutes a very small fraction of the total search area, and the spectral signatures associated with the targets are generally different from those of the background; hence the targets can be seen as anomalies. In hyperspectral imaging, many algorithms have been proposed for automatic target and anomaly detection. Given the dimensionality of hyperspectral scenes, these techniques can be time-consuming and difficult to apply in applications requiring real-time performance. In this paper, we develop several new parallel implementations of automatic target and anomaly detection algorithms. The proposed parallel algorithms are quantitatively evaluated using hyperspectral data collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) system over the World Trade Center (WTC) in New York, five days after the terrorist attacks that collapsed the two main towers in the WTC complex.

GPU implementation of target and anomaly detection algorithms for remotely sensed hyperspectral image analysis

Satellite Data Compression, Communications, and Processing VI, 2010

Automatic target and anomaly detection are considered very important tasks for hyperspectral data exploitation. These techniques are now routinely applied in many application domains, including defence and intelligence, public safety, precision agriculture, geology, and forestry. Many of these applications require timely responses for swift decisions, which depend upon the high computing performance of algorithm analysis. However, with the recent explosion in the amount and dimensionality of hyperspectral imagery, this problem calls for the incorporation of parallel computing techniques. In the past, clusters of computers have offered an attractive solution for fast anomaly and target detection in hyperspectral data sets already transmitted to Earth. However, these systems are expensive and difficult to adapt to on-board data processing scenarios, in which low-weight and low-power integrated components are essential to reduce mission payload and obtain analysis results in (near) real-time, i.e., at the same time as the data is collected by the sensor. An exciting new development in the field of commodity computing is the emergence of commodity graphics processing units (GPUs), which can now bridge the gap towards on-board processing of remotely sensed hyperspectral data. In this paper, we describe several new GPU-based implementations of target and anomaly detection algorithms for hyperspectral data exploitation. The parallel algorithms are implemented on latest-generation Tesla C1060 GPU architectures, and quantitatively evaluated using hyperspectral data collected by NASA's AVIRIS system over the World Trade Center (WTC) in New York, five days after the terrorist attacks that collapsed the two main towers in the WTC complex.

Comparative analysis of different implementations of a parallel algorithm for automatic target detection and classification of hyperspectral images

2009

Automatic target detection in hyperspectral images is a task that has attracted a lot of attention recently. In the last few years, several algorithms have been developed for this purpose, including the well-known RX algorithm for anomaly detection, or the automatic target detection and classification algorithm (ATDCA), which uses an orthogonal subspace projection (OSP) approach to extract a set of spectrally distinct targets automatically from the input hyperspectral data. Depending on the complexity and dimensionality of the analyzed image scene, the target/anomaly detection process may be computationally very expensive, a fact that limits the possibility of utilizing this process in time-critical applications. In this paper, we develop computationally efficient parallel versions of both the RX and ATDCA algorithms for near real-time exploitation. In the case of ATDCA, we use several distance metrics in addition to the OSP approach. The parallel versions are quantitatively compared in terms of target detection accuracy, using hyperspectral data collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) over the World Trade Center in New York, five days after the terrorist attack of September 11th, 2001, and also in terms of parallel performance, using a massively parallel Beowulf cluster available at NASA's Goddard Space Flight Center in Maryland.
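The OSP-based target extraction in ATDCA iteratively projects all pixels onto the orthogonal complement of the targets already found and keeps the pixel with the largest residual. The following is a minimal serial NumPy sketch of that idea, not the parallel implementation evaluated in the paper; the function name and toy data are illustrative.

```python
import numpy as np

def atgp_osp(cube, num_targets):
    """ATDCA-style target extraction via orthogonal subspace projection.

    cube: (rows, cols, bands) hyperspectral image.
    Returns flat pixel indices of `num_targets` spectrally distinct targets.
    """
    pixels = cube.reshape(-1, cube.shape[-1]).astype(np.float64)
    bands = pixels.shape[1]
    # First target: the pixel with maximum spectral norm (the brightest one).
    targets = [int(np.argmax(np.sum(pixels ** 2, axis=1)))]
    for _ in range(num_targets - 1):
        U = pixels[targets].T                 # (bands, k) selected targets
        # Projector onto the orthogonal complement of span(U):
        # P = I - U (U^T U)^{-1} U^T, computed here via the pseudoinverse.
        P = np.eye(bands) - U @ np.linalg.pinv(U)
        residual = pixels @ P
        # Next target: the pixel with the largest residual norm.
        targets.append(int(np.argmax(np.sum(residual ** 2, axis=1))))
    return targets

# Toy 2x2 scene with three orthogonal "materials" of decreasing brightness.
cube = np.zeros((2, 2, 3))
cube[0, 0] = [10, 0, 0]
cube[0, 1] = [0, 8, 0]
cube[1, 0] = [0, 0, 6]
print(atgp_osp(cube, 3))   # → [0, 1, 2]
```

The per-pixel residual norms are independent of each other, which is what makes this loop body a natural candidate for data partitioning across cluster nodes.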

Parallel Implementation of Target and Anomaly Detection Algorithms for Hyperspectral Imagery

IGARSS 2008 - 2008 IEEE International Geoscience and Remote Sensing Symposium, 2008

This paper develops several parallel algorithms for target detection in hyperspectral imagery, considered to be a crucial goal in many remote sensing applications. In order to illustrate parallel performance of the proposed parallel algorithms, we consider a massively parallel Beowulf cluster at NASA's Goddard Space Flight Center. Experimental results, obtained using data collected by the AVIRIS sensor over the World Trade Center just five days after the terrorist attacks, indicate that commodity cluster computers can be used as a viable tool to increase computational performance of hyperspectral target detection applications.

Evaluation of the graphics processing unit architecture for the implementation of target detection algorithms for hyperspectral imagery

Journal of Applied Remote Sensing, 2012

Recent advances in hyperspectral imaging sensors allow the acquisition of images of a scene at hundreds of contiguous narrow spectral bands. Target detection algorithms try to exploit this high-resolution spectral information to detect target materials present in a scene, but this process may be computationally intensive due to the large data volumes generated by hyperspectral sensors, typically hundreds of megabytes. Previous works have shown that hyperspectral data processing can significantly benefit from the parallel computing resources of graphics processing units (GPUs), due to their highly parallel structure and the high computational capabilities that can be achieved at relatively low cost. We studied the parallel implementation of three target detection algorithms (RX algorithm, matched filter, and adaptive matched subspace detector) for hyperspectral images in order to identify the aspects in the structure of these algorithms that can exploit the CUDA™ architecture of NVIDIA® GPUs. A data set was generated using a SOC-700 hyperspectral imager to evaluate the performance and detection accuracy of the parallel implementations on an NVIDIA® Tesla™ C1060 graphics card, achieving real-time performance in the GPU implementations based on global statistics.
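The "global statistics" variant mentioned above scores each pixel by its Mahalanobis distance to a mean and covariance estimated from the whole image. Below is a minimal serial NumPy sketch of that global RX detector, with synthetic data; it illustrates the math only, not the CUDA implementation studied in the paper.

```python
import numpy as np

def rx_global(cube):
    """Global RX anomaly detector (minimal sketch).

    cube: (rows, cols, bands) image. Returns a (rows, cols) map of
    Mahalanobis distances of each pixel to the global background
    statistics; large values flag spectral anomalies.
    """
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(np.float64)
    mu = pixels.mean(axis=0)                             # global mean spectrum
    centered = pixels - mu
    cov = centered.T @ centered / (pixels.shape[0] - 1)  # global covariance
    cov_inv = np.linalg.inv(cov)
    # delta(x) = (x - mu)^T Sigma^{-1} (x - mu), evaluated per pixel.
    scores = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    return scores.reshape(rows, cols)

# Synthetic background noise with one injected bright anomaly.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, size=(16, 16, 5))
img[3, 4] += 10.0                           # anomaly at row 3, column 4
scores = rx_global(img)
print(np.unravel_index(np.argmax(scores), scores.shape))  # → (3, 4)
```

Because mean, covariance, and the per-pixel quadratic form are all reductions or independent per-pixel work, this structure is what allows the GPU version to reach real-time throughput.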

GPU Implementation of an Automatic Target Detection and Classification Algorithm for Hyperspectral Image Analysis

IEEE Geoscience and Remote Sensing Letters, 2013

The detection of (moving or static) targets in remotely sensed hyperspectral images often requires real-time responses for swift decisions that depend upon high computing performance of algorithm analysis. The automatic target detection and classification algorithm (ATDCA) has been widely used for this purpose. In this letter, we develop several optimizations for accelerating the computational performance of ATDCA. The first one focuses on the use of the Gram-Schmidt orthogonalization method instead of the orthogonal projection process adopted by the classic algorithm. The second one is focused on the development of a new implementation of the algorithm on commodity graphics processing units (GPUs). The proposed GPU implementation properly exploits the GPU architecture at low level, including shared memory, and provides coalesced accesses to memory that lead to very significant speedup factors, thus taking full advantage of the computational power of GPUs. The GPU implementation is specifically tailored to hyperspectral imagery and the special characteristics of this kind of data, achieving real-time performance of ATDCA for the first time in the literature. The proposed optimizations are evaluated not only in terms of target detection accuracy but also in terms of computational performance using two different GPU architectures by NVIDIA: Tesla C1060 and GeForce GTX 580, taking advantage of the performance of operations in single-precision floating point. Experiments are conducted using hyperspectral data sets collected by three different hyperspectral imaging instruments. These results reveal considerable acceleration factors while retaining the same target detection accuracy for the algorithm.

Index Terms: Automatic target detection and classification algorithm (ATDCA), commodity graphics processing units (GPUs), Gram-Schmidt (GS) orthogonalization, hyperspectral imaging.
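The Gram-Schmidt optimization replaces the full orthogonal projector with an incremental rank-one deflation of each pixel's residual. The following serial NumPy sketch illustrates this variant (the function name and toy data are illustrative, and this is a sketch of the idea rather than the CUDA kernel described in the letter):

```python
import numpy as np

def atdca_gs(cube, num_targets):
    """ATDCA target selection via Gram-Schmidt deflation (sketch).

    Rather than rebuilding a full (bands x bands) orthogonal projector
    at every step, each pixel's residual is updated in place by
    subtracting its component along one new orthonormal direction.
    The per-pixel update is independent, which maps well onto GPU
    threads with coalesced memory access.
    """
    pixels = cube.reshape(-1, cube.shape[-1]).astype(np.float64)
    residual = pixels.copy()
    targets = [int(np.argmax(np.sum(residual ** 2, axis=1)))]
    for _ in range(num_targets - 1):
        # Orthonormal direction of the most recently selected target.
        q = residual[targets[-1]]
        q = q / np.linalg.norm(q)
        # Deflate every pixel's residual along q (a rank-one update).
        residual -= np.outer(residual @ q, q)
        targets.append(int(np.argmax(np.sum(residual ** 2, axis=1))))
    return targets

# Toy 2x2 scene with three orthogonal "materials" of decreasing brightness.
cube = np.zeros((2, 2, 3))
cube[0, 0] = [10, 0, 0]
cube[0, 1] = [0, 8, 0]
cube[1, 0] = [0, 0, 6]
print(atdca_gs(cube, 3))   # → [0, 1, 2]
```

The rank-one update costs O(pixels × bands) per selected target, versus the O(bands²) projector rebuild plus O(pixels × bands²) projection of the classic OSP formulation, which is where the reported acceleration comes from.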

Parallel Implementation of Hyperspectral Image Processing Algorithms

2006

High computing performance of algorithm analysis is essential in many hyperspectral imaging applications, including automatic target recognition for homeland defense and security, risk/hazard prevention and monitoring, wild-land fire tracking and biological threat detection. Despite the growing interest in hyperspectral imaging research, only a few efforts devoted to designing and implementing well-conformed parallel processing solutions currently exist in the open literature. With the recent explosion in the amount and dimensionality of hyperspectral imagery, parallel processing is expected to become a requirement in most remote sensing missions. In this paper, we take a necessary first step towards the quantitative comparison of parallel techniques and strategies for analyzing hyperspectral data sets. Our focus is on three types of algorithms: automatic target recognition, spectral mixture analysis and data compression. Three types of high performance computing platforms are used for demonstration purposes, including commodity cluster-based systems, heterogeneous networks of distributed workstations and hardware-based computer architectures. Combined, these efforts deliver a snapshot of the state of the art in those areas, and offer a thoughtful perspective on the potential and emerging challenges of incorporating parallel computing models into hyperspectral remote sensing problems.

Commodity cluster and hardware-based massively parallel implementations of hyperspectral imaging algorithms

2006

The incorporation of hyperspectral sensors aboard airborne/satellite platforms is currently producing a nearly continual stream of multidimensional image data, and this high data volume has quickly introduced new processing challenges. The price paid for the wealth of spatial and spectral information available from hyperspectral sensors is the enormous amount of data that they generate. In many applications, however, it is highly desirable to have the desired information calculated quickly enough for practical use. High computing performance of algorithm analysis is particularly important in homeland defense and security applications, in which swift decisions often involve detection of (sub-pixel) military targets (including hostile weaponry, camouflage, concealment, and decoys) or chemical/biological agents. In order to speed up the computational performance of hyperspectral imaging algorithms, this paper develops several fast parallel data processing techniques, spanning four classes of algorithms: (1) unsupervised classification, (2) spectral unmixing, (3) automatic target recognition, and (4) onboard data compression. A massively parallel Beowulf cluster (Thunderhead) at NASA's Goddard Space Flight Center in Maryland is used to measure parallel performance of the proposed algorithms. In order to explore the viability of developing onboard, real-time hyperspectral data compression algorithms, a Xilinx Virtex-II field programmable gate array (FPGA) is also used in experiments. Our quantitative and comparative assessment of parallel techniques and strategies may help image analysts in the selection of parallel hyperspectral algorithms for specific applications.

On the Use of Cluster Computing Architectures for Implementation of Hyperspectral Image Analysis Algorithms

2005

Hyperspectral sensors represent the most advanced instruments currently available for remote sensing of the Earth. The high spatial and spectral resolution of the images supplied by systems like the airborne visible infra-red imaging spectrometer (AVIRIS), developed by NASA Jet Propulsion Laboratory, allows their exploitation in diverse applications, such as detection and control of wildfires and hazardous agents in water and atmosphere, detection of military targets, and management of natural resources. Even though the above applications require a response in real time, few solutions are available to provide fast and efficient analysis of these types of data. This is mainly caused by the dimensionality of hyperspectral images, which limits their exploitation in analysis scenarios where the spatial and temporal requirements are very high. In the present work, we describe a new parallel methodology which addresses most of these problems. The computational performance of the proposed analysis methodology is evaluated using two parallel computer systems, an SGI Origin 2000 shared memory system located at the European Center of Parallelism of Barcelona, and the Thunderhead Beowulf cluster at NASA's Goddard Space Flight Center.

Parallel Detection of Targets in Hyperspectral Images Using Heterogeneous Networks of Workstations

15th EUROMICRO International Conference on Parallel, Distributed and Network-Based Processing (PDP'07), 2007

Heterogeneous networks of workstations have rapidly become a cost-effective computing solution in many application areas. This paper develops several highly innovative parallel algorithms for target detection in hyperspectral imagery, considered to be a crucial goal in remote sensing-based homeland security and defense applications. In order to illustrate parallel performance, we consider four (partially and fully) heterogeneous networks of workstations distributed among different locations at University of Maryland, and also a massively parallel Beowulf cluster at NASA's Goddard Space Flight Center. Experimental results indicate that heterogeneous networks can be used as a viable low-cost alternative to homogeneous parallel systems in many on-going and planned remote sensing missions.