Comparative analysis of different implementations of a parallel algorithm for automatic target detection and classification of hyperspectral images
Related papers
Parallel Implementation of Target and Anomaly Detection Algorithms for Hyperspectral Imagery
IGARSS 2008 - 2008 IEEE International Geoscience and Remote Sensing Symposium, 2008
This paper develops several parallel algorithms for target detection in hyperspectral imagery, a crucial goal in many remote sensing applications. In order to illustrate the parallel performance of the proposed algorithms, we consider a massively parallel Beowulf cluster at NASA's Goddard Space Flight Center. Experimental results, obtained with data collected by the AVIRIS sensor over the World Trade Center just five days after the terrorist attacks, indicate that commodity cluster computers can be used as a viable tool to increase the computational performance of hyperspectral target detection applications.
2010 IEEE International Conference on Cluster Computing, 2010
Remotely sensed hyperspectral imaging instruments provide high-dimensional data containing rich information in both the spatial and the spectral domain. In many surveillance applications, detecting objects (targets) is a very important task. In particular, algorithms for detecting (moving or static) targets, or targets that could expand their size (such as propagating fires), often require timely responses for swift decisions, which depend upon high computing performance of algorithm analysis. In this paper, we develop parallel versions of a target detection algorithm based on orthogonal subspace projections. The parallel implementations are tested on two types of parallel computing architectures: a massively parallel cluster of computers called Thunderhead, available at NASA's Goddard Space Flight Center in Maryland, and a commodity NVIDIA GeForce GTX 275 graphics processing unit (GPU). While the cluster-based implementation reveals itself as appealing for information extraction from remote sensing data already transmitted to Earth, the GPU implementation allows us to perform near real-time anomaly detection in hyperspectral scenes, with speedups over 50x with regard to a highly optimized serial version. The proposed parallel algorithms are quantitatively evaluated using hyperspectral data collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) system over the World Trade Center (WTC) in New York, five days after the attacks that collapsed the two main towers in the WTC complex.
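The orthogonal subspace projection (OSP) detector that this paper parallelizes can be sketched as follows. This is a minimal serial illustration, assuming pixels are stored as rows of a matrix; the function name and synthetic signatures are illustrative, not taken from the paper:

```python
import numpy as np

def osp_detector(X, d, U):
    """Orthogonal subspace projection (OSP) target detector.

    X : (pixels, bands) hyperspectral data matrix
    d : (bands,) desired target signature
    U : (bands, k) matrix of undesired/background signatures
    Returns one detection score per pixel.
    """
    # Projector onto the orthogonal complement of the background subspace
    P = np.eye(U.shape[0]) - U @ np.linalg.pinv(U)
    # Score each pixel by its component along d after background suppression
    return (X @ P @ d) / (d @ P @ d)
```

Because each pixel is scored independently, the per-pixel loop is trivially data-parallel, which is what makes both the cluster partitioning and the GPU mapping described in the paper effective.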
Clusters versus GPUs for Parallel Target and Anomaly Detection in Hyperspectral Images
EURASIP Journal on Advances in Signal Processing, 2010
Remotely sensed hyperspectral sensors provide image data containing rich information in both the spatial and the spectral domain, and this information can be used to address detection tasks in many applications. In many surveillance applications, the size of the objects (targets) searched for constitutes a very small fraction of the total search area, and the spectral signatures associated with the targets are generally different from those of the background; hence, the targets can be seen as anomalies. In hyperspectral imaging, many algorithms have been proposed for automatic target and anomaly detection. Given the dimensionality of hyperspectral scenes, these techniques can be time-consuming and difficult to apply in applications requiring real-time performance. In this paper, we develop several new parallel implementations of automatic target and anomaly detection algorithms. The proposed parallel algorithms are quantitatively evaluated using hyperspectral data collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) system over the World Trade Center (WTC) in New York, five days after the terrorist attacks that collapsed the two main towers in the WTC complex.
Satellite Data Compression, Communications, and Processing VI, 2010
Automatic target and anomaly detection are considered very important tasks for hyperspectral data exploitation. These techniques are now routinely applied in many application domains, including defence and intelligence, public safety, precision agriculture, geology, and forestry. Many of these applications require timely responses for swift decisions, which depend upon high computing performance of algorithm analysis. However, with the recent explosion in the amount and dimensionality of hyperspectral imagery, this problem calls for the incorporation of parallel computing techniques. In the past, clusters of computers have offered an attractive solution for fast anomaly and target detection in hyperspectral data sets already transmitted to Earth. However, these systems are expensive and difficult to adapt to on-board data processing scenarios, in which low-weight and low-power integrated components are essential to reduce mission payload and obtain analysis results in (near) real-time, i.e., at the same time as the data is collected by the sensor. An exciting new development in the field of commodity computing is the emergence of commodity graphics processing units (GPUs), which can now bridge the gap towards on-board processing of remotely sensed hyperspectral data. In this paper, we describe several new GPU-based implementations of target and anomaly detection algorithms for hyperspectral data exploitation. The parallel algorithms are implemented on latest-generation Tesla C1060 GPU architectures, and quantitatively evaluated using hyperspectral data collected by NASA's AVIRIS system over the World Trade Center (WTC) in New York, five days after the terrorist attacks that collapsed the two main towers in the WTC complex.
Parallel Implementation of Hyperspectral Image Processing Algorithms
2006
High computing performance of algorithm analysis is essential in many hyperspectral imaging applications, including automatic target recognition for homeland defense and security, risk/hazard prevention and monitoring, wild-land fire tracking and biological threat detection. Despite the growing interest in hyperspectral imaging research, only a few efforts devoted to designing and implementing well-conformed parallel processing solutions currently exist in the open literature. With the recent explosion in the amount and dimensionality of hyperspectral imagery, parallel processing is expected to become a requirement in most remote sensing missions. In this paper, we take a necessary first step towards the quantitative comparison of parallel techniques and strategies for analyzing hyperspectral data sets. Our focus is on three types of algorithms: automatic target recognition, spectral mixture analysis and data compression. Three types of high performance computing platforms are used for demonstration purposes, including commodity cluster-based systems, heterogeneous networks of distributed workstations and hardware-based computer architectures. Combined, these parts deliver a snapshot of the state of the art in those areas, and offer a thoughtful perspective on the potential and emerging challenges of incorporating parallel computing models into hyperspectral remote sensing problems.
2006
The incorporation of hyperspectral sensors aboard airborne/satellite platforms is currently producing a nearly continual stream of multidimensional image data, and this high data volume has introduced new processing challenges. The price paid for the wealth of spatial and spectral information available from hyperspectral sensors is the enormous amount of data that they generate. In many applications, however, obtaining the desired information quickly enough for practical use is essential. High computing performance of algorithm analysis is particularly important in homeland defense and security applications, in which swift decisions often involve detection of (sub-pixel) military targets (including hostile weaponry, camouflage, concealment, and decoys) or chemical/biological agents. In order to speed up the computational performance of hyperspectral imaging algorithms, this paper develops several fast parallel data processing techniques spanning four classes of algorithms: (1) unsupervised classification, (2) spectral unmixing, (3) automatic target recognition, and (4) onboard data compression. A massively parallel Beowulf cluster (Thunderhead) at NASA's Goddard Space Flight Center in Maryland is used to measure the parallel performance of the proposed algorithms. In order to explore the viability of developing onboard, real-time hyperspectral data compression algorithms, a Xilinx Virtex-II field programmable gate array (FPGA) is also used in experiments. Our quantitative and comparative assessment of parallel techniques and strategies may help image analysts in the selection of parallel hyperspectral algorithms for specific applications.
IEEE Geoscience and Remote Sensing Letters, 2013
The detection of (moving or static) targets in remotely sensed hyperspectral images often requires real-time responses for swift decisions that depend upon high computing performance of algorithm analysis. The automatic target detection and classification algorithm (ATDCA) has been widely used for this purpose. In this letter, we develop several optimizations for accelerating the computational performance of ATDCA. The first one focuses on the use of the Gram-Schmidt orthogonalization method instead of the orthogonal projection process adopted by the classic algorithm. The second one is focused on the development of a new implementation of the algorithm on commodity graphics processing units (GPUs). The proposed GPU implementation properly exploits the GPU architecture at low level, including shared memory, and provides coalesced accesses to memory that lead to very significant speedup factors, thus taking full advantage of the computational power of GPUs. The GPU implementation is specifically tailored to hyperspectral imagery and the special characteristics of this kind of data, achieving real-time performance of ATDCA for the first time in the literature. The proposed optimizations are evaluated not only in terms of target detection accuracy but also in terms of computational performance using two different GPU architectures by NVIDIA: Tesla C1060 and GeForce GTX 580, taking advantage of the performance of operations in single-precision floating point. Experiments are conducted using hyperspectral data sets collected by three different hyperspectral imaging instruments. These results reveal considerable acceleration factors while retaining the same target detection accuracy for the algorithm.

Index Terms-Automatic target detection and classification algorithm (ATDCA), commodity graphics processing units (GPUs), Gram-Schmidt (GS) orthogonalization, hyperspectral imaging.
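The first optimization, replacing explicit orthogonal projection matrices with Gram-Schmidt orthogonalization, can be sketched in serial form as follows. This is an illustrative reconstruction under stated assumptions (maximum-brightness pixel selection, residual-norm extraction); the function name and selection criterion are assumptions, not the letter's exact formulation:

```python
import numpy as np

def gram_schmidt_atdca(X, n_targets):
    """Sketch of ATDCA-style target extraction using Gram-Schmidt
    orthogonalization instead of building full projection matrices.

    X : (pixels, bands) data matrix.
    Returns the indices of the extracted target pixels.
    """
    # First target: the brightest pixel (largest L2 norm)
    idx = [int(np.argmax(np.sum(X**2, axis=1)))]
    basis = []  # orthonormal basis spanned by targets found so far
    for _ in range(n_targets - 1):
        v = X[idx[-1]].astype(float)
        # Gram-Schmidt step: remove components along the existing basis
        for q in basis:
            v -= (v @ q) * q
        v /= np.linalg.norm(v)
        basis.append(v)
        # Residual of every pixel in the orthogonal complement of the basis
        residual = X - sum(np.outer(X @ q, q) for q in basis)
        # Next target: the pixel with the largest residual energy
        idx.append(int(np.argmax(np.sum(residual**2, axis=1))))
    return idx
```

Incrementally orthogonalizing one vector per iteration avoids recomputing a bands-by-bands projection matrix at each step, which is the source of the speedup the letter attributes to this optimization.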
Fast Anomaly Detection Algorithms For Hyperspectral Images
J. Multidisciplinary Engineering Science and Technology, 2015
Hyperspectral images have been used in anomaly and change detection applications such as search and rescue operations, where fast detection is critical. However, the conventional Reed-Xiaoli (RX) algorithm [6] took about 600 seconds on a PC to process an 800x1024 hyperspectral image with 10 bands. This is not acceptable for real-time applications. A more recent algorithm known as kernel RX (KRX) [7] achieves better detection performance than RX at the expense of computational cost. For example, for the same 800x1024 image with 10 bands, KRX took 15 hours to finish the processing. In this paper, we present a general framework for fast anomaly detection using the RX and KRX algorithms. First, a fast data reduction scheme using Principal Component Analysis (PCA) is proposed. This method takes less than 1 second to finish, and the performance degradation is minimal. Second, we propose several speed-boosting options in the RX and KRX algorithms. These options include image sub-sampling, the use of block pixels, and background pixel sub-sampling. Actual hyperspectral images have been used in our studies. Receiver operating characteristic (ROC) curves and actual computation times were used to compare the various options. For the 800x1024x10 image, we were able to improve the speed by more than 220 times for RX and 700 times for KRX with minimal degradation in detection performance.
Anomaly Detection Algorithms for Hyperspectral Imagery
2004
Nowadays, the use of hyperspectral imagery, and specifically of automatic target detection algorithms for these images, is an exciting area of research. An important challenge of hyperspectral target detection is to detect small targets without any prior knowledge, particularly when the targets of interest are insignificant, with low probabilities of occurrence. The specific characteristic of anomaly detection is that it requires neither atmospheric correction nor signature libraries. Recently, several useful applications of anomaly detection approaches have been developed in remote sensing. With this in mind, this paper compares several anomaly detectors, including RX-based anomaly detectors (MRX, NRX, CRX, RX-UTD) as well as adaptive anomaly detectors such as the Nested Spatial Window-based approach (NSW) and the dual window-based eigen separation transform (DWEST). Finally, the most efficient method is proposed for implementation in a planned software system.
PARALLELIZATION OF HYPERSPECTRAL IMAGING CLASSIFICATION AND DIMENSIONALITY REDUCTION ALGORITHMS
2004
Hyperspectral imaging provides the capability to identify and classify materials remotely. This technology is applied everywhere from medical devices and military targets to the environmental sciences. With the ongoing advances in spectrometers (spatial resolution and bits-per-pixel density), the data gathered is constantly increasing. Some hyperspectral imaging algorithms could easily take days or weeks to analyze a single full hyperspectral data set. In this thesis, we ported and parallelized four hyperspectral algorithms representative of the type of analysis done on a typical data set. Two of the algorithms are in the area of data classification, one is in the area of feature reduction, and the other is a combination of both areas. The parallelized algorithms were benchmarked on the 32-bit Intel Pentium M architecture and the newer 64-bit Intel Itanium 2 architecture. For three of the four algorithms, we demonstrated that the use of parallel app...