Paul Gader - Profile on Academia.edu
Papers by Paul Gader
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Heterogeneous data fusion can enhance the robustness and accuracy of an algorithm on a given task. However, due to the differences among modalities, aligning the sensors and embedding their information into discriminative and compact representations is challenging. In this article, we propose a contrastive learning-based multimodal alignment network to align data from different sensors into a shared and discriminative manifold where class information is preserved. The proposed architecture uses a multimodal triplet autoencoder to cluster the latent space in such a way that samples of the same classes from each heterogeneous modality are mapped close to each other. Since all the modalities exist in a shared manifold, a unified classification framework is proposed. The resulting latent space representations are fused to perform more robust and accurate classification. In a missing-sensor scenario, the latent space of one sensor is easily and efficiently predicted using another sensor's latent space, thereby allowing sensor translation. We conducted extensive experiments on a manually labeled multimodal dataset containing hyperspectral data from AVIRIS-NG and NEON and light detection and ranging (LiDAR) data from NEON. Finally, the model is validated on two benchmark datasets: the Berlin Dataset (hyperspectral and synthetic aperture radar) and the MUUFL Gulfport Dataset (hyperspectral and LiDAR). A comparison with other methods demonstrates the superiority of this method. We achieved a mean overall accuracy of 94.3% on the MUUFL dataset and a best overall accuracy of 71.26% on the Berlin dataset, which is better than other state-of-the-art approaches.
Publication in the conference proceedings of EUSIPCO, Marrakech, Morocco, 2013
PeerJ, 2019
Tree species classification using hyperspectral imagery is a challenging task due to the high spectral similarity between species and large intra-species variability. This paper proposes a solution using the Multiple Instance Adaptive Cosine Estimator (MI-ACE) algorithm. MI-ACE estimates a discriminative target signature to differentiate between a pair of tree species while accounting for label uncertainty. Multi-class species classification is achieved by training a set of one-vs-one MI-ACE classifiers, one for each pair of tree species, and applying majority voting to the classification results from all classifiers. Additionally, the performance of MI-ACE does not rely on parameter settings that require tuning, resulting in a method that is easy to use in application. Results presented use training and testing data provided by a data analysis competition aimed at encouraging the development of methods for extracting ecological information through re...
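The classical ACE statistic that MI-ACE builds on has a simple closed form: the squared cosine of the angle between the whitened pixel and the whitened target signature, using background mean and covariance. A minimal sketch of that statistic on synthetic data (the background samples and target signature below are illustrative assumptions, not the paper's learned signatures):

```python
import numpy as np

def ace(x, s, mu, sigma_inv):
    """Adaptive cosine estimator: squared cosine of the angle between
    the whitened pixel (x - mu) and the whitened target signature s."""
    xc = x - mu
    num = (s @ sigma_inv @ xc) ** 2
    den = (s @ sigma_inv @ s) * (xc @ sigma_inv @ xc)
    return num / den

rng = np.random.default_rng(1)
B = 30                                       # number of spectral bands
bg = rng.standard_normal((500, B))           # synthetic background pixels
mu = bg.mean(axis=0)
sigma_inv = np.linalg.inv(np.cov(bg.T))      # inverse background covariance
s = rng.standard_normal(B)                   # hypothetical target signature

on_target = ace(mu + 2.0 * s, s, mu, sigma_inv)   # pixel aligned with s
off_target = ace(bg[0], s, mu, sigma_inv)         # ordinary clutter pixel
print(on_target, off_target)
```

A pixel whose background-subtracted residual points along the signature scores near 1; clutter scores near 0, which is what makes a one-vs-one vote between species signatures meaningful.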
IEEE Transactions on Image Processing, Jan 18, 2016
The normal compositional model (NCM) has been extensively used in hyperspectral unmixing. However, previous research has mostly focused on estimation of endmembers and/or their variability, based on the assumption that the pixels are independent random variables. In this paper, we show that this assumption does not hold if all the pixels are generated by a fixed endmember set. This introduces another concept, endmember uncertainty, which is related to whether the pixels fit into the endmember simplex. To further develop this idea, we derive the NCM from the ground up without the pixel independence assumption, along with (i) using different noise levels at different wavelengths and (ii) using a spatial and sparsity promoting prior for the abundances. The resulting new formulation is called the spatial compositional model (SCM) to better differentiate it from the NCM. The SCM maximum a posteriori (MAP) objective leads to an optimization problem featuring noise weighted least-squares m...
In the linear mixing model, many techniques for endmember extraction are based on the assumption that pure pixels exist in the data and form the extremes of a simplex embedded in the data cloud. These endmembers can then be obtained by geometrical approaches, such as looking for the largest simplex, or by maximal orthogonal subspace projections. The abundances of each pixel with respect to these endmembers can also be obtained entirely in geometrical terms. While these geometrical algorithms assume Euclidean geometry, it has been shown that using different metrics can offer certain benefits, such as dealing with nonlinear mixing effects by using geodesic or kernel distances, or dealing with correlations and colored noise by using Mahalanobis metrics. In this paper, we demonstrate how a linear unmixing chain based on maximal orthogonal subspace projections and simplex projection can be written in terms of distance geometry, so that other metrics can be easily employed...
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2015
Several popular endmember extraction and unmixing algorithms are based on the geometrical interpretation of the linear mixing model, and assume the presence of pure pixels in the data. These endmembers can be identified by maximizing a simplex volume, or finding maximal distances in subsequent subspace projections, while unmixing can be considered a simplex projection problem. Since many of these algorithms can be written in terms of distance geometry, where mutual distances are the properties of interest instead of Euclidean coordinates, one can design an unmixing chain where other distance metrics are used. Many preprocessing steps such as (nonlinear) dimensionality reduction or data whitening, and several nonlinear unmixing models such as the Hapke and bilinear models, can be considered as transformations to a different data space, with a corresponding metric. In this paper, we show how one can use different metrics in geometry-based endmember extraction and unmixing algorithms, and demonstrate the results for some well-known metrics, such as the Mahalanobis distance, the Hapke model for intimate mixing, the polynomial post-nonlinear model, and graph-geodesic distances. This offers a flexible processing chain, where many models and preprocessing steps can be transparently incorporated through the use of the proper distance function.
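One concrete instance of this metric swapping: a Mahalanobis distance under a covariance C is exactly a Euclidean distance after whitening the data, so a Euclidean geometry-based unmixing chain can be reused unchanged on whitened points. A small numerical check of that identity (the covariance here is synthetic and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
C = A @ A.T + np.eye(5)          # covariance defining the Mahalanobis metric

# With C^{-1} = L L^T (Cholesky factorization), the Mahalanobis distance
# between x and y equals the Euclidean distance between L^T x and L^T y.
L = np.linalg.cholesky(np.linalg.inv(C))
x, y = rng.standard_normal(5), rng.standard_normal(5)

d_mahal = np.sqrt((x - y) @ np.linalg.inv(C) @ (x - y))
d_euclid_whitened = np.linalg.norm(L.T @ (x - y))
print(d_mahal, d_euclid_whitened)   # agree up to rounding
```

This is why whitening can be treated as just another "transformation to a different data space with a corresponding metric" in the processing chain described above.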
Context-based unmixing has been studied by several researchers. Recent techniques, such as piece-wise convex unmixing using fuzzy and possibilistic clustering or the Bayesian methods proposed in [11], attempt to form contexts via clustering. It is assumed that the linear mixing model applies to each cluster (context), and endmembers and abundances are found for each cluster. As the clusters are spatially coherent, hyperspectral image segmentation can significantly aid unmixing approaches that perform cluster-specific estimation of endmembers. In this work, we integrate a graph-cuts segmentation algorithm with piece-wise convex unmixing. This is compared to fuzzy clustering (FCM), with results obtained on two datasets. The results demonstrate that the integrated approach achieves better segmentation and more precise endmember identification (in terms of comparisons with known ground truth).
2013 IEEE International Geoscience and Remote Sensing Symposium - IGARSS, 2013
A new algorithm for subpixel target detection in hyperspectral imagery is proposed which uses the PFCM-FLICM-PCE algorithm to model and estimate the parameters of the image background. This method uses the piece-wise convex mixing model with spatial-spectral constraints, and uses possibilistic and fuzzy clustering techniques to find the piece-wise convex regions and robustly estimate the parameters. A method for integrating the elevation measurements of a co-registered LiDAR sensor is also proposed. The performance of the proposed methods is demonstrated on a real-world dataset with emplaced detection targets.
SPIE Proceedings, 2008
For complex detection and classification problems involving data with large intraclass variations and noisy inputs, no single source of information can provide a satisfactory solution. As a result, combination of multiple classifiers is playing an increasing role in solving these complex pattern recognition problems, and has proven to be a viable alternative to using a single classifier. Over the past few years, a variety of schemes have been proposed for combining multiple classifiers. Most of these were global, as they assign a degree of worthiness to each classifier that is averaged over the entire training data. This may not be the optimal way to combine the different experts, since the behavior of each one may not be uniform over the different regions of the feature space. To overcome this issue, a few local methods have been proposed in recent years. Local fusion methods aim to adapt the classifiers' worthiness to different regions of the feature space. First, they partition the input samples. Then, they identify the best classifier for each partition and designate it as the expert for that partition. Unfortunately, current local methods are either computationally expensive and/or perform these two tasks independently of each other. However, feature space partition and algorithm selection are not independent, and their optimization should be simultaneous.
2010 IEEE International Geoscience and Remote Sensing Symposium, 2010
I. INTRODUCTION The widely-used convex geometry model for hyperspectral imagery assumes that spectra in a hyperspectral image are convex combinations of the endmembers in the scene [1, 2]: $x_i = \sum_{k=1}^{M} p_{ik} e_k + \epsilon_i$, $i = 1, \ldots, N$ (1), where $N$ is the number of pixels in the image, $M$ is the number of endmembers, $\epsilon_i$ is an error term, $p_{ik}$ is the proportion (abundance) of endmember $k$ in pixel $i$, and $e_k$ is the $k$-th endmember. The proportions of this model satisfy the constraints in Equation 2: $p_{ik} \geq 0 \;\forall k = 1, \ldots, M$ and $\sum_{k=1}^{M} p_{ik} = 1$ (2). Given this model, spectral unmixing and endmember detection are the tasks of determining the endmembers and the proportions for every data point in the scene. Several endmember detection and spectral unmixing algorithms have been developed in the literature. However, the majority of these methods do not provide an autonomous way to estimate the number of endmembers and, thus, require the number of endmembers in advance. These methods include those based on Non-negative Matrix Factorization [3, 4], those based on Independent Components Analysis [5, 6], and others [7, 8]. The number of endmembers is often unknown in advance. Methods to estimate the number of endmembers from a data set have been developed as well. These include the Virtual Dimensionality (VD), Transformed Gershgorin Disk (TGD), Noise-Adjusted TGD, and Partitioned Noise-Adjusted Principal Components Analysis (PNAPCA) methods [9-11]. The VD method estimates the number of endmembers using the eigenvalues of the covariance and correlation matrices of the hyperspectral data set. The number of endmembers is set to the number of eigenvalues from the covariance and correlation matrices that differ based on some computed threshold. Due to the variances used when computing the thresholds, the VD method can be sensitive to noise in the data.
The PNAPCA method relies on the use of the Maximum Noise Fraction (MNF) algorithm [12]. MNF simultaneously diagonalizes the data covariance matrix and whitens the noise covariance matrix for a data set. This requires an estimate of the noise covariance.
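The linear mixing model and its constraints from Equations 1 and 2 can be exercised numerically. The sketch below synthesizes pixels from random endmembers and recovers the proportions with a common fully-constrained least-squares approximation, where the sum-to-one constraint is enforced by a heavily weighted augmented row; this is a generic illustration, not any particular algorithm from these papers:

```python
import numpy as np
from scipy.optimize import nnls

def unmix_fcls(X, E, delta=1e3):
    """Fully-constrained least-squares unmixing: nonnegativity via NNLS,
    sum-to-one enforced approximately by a heavily weighted row of ones.
    X: (N, B) pixel spectra; E: (M, B) endmember spectra."""
    A = np.vstack([E.T, delta * np.ones(E.shape[0])])   # (B+1, M)
    P = np.empty((X.shape[0], E.shape[0]))
    for i, x in enumerate(X):
        b = np.concatenate([x, [delta]])                # augmented pixel
        P[i], _ = nnls(A, b)
    return P

# Synthesize pixels from the model x_i = sum_k p_ik e_k + eps_i
rng = np.random.default_rng(0)
E = rng.random((3, 50))                       # 3 endmembers, 50 bands
P_true = rng.dirichlet(np.ones(3), size=20)   # proportions: >= 0, sum to 1
X = P_true @ E + 1e-4 * rng.standard_normal((20, 50))

P_est = unmix_fcls(X, E)
print(np.abs(P_est - P_true).max())           # small recovery error
```

With low noise and known endmembers the proportions are recovered closely; the harder problem the surveyed methods address is that the endmembers, and even their number, are usually unknown.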
2012 IEEE International Workshop on Machine Learning for Signal Processing, 2012
Context Dependent Spectral Unmixing. A hyperspectral unmixing algorithm that finds multiple sets of endmembers is proposed. The algorithm, called Context Dependent Spectral Unmixing (CDSU), is a local approach that adapts the unmixing to different regions of the spectral space. It is based on a novel objective function that combines context identification and unmixing. This joint objective function models contexts as compact clusters and uses the linear mixing model as the basis for unmixing. Several variations of the CDSU that provide additional desirable features are also proposed.
IEEE Signal Processing Magazine, 2014
IEEE Transactions on Geoscience and Remote Sensing, 2010
A new hyperspectral endmember detection method is presented that represents endmembers as distributions, autonomously partitions the input data set into several convex regions, and simultaneously determines endmember distributions and proportion values for each convex region. Spectral unmixing methods that treat endmembers as distributions, or hyperspectral images as piece-wise convex data sets, have not been previously developed. Piece-wise Convex Endmember detection (PCE) can be viewed in two parts. The first, the Endmember Distributions detection (ED) algorithm, estimates a distribution for each endmember rather than estimating a single spectrum. By using endmember distributions, PCE can incorporate an endmember's inherent spectral variation and the variation due to changing environmental conditions. ED uses a new sparsity-promoting polynomial prior while estimating abundance values. The second part of PCE partitions the input hyperspectral data set into convex regions and estimates endmember distributions and proportions for each of these regions. The number of convex regions is determined autonomously using the Dirichlet process. PCE is effective at handling highly-mixed hyperspectral images where all of the pixels in the scene contain mixtures of multiple endmembers. Furthermore, each convex region found by PCE conforms to the convex geometry model for hyperspectral imagery. This model requires that the proportions associated with a pixel be nonnegative and sum to one. Algorithm results on hyperspectral data indicate that PCE produces endmembers that represent the true ground truth classes of the input data set. The algorithm can also effectively represent endmembers as distributions, thus incorporating an endmember's spectral variability.
IEEE Transactions on Geoscience and Remote Sensing, 2008
We develop a vegetation mapping method using long-wave hyperspectral imagery and apply it to landmine detection. The novel aspect of the method is that it makes use of emissivity skewness. The main purpose of vegetation detection for mine detection is to minimize false alarms. Vegetation, such as round bushes, may be mistaken as mines by mine detection algorithms, particularly in synthetic aperture radar (SAR) imagery. We employ an unsupervised vegetation detection algorithm that exploits statistics of emissivity spectra of vegetation in the long-wave infrared spectrum for identification. This information is incorporated into a Choquet integral-based fusion structure, which fuses detector outputs from hyperspectral imagery and SAR imagery. Vegetation mapping is shown to improve mine detection results over a variety of images and fusion models.
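The discrete Choquet integral underlying that fusion structure is short to state: sort the detector outputs in decreasing order, then weight each output by the increment of the fuzzy measure as its source joins the growing coalition. A sketch with a hypothetical two-detector measure (the measure values below are invented for illustration and must be monotone in the subset ordering):

```python
def choquet(h, g):
    """Discrete Choquet integral of detector outputs h (source -> value in
    [0, 1]) with respect to a fuzzy measure g (frozenset of sources ->
    weight), assuming g is monotone and g(empty set) = 0."""
    sources = sorted(h, key=h.get, reverse=True)   # outputs in decreasing order
    total, prev = 0.0, 0.0
    coalition = set()
    for s in sources:
        coalition.add(s)
        w = g[frozenset(coalition)]
        total += h[s] * (w - prev)   # weight the increment of the measure
        prev = w
    return total

# Hypothetical measure over hyperspectral (hsi) and SAR (sar) detectors
g = {frozenset({"hsi"}): 0.6,
     frozenset({"sar"}): 0.5,
     frozenset({"hsi", "sar"}): 1.0}

fused = choquet({"hsi": 0.9, "sar": 0.4}, g)
print(fused)
```

Because the weight attached to each detector depends on which other detectors outscore it, the Choquet integral generalizes both weighted averages and min/max fusion rules.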
IEEE Geoscience and Remote Sensing Letters, 2007
An extension of the iterated constrained endmember (ICE) algorithm that incorporates sparsity-promoting priors to find the correct number of endmembers is presented. In addition to solving for endmembers and endmember fractional maps, this algorithm attempts to autonomously determine the number of endmembers that are required for a particular scene. The number of endmembers is found by adding a sparsity-promoting term to ICE's objective function.
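The mechanism is that the sparsity-promoting term drives the total abundance of unneeded endmembers toward zero, after which they can be dropped. A minimal sketch of just that pruning step, not of ICE's actual objective (the threshold and data are illustrative assumptions):

```python
import numpy as np

def prune_endmembers(E, P, tol=1e-2):
    """Drop endmembers whose mean abundance a sparsity prior has driven
    (near) zero. E: (M, B) endmembers; P: (N, M) proportions."""
    keep = P.mean(axis=0) > tol                    # mean abundance per endmember
    P_kept = P[:, keep]
    P_kept /= P_kept.sum(axis=1, keepdims=True)    # restore sum-to-one
    return E[keep], P_kept

rng = np.random.default_rng(4)
E = rng.random((5, 40))                    # 5 candidate endmembers, 40 bands
P = rng.dirichlet(np.ones(3), size=100)    # only 3 endmembers actually used
P = np.hstack([P, np.zeros((100, 2))])     # last two driven to zero abundance
E2, P2 = prune_endmembers(E, P)
print(E2.shape, P2.shape)                  # (3, 40) (100, 3)
```

In the actual algorithm this shrinkage and the endmember/abundance updates are iterated jointly; the sketch only shows why a sparsity term on the fractional maps yields an endmember count.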
IEEE Geoscience and Remote Sensing Letters, 2008
This paper presents a simultaneous band selection and endmember detection algorithm for hyperspectral imagery. This algorithm is an extension of the Sparsity Promoting Iterated Constrained Endmembers (SPICE) algorithm. The extension adds spectral band weights and a sparsity-promoting prior to the SPICE objective function to provide integrated band selection. In addition to solving for endmembers, the number of endmembers, and endmember fractional maps, this algorithm attempts to autonomously perform band selection and determine the number of spectral bands required for a particular scene. Results are presented on a simulated dataset and the AVIRIS Indian Pines dataset. Experiments on the simulated dataset show the ability to find the correct endmembers and abundance values. Experiments on the Indian Pines dataset show strong classification accuracies in comparison to previously published results.
IEEE Geoscience and Remote Sensing Letters, 2012
A compressive sensing framework is described for hyperspectral imaging. It is based on the widely used linear mixing model (LMM), which represents hyperspectral pixels as convex combinations of small numbers of endmember (material) spectra. The coefficients of the endmembers for each pixel are called proportions. The endmembers and proportions are often the sought-after quantities; the full image is an intermediate representation used to calculate them. Here a method for estimating proportions and endmembers directly from compressively sensed hyperspectral data based on the LMM is shown. Consequently, proportions and endmembers can be calculated directly from compressively sensed data with no need to reconstruct full hyperspectral images. If spectral information is required, endmembers can be reconstructed using compressive sensing reconstruction algorithms. Furthermore, given known endmembers, the proportions of the associated materials can be measured directly using a compressive sensing imaging device. This device would produce a multi-band image; the bands would directly represent the material proportions.
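The key identity is that compressive measurements of a pixel are linear in the proportions: if x = E^T p, then y = Phi x = (Phi E^T) p, so with known endmembers the proportions can be recovered in the compressed domain without ever reconstructing x. A noiseless sketch (the random sensing matrix and synthetic data are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
B, M, K = 100, 4, 20              # bands, endmembers, compressive measurements
E = rng.random((M, B))            # known endmember spectra
p_true = rng.dirichlet(np.ones(M))
x = p_true @ E                    # full pixel, never formed by the sensor

Phi = rng.standard_normal((K, B)) / np.sqrt(K)   # random sensing matrix
y = Phi @ x                       # only K << B measurements are acquired

# Unmix directly in the compressed domain: y = (Phi E^T) p
A = Phi @ E.T
p_est, _ = nnls(A, y)             # nonnegative least squares
p_est /= p_est.sum()              # renormalize to sum-to-one
print(np.abs(p_est - p_true).max())
```

With K measurements and only M unknowns (K much larger than M here), the noiseless system is heavily overdetermined and the proportions are recovered exactly, which is the sense in which a compressive device can "measure proportions directly."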
IEEE Signal Processing Magazine, 2014
Blind hyperspectral unmixing (HU), also known as unsupervised HU, is one of the most prominent research topics in signal processing (SP) for hyperspectral remote sensing [1], [2]. Blind HU aims at identifying materials present in a captured scene, as well as their compositions, by using the high spectral resolution of hyperspectral images. It is a blind source separation (BSS) problem from an SP viewpoint. Research on this topic started in the 1990s in geoscience and remote sensing [3]-[7], enabled by technological advances in hyperspectral sensing at the time. In recent years, blind HU has attracted much interest from other fields such as SP, machine learning, and optimization, and the subsequent cross-disciplinary research activities have made blind HU a vibrant topic. The resulting impact is not just on remote sensing: blind HU has provided a unique problem scenario that inspired researchers from different fields to devise novel blind SP methods. In fact, one may say that blind HU has established a new branch of BSS approaches not seen in classical BSS studies. In particular, the convex geometry concepts, discovered by early remote sensing researchers through empirical observations [3]-[7] and refined by later research, are elegant and very different from statistical independence-based BSS approaches established in...
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Heterogeneous data fusion can enhance the robustness and accuracy of an algorithm on a given task... more Heterogeneous data fusion can enhance the robustness and accuracy of an algorithm on a given task. However, due to the difference in various modalities, aligning the sensors and embedding their information into discriminative and compact representations is challenging. In this article, we propose a contrastive learning-based multimodal alignment network to align data from different sensors into a shared and discriminative manifold where class information is preserved. The proposed architecture uses a multimodal triplet autoencoder to cluster the latent space in such a way that samples of the same classes from each heterogeneous modality are mapped close to each other. Since all the modalities exist in a shared manifold, a unified classification framework is proposed. The resulting latent space representations are fused to perform more robust and accurate classification. In a missing sensor scenario, the latent space of one sensor is easily and efficiently predicted using another sensor's latent space, thereby allowing sensor translation. We conducted extensive experiments on a manually labeled multimodal dataset containing hyperspectral data from AVIRIS-NG and NEON and light detection and ranging (LiDAR) data from NEON. Finally, the model is validated on two benchmark datasets: Berlin Dataset (hyperspectral and synthetic aperture radar) and MUUFL Gulfport Dataset (hyperspectral and LiDAR). A comparison made with other methods demonstrates the superiority of this method. We achieved a mean overall accuracy of 94.3% on the MUUFL dataset and the best overall accuracy of 71.26% on the Berlin dataset, which is better than other state-ofthe-art approaches.
Publication in the conference proceedings of EUSIPCO, Marrakech, Morocco, 2013
PeerJ, 2019
Tree species classification using hyperspectral imagery is a challenging task due to the high spe... more Tree species classification using hyperspectral imagery is a challenging task due to the high spectral similarity between species and large intra-species variability. This paper proposes a solution using the Multiple Instance Adaptive Cosine Estimator (MI-ACE) algorithm. MI-ACE estimates a discriminative target signature to differentiate between a pair of tree species while accounting for label uncertainty. Multi-class species classification is achieved by training a set of one-vs-one MI-ACE classifiers corresponding to the classification between each pair of tree species and a majority voting on the classification results from all classifiers. Additionally, the performance of MI-ACE does not rely on parameter settings that require tuning resulting in a method that is easy to use in application. Results presented are using training and testing data provided by a data analysis competition aimed at encouraging the development of methods for extracting ecological information through re...
The public reporting burden for this collection of information is estimated to average 1 hour per... more The public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggesstions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports,
IEEE transactions on image processing : a publication of the IEEE Signal Processing Society, Jan 18, 2016
The normal compositional model (NCM) has been extensively used in hyperspectral unmixing. However... more The normal compositional model (NCM) has been extensively used in hyperspectral unmixing. However, previous research has mostly focused on estimation of endmembers and/or their variability, based on the assumption that the pixels are independent random variables. In this paper, we show that this assumption does not hold if all the pixels are generated by a fixed endmember set. This introduces another concept, endmember uncertainty, which is related to whether the pixels fit into the endmember simplex. To further develop this idea, we derive the NCM from the ground up without the pixel independence assumption, along with (i) using different noise levels at different wavelengths and (ii) using a spatial and sparsity promoting prior for the abundances. The resulting new formulation is called the spatial compositional model (SCM) to better differentiate it from the NCM. The SCM maximum a posteriori (MAP) objective leads to an optimization problem featuring noise weighted least-squares m...
In the linear mixing model, many techniques for endmember extraction are based on the assumption ... more In the linear mixing model, many techniques for endmember extraction are based on the assumption that pure pixels exist in the data, and form the extremes of a simplex embedded in the data cloud. These endmembers can then be obtained by geometrical approaches, such as looking for the largest sim-plex, or by maximal orthogonal subspace projections. Also obtaining the abundances of each pixel with respect to these endmembers can be completely written in geometrical terms. While these geometrical algorithms assume Euclidean geom-etry, it has been shown that using different metrics can offer certain benefits, such as dealing with nonlinear mixing effects by using geodesic or kernel distances, or dealing with correla-tions and colored noise by using Mahalanobis metrics. In this paper, we demonstrate how a linear unmixing chain based on maximal orthogonal subspace projections and simplex pro-jection can be written in terms of distance geometry, so that other metrics can be easily employed...
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2015
Several popular endmember extraction and unmixing algorithms are based on the geometrical interpr... more Several popular endmember extraction and unmixing algorithms are based on the geometrical interpretation of the linear mixing model, and assume the presence of pure pixels in the data. These endmembers can be identified by maximizing a simplex volume, or finding maximal distances in subsequent subspace projections, while unmixing can be considered a simplex projection problem. Since many of these algorithms can be written in terms of distance geometry, where mutual distances are the properties of interest instead of Euclidean coordinates, one can design an unmixing chain where other distance metrics are used. Many preprocessing steps such as (nonlinear) dimensionality reduction or data whitening, and several nonlinear unmixing models such as the Hapke and bilinear models, can be considered as transformations to a different data space, with a corresponding metric. In this paper, we show how one can use different metrics in geometry-based endmember extraction and unmixing algorithms, and demonstrate the results for some well-known metrics, such as the Mahalanobis distance, the Hapke model for intimate mixing, the polynomial post-nonlinear model, and graph-geodesic distances. This offers a flexible processing chain, where many models and preprocessing steps can be transparently incorporated through the use of the proper distance function.
Context-based unmixing has been studied by several re-searchers. Recent techniques, such as piece... more Context-based unmixing has been studied by several re-searchers. Recent techniques, such as piece-wise convex unmixing using fuzzy and possibilistic clustering or Bayesian methods proposed in [11] attempt to form contexts via clus-tering. It is assumed that the linear mixing model applies to each cluster (context) and endmembers and abundances are found for each cluster. As the clusters are spatially coher-ent, hyperspectral image segmentation can significantly aid unmixing approaches that perform cluster specific estimation of endmembers. In this work, we integrate a graph-cuts seg-mentation algorithm with piece-wise convex unmixing. This is compared to fuzzy clustering (FCM) with results obtained on two datasets. The results demonstrate that the integrated approach achieves better segmentation and more precise end-member identification (in terms of comparisons with known ground truth).
2013 IEEE International Geoscience and Remote Sensing Symposium - IGARSS, 2013
A new algorithm for subpixel target detection in hyperspectral imagery is proposed which uses the... more A new algorithm for subpixel target detection in hyperspectral imagery is proposed which uses the PFCM-FLICM-PCE algorithm to model and estimate the parameters of the image background. This method uses the piece-wise convex mixing model with spatial-spectral constraints, and uses possibilistic and fuzzy clustering techniques to find the piece-wise convex regions and robustly estimate the parameters. A method for integrating the elevation measurements of a co-registered LiDAR sensor is also proposed. The performance of the proposed methods is demonstrated on a real-world dataset with emplaced detection targets.
SPIE Proceedings, 2008
For complex detection and classification problems, involving data with large intraclass variation... more For complex detection and classification problems, involving data with large intraclass variations and noisy inputs, no single source of information can provide a satisfactory solution. As a result, combination of multiple classifiers is playing an increasing role in solving these complex pattern recognition problems, and has proven to be a viable alternative to using a single classifier. Over the past few years, a variety of schemes have been proposed for combining multiple classifiers. Most of these were global as they assign a degree of worthiness to each classifier, that is averaged over the entire training data. This may not be the optimal way to combine the different experts since the behavior of each one may not be uniform over the different regions of the feature space. To overcome this issue, few local methods have been proposed in the last few years. Local fusion methods aim to adapt the classifiers' worthiness to different regions of the feature space. First, they partition the input samples. Then, they identify the best classifier for each partition and designate it as the expert for that partition. Unfortunately, current local methods are either computationally expensive and/or perform these two tasks independently of each other. However, feature space partition and algorithm selection are not independent and their optimization should be simultaneous.
2010 IEEE International Geoscience and Remote Sensing Symposium, 2010
I. INTRODUCTION

The widely-used convex geometry model for hyperspectral imagery assumes that the spectra in a hyperspectral image are convex combinations of the endmembers in the scene [1, 2],

$x_i = \sum_{k=1}^{M} p_{ik} e_k + \epsilon_i, \quad i = 1, \dots, N \quad (1)$

where N is the number of pixels in the image, M is the number of endmembers, $\epsilon_i$ is an error term, $p_{ik}$ is the proportion (abundance) of endmember k in pixel i, and $e_k$ is the k-th endmember. The proportions of this model satisfy the constraints in Equation (2),

$p_{ik} \geq 0 \;\; \forall k = 1, \dots, M; \qquad \sum_{k=1}^{M} p_{ik} = 1. \quad (2)$

Given this model, spectral unmixing and endmember detection are the tasks of determining the endmembers and the proportions for every data point in the scene. Several endmember detection and spectral unmixing algorithms have been developed in the literature. However, the majority of these methods do not provide an autonomous way to estimate the number of endmembers and thus require that number in advance. These methods include those based on Non-negative Matrix Factorization [3, 4], those based on Independent Component Analysis [5, 6], and others [7, 8]. The number of endmembers is often unknown in advance. Methods to estimate the number of endmembers from a data set have been developed as well, including the Virtual Dimensionality (VD), Transformed Gershgorin Disk (TGD), Noise-Adjusted TGD, and Partitioned Noise-Adjusted Principal Components Analysis (PNAPCA) methods [9-11]. The VD method estimates the number of endmembers using the eigenvalues of the covariance and correlation matrices of the hyperspectral data set: the number of endmembers is set to the number of eigenvalues from the covariance and correlation matrices that differ by more than a computed threshold. Due to the variances used when computing the thresholds, the VD method can be sensitive to noise in the data.
The PNAPCA method relies on the Maximum Noise Fraction (MNF) algorithm [12]. MNF simultaneously diagonalizes the data covariance matrix and whitens the noise covariance matrix for a data set. This requires an estimate of the noise covariance matrix.
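The convex-geometry model of Equations (1) and (2) can be illustrated with a small synthetic example; the endmember spectra, image sizes, and noise level below are arbitrary choices, not data from any of the cited experiments.

```python
import numpy as np

# Minimal sketch of the linear mixing model in Eq. (1): each pixel is a
# proportion-weighted combination of endmember spectra plus noise, with
# proportions nonnegative and summing to one as in Eq. (2).
rng = np.random.default_rng(1)
M, B, N = 3, 50, 100                 # endmembers, bands, pixels (synthetic)
E = rng.uniform(0, 1, size=(M, B))   # endmember spectra e_k (rows)

# Dirichlet draws lie on the probability simplex, so the constraints in
# Eq. (2) hold by construction: p_ik >= 0 and each row sums to 1.
P = rng.dirichlet(np.ones(M), size=N)
X = P @ E + 0.01 * rng.normal(size=(N, B))   # x_i = sum_k p_ik e_k + eps_i
```

Spectral unmixing is the inverse problem: recover E and P from X alone, which is what the algorithms surveyed above attempt, with or without knowing M in advance.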
2012 IEEE International Workshop on Machine Learning for Signal Processing, 2012
Context Dependent Spectral Unmixing. A hyperspectral unmixing algorithm that finds multiple sets of endmembers is proposed. The algorithm, called Context Dependent Spectral Unmixing (CDSU), is a local approach that adapts the unmixing to different regions of the spectral space. It is based on a novel objective function that combines context identification and unmixing. This joint objective function models contexts as compact clusters and uses the linear mixing model as the basis for unmixing. Several variations of the CDSU that provide additional desirable features are also proposed.
IEEE Signal Processing Magazine, 2014
IEEE Transactions on Geoscience and Remote Sensing, 2010
A new hyperspectral endmember detection method is presented that represents endmembers as distributions, autonomously partitions the input data set into several convex regions, and simultaneously determines endmember distributions and proportion values for each convex region. Spectral unmixing methods that treat endmembers as distributions, or hyperspectral images as piece-wise convex data sets, have not been previously developed. Piece-wise Convex Endmember detection (PCE) can be viewed in two parts. The first, the Endmember Distributions detection (ED) algorithm, estimates a distribution for each endmember rather than a single spectrum. By using endmember distributions, PCE can incorporate an endmember's inherent spectral variation and the variation due to changing environmental conditions. ED uses a new sparsity-promoting polynomial prior while estimating abundance values. The second part of PCE partitions the input hyperspectral data set into convex regions and estimates endmember distributions and proportions for each of these regions. The number of convex regions is determined autonomously using the Dirichlet process. PCE is effective at handling highly mixed hyperspectral images in which all of the pixels in the scene contain mixtures of multiple endmembers. Furthermore, each convex region found by PCE conforms to the convex geometry model for hyperspectral imagery, which requires that the proportions associated with a pixel be nonnegative and sum to one. Algorithm results on hyperspectral data indicate that PCE produces endmembers that represent the true ground-truth classes of the input data set. The algorithm can also effectively represent endmembers as distributions, thus incorporating an endmember's spectral variability.
IEEE Transactions on Geoscience and Remote Sensing, 2008
We develop a vegetation mapping method using long-wave hyperspectral imagery and apply it to landmine detection. The novel aspect of the method is that it makes use of emissivity skewness. The main purpose of vegetation detection for mine detection is to minimize false alarms: vegetation, such as round bushes, may be mistaken for mines by mine detection algorithms, particularly in synthetic aperture radar (SAR) imagery. We employ an unsupervised vegetation detection algorithm that exploits statistics of the emissivity spectra of vegetation in the long-wave infrared spectrum. This information is incorporated into a Choquet integral-based fusion structure, which fuses detector outputs from hyperspectral and SAR imagery. Vegetation mapping is shown to improve mine detection results over a variety of images and fusion models.
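As a rough sketch of the Choquet integral used here as the fusion operator: detector confidences are sorted in decreasing order and each value is weighted by the increment of a fuzzy measure over the growing set of detectors. The two-detector fuzzy measure below is a hypothetical example, not the one learned in the paper.

```python
import numpy as np

def choquet(h, g):
    """Discrete Choquet integral.
    h: confidence values, one per detector.
    g: fuzzy measure mapping frozensets of detector indices to [0, 1],
       with g(full set) = 1."""
    order = np.argsort(h)[::-1]           # detectors by confidence, descending
    total, prev_g, subset = 0.0, 0.0, set()
    for i in order:
        subset.add(i)
        g_now = g[frozenset(subset)]
        total += h[i] * (g_now - prev_g)  # weight value by measure increment
        prev_g = g_now
    return total

# Example: two detectors (0 = hyperspectral, 1 = SAR), illustrative measure.
g = {frozenset({0}): 0.6, frozenset({1}): 0.5, frozenset({0, 1}): 1.0}
print(choquet(np.array([0.9, 0.4]), g))   # → 0.7
```

Because the measure need not be additive (here g({0}) + g({1}) = 1.1 > 1), the Choquet integral can model redundancy or synergy between the hyperspectral and SAR detectors, which a simple weighted average cannot.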
IEEE Geoscience and Remote Sensing Letters, 2007
An extension of the iterated constrained endmember (ICE) algorithm that incorporates sparsity-promoting priors to find the correct number of endmembers is presented. In addition to solving for endmembers and endmember fractional maps, this algorithm attempts to autonomously determine the number of endmembers that are required for a particular scene. The number of endmembers is found by adding a sparsity-promoting term to ICE's objective function.
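A minimal sketch of the pruning idea behind sparsity promotion: once a sparsity-promoting penalty has driven an endmember's proportions toward zero across the image, that endmember can be discarded, leaving the remaining count as the estimate. The threshold, proportion matrix, and renormalization step below are illustrative only, not the algorithm's actual mechanics.

```python
import numpy as np

def prune_endmembers(P, threshold=0.01):
    """Drop candidate endmembers whose mean proportion over all pixels is
    negligible, then renormalize rows to restore the sum-to-one constraint."""
    mean_abundance = P.mean(axis=0)          # average use of each endmember
    keep = mean_abundance > threshold
    P_kept = P[:, keep]
    P_kept = P_kept / P_kept.sum(axis=1, keepdims=True)
    return P_kept, keep

# Proportions for 4 candidate endmembers over 3 pixels; the sparsity prior
# has (hypothetically) driven the last endmember's abundances near zero.
P = np.array([[0.5, 0.3, 0.199, 0.001],
              [0.2, 0.6, 0.199, 0.001],
              [0.1, 0.1, 0.799, 0.001]])
P2, keep = prune_endmembers(P)
print(keep)   # → [ True  True  True False]
```

Starting from a deliberately overestimated number of candidates and letting the penalty zero out the surplus is what lets the method determine the endmember count autonomously.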
IEEE Geoscience and Remote Sensing Letters, 2008
This paper presents a simultaneous band selection and endmember detection algorithm for hyperspectral imagery. This algorithm is an extension of the Sparsity Promoting Iterated Constrained Endmembers (SPICE) algorithm. The extension adds spectral band weights and a sparsity promoting prior to the SPICE objective function to provide integrated band selection. In addition to solving for endmembers, the number of endmembers, and endmember fractional maps, this algorithm attempts to autonomously perform band selection and determine the number of spectral bands required for a particular scene. Results are presented on a simulated dataset and the AVIRIS Indian Pines dataset. Experiments on the simulated dataset show the ability to find the correct endmembers and abundance values. Experiments on the Indian Pines dataset show strong classification accuracies in comparison to previously published results.
IEEE Geoscience and Remote Sensing Letters, 2012
A compressive sensing framework is described for hyperspectral imaging. It is based on the widely used linear mixing model (LMM), which represents hyperspectral pixels as convex combinations of small numbers of endmember (material) spectra. The coefficients of the endmembers for each pixel are called proportions. The endmembers and proportions are often the sought-after quantities; the full image is an intermediate representation used to calculate them. Here, a method for estimating proportions and endmembers directly from compressively sensed hyperspectral data based on the LMM is shown. Consequently, proportions and endmembers can be calculated directly from compressively sensed data with no need to reconstruct full hyperspectral images. If spectral information is required, endmembers can be reconstructed using compressive sensing reconstruction algorithms. Furthermore, given known endmembers, the proportions of the associated materials can be measured directly using a compressive sensing imaging device. Such a device would produce a multi-band image whose bands directly represent the material proportions.
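The key identity behind unmixing in the compressed domain can be sketched as follows: if a pixel obeys the LMM, x = Eᵀp, and a compressive sensor measures y = Ax, then y = (AEᵀ)p, so proportions can be estimated from the compressed measurement using the compressed endmembers, never forming x. The noise-free, unconstrained least-squares solve below is an illustration of the identity, not the paper's estimation procedure.

```python
import numpy as np

# If x = E^T p (LMM) and y = A x (compressive measurement), then
# y = (A E^T) p: unmix directly against the compressed endmembers.
rng = np.random.default_rng(2)
B, M, m = 100, 3, 10                   # bands, endmembers, compressed dim
E = rng.uniform(size=(M, B))           # endmember spectra (rows, synthetic)
p_true = np.array([0.6, 0.3, 0.1])     # true proportions for one pixel
x = E.T @ p_true                       # full hyperspectral pixel (noise-free)

A = rng.normal(size=(m, B)) / np.sqrt(m)   # random measurement matrix
y = A @ x                                  # compressed measurement, never x
E_c = A @ E.T                              # compressed endmembers, (m, M)

p_hat, *_ = np.linalg.lstsq(E_c, y, rcond=None)
print(np.round(p_hat, 6))
```

Only m = 10 measurements per pixel are used instead of B = 100 bands; since m exceeds the number of endmembers and the problem is noise-free, the proportions are recovered exactly. A practical solver would also enforce the nonnegativity and sum-to-one constraints.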
IEEE Signal Processing Magazine, 2014
Blind hyperspectral unmixing (HU), also known as unsupervised HU, is one of the most prominent research topics in signal processing (SP) for hyperspectral remote sensing [1], [2]. Blind HU aims at identifying the materials present in a captured scene, as well as their compositions, by using the high spectral resolution of hyperspectral images. From an SP viewpoint, it is a blind source separation (BSS) problem. Research on this topic started in the 1990s in geoscience and remote sensing [3]-[7], enabled by technological advances in hyperspectral sensing at the time. In recent years, blind HU has attracted much interest from other fields such as SP, machine learning, and optimization, and the subsequent cross-disciplinary research activities have made blind HU a vibrant topic. The resulting impact is not just on remote sensing: blind HU has provided a unique problem scenario that has inspired researchers from different fields to devise novel blind SP methods. In fact, one may say that blind HU has established a new branch of BSS approaches not seen in classical BSS studies. In particular, the convex geometry concepts, discovered by early remote sensing researchers through empirical observations [3]-[7] and refined by later research, are elegant and very different from statistical independence-based BSS approaches.