A random forest-based framework for crop mapping using temporal, spectral, textural and polarimetric observations
Related papers
International Journal of Remote Sensing, 2018
Remote sensing image classification is a common application of remote sensing imagery. To improve classification performance, multiple classifier combinations are used to classify Landsat-8 Operational Land Imager (Landsat-8 OLI) images. Several techniques and classifier combination algorithms are investigated. A classifier ensemble consisting of five member classifiers is constructed, and the results of each member classifier are evaluated. A voting strategy is used to combine the classification results of the member classifiers. The results show that the classifiers differ in performance and that the multiple classifier combination outperforms any single classifier, achieving higher overall classification accuracy. The experiment shows that the combinations using producer's accuracy as the voting weight (MCC mod2 and MCC mod3) yield higher classification accuracy than the algorithm using overall accuracy as the voting weight (MCC mod1), and that combinations with different voting weights affect the classification results differently across land-cover types. The multiple classifier combination algorithm presented in this article, which derives voting weights from the accuracies of the member classifiers, may have stability problems that need to be addressed in future studies.
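The accuracy-weighted voting idea described above can be sketched in a few lines of numpy. This is a minimal, illustrative implementation, not the paper's method: the member outputs, class labels, and weights below are synthetic, and a per-class weight matrix stands in for producer's accuracy (a column-constant matrix would correspond to using overall accuracy).

```python
import numpy as np

def weighted_vote(member_labels, weights, n_classes):
    """Combine per-pixel class labels from several member classifiers.

    member_labels: (n_members, n_pixels) integer class maps
    weights:       (n_members, n_classes), e.g. per-class producer's accuracy
    """
    n_members, n_pixels = member_labels.shape
    scores = np.zeros((n_pixels, n_classes))
    for m in range(n_members):
        for c in range(n_classes):
            # each member adds its class-specific weight to the class it voted for
            scores[member_labels[m] == c, c] += weights[m, c]
    return scores.argmax(axis=1)

# Three members agree on pixel 0 but disagree on pixel 1; the two members
# with the higher producer's accuracy for class 2 tip the vote.
labels = np.array([[0, 1], [0, 2], [0, 2]])
w = np.array([[0.9, 0.8, 0.5],
              [0.7, 0.6, 0.9],
              [0.7, 0.6, 0.9]])
print(weighted_vote(labels, w, 3))  # → [0 2]
```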
The International Arab Journal of Information Technology, 2018
This study evaluates an approach for Land-Use Land-Cover (LULC) classification using multispectral satellite images. The proposed approach uses the Bagging Ensemble (BE) technique with Random Forest (RF) as a base classifier to improve classification performance by reducing errors and prediction variance. A pixel-based supervised classification technique is developed for a Landsat 8 image, with Principal Component Analysis (PCA) used for feature selection from the available attributes. These attributes include the coastal, visible, near-infrared, shortwave infrared, and thermal bands, in addition to the Normalized Difference Vegetation Index (NDVI) and Normalized Difference Water Index (NDWI). The study area is a heterogeneous coastal region divided into five classes: water, vegetation, grass-lake-type, sand, and building. To evaluate the classification accuracy of BE with RF, it is compared to BE with Support Vector Machine (SVM) and Neural Network (NN) base classifiers. The results are evaluated using commission errors, omission errors, and overall accuracy. The proposed BE-with-RF approach outperforms the SVM and NN base classifiers with 93.3% overall accuracy; BE with SVM and NN yielded 92.6% and 92.1% overall accuracy, respectively. In addition, omission and commission errors were reduced by using BE with the RF and NN classifiers.
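The pipeline shape this abstract describes (PCA feature reduction feeding a bagging ensemble with a random-forest base classifier) can be sketched with scikit-learn. This is a hedged sketch, not the authors' code: the synthetic nine-column feature matrix stands in for the Landsat 8 bands plus NDVI and NDWI, and the two-class labels stand in for the LULC classes.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 9))             # stand-in for 7 bands + NDVI + NDWI
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in for two land-cover classes

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# PCA for feature reduction, then bagging over random-forest base classifiers
model = make_pipeline(
    PCA(n_components=5),
    BaggingClassifier(RandomForestClassifier(n_estimators=10, random_state=0),
                      n_estimators=5, random_state=0),
)
model.fit(Xtr, ytr)
print(round(model.score(Xte, yte), 2))
```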
Remote Sensing
In recent years, several powerful machine learning (ML) algorithms have been developed for image classification, especially those based on ensemble learning (EL). In particular, Extreme Gradient Boosting (XGBoost) and Light Gradient Boosting Machine (LightGBM) methods have attracted researchers’ attention in data science due to their superior results compared to other commonly used ML algorithms. Despite their popularity within the computer science community, they have not yet been well examined in detail in the field of Earth Observation (EO) for satellite image classification. As such, this study investigates the capability of different EL algorithms, generally known as bagging and boosting algorithms, including Adaptive Boosting (AdaBoost), Gradient Boosting Machine (GBM), XGBoost, LightGBM, and Random Forest (RF), for the classification of Remote Sensing (RS) data. In particular, different classification scenarios were designed to compare the performance of these algorithms on t...
Remote sensing image classification based on neural network ensemble algorithm
2012
The amounts and types of remote sensing data have increased rapidly, and the classification of these datasets has become more and more overwhelming for a single classifier in practical applications. In this paper, an ensemble algorithm based on Diverse Ensemble Creation by Oppositional Relabeling of Artificial Training Examples (DECORATE) and Rotation Forest is proposed to solve the classification problem for remote sensing images. In this ensemble algorithm, RBF neural networks are employed as base classifiers. Furthermore, an interpolation technique for identical distribution is used to remold the input datasets; these remolded datasets are used to construct new classifiers in addition to the initial classifiers built by the Rotation Forest algorithm. The change in classification error is used to decide whether to add another new classifier. The diversity among the classifiers is thereby enhanced and the classification accuracy improved. The adaptability of the proposed algorithm is verified in experiments on standard datasets and an actual remote sensing dataset.
Hyperspectral remote sensing image classification based on decision level fusion (Chinese title)
Chinese Optics Letters, 2011
In this letter, an ensemble learning approach, Rotation Forest, is applied to hyperspectral remote sensing image classification for the first time. The framework of Rotation Forest is to project the original data into a new feature space using a transformation method for each base classifier (a decision tree); each base classifier is then trained in a different space, encouraging both individual accuracy and diversity within the ensemble. Principal component analysis (PCA), maximum noise fraction (MNF), independent component analysis (ICA), and local Fisher discriminant analysis (LFDA) are introduced as feature transformation algorithms in the original Rotation Forest. The performance of Rotation Forest was evaluated against several criteria: different data sets, sensitivity to the number of training samples, ensemble size, and the number of features in a subset. Experimental results revealed that Rotation Forest, especially with the PCA transformation, can produce more accurate results than Bagging, AdaBoost, and Random Forest, indicating that Rotation Forest is a promising approach for generating classifier ensembles for hyperspectral remote sensing.
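The Rotation Forest framework described above can be sketched compactly: for each tree, the features are split into random subsets, PCA is fitted per subset on a random sample, the per-subset loadings are assembled into a block rotation matrix, and a decision tree is trained on the rotated data. This is a simplified, illustrative sketch (bootstrap details, class subsampling, and hyperparameters are stand-ins, not the letter's exact procedure).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def fit_rotation_forest(X, y, n_trees=10, n_subsets=2):
    n_features = X.shape[1]
    ensemble = []
    for _ in range(n_trees):
        # random partition of the features into subsets
        subsets = np.array_split(rng.permutation(n_features), n_subsets)
        R = np.zeros((n_features, n_features))
        for idx in subsets:
            # fit PCA on a 75% sample of the rows, restricted to this subset
            sample = rng.choice(len(X), size=int(0.75 * len(X)), replace=False)
            pca = PCA().fit(X[np.ix_(sample, idx)])
            R[np.ix_(idx, idx)] = pca.components_.T  # block of the rotation
        tree = DecisionTreeClassifier(random_state=0).fit(X @ R, y)
        ensemble.append((R, tree))
    return ensemble

def predict(ensemble, X):
    votes = np.array([t.predict(X @ R) for R, t in ensemble]).astype(int)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# synthetic two-class data standing in for hyperspectral pixels
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 2] > 0).astype(int)
forest = fit_rotation_forest(X, y)
print((predict(forest, X) == y).mean())  # training accuracy
```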
Improving Remote Sensing Multiple Classification by Data and Ensemble Selection
Photogrammetric Engineering & Remote Sensing, 2021
In this article, margin theory is exploited to design better ensemble classifiers for remote sensing data. A semi-supervised version of the ensemble margin is at the core of this work. Some major challenges in ensemble learning are investigated using this paradigm in the difficult context of land cover classification: selecting the most informative instances to form an appropriate training set, and selecting the best ensemble members. The main contribution of this work lies in the explicit use of the ensemble margin as a decision method to select training data and base classifiers in an ensemble learning framework. The selection of training data is achieved through an innovative iterative guided bagging algorithm exploiting low-margin instances. The overall classification accuracy is improved by up to 3%, with more dramatic improvement in per-class accuracy (up to 12%). The selection of ensemble base classifiers is achieved by an ordering-based ensemble-selection algorithm relying o...
Voting Combinations Based Ensemble: A Hybrid Approach
Celal Bayar University Journal of Science, 2022
In the field of Artificial Intelligence (AI), Machine Learning (ML) is a well-known and actively researched concept that helps improve classification results. The primary goal of this study is to categorize and analyze ML and Ensemble Learning (EL) techniques. Six algorithms (Bagging, C4.5 (J48), Stacking, Support Vector Machine (SVM), Naive Bayes (NB), and Boosting) and five datasets from the UCI ML Repository are used to support this notion. These algorithms show the robustness and effectiveness of the various approaches. To improve performance, a voting-based ensemble classifier with two base learners (Random Forest and Rotation Forest) has been developed in this research. Important metrics are taken into account in the analysis, including F-measure, recall, precision, Area Under the Curve (AUC), and accuracy. The main goal of this research is thus to improve binary classification by enhancing ML and EL approaches. The experimental results demonstrate the superiority of the proposed model over well-known competing strategies. Image recognition and ML challenges, such as binary classification, can be addressed using this method.
Classification of remote sensing data using margin-based ensemble methods
2013 IEEE International Conference on Image Processing, 2013
This work exploits margin theory to design better ensemble classifiers for remote sensing data. The margin paradigm is at the core of a new bagging algorithm. This method increases the classification accuracy, particularly for difficult classes, and significantly reduces the training set size. The same margin framework is used to derive a novel ensemble pruning algorithm, which not only greatly reduces the complexity of ensemble methods but also handles minority classes better than complete bagging. Our techniques have been successfully used for the classification of remote sensing data.
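The ensemble margin that both margin-based papers above build on can be computed directly from the members' votes. A hedged sketch of one common unsupervised variant (the normalized gap between the two most-voted classes per instance; other margin definitions exist, e.g. ones using the true label): low-margin instances are the "difficult" ones that the guided-bagging and pruning methods focus on.

```python
import numpy as np

def ensemble_margin(votes, n_classes):
    """votes: (n_members, n_instances) predicted labels per ensemble member."""
    n_members, n_instances = votes.shape
    counts = np.zeros((n_instances, n_classes))
    for m in range(n_members):
        counts[np.arange(n_instances), votes[m]] += 1
    # gap between the most-voted and second-most-voted class, in [0, 1]
    top2 = np.sort(counts, axis=1)[:, -2:]
    return (top2[:, 1] - top2[:, 0]) / n_members

# instance 0 is unanimous (margin 1); instances 1 and 2 are contested
votes = np.array([[0, 1, 2],
                  [0, 1, 0],
                  [0, 2, 0]])
m = ensemble_margin(votes, 3)
print(m)
```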
Pattern Recognition Letters, 2007
One of the most important steps in the design of a multi-classifier system (MCS), also known as an ensemble, is the choice of its components (classifiers). This step is critical to the overall performance of an MCS, since a combination of identical classifiers will not outperform its individual members. The ideal situation is a set of classifiers with uncorrelated errors, combined in such a way as to minimize the effect of these failures. This paper presents an extensive evaluation of how the choice of components affects the performance of several combination methods (selection-based and fusion-based). An analysis of the diversity of the MCSs as their components vary is also performed. This analysis aims to help designers choose the individual classifiers and combination methods of an ensemble.
Investigation of diversity and accuracy in ensemble of classifiers using Bayesian decision rules
2008 International Workshop on Earth Observation and Remote Sensing Applications, 2008
The Multiple Classifier System (MCS) has attracted increasing interest in pattern recognition and machine learning, and the technique has also been introduced to remote sensing. The importance of classifier diversity in MCS has been raised recently; nevertheless, only a few studies have examined it for the land cover classification problem. In this paper, a SPOT IV satellite image covering Hong Kong Island and the Kowloon Peninsula, with six land cover classes, was classified with four base classifiers: Minimum Distance, Maximum Likelihood, Mahalanobis, and K-Nearest Neighbor. The same training and testing data sets were applied throughout the experiments, and five Bayesian decision rules (the product, sum, max, min, and median rules) were used to construct different ensembles of classifiers. Performance of the MCS was measured using overall accuracy and kappa statistics, and three statistical tests (McNemar's Test, Cochran's Q Test, and the F-Test) were used to examine the dependence of the classification results. The experimental comparison reveals that (i) increasing the number of base classifiers may not improve the overall accuracy of the MCS, and (ii) significant diversity among base classifiers does not necessarily enhance the overall performance, and vice versa. These findings hold under the condition of using the same data set and the same training set.
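The five Bayesian decision rules named above reduce to simple element-wise operations on the per-classifier posterior probabilities. A hedged numpy sketch (the posterior values below are illustrative, not from the SPOT IV experiment); note how the combined decision can change with the rule on the same posteriors:

```python
import numpy as np

def combine(posteriors, rule):
    """posteriors: (n_classifiers, n_samples, n_classes) class posteriors."""
    ops = {"product": lambda p: p.prod(axis=0),
           "sum":     lambda p: p.sum(axis=0),
           "max":     lambda p: p.max(axis=0),
           "min":     lambda p: p.min(axis=0),
           "median":  lambda p: np.median(p, axis=0)}
    return ops[rule](posteriors).argmax(axis=1)  # decide by combined support

# 3 classifiers, 1 sample, 3 classes (illustrative posteriors)
P = np.array([[[0.6, 0.3, 0.1]],
              [[0.2, 0.5, 0.3]],
              [[0.3, 0.5, 0.2]]])
for rule in ("product", "sum", "max", "min", "median"):
    print(rule, combine(P, rule))
```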