
Instance-based concept learning from multiclass DNA microarray data

Daniel Berrar et al. BMC Bioinformatics. 2006.

Abstract

Background: Various statistical and machine learning methods have been successfully applied to the classification of DNA microarray data. Simple instance-based classifiers such as nearest neighbor (NN) approaches perform remarkably well in comparison to more complex models, and are currently experiencing a renaissance in the analysis of data sets from biology and biotechnology. While binary classification of microarray data has been extensively investigated, studies involving multiclass data are rare. The question remains open whether there exists a significant difference in performance between NN approaches and more complex multiclass methods. Comparative studies in this field commonly assess different models based on their classification accuracy only; however, this approach lacks the rigor needed to draw reliable conclusions and is inadequate for testing the null hypothesis of equal performance. Comparing novel classification models to existing approaches requires focusing on the significance of differences in performance.
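One widely used test for exactly this setting is the variance-corrected resampled t-test of Nadeau and Bengio, which adjusts the variance of per-fold performance differences for the overlap between resampled training sets. The sketch below is a generic illustration of that test, not necessarily the significance test applied in this study; the fold sizes and accuracy differences are invented for the example, and SciPy is assumed to be available.

```python
import math
from scipy import stats

def corrected_resampled_t_test(diffs, n_train, n_test):
    """Variance-corrected resampled t-test (Nadeau & Bengio, 2003) applied to
    per-fold accuracy differences between two classifiers.

    Illustrative sketch only; not necessarily the test used in the paper."""
    k = len(diffs)                                  # number of resampling folds
    mean_d = sum(diffs) / k
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (k - 1)
    # The correction term 1/k + n_test/n_train accounts for the overlap
    # between training sets drawn in repeated random splits.
    t = mean_d / math.sqrt(var_d * (1.0 / k + n_test / n_train))
    p = 2 * stats.t.sf(abs(t), df=k - 1)
    return t, p

# Hypothetical per-fold accuracy differences (classifier A minus classifier B)
diffs = [0.02, -0.01, 0.03, 0.00, 0.01, -0.02, 0.04, 0.01, 0.00, 0.02]
t, p = corrected_resampled_t_test(diffs, n_train=170, n_test=78)
print(f"t = {t:.3f}, p = {p:.3f}")
```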

Results: We investigated the performance of instance-based classifiers, including a NN classifier able to assign a degree of class membership to each sample. This model alleviates a major problem of conventional instance-based learners, namely the lack of confidence values for predictions. The model translates the distances to the nearest neighbors into 'confidence scores'; the higher the confidence score, the closer the instance lies to a pre-defined class. We applied the models to three real gene expression data sets and compared them with state-of-the-art methods for classifying multiclass microarray data, assessing performance with a statistical significance test that took the data resampling strategy into account. Simple NN classifiers performed as well as, or significantly better than, their more intricate competitors.
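A minimal sketch of such a distance-weighted k-NN with per-class confidence scores is given below. The inverse-distance weighting and the example class labels are illustrative assumptions; the paper's exact weighting regime may differ.

```python
import numpy as np

def knn_confidence(X_train, y_train, x_test, k=3):
    """Distance-weighted k-NN: return a normalized confidence score per class.

    Minimal sketch; the inverse-distance weighting is an illustrative
    assumption, not necessarily the exact scheme used in the paper."""
    # Euclidean distances from the test case to every training case
    dists = np.linalg.norm(X_train - x_test, axis=1)
    nearest = np.argsort(dists)[:k]            # indices of the k nearest neighbors

    classes = np.unique(y_train)
    scores = np.zeros(len(classes))
    for idx in nearest:
        w = 1.0 / (dists[idx] + 1e-12)         # closer neighbors receive larger weights
        scores[classes == y_train[idx]] += w

    scores /= scores.sum()                     # confidence scores sum to 1
    return dict(zip(classes, scores))

# Hypothetical usage with random data standing in for expression profiles
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))                 # 60 samples x 200 genes
y = np.repeat(["ALL-T", "ALL-B", "AML"], 20)   # made-up class labels
print(knn_confidence(X, y, rng.normal(size=200), k=3))
```

Normalizing the per-class weight sums to one yields the degree of class membership described above.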

Conclusion: Given its highly intuitive underlying principles (simplicity, ease of use, and robustness), the k-NN classifier, complemented by a suitable distance-weighting regime, constitutes an excellent alternative to more complex models for multiclass microarray data sets. Instance-based classifiers using weighted distances are not limited to microarray data, and are likely to perform competitively in the classification of other high-dimensional biological data sets, such as those generated by high-throughput mass spectrometry.


Figures

Figure 1

Prediction errors on the NCI60 data set. The total numbers of misclassified cases over all ten folds are: 41 by the distance-weighted _k_-NN, 41 by 1-NN, 54 by 3-NN, 31 by SVM, 55 by DT, and 57 by MLP.

Figure 3

Prediction errors on the GCM data set. The total numbers of misclassified cases over all ten folds are: 122 by the distance-weighted _k_-NN, 120 by 1-NN, 136 by 3-NN, 136 by 5-NN, 115 by SVM, 168 by DT, and 215 by MLP.

Figure 4

Sampling of the learning and test sets and selection of marker genes. Depicted is one fold of the ten-fold resampling procedure. From the original data set comprising n cases and p genes, ~70% of the cases are randomly selected for the learning set L_i and ~30% for the test set T_i. On the learning set L_i with unpermuted class labels, the signal-to-noise weight of each gene is computed for each class, as illustrated for class B. The class labels are then randomly permuted 1,000 times and the signal-to-noise weights (for each gene and each class) are recomputed for each permutation, in order to assess the significance of the weights obtained on the unpermuted learning set. Both the learning set and the test set are then filtered to retain only those genes that are significantly differentially expressed in the learning set.
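The selection step described in this caption can be sketched roughly as follows. The Golub-style signal-to-noise statistic and the per-gene empirical p-value threshold (alpha = 0.05) are assumptions made for illustration; the authors' exact significance criterion may differ.

```python
import numpy as np

def signal_to_noise(X, y, target):
    """Per-gene signal-to-noise weight of one class versus the rest
    (Golub-style: (mean_class - mean_rest) / (sd_class + sd_rest))."""
    in_c, out_c = X[y == target], X[y != target]
    return (in_c.mean(axis=0) - out_c.mean(axis=0)) / (
        in_c.std(axis=0) + out_c.std(axis=0) + 1e-12)

def select_genes(X_learn, y_learn, n_perm=1000, alpha=0.05, seed=0):
    """Boolean mask of genes whose signal-to-noise weight is significant,
    for at least one class, under a class-label permutation test.
    X_learn: cases x genes array; y_learn: array of class labels."""
    rng = np.random.default_rng(seed)
    keep = np.zeros(X_learn.shape[1], dtype=bool)
    for c in np.unique(y_learn):
        observed = np.abs(signal_to_noise(X_learn, y_learn, c))
        exceed = np.zeros(X_learn.shape[1])
        for _ in range(n_perm):
            perm = rng.permutation(y_learn)          # break the gene/class association
            exceed += np.abs(signal_to_noise(X_learn, perm, c)) >= observed
        p_values = (exceed + 1) / (n_perm + 1)       # empirical p-value per gene
        keep |= p_values < alpha
    return keep

# Hypothetical usage: both the learning and the test set are filtered
# with the mask derived from the learning set only.
# mask = select_genes(X_learn, y_learn)
# X_learn, X_test = X_learn[:, mask], X_test[:, mask]
```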

Figure 5

The distance-weighted _k_-NN classifier for a binary classification task. The arrows indicate the three nearest neighbors of the test case. Here it is assumed that k_opt = 3.

Figure 2

Prediction errors on the ALL data set. The total numbers of misclassified cases over all ten folds are: 247 by the distance-weighted _k_-NN, 257 by 1-NN, 248 by 3-NN, 248 by 5-NN, 250 by SVM, 348 by DT, and 333 by MLP.

References

    1. Ross DT, Scherf U, Eisen MB, Perou CM, Rees C, Spellman P, Iyer V, Jeffrey SS, van de Rijn M, Waltham M, Pergamenschikov A, Lee JC, Lashkari D, Shalon D, Myers TG, Weinstein JN, Botstein D, Brown PO. Systematic variation in gene expression patterns in human cancer cell lines. Nat Genet. 2000;24:227–235.
    2. Ramaswamy S, Tamayo P, Rifkin R, Mukherjee S, Yeang CH, Angelo MLC, Reich M, Latulippe E, Mesirov JP, Poggio T, Gerald W, Loda M, Lander ES, Golub TR. Multiclass cancer diagnosis using tumor gene expression signatures. Proc Natl Acad Sci USA. 2001;98:15149–15154.
    3. Yeoh EJ, Ross ME, Shurtleff SA, Williams WK, Patel D, Mahfouz R, Behm FG, Raimondi SC, Relling MV, Patel A, Cheng C, Campana D, Wilkins D, Zhou X, Li J, Liu H, Pui CH, Evans WE, Naeve C, Wong L, Downing JR. Classification, subtype discovery, and prediction of outcome in pediatric acute lymphoblastic leukemia by gene expression profiling. Cancer Cell. 2002;1:133–143.
    4. Somorjai RL, Dolenko B, Baumgartner R. Class prediction and discovery using gene microarray and proteomics mass spectroscopy data: curses, caveats, cautions. Bioinformatics. 2003;19:1484–1491.
    5. Dudoit S, Fridlyand J. Introduction to classification in microarray experiments. In: Berrar D, Dubitzky W, Granzow M, editors. A Practical Approach to Microarray Data Analysis. Boston: Kluwer Academic Publishers; 2002. pp. 131–151.
