FEATURE EXTRACTION AND CLASSIFICATION SC
Related papers
Feature selection, extraction and construction
2002
Abstract: Feature selection is a process that chooses a subset of features from the original features so that the feature space is optimally reduced according to a certain criterion. Feature extraction/construction is a process through which a set of new features is created. These techniques are used either in isolation or in combination, and all of them attempt to improve performance in terms of estimated accuracy, visualization and comprehensibility of the learned knowledge. Basic approaches to all three are reviewed, with pointers to references for further study.
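To make the distinction concrete, here is a minimal sketch (not taken from the paper) that contrasts selecting a subset of the original features with extracting new, combined features; the use of scikit-learn, its digits dataset, mutual information as the selection criterion, and PCA as the extractor are all illustrative assumptions.

    # Feature selection keeps a subset of the original features;
    # feature extraction builds new features as combinations of them.
    from sklearn.datasets import load_digits
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.decomposition import PCA

    X, y = load_digits(return_X_y=True)            # 64 pixel-intensity features

    # Selection: rank original features by mutual information, keep the top 16.
    X_selected = SelectKBest(score_func=mutual_info_classif, k=16).fit_transform(X, y)

    # Extraction: project onto 16 new features (principal components).
    X_extracted = PCA(n_components=16).fit_transform(X)

    print(X_selected.shape, X_extracted.shape)     # both (n_samples, 16)

Either reduced representation can then be fed to a classifier in place of the full 64-dimensional input.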
2015
In this paper we propose to use local gradient feature descriptors, namely the scale invariant feature transform keypoint descriptor and the histogram of oriented gradients, for handwritten character recognition. The local gradient feature descriptors are used to extract feature vectors from the handwritten images, which are then presented to a machine learning algorithm to do the actual classification. As classifiers, the k-nearest neighbor and the support vector machine algorithms are used. We have evaluated these feature descriptors and classifiers on three different language scripts, namely Thai, Bangla, and Latin, consisting of both handwritten characters and digits. The results show that the local gradient feature descriptors significantly outperform directly using pixel intensities from the images. When the proposed feature descriptors are combined with the support vector machine, very high accuracies are obtained on the Thai handwritten datasets (character and digit), the Latin handwritten datasets (character and digit), and the Bangla handwritten digit dataset.
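A rough sketch of such a pipeline is given below; the HOG parameters, the RBF-kernel SVM, and the use of scikit-learn's small digits dataset in place of the Thai, Bangla and Latin data are illustrative assumptions, not the paper's actual settings.

    import numpy as np
    from skimage.feature import hog                       # local gradient descriptor
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.datasets import load_digits

    digits = load_digits()
    images, labels = digits.images, digits.target         # 8x8 grayscale digit images

    # Extract a HOG feature vector from every image.
    features = np.array([
        hog(img, orientations=9, pixels_per_cell=(4, 4), cells_per_block=(2, 2))
        for img in images
    ])

    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, random_state=0)

    clf = SVC(kernel="rbf", C=10, gamma="scale")          # SVM on the descriptor vectors
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))

Replacing the hog() call with the raw pixel intensities gives the kind of baseline the paper compares against.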
A Detailed Review of Feature Extraction in Image Processing Systems (IEEE)
Features play a very important role in the area of image processing. Before features are obtained, various image preprocessing techniques such as binarization, thresholding, resizing and normalization are applied to the sampled image. After that, feature extraction techniques are applied to obtain features that are useful in classifying and recognizing images. Feature extraction techniques are helpful in various image processing applications, e.g. character recognition. Because features define the behavior of an image, they determine the storage required, the efficiency of classification and, of course, the time consumed. In this paper we discuss various types of features and feature extraction techniques, and explain in which scenario each feature extraction technique is better suited. Throughout, features and feature extraction methods are discussed in the context of character recognition applications.
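As an illustration of the preprocessing chain mentioned above, the following hedged sketch converts an input image to grayscale, resizes it and binarizes it with Otsu's threshold before any features are computed; the 32x32 target size, the Otsu method and the file name are assumptions, not recommendations from the paper.

    import numpy as np
    from skimage import io, color, filters, transform

    def preprocess(path, size=(32, 32)):
        img = io.imread(path)
        if img.ndim == 3:                          # colour image -> grayscale
            img = color.rgb2gray(img[..., :3])
        img = transform.resize(img, size, anti_aliasing=True)   # resizing
        thresh = filters.threshold_otsu(img)       # global binarization threshold
        return (img > thresh).astype(np.float32)   # normalized to {0.0, 1.0}

    # binary = preprocess("sample_char.png")       # hypothetical character image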
Feature Extraction for Classification: A Survey I. Linear Methods
The growing volume of information available as input to pattern classification systems has imposed the use of pre-processing techniques whose purpose is to reduce the dimensionality of the feature space. The literature in this domain is very large, so only a non-exhaustive overview can be given. This article presents the most widely used linear methods, comparing their advantages and disadvantages and showing which methods are best suited to several categories of data.
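Two of the most widely used linear methods of this kind, PCA (unsupervised) and LDA (supervised), can be sketched as follows; the iris dataset and the two-component projections are purely illustrative.

    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    X, y = load_iris(return_X_y=True)

    # PCA: linear projection that preserves as much variance as possible.
    X_pca = PCA(n_components=2).fit_transform(X)

    # LDA: linear projection that maximizes class separability.
    X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

    print(X_pca.shape, X_lda.shape)                # both (n_samples, 2)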
Analysis and Classification of Feature Extraction Techniques: A Study
2012
A detailed study of feature extractors in the spatial and transformed domains is carried out in this work. The survey of the spatial domain covers most of the traditional detectors up to the more recent SIFT and its variants. In the transformed domain, detectors developed using transforms ranging from the Fourier transform to wavelet transforms are explored. The advantages and limitations of each are explained along with the results. Depending on the application at hand, together with time complexity and accuracy, an appropriate choice of the suitable detector has to be made.
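For readers unfamiliar with transform-domain features, the short sketch below (not from the survey) extracts a low-frequency Fourier signature and first-level wavelet sub-band energies from a test image; it assumes NumPy, PyWavelets and scikit-image are available.

    import numpy as np
    import pywt                                    # PyWavelets
    from skimage.data import camera

    img = camera().astype(np.float64)              # built-in 512x512 test image

    # Fourier-domain feature: magnitudes of the central (low-frequency) coefficients.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    low_freq = spectrum[cy - 8:cy + 8, cx - 8:cx + 8].ravel()

    # Wavelet-domain feature: energy of each first-level Haar sub-band.
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    wavelet_energy = [float(np.sum(band ** 2)) for band in (cA, cH, cV, cD)]

    print(low_freq.shape, wavelet_energy)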
Feature Selection Based on Statistical Analysis
2005
Abstract: In most pattern recognition (PR) systems, selecting the best feature vectors is an important task. Feature vectors serve as a reduced representation of the original data, which helps us evade the curse of dimensionality in a PR task.
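A minimal illustration of statistics-driven selection is given below: features are ranked by an ANOVA F-test and only the top scorers are kept; the F-test criterion, the breast-cancer dataset and k = 10 are assumptions made for the sake of the example, not the statistics used in the paper.

    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SelectKBest, f_classif

    X, y = load_breast_cancer(return_X_y=True)     # 30 original features
    selector = SelectKBest(score_func=f_classif, k=10)
    X_reduced = selector.fit_transform(X, y)       # 10 best-scoring features

    print(X_reduced.shape)
    print(selector.get_support(indices=True))      # indices of the kept features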
Applying Feature Extraction for Classification Problems
2009
With the wealth of image data now accessible through the world wide web and the proliferation of cheap, high-quality digital cameras, it is becoming ever more desirable to classify images into appropriate categories automatically, so that intelligent agents and other intelligent software can make better-informed decisions about them without excessive human intervention. However, as with most Artificial Intelligence (A.I.) methods, it is necessary to take small steps towards this goal. With this in mind, a method is proposed here to represent localised features using disjoint sub-images taken from several datasets of retinal images, for their eventual use in an incremental learning system. A tile-based localised adaptive threshold selection method was used for vessel segmentation based on separate colour components. Arteriole-venous differentiation was made possible by using the composite of these components and high-quality fundal images. Performance was evaluated on the DRIVE and STARE datasets, achieving an average specificity of 0.9379 and sensitivity of 0.5924.
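The tile-based adaptive thresholding and the sensitivity/specificity evaluation can be sketched as follows; the 64-pixel tile size, Otsu's method inside each tile and the use of the green colour channel are assumptions rather than the paper's exact procedure.

    import numpy as np
    from skimage.filters import threshold_otsu

    def tile_threshold(channel, tile=64):
        # Binarize by computing a separate Otsu threshold inside each tile.
        out = np.zeros(channel.shape, dtype=bool)
        for r in range(0, channel.shape[0], tile):
            for c in range(0, channel.shape[1], tile):
                patch = channel[r:r + tile, c:c + tile]
                if patch.size and patch.max() > patch.min():
                    out[r:r + tile, c:c + tile] = patch > threshold_otsu(patch)
        return out

    def sensitivity_specificity(pred, truth):
        tp = np.sum(pred & truth)                  # vessel pixels found
        tn = np.sum(~pred & ~truth)                # background pixels rejected
        fp = np.sum(pred & ~truth)
        fn = np.sum(~pred & truth)
        return tp / (tp + fn), tn / (tn + fp)

    # green = fundus_image[:, :, 1]                # hypothetical retinal image array
    # sens, spec = sensitivity_specificity(tile_threshold(green), vessel_mask)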
A Novel Feature Selection and Extraction Technique for Classification
Pattern recognition is a vast field which has seen significant advances over the years. As the datasets under consideration grow larger and more comprehensive, using efficient techniques to process them becomes increasingly important. We present a versatile technique for feature selection and extraction: Class Dependent Features (CDFs). CDFs identify the features innate to a class and extract them accordingly. The features thus extracted are relevant to the entire class and not just to an individual data item. This paper focuses on using CDFs to improve the accuracy of classification while controlling computational expense by tackling the curse of dimensionality. To demonstrate the generality of the technique, it is applied to two problem statements which have very little in common with each other: handwritten digit recognition and text categorization. For both problems, the accuracy is comparable to state-of-the-art results while the operation is considerably faster. Results are presented for the Reuters-21578 and Web-KB datasets for text categorization, and for the MNIST and USPS datasets for handwritten digit recognition.
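The abstract does not spell out the CDF algorithm itself, so the sketch below is not the authors' method; it merely illustrates the general idea of ranking a separate feature subset per class, here with a one-vs-rest ANOVA F-score on the scikit-learn digits data.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.feature_selection import f_classif

    X, y = load_digits(return_X_y=True)

    def per_class_features(X, y, k=10):
        # For each class, score every feature against a one-vs-rest label
        # and keep the indices of the k highest-scoring features.
        subsets = {}
        for cls in np.unique(y):
            scores, _ = f_classif(X, (y == cls).astype(int))
            subsets[cls] = np.argsort(np.nan_to_num(scores))[-k:]
        return subsets

    print(per_class_features(X, y)[0])             # features most relevant to class 0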