Melanoma Recognition
Related papers
Fusion of structural and textural features for melanoma recognition
IET Computer Vision, 2018
Melanoma is one of the fastest-growing cancers of recent decades. Accurate detection and classification require discriminative features to distinguish benign from malignant cases. In this study, the authors introduce a fusion of structural and textural features from two descriptors. The structural features are extracted from wavelet and curvelet transforms, whereas the textural features are extracted from several variants of the local binary pattern (LBP) operator. The proposed method is evaluated on 200 images from a dermoscopy database comprising 160 non‐melanoma and 40 melanoma images, and a rigorous statistical analysis of the database is performed. Using a support vector machine (SVM) classifier with random-sampling cross‐validation across the three skin-lesion classes in the database, the validated results showed very encouraging performance: a sensitivity of 78.93%, a specificity of 93.25% and an accuracy of 86.07%. The proposed approach outperfor...
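The fusion idea above can be sketched in a few lines of NumPy: structural features as subband energies of a one-level Haar wavelet decomposition (a simple stand-in for the paper's unspecified wavelet/curvelet transforms), and textural features as a histogram of basic 8-neighbor LBP codes. All function names, parameters and the synthetic image are illustrative assumptions, not the authors' implementation; the fused vector would then feed an SVM classifier.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform: returns LL, LH, HL, HH subbands."""
    lo = (img[0::2] + img[1::2]) / 2.0   # row-wise average
    hi = (img[0::2] - img[1::2]) / 2.0   # row-wise difference
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2.0
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def lbp_histogram(img):
    """Basic 8-neighbor LBP codes, returned as a normalized 256-bin histogram."""
    center = img[1:-1, 1:-1]
    codes = np.zeros(center.shape, dtype=np.int32)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        neighbor = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (neighbor >= center).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def fused_features(img):
    """Concatenate structural (subband energies) and textural (LBP) features."""
    structural = np.array([np.mean(s ** 2) for s in haar_dwt2(img)])
    textural = lbp_histogram(img)
    return np.concatenate([structural, textural])   # 4 + 256 = 260 values

rng = np.random.default_rng(0)
lesion = rng.random((64, 64))        # placeholder for a grayscale lesion image
features = fused_features(lesion)
print(features.shape)                # (260,)
```

In practice the structural half would come from several decomposition levels and a curvelet transform, and the LBP half from multiple operator variants, as the abstract describes.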
Skin Melanoma Cancer Detection and Classification using Machine Learning
International Journal of Scientific Research in Science and Technology, 2023
Skin cancer is a common and potentially life-threatening disease that affects millions of people worldwide. Early detection and accurate classification of skin lesions are critical for effective therapy and improved patient outcomes. In recent years, advances in machine learning and computer vision have shown promising results in automating skin cancer detection and classification. The goal of this project is to develop an automated system for skin cancer detection and classification using machine learning algorithms. The proposed system uses a dataset of dermatoscopic images collected from different sources and covering different types of skin lesions, including malignant melanoma, basal cell carcinoma and squamous cell carcinoma. The project includes several main stages. First, preprocessing techniques including noise reduction, normalization and feature extraction are applied to improve image quality. Next, a comprehensive set of features, such as color, texture and shape features, is extracted from the preprocessed images. These features are input to various machine learning models, including Convolutional Neural Networks (CNNs), Support Vector Machines (SVMs) and Random Forests.
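The staged pipeline described above (preprocessing, feature extraction, then several classifiers) can be sketched with scikit-learn; the synthetic feature matrix, labels and model parameters below are placeholders, not the project's actual data or configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Stand-in for extracted color/texture/shape feature vectors (one row per lesion).
X = rng.normal(size=(300, 12))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy benign/malignant labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Normalization (one preprocessing stage) chained before each classifier.
models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, round(model.score(X_test, y_test), 2))
```

A CNN branch would replace the hand-crafted feature matrix with raw images; the comparison loop stays the same.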
Melanoma Recognition Using Representative and Discriminative Kernel Classifiers
Malignant melanoma is the deadliest form of skin cancer, and early diagnosis is of critical importance to patient survival. Existing visual recognition algorithms for skin-lesion classification focus mostly on segmentation and feature extraction. In this paper, instead, we put the emphasis on the learning process by using two kernel-based classifiers. We chose a discriminative approach using support vector machines and a probabilistic approach using spin glass-Markov random fields. We benchmarked these algorithms against the (to our knowledge) state-of-the-art method in melanoma recognition, exploring how performance changes with color or textural features and how it is affected by the quality of the segmentation mask. We show with extensive experiments that the support vector machine approach outperforms the existing method and, on two classes out of three, achieves performance comparable to that of expert clinicians.
Performance Analysis of Low-Level and High-Level Intuitive Features for Melanoma Detection
Electronics
This paper presents an intelligent approach for the detection of melanoma, a deadly skin cancer. The first step in this direction is the extraction of the textural features of the skin lesion along with the color features. The extracted features are used to train multilayer feed-forward artificial neural networks, and the trained networks are evaluated on the classification of test samples. This work entails three sets of experiments in which 50%, 70% and 90% of the data are used for training, while the remaining 50%, 30% and 10% constitute the test sets. Haralick's statistical parameters are computed for the extraction of textural features from the lesion. These parameters are based on Gray Level Co-occurrence Matrices (GLCM) with offsets of 2, 4, 8, 12, 16, 20, 24 and 28, each at angles of 0, 45, 90 and 135 degrees. To distill color features, we have calculated the mean, median and standard deviation of the three color planes of t...
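A minimal NumPy sketch of the GLCM computation described above: a co-occurrence matrix for a given (row, column) displacement, with contrast, energy and homogeneity as three example Haralick parameters. The distance/angle grid mirrors the paper's settings; the quantization level and the random stand-in image are illustrative assumptions.

```python
import numpy as np

def glcm(img, dy, dx, levels=8):
    """Normalized gray-level co-occurrence matrix for offset (dy, dx)."""
    h, w = img.shape
    y0, y1 = max(0, -dy), min(h, h - dy)
    x0, x1 = max(0, -dx), min(w, w - dx)
    i = img[y0:y1, x0:x1].ravel()          # reference pixels
    j = img[y0 + dy:y1 + dy, x0 + dx:x1 + dx].ravel()  # offset neighbors
    m = np.zeros((levels, levels))
    np.add.at(m, (i, j), 1.0)
    return m / m.sum()

def haralick(p):
    """Three of Haralick's parameters from a normalized GLCM p."""
    idx = np.arange(p.shape[0])
    d = idx[:, None] - idx[None, :]        # gray-level differences i - j
    contrast = float((p * d ** 2).sum())
    energy = float((p ** 2).sum())
    homogeneity = float((p / (1.0 + np.abs(d))).sum())
    return contrast, energy, homogeneity

rng = np.random.default_rng(1)
img = rng.integers(0, 8, size=(32, 32))    # 8-level quantized lesion stand-in

# Offsets 2..28 at angles 0, 45, 90 and 135 degrees, in (row, col) displacement form.
features = []
for dist in (2, 4, 8, 12, 16, 20, 24, 28):
    for dy, dx in ((0, dist), (-dist, dist), (-dist, 0), (-dist, -dist)):
        features.extend(haralick(glcm(img, dy, dx)))
print(len(features))                       # 8 distances x 4 angles x 3 = 96
```

The paper computes the full set of Haralick parameters per matrix; the three here are enough to show the shape of the feature vector.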
Classification Models for Skin Tumor Detection Using Texture Analysis in Medical Images
Journal of Imaging
Medical images have made a great contribution to early diagnosis. In this study, a new strategy is presented for analyzing medical images of skin with melanoma and nevus in order to model, classify and identify skin lesions. Machine learning was applied to features derived from first- and second-order statistics, the Gray Level Co-occurrence Matrix (GLCM), keypoints and color-channel information (red, green, blue and grayscale) to characterize decisive information for image classification. This work proposes a strategy for the analysis of skin images that aims to choose the best mathematical classifier model for the identification of melanoma, with the objective of assisting the dermatologist, especially towards an early diagnosis.
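As an illustration of the first-order statistics and color-channel information mentioned above, the sketch below computes mean, standard deviation, skewness and entropy per channel plus a grayscale conversion. The luminance weights and the choice of statistics are common conventions assumed here, not the study's exact feature set.

```python
import numpy as np

def first_order_stats(channel):
    """Mean, std, skewness and Shannon entropy of one 8-bit color channel."""
    x = channel.astype(float).ravel()
    mean, std = x.mean(), x.std()
    skew = ((x - mean) ** 3).mean() / (std ** 3 + 1e-12)
    hist = np.bincount(channel.ravel(), minlength=256) / x.size
    entropy = -(hist[hist > 0] * np.log2(hist[hist > 0])).sum()
    return [mean, std, skew, entropy]

rng = np.random.default_rng(7)
rgb = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)   # stand-in image

# Standard ITU-R BT.601 luminance weights for the grayscale conversion.
gray = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]).astype(np.uint8)

features = []
for channel in (rgb[..., 0], rgb[..., 1], rgb[..., 2], gray):
    features.extend(first_order_stats(channel))
print(len(features))   # 4 channels x 4 statistics = 16
```

Second-order (GLCM) statistics and keypoint descriptors would be concatenated onto this vector before classification.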
An efficient 3D color-texture feature and neural network technique for melanoma detection
Informatics in Medicine Unlocked, 2019
Malignant melanoma is the deadliest form of skin cancer, but it can be treated successfully far more readily if detected in its early stages. Due to the increasing incidence of melanoma, research in the field of autonomous melanoma detection has accelerated. In this paper, a new method for feature extraction from dermoscopic images, termed the multi-direction 3D color-texture feature (CTF), is proposed, and detection is performed using a back-propagation multilayer neural network (NN) classifier. The proposed method is tested on the publicly available PH2 dataset in terms of accuracy, sensitivity and specificity. The extracted combined CTF is fairly discriminative: when fed to the neural network classifier, it yields encouraging results, i.e. accuracy = 97.5%, sensitivity = 98.1% and specificity = 93.84%. Comparative analyses with other methods are also discussed, and the results improve on benchmarking results for the PH2 dataset.
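A back-propagation multilayer NN classifier of the kind used above can be sketched in plain NumPy. The architecture (one hidden layer of 8 sigmoid units), learning rate and toy two-class data standing in for CTF vectors are all illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data standing in for extracted CTF feature vectors.
X = np.vstack([rng.normal(-1.0, 0.5, (100, 3)), rng.normal(1.0, 0.5, (100, 3))])
y = np.array([0] * 100 + [1] * 100, dtype=float).reshape(-1, 1)

# One hidden layer of 8 sigmoid units, sigmoid output, trained by backprop.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for epoch in range(500):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    d_out = (out - y) * out * (1 - out)      # backward pass (squared-error loss)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out / len(X); b2 -= 0.5 * d_out.mean(0)
    W1 -= 0.5 * X.T @ d_h / len(X);  b1 -= 0.5 * d_h.mean(0)

accuracy = float(np.mean((out > 0.5) == (y > 0.5)))
print(round(losses[0], 3), round(losses[-1], 3), accuracy)
```

The paper's classifier takes the multi-direction CTF vector as input instead of this toy data; the training loop is the same textbook backprop.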