Medical Image Classification with On-Premise AutoML: Unveiling Insights through Comparative Analysis
Related papers
The Lancet Digital Health
Background Deep learning has the potential to transform health care; however, substantial expertise is required to train such models. We sought to evaluate the utility of automated deep learning software for developing medical image diagnostic classifiers by health-care professionals with no coding and no deep learning expertise.

Methods We used five publicly available open-source datasets: retinal fundus images (MESSIDOR); optical coherence tomography (OCT) images (Guangzhou Medical University and Shiley Eye Institute, version 3); images of skin lesions (Human Against Machine [HAM] 10000); and both paediatric and adult chest x-ray (CXR) images (Guangzhou Medical University and Shiley Eye Institute, version 3, and the National Institutes of Health [NIH] dataset, respectively). Each dataset was fed separately into a neural architecture search framework, hosted through Google Cloud AutoML, that automatically developed a deep learning architecture to classify common diseases. Sensitivity (recall), specificity, and positive predictive value (precision) were used to evaluate the diagnostic properties of the models, and discriminative performance was assessed using the area under the precision-recall curve (AUPRC). For the deep learning model developed on a subset of the HAM10000 dataset, we performed external validation using the Edinburgh Dermofit Library dataset.

Findings Diagnostic properties and discriminative performance from internal validation were high in the binary classification tasks (sensitivity 73.3-97.0%; specificity 67-100%; AUPRC 0.87-1.00). In the multiple classification tasks, sensitivity ranged from 38% to 100% and specificity from 67% to 100%, and AUPRC ranged from 0.57 to 1.00 across the five automated deep learning models. In the external validation using the Edinburgh Dermofit Library dataset, the automated deep learning model showed an AUPRC of 0.47, with a sensitivity of 49% and a positive predictive value of 52%.

Interpretation All models, except the automated deep learning model trained on the multilabel classification task of the NIH CXR14 dataset, showed discriminative performance and diagnostic properties comparable to state-of-the-art deep learning algorithms, whereas performance in the external validation study was low. The quality of the open-access datasets (including insufficient information about patient flow and demographics) and the absence of measures of precision, such as confidence intervals, were the major limitations of this study. The availability of automated deep learning platforms provides an opportunity for the medical community to deepen its understanding of model development and evaluation. Although deriving classification models without a deep understanding of the mathematical, statistical, and programming principles is attractive, comparable performance to expertly designed models is limited to more elementary classification tasks. Furthermore, care should be taken to adhere to ethical principles when using these automated models, to avoid discrimination and harm. Future studies should compare several application programming interfaces on thoroughly curated datasets.
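For readers unfamiliar with the reported metrics, the sketch below (illustrative only, not the authors' analysis code; the arrays y_true and y_score are hypothetical placeholders) shows how sensitivity, specificity, positive predictive value, and AUPRC can be computed for a binary classifier with scikit-learn.

```python
# Minimal sketch: sensitivity/recall, specificity, PPV/precision, and AUPRC
# for a binary classifier, using hypothetical labels and scores.
import numpy as np
from sklearn.metrics import confusion_matrix, average_precision_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                     # ground truth
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.3, 0.2, 0.9, 0.6])    # model scores
y_pred = (y_score >= 0.5).astype(int)                            # thresholded decisions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)        # recall
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                # precision
auprc = average_precision_score(y_true, y_score)  # summarises the PR curve

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"ppv={ppv:.2f} AUPRC={auprc:.2f}")
```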
A Review on Machine Learning and Deep Learning Methods on Medical Image Classification
International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2024
Medical image classification, a critical component in medical diagnostics, has significantly advanced through the integration of machine learning (ML) and deep learning (DL) techniques. This review comprehensively explores the evolution, methodologies, and applications of ML and DL in medical image classification. Traditional ML methods, including support vector machines and decision trees, have provided a foundation for early advancements by utilizing handcrafted features. However, the advent of DL, particularly convolutional neural networks (CNNs), has revolutionized the field by enabling automatic feature extraction and achieving superior performance. This review examines various DL architectures, such as ResNet, VGG, and Inception, highlighting their contributions to tasks like tumor detection, organ segmentation, and disease classification. Furthermore, it addresses challenges like data scarcity, interpretability, and computational demands, discussing potential solutions like data augmentation, transfer learning, and model optimization. The review also considers the ethical implications and the need for robust validation to ensure clinical applicability. Through a comparative analysis of existing studies, this review underscores the transformative impact of ML and DL on medical imaging, emphasizing the continuous need for innovation and interdisciplinary collaboration to enhance diagnostic accuracy and patient outcomes.
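As a concrete illustration of the transfer-learning remedy mentioned above (a minimal sketch, not code from any of the reviewed studies; the training and validation datasets are assumed to exist), a ResNet50 backbone pretrained on ImageNet can be reused as a frozen feature extractor with a new classification head in Keras:

```python
# Transfer-learning sketch: frozen ImageNet-pretrained ResNet50 backbone plus
# a small trainable head for a hypothetical binary medical imaging task.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained convolutional features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # e.g. disease vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # datasets are assumed
```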
Deep Learning Techniques to Classify and Analyze Medical Imaging Data
2019
In recent years, deep learning techniques, particularly convolutional neural networks (CNNs), have been used in various disciplines. CNNs have shown an essential ability to automatically extract information from large volumes of data and have proved especially useful for classifying natural images. Nonetheless, a major barrier to applying CNNs in the medical domain has been the lack of suitable training data. As a result, models developed on general imaging benchmarks such as ImageNet have been widely reused in the medical domain, even though natural images differ considerably from medical images. In this paper, a comparative analysis of LeNet, AlexNet, and GoogLeNet has been carried out. The paper then proposes an improved conceptual framework for classifying medical anatomy images using CNNs. Based on the proposed design, the framework's CNN architecture is expected to outperform the three earlier architectures in classifying medical images.
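The comparative setup described above can be made concrete with a small sketch (assumed, not the paper's framework): each candidate architecture is produced by a builder function, and all candidates are trained and scored on the same hypothetical train_ds/val_ds split with integer labels.

```python
# Sketch of a like-for-like architecture comparison on one dataset split.
import tensorflow as tf
from tensorflow.keras import layers, models

def lenet_style(input_shape=(64, 64, 1), n_classes=3):
    # Small LeNet-like stack: two conv/pool blocks followed by dense layers.
    return models.Sequential([
        layers.Conv2D(6, 5, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(16, 5, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(120, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

def deeper_cnn(input_shape=(64, 64, 1), n_classes=3):
    # A somewhat deeper stack used as a second candidate.
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

def compare(builders, train_ds, val_ds, epochs=3):
    # Train each candidate identically and record validation accuracy.
    results = {}
    for name, build in builders.items():
        model = build()
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(train_ds, epochs=epochs, verbose=0)
        _, acc = model.evaluate(val_ds, verbose=0)
        results[name] = acc
    return results

# results = compare({"lenet_style": lenet_style, "deeper_cnn": deeper_cnn},
#                   train_ds, val_ds)  # datasets are assumed
```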
Code-free deep learning for multi-modality medical image classification
Nature Machine Intelligence
A number of large technology companies have created code-free cloud-based platforms that allow researchers and clinicians without coding experience to create deep learning algorithms. In this study, we comprehensively analyse the performance and feature set of six platforms, using four representative cross-sectional and en-face medical imaging datasets to create image classification models. The mean (s.d.) F1 scores across platforms for all model–dataset pairs were as follows: Amazon, 93.9 (5.4); Apple, 72.0 (13.6); Clarifai, 74.2 (7.1); Google, 92.0 (5.4); MedicMind, 90.7 (9.6); Microsoft, 88.6 (5.3). The platforms demonstrated uniformly higher classification performance with the optical coherence tomography modality. Potential use cases, given proper validation, include research dataset curation, mobile ‘edge models’ for regions without internet access, and baseline models against which to compare and iterate bespoke deep learning approaches.
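For context on the summary statistic reported above, the following sketch (hypothetical numbers, not the study's data) shows how an F1 score is derived from precision and recall and then summarised as a mean and standard deviation across model-dataset pairs.

```python
# F1 from precision/recall, summarised as mean (s.d.) over several pairs.
import numpy as np

def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-dataset (precision, recall) pairs for one platform.
pairs = [(0.95, 0.93), (0.91, 0.88), (0.97, 0.96), (0.90, 0.92)]
scores = np.array([f1(p, r) for p, r in pairs]) * 100  # as percentages

print(f"F1 mean (s.d.): {scores.mean():.1f} ({scores.std(ddof=1):.1f})")
```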
Deep learning has huge potential to transform healthcare; however, significant expertise is required to train such models, and this is a significant barrier to their translation into clinical practice. In this study, we therefore sought to evaluate the use of automated deep learning software to develop medical image diagnostic classifiers by healthcare professionals with limited coding, and no deep learning, expertise. We used five publicly available open-source datasets: (i) retinal fundus images (MESSIDOR); (ii) optical coherence tomography (OCT) images (Guangzhou Medical University/Shiley Eye Institute, Version 3); (iii) images of skin lesions (Human Against Machine (HAM)10000); and (iv) both paediatric and adult chest X-ray (CXR) images (Guangzhou Medical University/Shiley Eye Institute, Version 3 and the National Institutes of Health (NIH)14 dataset, respectively) to separately feed into a neural architecture search framework that automatically developed a deep learning architec...
A Systematic Collection of Medical Image Datasets for Deep Learning
Cornell University - arXiv, 2021
The astounding success made by artificial intelligence (AI) in healthcare and other fields proves that AI can achieve human-like performance. However, success always comes with challenges. Deep learning algorithms are data-dependent and require large datasets for training. The lack of data in the medical imaging field creates a bottleneck for the application of deep learning to medical image analysis. Medical image acquisition, annotation, and analysis are costly, and their usage is constrained by ethical restrictions. They also require many resources, such as human expertise and funding, which makes it difficult for non-medical researchers to access useful, large medical datasets. This paper therefore provides, as comprehensively as possible, a collection of medical image datasets with their associated challenges for deep learning research. We have collected information on around three hundred datasets and challenges, mainly reported between 2013 and 2020, and categorized them into four categories: head & neck, chest & abdomen, pathology & blood,
Exploration of Machine Learning and Deep Learning Approaches in Medical Domain
International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2024
Through the combination of machine learning (ML) and deep learning (DL) approaches, substantial progress has been made in medical image classification, an essential component of medical diagnostics. This paper provides an in-depth examination of the development, methodology, and applications of machine learning and deep learning in this setting. Using handcrafted features, traditional machine learning techniques, such as support vector machines and decision trees, laid the groundwork for early achievements in the field. The introduction of deep learning, and more specifically convolutional neural networks (CNNs), subsequently revolutionized the field by enabling automatic feature extraction and delivering greater performance. This article examines a number of deep learning architectures, including ResNet, VGG, and Inception, and highlights their contributions to tasks such as disease classification, organ segmentation, and tumor identification. It also addresses problems such as data scarcity, interpretability, and computational demands, and discusses solutions such as data augmentation, transfer learning, and model optimization. In addition, the review takes into account ethical concerns and the need for rigorous validation to guarantee clinical applicability. Through a comparative analysis of current research, the study highlights the transformative influence that machine learning and deep learning have had on medical imaging, and it underscores the ongoing need for innovation and cross-disciplinary cooperation to improve diagnostic accuracy and patient outcomes.
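As one concrete example of the data-augmentation remedy mentioned above (a minimal sketch under assumed settings, not code from the reviewed work), random flips, small rotations, and zooms can be applied on the fly to training images in Keras:

```python
# On-the-fly augmentation pipeline for training images.
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),   # small rotations; anatomy is orientation-sensitive
    layers.RandomZoom(0.1),
])

# `train_ds` is a hypothetical tf.data.Dataset of (image, label) batches.
# augmented_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))
```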
An overview of deep learning in medical imaging
2022
Machine learning (ML) has received enormous attention during the most recent decade. This success started in 2012, when an ML model achieved a remarkable result in the ImageNet classification challenge, the world's most famous competition in computer vision. That model was a convolutional neural network (CNN), a form of deep learning (DL). Since then, researchers have engaged intensively in DL, the fastest-growing area of ML research. Today, DL systems are state-of-the-art ML systems spanning a broad range of disciplines, from natural language processing to video analysis, and they are widely used in both academia and industry. Recent advances can bring tremendous improvements to the medical field: improved and innovative methods for data processing and image analysis can gradually and significantly improve diagnostic technologies and medical services. A quick review of current developments and relevant problems in the field of DL used for medical imaging...
AutoML Systems For Medical Imaging
arXiv (Cornell University), 2023
The integration of machine learning in medical image analysis can greatly enhance the quality of healthcare provided by physicians. The combination of human expertise and computerized systems can result in improved diagnostic accuracy. An automated machine learning approach simplifies the creation of custom image recognition models by utilizing neural architecture search and transfer learning techniques. Medical imaging techniques are used to non-invasively create images of internal organs and body parts for diagnostic and procedural purposes. This article aims to highlight the potential applications, strategies, and techniques of AutoML in medical imaging through theoretical and empirical evidence.
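To make the neural-architecture-search idea concrete, the toy sketch below (illustrative only; it does not reproduce any vendor's AutoML API, and train_ds/val_ds are assumed to exist) samples random CNN configurations, trains each briefly, and keeps the configuration with the best validation accuracy:

```python
# Toy random search over CNN configurations, standing in for AutoML-style NAS.
import random
import tensorflow as tf
from tensorflow.keras import layers, models

def build(num_blocks, filters, n_classes=2, input_shape=(96, 96, 3)):
    m = models.Sequential()
    m.add(tf.keras.Input(shape=input_shape))
    for _ in range(num_blocks):
        m.add(layers.Conv2D(filters, 3, activation="relu", padding="same"))
        m.add(layers.MaxPooling2D())
    m.add(layers.GlobalAveragePooling2D())
    m.add(layers.Dense(n_classes, activation="softmax"))
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    return m

def random_search(train_ds, val_ds, trials=5):
    # Sample configurations, train briefly, keep the best by validation accuracy.
    best_acc, best_cfg = 0.0, None
    for _ in range(trials):
        cfg = {"num_blocks": random.choice([2, 3, 4]),
               "filters": random.choice([16, 32, 64])}
        model = build(**cfg)
        model.fit(train_ds, epochs=2, verbose=0)
        _, acc = model.evaluate(val_ds, verbose=0)
        if acc > best_acc:
            best_acc, best_cfg = acc, cfg
    return best_cfg, best_acc
```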
Hello World Deep Learning in Medical Imaging
Journal of digital imaging, 2018
Machine learning, notably deep learning, has recently become popular in medical imaging, achieving state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries that simplify its use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification and provide code that can help those new to the field begin their informatics projects.
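In the spirit of that tutorial, a minimal "hello world" classifier might look like the sketch below (an illustration under assumed settings, not the article's published code; the directory layout data/train and data/val is hypothetical): images are read from per-class folders, a tiny CNN is trained, and accuracy is tracked.

```python
# Minimal end-to-end image classifier: folder-based data loading plus a small CNN.
import tensorflow as tf
from tensorflow.keras import layers, models

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(128, 128), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=(128, 128), batch_size=32)

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(len(train_ds.class_names), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```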