
Abstract

Explainable AI (XAI) has emerged as a pivotal tool in healthcare diagnostics, offering much-needed transparency and interpretability in complex AI models. XAI techniques, such as SHAP, Grad-CAM, and LIME, enable clinicians to understand AI-driven decisions, fostering greater trust and collaboration between human and machine in clinical settings. This review explores the key benefits of XAI in enhancing diagnostic accuracy, personalizing patient care, and ensuring compliance with regulatory standards. However, despite its advantages, XAI faces significant challenges, including balancing model accuracy with interpretability, scaling for real-time clinical use, and mitigating biases inherent in medical data. Ethical concerns, particularly surrounding fairness and accountability, are also discussed in relation to AI's growing role in healthcare. The review emphasizes the importance of developing hybrid models that combine high accuracy with improved interpretability and suggests that future research should focus on explainable-by-design systems, reducing computational costs, and addressing ethical issues. As AI continues to integrate into healthcare, XAI will play an essential role in ensuring that AI systems are transparent, accountable, and aligned with the ethical standards required in clinical practice.
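To make concrete how a technique like LIME lets clinicians inspect an individual AI-driven decision, the following is a minimal sketch of its core idea: perturb the input around a single patient instance, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients serve as local feature importances. The `black_box` function, kernel width, and noise scale here are illustrative assumptions, not the method of any specific clinical system.

```python
import numpy as np

# Toy "black-box" model standing in for a trained clinical classifier
# (an assumption for illustration): a nonlinear function of two features.
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 0.5 * X[:, 1] ** 2)))

def lime_style_explain(f, x, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around instance x
    (the core idea behind LIME, greatly simplified)."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = f(Z)
    # 2. Weight each perturbed sample by proximity to x (exponential kernel).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 3. Weighted least squares: solve for the local linear coefficients.
    A = np.hstack([np.ones((n_samples, 1)), Z])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef[1:]  # per-feature local importance (intercept dropped)

x0 = np.array([0.2, 1.0])
importance = lime_style_explain(black_box, x0)
print(importance)
```

For this toy model the surrogate recovers the local trend: feature 0 pushes the prediction up, feature 1 (near x1 = 1) pushes it down, mirroring how a clinician would read per-feature contributions for one patient rather than a global model summary.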

Subject: Computer Science and Mathematics - Computer Science

Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.