Interpretable Deep Learning Framework for COVID-19 Detection: Grad-CAM Integration with Pre-trained CNN Models on Chest X-Ray Images

Authors

Ammar A. Ali

DOI:

https://doi.org/10.32628/IJSRSET25121158

Keywords:

Explainable AI, Grad-CAM, COVID-19 Diagnosis, EfficientNet, ResNet, VGG

Abstract

This study presents a novel approach for enhancing the interpretability of deep learning models (EfficientNet, ResNet, VGG) applied to COVID-19 diagnosis, using Gradient-Weighted Class Activation Mapping (Grad-CAM) to make their decision-making more transparent. By leveraging Grad-CAM, we aim to provide not only accurate diagnostic predictions but also visual explanations that help healthcare professionals understand the features underlying each model's decisions. Such interpretability is essential for building trust in AI systems, especially in critical areas such as medical diagnosis, because it allows clinicians to understand the rationale behind AI-generated recommendations and to make informed decisions based on the AI's outputs. In the context of COVID-19, Grad-CAM provides insight into which regions of the medical imaging data contribute most significantly to a model's predictions, enhancing the reliability and transparency of the system and fostering greater trust in automated diagnosis in high-stakes scenarios like healthcare. As the COVID-19 pandemic demonstrated, timely and accurate diagnosis is essential for effective patient management.
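For readers unfamiliar with the mechanics: Grad-CAM computes, for a target class c, the gradients of the class score with respect to the feature maps A^k of a chosen convolutional layer, global-average-pools them into per-channel weights alpha_k^c, and forms the heatmap ReLU(sum_k alpha_k^c A^k), upsampled to the input resolution. The minimal PyTorch sketch below illustrates this on a torchvision ResNet-50. It is an illustration under assumptions, not the authors' pipeline: the study's backbone would be fine-tuned on chest X-ray classes, and the hook target (model.layer4), the preprocessing, and the helper name grad_cam are hypothetical choices of ours.

import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Pretrained backbone; a real COVID-19 classifier would have its final
# layer fine-tuned on chest X-ray classes (e.g., COVID-19 / normal).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

# Hooks capture the activations and gradients of the last conv stage.
activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["A"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["dA"] = grad_output[0].detach()

model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def grad_cam(image_path):
    """Return a [0, 1] Grad-CAM heatmap for the model's top class."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    scores = model(x)
    cls = scores.argmax(dim=1).item()      # explain the predicted class
    model.zero_grad()
    scores[0, cls].backward()              # d(class score) / d(A^k)

    # alpha_k^c: global-average-pooled gradients, one weight per channel
    alpha = gradients["dA"].mean(dim=(2, 3), keepdim=True)
    # Heatmap: ReLU of the alpha-weighted sum of the activation maps
    cam = F.relu((alpha * activations["A"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze().numpy()

Overlaying the returned heatmap on the input X-ray (for example, with matplotlib's imshow and alpha blending) highlights the lung regions that drove the prediction, which is the visual explanation the abstract describes.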

Published

31-01-2025

Issue

Vol. 12 No. 1 (2025)

Section

Research Articles

How to Cite

[1] Ammar A. Ali, “Interpretable Deep Learning Framework for COVID-19 Detection: Grad-CAM Integration with Pre-trained CNN Models on Chest X-Ray Images”, Int J Sci Res Sci Eng Technol, vol. 12, no. 1, pp. 153–163, Jan. 2025, doi: 10.32628/IJSRSET25121158.
