Research Article

BRAIN TUMOR DETECTION FROM MRI IMAGES WITH EXPLAINABLE ARTIFICIAL INTELLIGENCE METHODS
(Original Turkish title: AÇIKLANABİLİR YAPAY ZEKA YÖNTEMLERİYLE MR GÖRÜNTÜLERİNDEN BEYİN TÜMÖRÜ TESPİTİ)

Year 2025, Volume: 28, Issue: 2, 1092-1109, 03.06.2025

Abstract

This study aims to detect brain tumors from MR images using explainable artificial intelligence methods. Grad-CAM, LIME, and Shapley visualization techniques were integrated into CNN models, and the images were classified into four groups: No Tumor, Glioma, Meningioma, and Pituitary. Grad-CAM proved effective at identifying the regions the model focuses on, LIME explained the model's individual decisions in detail, and Shapley revealed the model's overall performance and shortcomings. Used together, these techniques show where additional data or targeted improvements are needed, guiding efforts to make the model more reliable and effective.
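
As a concrete illustration of the workflow the abstract describes, the sketch below computes a Grad-CAM heatmap for a trained four-class CNN in Python with TensorFlow/Keras. It is a minimal sketch under assumed names: the cnn_model object, the "last_conv" layer name, and the input preprocessing are hypothetical, not the authors' implementation, and the LIME and Shapley explanations would be produced analogously with their own explainer libraries.

# Minimal Grad-CAM sketch for a four-class brain-MRI CNN (illustrative only;
# the model, layer name, and preprocessing are assumed, not taken from the paper).
import numpy as np
import tensorflow as tf

CLASS_NAMES = ["No Tumor", "Glioma", "Meningioma", "Pituitary"]

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    """Return a normalized heatmap of the regions driving the CNN's prediction."""
    # Map the input image to the last convolutional feature maps and the class scores.
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        class_score = preds[:, class_index]
    # Gradients of the class score w.r.t. the feature maps, averaged per channel,
    # give the importance weight of each feature map.
    grads = tape.gradient(class_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of the feature maps, ReLU, then scale to [0, 1].
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    cam = cam / (tf.reduce_max(cam) + tf.keras.backend.epsilon())
    return cam.numpy(), CLASS_NAMES[class_index]

# Usage (hypothetical model and layer name):
# heatmap, predicted_label = grad_cam(cnn_model, mri_slice, "last_conv")

In practice the heatmap is resized to the input resolution and overlaid on the MR slice, which is how the focus areas reported in the study are visualized.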

References

  • Aamir, M., Rahman, Z., Dayo, Z. A., Abro, W. A., Uddin, M. I., Khan, I., & Hu, Z. (2022). A deep learning approach for brain tumor classification using MRI images. Computers and Electrical Engineering, 101, 108105. https://doi.org/10.1016/j.compeleceng.2022.108105
  • Abdusalomov, A. B., Mukhiddinov, M., & Whangbo, T. K. (2023). Brain tumor detection based on deep learning approaches and magnetic resonance imaging. Cancers, 15(16), 1-29. https://doi.org/10.3390/cancers15164172
  • Amin, K. H., Saleh, Z. S., & Deo, C. (2024). An explainable AI framework for artificial intelligence of medical things. https://arxiv.org/pdf/2403.04130 Accessed 03.01.2025.
  • Angelov, P. P., Soares, E. A., Jiang, R., Arnold, N. I., & Atkinson, P. M. (2021). Explainable artificial intelligence: An analytical review. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 11(5), 1-13. https://doi.org/10.1002/widm.1424
  • Aslan, E. (2024). LSTM-ESA Hibrit modeli ile MR görüntülerinden beyin tümörünün sınıflandırılması. Adıyaman Üniversitesi Mühendislik Bilimleri Dergisi, 11(22), 63-81.
  • Baran, F. D. (2024). Belirli nöropsikolojik rahatsızlıkların yapay zeka temelli sınıflandırılması. Yüksek Lisans Tezi. Pamukkale Üniversitesi Fen Bilimleri Enstitüsü Bilgisayar Mühendisliği Anabilim Dalı, Denizli 134s.
  • Bilekyiğit, S. (2022). Kalp yetmezliği riskinin makine öğrenmesi yöntemleri ile analiz edilmesi. Yüksek Lisans Tezi. Karamanoğlu Mehmetbey Üniversitesi Fen Bilimleri Enstitüsü Mühendislik Bilimleri Anabilim Dalı, Karaman 152s.
  • Caelen, O. (2022). What is the Shapley value? https://medium.com/the-modern-scientist/what-is-the-shapley-value-8ca624274d5a Accessed 03.01.2025.
  • Chattopadhay, A., Sarkar, A., Howlader, P., & Balasubramanian, V. N. (2018). Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), 839-847. https://doi.org/10.1109/WACV.2018.00097
  • Ellah, M. K., Awad, A. I., Khalaf, A. A., & Hamed, H. F. (2019). A review on brain tumor diagnosis from MRI images: Practical implications, key achievements, and lessons learned. Magnetic Resonance Imaging, 61, 300-318. https://doi.org/10.1016/j.mri.2019.05.028
  • Fatima, S. S., Wooldridge, M., & Jennings, N. R. (2008). A linear approximation method for the Shapley value. Artificial Intelligence, 172(14), 1673-1699. https://doi.org/10.1016/j.artint.2008.05.003
  • Garreau, D., & Mardaoui, D. (2021). What does LIME really see in images? https://proceedings.mlr.press/v139/garreau21a/garreau21a.pdf Accessed 03.01.2025.
  • Gaur, L., Bhandari, M., Razdan, T., Mallik, S., & Zhao, Z. (2022). Examining the prediction of discrete subtypes of brain tumors with deep learning models. Frontiers in Genetics, 13, 822666.
  • Gülle, K., Özdemir, D., & Temurtaş, H. (2024). Derin öğrenme yöntemleri kullanılarak böbrek hastalıklarının tespiti ve çoklu sınıflandırma. Eskişehir Türk Dünyası Uygulama ve Araştırma Merkezi Bilişim Dergisi, 5(1), 19-28.
  • Hassija, V., Chamola, V., Mahapatra, A., Singal, A., Goel, D., Huang, K., & Hussain, A. (2024). Interpreting black-box models: A review on explainable artificial intelligence. Cognitive Computation, 16(1), 45-74.
  • Juscafresa, A. (2022). An introduction to explainable artificial intelligence with LIME and SHAP. https://diposit.ub.edu/dspace/bitstream/2445/192075/1/tfg_nieto_juscafresa_aleix.pdf Accessed 03.01.2025.
  • Kaggle (2025). Datasets. https://www.kaggle.com/datasets Accessed 06.03.2025.
  • Karakaya, A. (2024). Meme kanseri tahmininde makine öğrenmesi algoritmaları ve AutoML. Yüksek Lisans tezi. Pamukkale Üniversitesi Fen Bilimleri Enstitüsü Bilgisayar Mühendisliği Anabilim Dalı, Denizli 98s.
  • Khan, H. A., Jue, W., Mushtaq, M., & Mushtaq, M. U. (2020). Brain tumor classification in MRI image using convolutional neural network. Mathematical Biosciences and Engineering, 17, 6203-6216.
  • Kriegeskorte, N. (2015). Deep neural networks: A new framework for modeling biological vision and brain information processing. Annual Review of Vision Science, 1(1), 417-446. https://doi.org/10.1146/annurev-vision-082114-035447
  • Kumar, S., Abdelhamid, A. A., & Tarek, Z. (2023). Visualizing the unseen: Exploring Grad-CAM for interpreting convolutional image classifiers. Journal of Artificial Intelligence and Metaheuristics, 4(1), 34-42. https://doi.org/10.54216/JAIM.040104
  • Manne, R. & Kantheti, S. C. (2021). Application of artificial intelligence in healthcare: chances and challenges. Current Journal of Applied Science and Technology, 40(6), 78-89.
  • Marmolejo, J. A. & Kose, U. (2024). Numerical Grad-Cam based explainable convolutional neural network for brain tumor diagnosis. Mobile Networks and Applications, 29(1), 109-118.
  • Nancy, A. M. & Sathyarajasekaran, K. (2024). Multi-modal explainability evaluation for brain tumor segmentation: Metrics MSFI. International Journal of Intelligent Systems and Applications in Engineering, 12, 341–347.
  • Orman, A. (2021). Brain tumor detection via explainable convolutional neural networks. El-Cezeri Journal of Science and Engineering, 8(3), 1323-1337. http://doi.org/10.31202/ecjse.924446
  • Pannu, A. (2015). Artificial intelligence and its application in different areas. Artificial Intelligence, 4(10), 79-84.
  • Pillai, V. (2024). Enhancing transparency and understanding in AI decision-making processes. Iconic Research and Engineering Journals, 8(1), 168-172.
  • Rahman, A. (2019). Statistics-based data preprocessing methods and machine learning algorithms for big data analysis. International Journal of Artificial Intelligence, 17(2), 44-65.
  • Reddy, S. (2018). Use of artificial intelligence in healthcare delivery. London: IntechOpen.
  • Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2020). Grad-CAM: visual explanations from deep networks via gradient-based localization. International Journal of Computer Vision, 128, 336-359.
  • Singh, A., Sengupta, S., & Lakshminarayanan, V. (2020). Explainable deep learning models in medical image analysis. Journal of Imaging, 6(6), 1-19. https://doi.org/10.3390/jimaging6060052
  • Turay, T. & Vladimirova, T. (2022). Toward performing image classification and object detection with convolutional neural networks in autonomous driving systems: A survey. IEEE Access, 10, 14076-14119.
  • Verdinelli, I. & Wasserman, L. (2024). Feature importance: A closer look at shapley values and loco. Statistical Science, 39(4), 623-636. https://doi.org/10.1214/24-STS937
  • Vimbi, V., Shaffi, N., & Mahmud, M. (2024). Interpreting artificial intelligence models: A systematic review on the application of LIME and SHAP in Alzheimer’s disease detection. Brain Informatics, 11(1), 1-29.

Details

Primary Language: Turkish
Subjects: Artificial Intelligence (Other)
Section: Computer Engineering
Authors

Muhammet Doğukan İli 0009-0006-1518-0720

Fatih Özyurt 0000-0002-8154-6691

Publication Date: June 3, 2025
Submission Date: January 7, 2025
Acceptance Date: April 18, 2025
Published in Issue: Year 2025, Volume: 28, Issue: 2

How to Cite

APA: İli, M. D., & Özyurt, F. (2025). AÇIKLANABİLİR YAPAY ZEKA YÖNTEMLERİYLE MR GÖRÜNTÜLERİNDEN BEYİN TÜMÖRÜ TESPİTİ. Kahramanmaraş Sütçü İmam Üniversitesi Mühendislik Bilimleri Dergisi, 28(2), 1092-1109.