Research Article

HAYCAM VS EIGENCAM FOR WEAKLY-SUPERVISED OBJECT DETECTION ACROSS VARYING SCALES

Volume: 27 Number: 3 September 3, 2024


Abstract

When classification is performed with Class Activation Maps (CAMs), one of the Explainable Artificial Intelligence approaches, the regions of the input image that influence the classification can be revealed; in other words, the maps show which part of the image the classifier model attends to when making its decision. In this study, a 200-class classification model was trained on the open-source CUB-200-2011 dataset, and the classification results were visualized using the EigenCAM and HayCAM methods. When object detection performance is compared on the basis of the regions that influence classification, the EigenCAM method reaches an IoU (Intersection over Union) of 30.88%, while the HayCAM method reaches 41.95%. These results indicate that the outputs obtained using Principal Component Analysis (HayCAM) are better than those obtained using Singular Value Decomposition (EigenCAM).
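The evaluation described above, deriving a bounding box from the regions a CAM highlights and scoring it against ground truth with IoU, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the 0.5 threshold, and the NumPy-based EigenCAM-style projection are assumptions for the example.

```python
import numpy as np

def eigencam_map(acts):
    """EigenCAM-style saliency map: project conv activations (C, H, W)
    onto their first singular vector (HayCAM instead selects
    components via PCA). Illustrative sketch, not the paper's code."""
    c, h, w = acts.shape
    x = acts.reshape(c, -1).T          # (H*W, C): one row per spatial location
    x = x - x.mean(axis=0)             # center before the decomposition
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    cam = (x @ vt[0]).reshape(h, w)    # projection onto the first component
    cam = np.maximum(cam, 0)           # keep positive evidence only
    return cam / (cam.max() + 1e-8)    # normalize to [0, 1]

def box_from_cam(cam, thresh=0.5):
    """Tightest box (x1, y1, x2, y2) around map values >= thresh
    (threshold value is an assumption for this sketch)."""
    ys, xs = np.where(cam >= thresh)
    if xs.size == 0:
        return None
    return (int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1)

def iou(a, b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0
```

Averaging `iou(predicted_box, ground_truth_box)` over a test set yields the kind of dataset-level IoU score reported in the abstract.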


Supporting Institution

Huawei Türkiye R&D Center

Ethical Statement

The paper reflects the authors' own research and analysis in a truthful and complete manner.

Acknowledgments

Huawei Türkiye R&D Center

References

  1. Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., & Barbado, A. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
  2. Chattopadhay, A., Sarkar, A., Howlader, P., & Balasubramanian, V. N. (2018). Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 839–847). https://doi.org/10.1109/WACV.2018.00097
  3. He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770–778). https://doi.org/10.1109/CVPR.2016.90
  4. Kornblith, S., Shlens, J., & Le, Q. V. (2019). Do better ImageNet models transfer better? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2661–2671). https://doi.org/10.1109/CVPR.2019.00277
  5. Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25. https://doi.org/10.1145/3065386
  6. Muhammad, M. B., & Yeasin, M. (2020). Eigen-CAM: Class activation map using principal components. In 2020 International Joint Conference on Neural Networks (IJCNN) (pp. 1–7). https://doi.org/10.1109/IJCNN48605.2020.9206626
  7. Ornek, A. (2023). Developing a new explainable artificial intelligence method (Doctoral dissertation). Konya Technical University.
  8. Ornek, A., & Ceylan, M. (2022). HayCAM: A novel visual explanation for deep convolutional neural networks. Traitement du Signal, 39(5), 1711–1719. https://doi.org/10.18280/ts.390529

Details

Primary Language

English

Subjects

Computer Vision, Image Processing, Deep Learning

Journal Section

Research Article

Publication Date

September 3, 2024

Submission Date

February 2, 2024

Acceptance Date

July 22, 2024

Published in Issue

Year 2024 Volume: 27 Number: 3

APA
Ornek, A., & Ceylan, M. (2024). HAYCAM VS EIGENCAM FOR WEAKLY-SUPERVISED OBJECT DETECTION ACROSS VARYING SCALES. Kahramanmaraş Sütçü İmam Üniversitesi Mühendislik Bilimleri Dergisi, 27(3), 1078-1088. https://doi.org/10.17780/ksujes.1430479