Research Article

EDM: SÜRÜŞ VİDEOLARINDA ARAÇ HAREKET ALGILAMASI İÇİN YENİ BİR EĞİK-DERİN-MİMARİ


SDA: A NOVEL SKEWED-DEEP-ARCHITECTURE FOR VEHICLE MOTION DETECTION IN DRIVING VIDEOS

Year 2024, Volume 27, Issue 1, 92-104, 03.03.2024
https://doi.org/10.17780/ksujes.1358512

Abstract

Collision avoidance mechanisms are an important topic in autonomous vehicle research. Prior information about a potential collision can be obtained from the movement angles of vehicles, so learning the movement angles of vehicles in motion is an important problem. In this study, an architectural model that learns the horizontal movement angles of vehicles is developed to form a basis for collision warning systems. The new architecture, obtained by modifying YOLOv3, is applied to motion profiles. Thanks to the learned angle values, the bounding boxes match the traces in the motion profiles precisely. The results achieve a mAP of 79% and an operating speed of 36 FPS, better than those obtained when the plain YOLOv3 architecture is trained on motion profiles. In addition, with the new architecture applied to motion profiles, factors such as image noise and bad weather do not adversely affect the results. With these features, a fundamental step has been taken toward collision avoidance systems.
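The abstract describes predicting a horizontal movement angle per detection so that the bounding box can be skewed to follow a vehicle's trace in the motion profile. As a minimal geometric illustration (not the paper's implementation; the helper name and parameterization are assumptions), the following sketch converts an angle-parameterized box into its corner points:

```python
import math

def skewed_box_corners(cx, cy, w, h, angle_deg):
    """Return the four corners of a box of size (w, h) centered at
    (cx, cy), rotated counter-clockwise by angle_deg degrees."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    corners = []
    for dx, dy in [(-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)]:
        # Rotate the offset around the box center, then translate back.
        corners.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return corners
```

With an angle of 0 this reduces to the familiar axis-aligned box; a nonzero learned angle tilts the box so its long edge can align with the slanted trace a moving vehicle leaves in a motion profile.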

Supporting Institution

Scientific and Technological Research Council of Turkey (TÜBİTAK)

Project Number

122E586

Acknowledgments

This work was supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK) under project number 122E586.

References

  • Behrendt, K., Novak, L., & Botros, R. (2017, May). A deep learning approach to traffic lights: Detection, tracking, and classification. In 2017 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1370-1377). IEEE. https://doi.org/10.1109/icra.2017.7989163
  • Cadieu, C., & Olshausen, B. (2008). Learning transformational invariants from natural movies. Advances in neural information processing systems, 21.
  • Cao, Z., Simon, T., Wei, S. E., & Sheikh, Y. (2017). Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7291-7299). https://doi.org/10.1109/cvpr.2017.143
  • Caraffi, C., Vojíř, T., Trefný, J., Šochman, J., & Matas, J. (2012, September). A system for real-time detection and tracking of vehicles from a single car-mounted camera. In 2012 15th international IEEE conference on intelligent transportation systems (pp. 975-982). IEEE. https://doi.org/10.1109/itsc.2012.6338748
  • Chen, L., Peng, X., & Ren, M. (2018). Recurrent metric networks and batch multiple hypothesis for multi-object tracking. IEEE Access, 7, 3093-3105. https://doi.org/10.1109/access.2018.2889187
  • Gordon, D., Farhadi, A., & Fox, D. (2018). Re3: Real-time recurrent regression networks for visual tracking of generic objects. IEEE Robotics and Automation Letters, 3(2), 788-795. https://doi.org/10.1109/lra.2018.2792152
  • Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735-1780. https://doi.org/10.1162/neco.1997.9.8.1735
  • Hui, J. (2018). Real-time object detection with YOLO, YOLOv2, and now YOLOv3. Available online: https://medium.com/@jonathan_hui/real-time-object-detection-with-YOLO-YOLOv2-28b1b93e2088 (accessed 24 February 2019).
  • Jazayeri, A., Cai, H., Zheng, J. Y., & Tuceryan, M. (2011). Vehicle detection and tracking in-car video based on motion model. IEEE Transactions on Intelligent Transportation Systems, 12(2), 583-595. https://doi.org/10.1109/tits.2011.2113340
  • John, V., & Mita, S. (2019). Vehicle semantic understanding for automated driving in multiple-lane urban roads using deep vision-based features. In International Joint Conferences on Artificial Intelligence; Macao, China (pp. 1-7).
  • Kilicarslan, M., & Temel, T. (2022). Motion-aware vehicle detection in driving videos. Turkish Journal of Electrical Engineering and Computer Sciences, 30(1), 63-78. https://doi.org/10.3906/elk-2101-93
  • Kilicarslan, M., & Zheng, J. Y. (2018). Predict vehicle collision by TTC from motion using a single video camera. IEEE Transactions on Intelligent Transportation Systems, 20(2), 522-533. https://doi.org/10.1109/tits.2018.2819827
  • Li, L., Zhou, Z., Wang, B., Miao, L., & Zong, H. (2020). A novel CNN-based method for accurate ship detection in HR optical remote sensing images via rotated bounding box. IEEE Transactions on Geoscience and Remote Sensing, 59(1), 686-699. https://doi.org/10.1109/tgrs.2020.2995477
  • Liang, Y., & Zhou, Y. (2018, October). LSTM multiple object tracker combining multiple cues. In 2018 25th IEEE International Conference on Image Processing (ICIP) (pp. 2351-2355). IEEE. https://doi.org/10.1109/icip.2018.8451739
  • Liu, Y., Lu, Y., Shi, Q., & Ding, J. (2013, December). Optical flow-based urban road vehicle tracking. In 2013 ninth international conference on computational intelligence and security (pp. 391-395). IEEE. https://doi.org/10.1109/cis.2013.89
  • Muehlemann, A. (2019). TrainYourOwnYOLO: Building a custom object detector from scratch. Available online: https://github.com/AntonMu/TrainYourOwnYOLO (accessed December 2020). https://doi.org/10.5281/zenodo.5112375
  • Wang, L., Pham, N. T., Ng, T. T., Wang, G., Chan, K. L., & Leman, K. (2014, October). Learning deep features for multiple object tracking by using a multi-task learning strategy. In 2014 IEEE International Conference on Image Processing (ICIP) (pp. 838-842). IEEE. https://doi.org/10.1109/icip.2014.7025168
  • Yun, W. J., Park, S., Kim, J., & Mohaisen, D. (2022). Self-Configurable Stabilized Real-Time Detection Learning for Autonomous Driving Applications. IEEE Transactions on Intelligent Transportation Systems. https://doi.org/10.1109/tits.2022.3211326
  • Zhang, D., Maei, H., Wang, X., & Wang, Y. F. (2017). Deep reinforcement learning for visual object tracking in videos. arXiv preprint arXiv:1701.08936. https://doi.org/10.48550/arXiv.1701.08936
  • Zhou, H., Ouyang, W., Cheng, J., Wang, X., & Li, H. (2018). Deep continuous conditional random fields with asymmetric inter-object constraints for online multi-object tracking. IEEE Transactions on Circuits and Systems for Video Technology, 29(4), 1011-1022. https://doi.org/10.1109/tcsvt.2018.2825679
There are 20 references in total.

Details

Primary Language: English
Subjects: Computer Vision, Image Processing, Pattern Recognition, Deep Learning
Section: Computer Engineering
Authors

Tansu Temel 0000-0002-8359-1146

Mehmet Kılıçarslan 0000-0002-7212-5262

Yaşar Hoşcan 0000-0003-0789-6025

Project Number: 122E586
Publication Date: 3 March 2024
Submission Date: 12 September 2023
Published in Issue: Year 2024, Volume 27, Issue 1

Cite

APA Temel, T., Kılıçarslan, M., & Hoşcan, Y. (2024). SDA: A NOVEL SKEWED-DEEP-ARCHITECTURE FOR VEHICLE MOTION DETECTION IN DRIVING VIDEOS. Kahramanmaraş Sütçü İmam Üniversitesi Mühendislik Bilimleri Dergisi, 27(1), 92-104. https://doi.org/10.17780/ksujes.1358512