RESIDUAL NETWORK LAYER COMPARISON FOR SEAT BELT DETECTION

  • Irma Amelia Dewi(1*)
    Institut Teknologi Nasional Bandung
  • Nur Zam Zam Nasrulloh(2)
    Institut Teknologi Nasional Bandung
  • (*) Corresponding Author
Keywords: Seat Belt, ResNet, CNN, Object Detection

Abstract

Monitoring of traffic violations on Indonesian roads is currently done largely by manual observation of CCTV footage, so drivers who do not wear seat belts can still go undetected. The Residual Network (ResNet), an architecture that reached an accuracy of up to 96.4% in 2015, is designed to overcome the vanishing gradient problem that commonly occurs in networks with many layers. In this study, a system was therefore developed using the RetinaNet architecture with a ResNet backbone to detect drivers who wear seat belts and drivers who do not. The study also compares the performance of ResNet-101 and ResNet-152 as backbones. Training used a dataset of 10,623 images with a batch size of 1, giving 10,623 steps per epoch, over 16 epochs. Based on 60 tests conducted in this study, the RetinaNet model with the ResNet-152 backbone performed better than the one with the ResNet-101 backbone, achieving an accuracy of 98%, a precision of 99%, a recall of 99%, and an F1 score of 99%.
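
As an illustration only (the paper does not state its implementation framework), the sketch below shows one way to assemble the two detectors compared here, RetinaNet with a ResNet-101 backbone and with a ResNet-152 backbone, and how the reported accuracy, precision, recall, and F1 score can be computed from test counts. It assumes a recent torchvision (>= 0.13); the class count, image size, and helper names are illustrative, not the authors' code.

```python
# Minimal sketch, assuming torchvision >= 0.13: RetinaNet with ResNet-101 / ResNet-152
# FPN backbones, plus the evaluation metrics reported in the abstract.
import torch
from torchvision.models.detection import RetinaNet
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

NUM_CLASSES = 2  # illustrative: "with seat belt" and "without seat belt"

def build_retinanet(backbone_name: str) -> RetinaNet:
    # RetinaNet is a one-stage detector: a ResNet + Feature Pyramid Network backbone,
    # with classification and box-regression subnets added by the RetinaNet class.
    backbone = resnet_fpn_backbone(backbone_name=backbone_name, weights=None)
    return RetinaNet(backbone, num_classes=NUM_CLASSES)

model_101 = build_retinanet("resnet101")   # 101-layer residual backbone
model_152 = build_retinanet("resnet152")   # 152-layer residual backbone

# Inference on a dummy frame; a real input would be an in-cabin or roadside image.
model_152.eval()
with torch.no_grad():
    detections = model_152([torch.rand(3, 480, 640)])
print(detections[0]["boxes"].shape, detections[0]["scores"].shape)

def detection_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    # Accuracy, precision, recall, and F1 score from raw test counts.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
    }
```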

Published
2023-07-16
How to Cite
[1]
I. Dewi and N. Nasrulloh, “RESIDUAL NETWORK LAYER COMPARISON FOR SEAT BELT DETECTION”, jicon, vol. 11, no. 2, pp. 145-156, Jul. 2023.
Section
Articles
