Research Article

Accuracy Comparison of CNN Networks on GTSRB Dataset

Year 2022, Volume: 2 Issue: 2, 63 - 68, 26.12.2022

Abstract

Interpreting and processing traffic-sign data is of crucial importance for advancing autonomous vehicle technology, which makes traffic-sign recognition highly relevant to industrial applications. Although real-world systems have reached the market and several academic studies on this topic have been published, regular objective comparisons of different algorithmic approaches are still rare. From this point of view, we compare the AlexNET, DarkNET-53, and EfficientNET-b0 convolutional neural network (CNN) architectures according to their validation performance on the German Traffic Sign Recognition Benchmark (GTSRB) dataset. To ensure equal training and test conditions, 70% of the data were used for training, 15% for training validation, and 15% for testing. Experimental results show that the EfficientNET-b0 architecture achieves 98.64% accuracy, AlexNET 97.45%, and DarkNet-53 94.69%.
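The 70/15/15 split and the accuracy metric described in the abstract can be sketched in plain Python. This is a minimal illustration only; the function names, the fixed seed, and the use of a shuffled index split are our own assumptions, not details taken from the paper:

```python
import random

def split_indices(n, train_frac=0.70, val_frac=0.15, seed=0):
    """Shuffle dataset indices and split them 70/15/15 into
    train / validation / test subsets (remainder goes to test)."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # deterministic shuffle for reproducibility
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

def accuracy_percent(predictions, labels):
    """Top-1 accuracy as a percentage, as reported in the abstract."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return 100.0 * correct / len(labels)
```

For example, splitting a 1000-sample dataset this way yields 700 training, 150 validation, and 150 test indices; each trained network would then be scored with `accuracy_percent` on the held-out test subset.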

References

  • [1] J. Zhang, W. Wang, C. Lu, J. Wang, and A. K. Sangaiah, “Lightweight deep network for traffic sign classification,” Annales des Telecommunications/Annals of Telecommunications, vol. 75, no. 7–8, 2020.
  • [2] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel, “The German Traffic Sign Recognition Benchmark: A multi-class classification competition,” 2011.
  • [3] G. Wang, G. Ren, Z. Wu, Y. Zhao, and L. Jiang, “A hierarchical method for traffic sign classification with support vector machines,” 2013.
  • [4] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, 2015.
  • [5] D. P. Kingma, D. J. Rezende, S. Mohamed, and M. Welling, “Semi-supervised learning with deep generative models,” in Advances in Neural Information Processing Systems, 2014, vol. 4, no. January.
  • [6] J. Zhang, C. Lu, X. Li, H. J. Kim, and J. Wang, “A full convolutional network based on DenseNet for remote sensing scene classification,” Mathematical Biosciences and Engineering, vol. 16, no. 5, 2019.
  • [7] G. Pang, C. Shen, L. Cao, and A. van den Hengel, “Deep Learning for Anomaly Detection: A Review,” ACM Computing Surveys, vol. 54, no. 2. 2021.
  • [8] L. Liu et al., “Deep Learning for Generic Object Detection: A Survey,” International Journal of Computer Vision, vol. 128, no. 2, 2020.
  • [9] Z. Bi, L. Yu, H. Gao, P. Zhou, and H. Yao, “Improved VGG model-based efficient traffic sign recognition for safe driving in 5G scenarios,” International Journal of Machine Learning and Cybernetics, vol. 12, no. 11, 2021.
  • [10] F. Zaklouta and B. Stanciulescu, “Real-time traffic sign recognition in three stages,” Robotics and Autonomous Systems, vol. 62, no. 1, 2014.
  • [11] Y. Chen, W. Xu, J. Zuo, and K. Yang, “The fire recognition algorithm using dynamic feature fusion and IV-SVM classifier,” Cluster Computing, vol. 22, 2019.
  • [12] Y. Chen, J. Xiong, W. Xu, and J. Zuo, “A novel online incremental and decremental learning algorithm based on variable support vector machine,” Cluster Computing, vol. 22, 2019.
  • [13] G. Wang, G. Ren, Z. Wu, Y. Zhao, and L. Jiang, “A robust, coarse-to-fine traffic sign detection method,” 2013.
  • [14] Á. Arcos-García, J. A. Álvarez-García, and L. M. Soria-Morillo, “Deep neural network for traffic sign recognition systems: An analysis of spatial transformers and stochastic optimisation methods,” Neural Networks, vol. 99, 2018.
  • [15] D. Cireşan, U. Meier, J. Masci, and J. Schmidhuber, “Multi-column deep neural network for traffic sign classification,” Neural Networks, vol. 32, 2012.
  • [16] B. Gecer, G. Azzopardi, and N. Petkov, “Color-blob-based COSFIRE filters for object recognition,” Image and Vision Computing, vol. 57, 2017.
  • [17] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel, “Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition,” Neural Networks, vol. 32, 2012.
  • [18] U. Ozkaya, F. Melgani, M. B. Bejiga, L. Seyfi, and M. Donelli, “GPR B scan image analysis with deep learning methods,” Measurement, vol. 165, 107770, 2020.
  • [19] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun ACM, vol. 60, no. 6, 2017.
  • [20] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016, vol. 2016-December.
  • [21] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in 32nd International Conference on Machine Learning, ICML 2015, 2015, vol. 1.
  • [22] D. A. Clevert, T. Unterthiner, and S. Hochreiter, “Fast and accurate deep network learning by exponential linear units (ELUs),” 2016.
  • [23] H. Ma, Y. Liu, Y. Ren, and J. Yu, “Detection of collapsed buildings in post-earthquake remote sensing images based on the improved YOLOv3,” Remote Sensing, vol. 12, no. 1, 2020.
  • [24] M. Tan and Q. v. Le, “EfficientNet: Rethinking model scaling for convolutional neural networks,” in 36th International Conference on Machine Learning, ICML 2019, 2019, vol. 2019-June.
  • [25] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. C. Chen, “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” 2018.
There are 25 citations in total.

Details

Primary Language English
Subjects Artificial Intelligence
Journal Section Research Articles
Authors

Gökberk Ay 0000-0003-1392-1653

Akif Durdu 0000-0002-5611-2322

Barış Samim Nesimioğlu 0000-0002-7725-3060

Publication Date December 26, 2022
Submission Date July 17, 2022
Published in Issue Year 2022 Volume: 2 Issue: 2

Cite

IEEE G. Ay, A. Durdu, and B. S. Nesimioğlu, “Accuracy Comparison of CNN Networks on GTSRB Dataset”, Journal of Artificial Intelligence and Data Science, vol. 2, no. 2, pp. 63–68, 2022.

All articles published by JAIDA are licensed under a Creative Commons Attribution 4.0 International License.
