Convolutional Neural Network (CNN) Layer Development for Effectiveness of Classification Tasks

Authors

  • Rafayel M. Veziryan, Institute for Informatics and Automation Problems of NAS RA
  • Rafayel N. Khachatryan, Questrade Armenia Inc.

DOI:

https://doi.org/10.51408/1963-0110

Keywords:

DCNN, Classification task, Image processing, ResNet, PDE, Cable equation, Grid method

Abstract

This paper presents a novel 2D convolutional layer motivated by the principles of the Partial Differential Equation (PDE) of neural interaction. Our objective is to leverage this layer to improve the classification accuracy of Deep Convolutional Neural Networks (DCNNs) on a variety of classification tasks. We place particular emphasis on its integration within the ResNet architecture, and we conduct experimental evaluations on the CIFAR-10 and STL-10 datasets to validate its efficacy.
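The abstract and keywords point to a grid-method discretization of a cable-type PDE realized as a convolution. The paper's actual layer is not specified on this page, but as an illustrative sketch, one explicit (forward-Euler) grid step of a 2D cable-style equation dV/dt = D·Laplacian(V) − V/τ reduces to a fixed 3x3 stencil applied to the feature map; the function name, parameters, and periodic boundary choice below are assumptions for illustration only:

```python
def cable_step(grid, dt=0.1, diff=1.0, tau=2.0):
    """One forward-Euler grid-method step of a 2D cable-type equation
        dV/dt = diff * Laplacian(V) - V / tau,
    applied as a 5-point convolution stencil with periodic boundaries.
    `grid` is a list of equal-length lists of floats."""
    h, w = len(grid), len(grid[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            v = grid[i][j]
            # Discrete Laplacian: sum of 4 neighbours minus 4 * centre.
            lap = (grid[(i - 1) % h][j] + grid[(i + 1) % h][j]
                   + grid[i][(j - 1) % w] + grid[i][(j + 1) % w] - 4.0 * v)
            # Diffusion spreads activity; the -v/tau term is leaky decay.
            out[i][j] = v + dt * (diff * lap - v / tau)
    return out
```

On a constant input the Laplacian vanishes, so each cell simply decays by the factor (1 − dt/τ), which is a quick sanity check that the stencil is balanced. A trainable layer in the spirit of the paper would replace the fixed stencil coefficients with learned kernel weights inside a residual block.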

References

I. H. Sarker, “Deep Learning: A comprehensive overview on techniques, taxonomy, applications and research directions”, SN Computer Science, vol. 2, no. 6, 420, 2021.

K. He, X. Zhang, S. Ren and J. Sun, “Deep residual learning for image recognition”, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 770-778, 2016.

K. Mikula, “Image processing with partial differential equations”, Modern Methods in Scientific Computing and Applications. NATO Science Series, vol. 75, pp. 283-321, 2002.

F. Guichard, L. Moisan and J.-M. Morel, “A review of P.D.E. models in image processing and image analysis”, Journal de Physique IV (Proceedings), vol. 12, no. 1, pp. 137-154, 2002.

L. Ruthotto and E. Haber, “Deep Neural Networks Motivated by Partial Differential Equations”, Journal of Mathematical Imaging and Vision, vol. 62, no. 3, pp. 352-364, 2020.

P. C. Bressloff, Waves in Neural Media, Part of the book series: Lecture Notes on Mathematical Modelling in the Life Sciences, New York, USA, Springer, 2014.

(2023) CIFAR-10 Dataset. [Online]. Available: http://www.cs.toronto.edu/~kriz/cifar.html

(2023) STL-10 Dataset. [Online]. Available: http://ai.stanford.edu/~acoates/stl10/

S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating deep network training by reducing internal covariate shift”, Proceedings of the 32nd International Conference on Machine Learning, Lille, France, pp. 448-456, 2015.

S. Ruder, “An overview of gradient descent optimization algorithms”, arXiv:1609.04747, 2016.

A. Al-Kababji, F. Bensaali and S. P. Dakua, “Scheduling techniques for liver segmentation: ReduceLRonPlateau vs OneCycleLR”, The 2nd International Conference on Intelligent Systems and Pattern Recognition, Hammamet, Tunisia, pp. 204-212, 2022.

Published

2023-11-30

How to Cite

Veziryan, R. M., & Khachatryan, R. N. (2023). Convolutional Neural Network (CNN) Layer Development for Effectiveness of Classification Tasks. Mathematical Problems of Computer Science, 60, 63–71. https://doi.org/10.51408/1963-0110