MAX-POOL AND DROPOUT REGULARIZATION DEEP LEARNING TECHNIQUES TO DETECT TRAFFIC SIGNS
Abstract
Many drivers are inattentive to traffic signs, which can lead to serious or even fatal accidents. To help prevent such accidents, this article proposes detecting traffic signs with convolutional neural networks that use max-pooling and dropout regularization. Dropout has recently seen increasing use in deep learning; in deep convolutional neural networks it is known to work well in fully-connected layers, but its effect in convolutional and pooling layers is still not clear. This article illustrates, with Python code, that max-pooling dropout at training time is equivalent to randomly picking an activation from each pooling region according to a multinomial distribution. The networks are trained on the well-known German traffic sign dataset in order to compare the two regularization methods. Dropout is effective at reducing overfitting on the training set because it randomly drops units together with their incoming and outgoing connections, and, combined with max-pooling, it may require more epochs to converge to a comparable accuracy. Training the algorithm on the traffic sign dataset makes it suitable for adaptive cruise control systems that help cars avoid accidents. The two methods can be used together or separately, and in either case performance can be tuned by adjusting hyperparameters.
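As a rough illustration of this equivalence (a minimal sketch, not the implementation from the article; the function names, the 2x2 example window, and the retain probability of 0.5 are assumptions), the NumPy fragment below samples a pooled value in two ways: by dropping units inside one pooling window and taking the max of the survivors, and by drawing directly from the corresponding multinomial distribution over the sorted activations.

import numpy as np

def maxpool_dropout_region(acts, retain_p, rng):
    # Max-pooling dropout over one pooling window of non-negative (ReLU)
    # activations: drop each unit independently, then take the max of the
    # survivors (output 0 if every unit is dropped).
    mask = rng.random(acts.shape) < retain_p
    return np.where(mask, acts, 0.0).max()

def multinomial_pick_region(acts, retain_p, rng):
    # Equivalent multinomial view: with the k activations sorted ascending,
    # the i-th one is selected iff it survives and every larger one is
    # dropped, so P(pick a[i]) = retain_p * (1 - retain_p)**(k - 1 - i),
    # and P(output 0) = (1 - retain_p)**k.
    a = np.sort(acts.ravel())
    k, q = a.size, 1.0 - retain_p
    probs = [retain_p * q ** (k - 1 - i) for i in range(k)] + [q ** k]
    idx = rng.choice(k + 1, p=probs)
    return 0.0 if idx == k else a[idx]

rng = np.random.default_rng(0)
window = np.array([[0.2, 1.3], [0.7, 0.0]])  # one 2x2 pooling window after ReLU
direct = np.mean([maxpool_dropout_region(window, 0.5, rng) for _ in range(100000)])
multi = np.mean([multinomial_pick_region(window, 0.5, rng) for _ in range(100000)])
print(direct, multi)  # the two Monte Carlo estimates should agree closely

Because both procedures induce the same distribution over the pooled output, the two Monte Carlo estimates printed at the end should agree up to sampling noise.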
For citation:
Yerezhepbekov A. MAX-POOL AND DROPOUT REGULARIZATION DEEP LEARNING TECHNIQUES TO DETECT TRAFFIC SIGNS. Herald of the Kazakh-British Technical University. 2019;16(3):46-54.