Volume 9, Issue 3, November 2014, Pages 1056–1063
Khidir Shaib Mohamed1 and Yousif Shoaib Mohammed2
1 School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, PR China
2 Department of Physics, College of Science & Art, Qassim University, Oklat Al-Skoor, P.O. Box 111, Saudi Arabia
Original language: English
Copyright © 2014 ISSR Journals. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
In this paper, we study the convergence of an offline gradient method with a smoothing L_(1/2) regularization penalty for training multi-output feedforward neural networks. The usual L_(1/2) regularization term involves the absolute value and is therefore not differentiable at the origin. The key point of this paper is to modify the usual L_(1/2) regularization term by smoothing it at the origin. The monotonicity of the error function and the boundedness of the weights under the offline gradient method with smoothing L_(1/2) regularization are established, and convergence results are proved, which is meaningful for both theoretical research on and applications of multi-output feedforward neural networks.
Author Keywords: feedforward neural network, offline gradient method, smoothing L_(1/2) regularization, boundedness, convergence.
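The abstract's key construction, smoothing the L_(1/2) penalty at the origin so that its gradient exists everywhere, can be illustrated concretely. The Python sketch below is an assumption-laden illustration rather than the authors' exact algorithm: the piecewise-polynomial surrogate f for |t| is the one commonly used in the smoothing L_(1/2) literature, and the network sizes, step size eta, smoothing parameter a, and penalty weight lam are invented for demonstration.

    # Hedged sketch: offline (batch) gradient method with a smoothing
    # L_(1/2) penalty for a two-layer, multi-output feedforward network.
    # All hyperparameters and sizes below are assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    a, lam, eta = 0.05, 1e-4, 0.1   # smoothing parameter, penalty weight, step size

    def f(t):
        # Smooth surrogate for |t|: equals |t| when |t| >= a; inside (-a, a)
        # it is a quartic matching |t| in value and first derivative at
        # |t| = a, with f(0) = 3a/8 > 0, so f(t)**0.5 is differentiable.
        poly = -t**4 / (8 * a**3) + 3 * t**2 / (4 * a) + 3 * a / 8
        return np.where(np.abs(t) < a, poly, np.abs(t))

    def df(t):
        # Derivative of f; continuous across |t| = a.
        poly = -t**3 / (2 * a**3) + 3 * t / (2 * a)
        return np.where(np.abs(t) < a, poly, np.sign(t))

    def pen_grad(W):
        # Gradient of lam * sum(f(W)**0.5); finite everywhere since f > 0.
        return lam * 0.5 * f(W) ** (-0.5) * df(W)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy data: 4 inputs, 3 outputs, 20 samples (sizes are illustrative).
    X = rng.standard_normal((20, 4))
    Y = rng.random((20, 3))
    V = 0.5 * rng.standard_normal((4, 5))   # input-to-hidden weights
    W = 0.5 * rng.standard_normal((5, 3))   # hidden-to-output weights

    for epoch in range(2000):
        # Offline method: gradients accumulated over the whole training set.
        H = sigmoid(X @ V)                   # hidden activations
        O = sigmoid(H @ W)                   # network outputs
        dO = (O - Y) * O * (1 - O)           # output-layer error signal
        dH = (dO @ W.T) * H * (1 - H)        # backpropagated hidden signal
        W -= eta * (H.T @ dO + pen_grad(W))  # data gradient + smoothed penalty
        V -= eta * (X.T @ dH + pen_grad(V))

    err = 0.5 * np.sum((sigmoid(sigmoid(X @ V) @ W) - Y) ** 2) \
          + lam * (np.sum(f(W) ** 0.5) + np.sum(f(V) ** 0.5))
    print(f"final regularized error: {err:.4f}")

Because f(0) = 3a/8 > 0, the penalty term sum f(w)^(1/2) and its gradient are well defined at w = 0, which is precisely what the unsmoothed |w|^(1/2) term lacks; this is what makes the monotonicity and convergence analysis described in the abstract possible.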
How to Cite this Article
Khidir Shaib Mohamed and Yousif Shoaib Mohammed, “Convergence of Offline Gradient Method with Smoothing L1/2 Regularization for Two-layer of Neural Network,” International Journal of Innovation and Applied Studies, vol. 9, no. 3, pp. 1056–1063, November 2014.