In this paper, we study the convergence of the offline gradient method with a smoothing L_(1/2) regularization penalty for training multi-output feedforward neural networks. The usual L_(1/2) regularization term involves the absolute value function and is therefore not differentiable at the origin. The key point of this paper is to modify the usual L_(1/2) regularization term by smoothing it at the origin. Under this smoothing, the monotonicity of the error function and the boundedness of the weights during training are established, and the convergence results are proved, which is meaningful for both theoretical research and applications to multi-output feedforward neural networks.
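For concreteness, one standard way to smooth the absolute value near the origin (a sketch for illustration; the specific smoothing function and the smoothing parameter a > 0 are assumptions here, not necessarily the exact choice made in this paper) is the piecewise polynomial

\[
f(x) =
\begin{cases}
|x|, & |x| \ge a,\\[4pt]
-\dfrac{x^{4}}{8a^{3}} + \dfrac{3x^{2}}{4a} + \dfrac{3a}{8}, & |x| < a,
\end{cases}
\qquad
\text{smoothed penalty: } \lambda \sum_{k}\bigl(f(w_{k})\bigr)^{1/2}.
\]

One can check that f(a) = a, f'(a) = 1, and f''(a) = 0, so f agrees smoothly with |x| at |x| = a; moreover f(x) \ge 3a/8 > 0 on |x| < a, so the composite (f(w_k))^{1/2} is continuously differentiable everywhere, removing the singularity of the plain L_(1/2) term at the origin.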