Convergence Analysis of Online Gradient Method for High-Order Neural Networks and Their Sparse Optimization

abstract

  • In this article, we investigate the boundedness and convergence of the online gradient method with smoothing group L1/2 regularization for the sigma-pi-sigma neural network (SPSNN). The regularization enhances the sparseness of the network and improves its generalization ability. With the original group L1/2 regularization, the error function is nonconvex and nonsmooth, which can cause the error function to oscillate during training. To ameliorate this drawback, we propose a simple and effective smoothing technique that eliminates this deficiency of the original group L1/2 regularization. The group L1/2 regularization optimizes the network structure in two ways: redundant hidden nodes tend to zero, and redundant weights of the surviving hidden nodes tend to zero. This article establishes strong and weak convergence results for the proposed method and proves the boundedness of the weights. Experimental results clearly demonstrate the capability of the proposed method and the effectiveness of its redundancy control, and the simulations support the theoretical results.
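
  • The abstract describes an online gradient update that combines the training-loss gradient with the gradient of a smoothed group L1/2 penalty. The sketch below illustrates one such step under an assumed smoothing, (||w_g||^2 + eps^2)^(1/4), in place of the nonsmooth group term ||w_g||^(1/2); the function names, the eps-based smoothing, and the parameter values are illustrative choices, not the paper's exact formulation.

        import numpy as np

        def smoothed_group_l12_penalty(weight_groups, eps=1e-3):
            """Sum over groups of (||w_g||^2 + eps^2)^(1/4), a smooth surrogate
            for the nonsmooth group L1/2 term sum_g ||w_g||^(1/2)."""
            return sum((np.dot(w, w) + eps**2) ** 0.25 for w in weight_groups)

        def penalty_gradient(w, eps=1e-3):
            """Gradient of (||w||^2 + eps^2)^(1/4) for one weight group w:
            w / (2 * (||w||^2 + eps^2)^(3/4)); well defined even at w = 0."""
            return w / (2.0 * (np.dot(w, w) + eps**2) ** 0.75)

        def online_gradient_step(weight_groups, loss_gradients, lr=0.01, lam=1e-4, eps=1e-3):
            """One sample-wise update: descend the training-loss gradient plus the
            gradient of the smoothed penalty, which shrinks whole groups
            (e.g., the weights of a redundant hidden node) toward zero."""
            return [w - lr * (g + lam * penalty_gradient(w, eps))
                    for w, g in zip(weight_groups, loss_gradients)]

    Because the penalty acts on whole groups, a hidden node whose group norm shrinks toward zero is pruned as a unit, which is the redundancy-control effect discussed in the abstract.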

published proceedings

  • IEEE Trans Neural Netw Learn Syst

author list (cited authors)

  • Fan, Q., Kang, Q., Zurada, J. M., Huang, T., & Xu, D.

complete list of authors

  • Fan, Qinwei||Kang, Qian||Zurada, Jacek M||Huang, Tingwen||Xu, Dongpo

publication date

  • October 2023