An accelerated gradient method for trace norm minimization (Conference Paper)

abstract

  • We consider the minimization of a smooth loss function regularized by the trace norm of the matrix variable. Such formulations arise in many machine learning tasks, including multi-task learning, matrix classification, and matrix completion. The standard semidefinite programming formulation of this problem is computationally expensive. In addition, due to the non-smooth nature of the trace norm, the optimal first-order black-box method for this class of problems converges as O(1/√k), where k is the iteration counter. In this paper, we exploit the special structure of the trace norm, based on which we propose an extended gradient algorithm that converges as O(1/k). We further propose an accelerated gradient algorithm, which achieves the optimal convergence rate of O(1/k²) for smooth problems. Experiments on multi-task learning problems demonstrate the efficiency of the proposed algorithms. Copyright 2009.
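The approach the abstract describes can be sketched in NumPy: the proximal operator of the trace norm is singular-value soft-thresholding, and wrapping the proximal gradient step in Nesterov-style momentum yields the accelerated O(1/k²) scheme. This is a minimal illustration, not the paper's implementation; the function names, the fixed step size 1/L, and the toy quadratic loss in the usage example are assumptions for demonstration.

```python
import numpy as np

def svt(X, tau):
    """Singular-value soft-thresholding: the proximal operator of tau * ||X||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)      # shrink singular values toward zero
    return (U * s) @ Vt               # rebuild the (possibly lower-rank) matrix

def accelerated_gradient(grad, W0, L, lam, iters=200):
    """FISTA-style accelerated proximal gradient for min_W f(W) + lam * ||W||_*.

    grad: gradient of the smooth loss f; L: Lipschitz constant of grad.
    """
    W = W0.copy()                     # current iterate
    Z = W0.copy()                     # extrapolated (momentum) point
    t = 1.0
    for _ in range(iters):
        W_next = svt(Z - grad(Z) / L, lam / L)          # prox-gradient step at Z
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Z = W_next + ((t - 1.0) / t_next) * (W_next - W)  # momentum extrapolation
        W, t = W_next, t_next
    return W

# Toy usage: f(W) = 0.5 * ||W - M||_F^2, so grad(W) = W - M and L = 1.
# The minimizer of f(W) + lam * ||W||_* is then svt(M, lam).
M = np.diag([3.0, 1.0])
W = accelerated_gradient(lambda W: W - M, np.zeros((2, 2)), L=1.0, lam=0.5)
```

For this quadratic loss the fixed point is reached immediately, since Z - grad(Z) = M at every step; on a general smooth loss the momentum term is what improves the rate from O(1/k) to O(1/k²).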

name of conference

  • Proceedings of the 26th Annual International Conference on Machine Learning

published proceedings

  • Proceedings of the 26th Annual International Conference on Machine Learning

altmetric score

  • 3

author list (cited authors)

  • Ji, S., & Ye, J.

citation count

  • 363

complete list of authors

  • Ji, Shuiwang; Ye, Jieping

publication date

  • June 2009