On Distributed Non-convex Optimization: Projected Subgradient Method For Weakly Convex Problems in Networks (Academic Article)

abstract

  • The stochastic subgradient method is a widely used algorithm for solving large-scale optimization problems arising in machine learning. Often these problems are neither smooth nor convex. Recently, Davis et al. [1-2] characterized the convergence of the stochastic subgradient method for the weakly convex case, which encompasses many important applications (e.g., robust phase retrieval, blind deconvolution, biconvex compressive sensing, and dictionary learning). In practice, distributed implementations of the projected stochastic subgradient method (stoDPSM) are used to speed up risk minimization. In this paper, we propose a distributed implementation of the stochastic subgradient method with theoretical guarantees. Specifically, we establish the global convergence of stoDPSM using the Moreau envelope as a stationarity measure. Furthermore, under a so-called sharpness condition, we show that the deterministic variant (DPSM), with proper initialization, converges linearly to the set of sharp minima using a geometrically diminishing step size. We provide numerical experiments to support our theoretical analysis.
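  • The abstract above describes agents on a network that mix their iterates with neighbors and then take a projected subgradient step with a geometrically diminishing step size. The following is a minimal sketch of that iteration, not the authors' implementation: the ring network, the box constraint, the toy sharp objectives f_i(x) = |x - b_i|, and all parameter values are illustrative assumptions.

```python
import numpy as np

def project_box(x, lo=-10.0, hi=10.0):
    # Euclidean projection onto a box (an assumed, illustrative constraint set).
    return np.clip(x, lo, hi)

def dpsm(b, W, x0, alpha0=1.0, q=0.9, iters=200):
    # Sketch of a distributed projected subgradient iteration:
    #   b[i] : data of agent i, with local objective f_i(x) = |x - b_i| (toy example)
    #   W    : doubly stochastic mixing matrix encoding the network
    # Each agent averages with its neighbors, then steps along a local
    # subgradient with a geometrically diminishing step size alpha0 * q**k.
    x = x0.copy()
    for k in range(iters):
        alpha = alpha0 * q**k        # geometrically diminishing step size
        mixed = W @ x                # consensus step: mix with neighbors
        g = np.sign(mixed - b)       # subgradient of f_i at the mixed point
        x = project_box(mixed - alpha * g)
    return x

# Example: 5 agents on a ring, doubly stochastic lazy-averaging weights.
n = 5
b = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25
x = dpsm(b, W, x0=np.zeros(n))
# Agents reach approximate consensus near the minimizer of the average objective.
```

    With these toy objectives the average cost is minimized at the median of the b values, so all agents should cluster near 3.0; the geometric decay of the step size is what lets the iterates settle rather than oscillate around the sharp minimum.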

altmetric score

  • 1.25

author list (cited authors)

  • Chen, S., Garcia, A., & Shahrampour, S.

citation count

  • 0

complete list of authors

  • Chen, Shixiang||Garcia, Alfredo||Shahrampour, Shahin

publication date

  • April 2020