Some New Complexity Results for Composite Optimization

Speaker: Guanghui (George) Lan, Georgia Institute of Technology

Abstract: Composite optimization, in which the objective function is given by the summation of several smooth and nonsmooth components, is prevalent in machine learning models and applications. Theoretical studies on the solution efficiency of this class of problems, however, are still limited. In this talk, we introduce a new set of optimization algorithms that greatly advance our understanding of the complexity of composite optimization. More specifically, we discuss: 1) gradient sliding algorithms for nonsmooth composite optimization, which can skip gradient computations for the smooth component from time to time and thus achieve separate and optimal complexity bounds for both gradient and subgradient evaluations; and 2) accelerated gradient sliding methods for smooth composite and saddle-point optimization, which also lead to significant savings in gradient computations without slowing down the overall optimal rate of convergence. Numerical results will be presented to showcase the effectiveness of these gradient sliding methods.
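To make the gradient-sliding idea concrete, below is a minimal Python sketch of the basic structure: the gradient of the smooth component f is evaluated only once per outer iteration, while an inner loop performs many cheaper subgradient steps on the nonsmooth component h. The oracle names grad_f and subgrad_h and the constant step sizes beta and gamma are illustrative assumptions; the actual algorithms in the talk use carefully chosen parameter sequences to attain the stated optimal bounds.

```python
import numpy as np

def gradient_sliding(grad_f, subgrad_h, x0, outer_iters=50, inner_iters=20,
                     beta=1.0, gamma=0.01):
    """Simplified gradient-sliding loop for min_x f(x) + h(x),
    with f smooth (gradient oracle grad_f) and h nonsmooth
    (subgradient oracle subgrad_h). Step sizes are illustrative,
    not the paper's parameter sequences."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        g = grad_f(x)                 # one gradient of f per outer iteration
        u = x.copy()
        u_avg = np.zeros_like(x)
        for _ in range(inner_iters):  # many cheap subgradient steps on h
            # subgradient step on the linearized model
            #   <g, u> + h(u) + (beta/2) * ||u - x||^2
            d = g + subgrad_h(u) + beta * (u - x)
            u = u - gamma * d
            u_avg += u
        x = u_avg / inner_iters       # average the inner iterates
    return x
```

As a usage example under the same assumptions, a LASSO-type instance with f(x) = 0.5 * ||Ax - b||^2 and h(x) = lam * ||x||_1 can be run as follows; note how the expensive matrix products for grad_f occur only outer_iters times, while subgrad_h is called outer_iters * inner_iters times:

```python
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((40, 10)), rng.standard_normal(40), 0.1
x = gradient_sliding(lambda x: A.T @ (A @ x - b),   # gradient of the smooth part
                     lambda x: lam * np.sign(x),    # subgradient of lam * ||x||_1
                     x0=np.zeros(10))
```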