15th EUROPT Workshop on Advances in Continuous Optimization

Montréal, Canada, 12–14 July 2017

Variational Analysis and Nonsmooth Optimization III

Jul 12, 2017 03:30 PM – 05:10 PM

Location: Amphithéâtre Banque Nationale

Chaired by Tim Hoheisel

4 Presentations

  • 03:30 PM - 03:55 PM

    Decentralized Stochastic Optimization with Multi-Agent Mirror Descent

    • Michael Rabbat, presenter, Department of Electrical and Computer Engineering, McGill University

    We develop a decentralized algorithm for stochastic composite optimization problems, combining ideas from consensus-based multi-agent optimization and the celebrated mirror descent algorithm. When the composite regularization term is strongly convex, the proposed method is shown to converge at a rate of $O(1/k)$, where $k$ is the number of iterations executed. This is known to be the best possible rate of convergence for the class of problems considered. Moreover, theory and experiments show that the speedup of the proposed method (the factor by which the number of iterations needed to reach a desired accuracy decreases) scales linearly with the number of nodes in the network.
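
    A minimal runnable sketch of the idea follows, assuming a Euclidean mirror map (under which the mirror step reduces to an ordinary gradient step), a ring network, and noisy quadratic local objectives; it illustrates the consensus-plus-mirror-descent structure rather than the authors' exact method.

    import numpy as np

    n_agents, dim, n_iters = 5, 3, 200
    rng = np.random.default_rng(0)

    # Doubly stochastic mixing matrix for a ring network (assumption).
    W = np.zeros((n_agents, n_agents))
    for i in range(n_agents):
        W[i, i] = 0.5
        W[i, (i + 1) % n_agents] = 0.25
        W[i, (i - 1) % n_agents] = 0.25

    # Agent i holds a private strongly convex f_i(x) = 0.5*||x - a_i||^2.
    targets = rng.normal(size=(n_agents, dim))
    x = np.zeros((n_agents, dim))

    for k in range(1, n_iters + 1):
        x = W @ x                                  # consensus with neighbors
        noise = 0.1 * rng.normal(size=x.shape)     # stochastic gradient noise
        x -= (1.0 / k) * ((x - targets) + noise)   # mirror (gradient) step

    print("consensus estimate:", x.mean(axis=0))
    print("true minimizer:    ", targets.mean(axis=0))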

  • 03:55 PM - 04:20 PM

    Epsilon Subdifferential Computation in Computational Convex Analysis

    • Deepak Kumar, presenter, University of British Columbia
    • Yves Lucet, University of British Columbia
    • Anuj Bajaj, University of British Columbia
    • Warren Hare, University of British Columbia

    Computational convex analysis focuses on the computation of convex operators that routinely appear in convex analysis, e.g., the Legendre-Fenchel conjugate and the Moreau envelope. After recalling which computations the (open-source) CCA numerical library supports, we will shift the focus from convex transforms, such as the Legendre-Fenchel transform, to convex operators, such as the subdifferential. We will present results on efficiently computing the epsilon subdifferential of a convex univariate function.
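
    As a small illustration (not the CCA library's algorithm), the epsilon subdifferential can be approximated on a grid via the characterization $s \in \partial_\varepsilon f(x)$ iff $f(x) + f^*(s) - sx \le \varepsilon$, where $f^*$ is the Legendre-Fenchel conjugate; the test function below is an assumption.

    import numpy as np

    f = lambda x: 0.5 * x**2            # convex test function (assumption)
    xs = np.linspace(-2.0, 2.0, 401)    # primal grid
    ss = np.linspace(-2.0, 2.0, 401)    # dual (slope) grid

    # Discrete Legendre-Fenchel conjugate: f*(s) = max_x { s*x - f(x) }.
    conj = np.max(ss[:, None] * xs[None, :] - f(xs)[None, :], axis=1)

    def eps_subdifferential(x, eps, tol=1e-12):
        """Slopes s with f(x) + f*(s) - s*x <= eps (grid approximation)."""
        gap = f(x) + conj - ss * x
        inside = ss[gap <= eps + tol]
        if inside.size == 0:
            return None
        return (float(inside.min()), float(inside.max()))

    # For f(x) = x^2/2 we have f* = f, and the epsilon subdifferential
    # at x is the interval [x - sqrt(2*eps), x + sqrt(2*eps)].
    print(eps_subdifferential(1.0, 0.125))  # approx (0.5, 1.5)
    print(eps_subdifferential(1.0, 0.0))    # approx (1.0, 1.0): the gradient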

  • 04:20 PM - 04:45 PM

    Recent Developments on Augmented Lagrangian Methods in Finite and Infinite Dimensions

    • Daniel Steck, presenter
    • Christian Kanzow, JMU Würzburg
    • Daniel Wachsmuth, JMU Würzburg

    The augmented Lagrangian method (ALM) is a classical technique for nonlinear programming which has strong connections to convex analysis. In this talk, we describe some recent developments on ALMs in finite dimensions, and discuss how such methods can be applied to generic (non-convex) optimization problems in Banach and Hilbert spaces. It turns out that the resulting algorithm is fairly general and can deal with many classes of constraint functions. To illustrate this, we give a variety of examples, including standard problems from variational analysis as well as optimal control problems in infinite dimensions.
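
    For context, a minimal finite-dimensional sketch of the classical ALM for min f(x) s.t. c(x) = 0 is given below; the problem data and the plain gradient-descent inner solver are illustrative assumptions, not the infinite-dimensional algorithm of the talk.

    import numpy as np

    f      = lambda x: x[0]**2 + 2.0 * x[1]**2
    grad_f = lambda x: np.array([2.0 * x[0], 4.0 * x[1]])
    c      = lambda x: x[0] + x[1] - 1.0           # scalar equality constraint
    grad_c = lambda x: np.array([1.0, 1.0])

    x, lam, rho = np.zeros(2), 0.0, 10.0

    for outer in range(20):
        # Inner loop: approximately minimize the augmented Lagrangian
        #   L(x) = f(x) + lam*c(x) + (rho/2)*c(x)**2.
        for inner in range(500):
            x = x - 0.01 * (grad_f(x) + (lam + rho * c(x)) * grad_c(x))
        # First-order multiplier update.
        lam += rho * c(x)

    print("x      =", x)     # approx [2/3, 1/3]
    print("lambda =", lam)   # approx -4/3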

  • 04:45 PM - 05:10 PM

    A First-Order Method for Nonconvex Optimization without Lipschitz Gradients

    • Damek Davis, presenter, Cornell University
    • Dmitriy Drusvyatskiy, University of Washington, Seattle

    First-order algorithms for nonconvex optimization problems travel along gradient directions in order to reach stationary points. The standard assumption imposed on the gradients is that they vary in a Lipschitz manner. This assumption leads to global convergence, but it restricts the applicability of convergence guarantees to functions that grow at most quadratically. In this talk, we introduce a method that lifts this quadratic growth restriction while maintaining global convergence guarantees. We furthermore provide sufficient, easy-to-check conditions that guarantee global convergence of the proposed algorithm, as well as a new approach, based on Ekeland's variational principle, for determining how quickly the proposed algorithm converges.
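
    As a hedged baseline (not the method of the talk), gradient descent with Armijo backtracking adapts its step size locally and therefore needs no global Lipschitz constant; on f(x) = x^4, whose gradient is not globally Lipschitz, a fixed step diverges from x0 = 3 while backtracking converges.

    f    = lambda x: x**4
    grad = lambda x: 4.0 * x**3

    def backtracking_gd(x, iters=100, t0=1.0, beta=0.5, sigma=1e-4):
        for _ in range(iters):
            g, t = grad(x), t0
            # Shrink t until the Armijo sufficient-decrease test holds.
            while f(x - t * g) > f(x) - sigma * t * g**2:
                t *= beta
            x -= t * g
        return x

    x_fixed = 3.0
    for _ in range(5):            # fixed step 0.1 on x^4 from x0 = 3
        x_fixed -= 0.1 * grad(x_fixed)

    print("fixed step:  ", x_fixed)               # diverges to huge magnitude
    print("backtracking:", backtracking_gd(3.0))  # small, near the minimizer 0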
