15th EUROPT Workshop on Advances in Continuous Optimization

Montréal, Canada, July 12–14, 2017

Variational Analysis and Nonsmooth Optimization III

July 12, 2017, 15:30 – 17:10

Room: Amphithéâtre Banque Nationale

Chaired by Tim Hoheisel

4 presentations

  • 15:30 – 15:55

    Decentralized Stochastic Optimization with Multi-Agent Mirror Descent

    • Michael Rabbat, presenter, Department of Electrical and Computer Engineering, McGill University

    We develop a decentralized algorithm for stochastic composite optimization problems, combining ideas from consensus-based multi-agent optimization and the celebrated mirror descent algorithm. When the composite regularization term is strongly convex, the proposed method is shown to converge at a rate of $O(1/k)$, where $k$ is the number of iterations executed. This is known to be the best possible rate of convergence for the class of problems considered. Moreover, theory and experiments show that the speedup of the proposed method (the number of iterations required to reach a desired level of accuracy) scales linearly with the number of nodes in the network.
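
    Not the method of the talk, but for orientation, a minimal sketch of the generic consensus-plus-mirror-descent pattern the abstract combines, specialized to the Euclidean mirror map (where the mirror step reduces to a proximal gradient step); the names W, grad_oracle, prox, and step_size are illustrative assumptions:

        import numpy as np

        def decentralized_mirror_descent(W, grad_oracle, prox, x0, steps, step_size):
            """Generic consensus + mirror descent (Euclidean mirror map).

            W           : (n, n) doubly stochastic mixing matrix of the network
            grad_oracle : grad_oracle(i, x, k) -> stochastic gradient at node i
            prox        : prox(z, t) -> prox map of the composite regularizer
            x0          : (n, d) initial iterates, one row per node
            step_size   : step_size(k) -> step size at iteration k
            """
            x = x0.copy()
            n, _ = x.shape
            for k in range(steps):
                x = W @ x                            # consensus: average with neighbors
                t = step_size(k)
                for i in range(n):
                    g = grad_oracle(i, x[i], k)      # stochastic gradient at node i
                    x[i] = prox(x[i] - t * g, t)     # forward step + prox of regularizer
            return x.mean(axis=0)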

  • 15:55 – 16:20

    Epsilon Subdifferential Computation in Computational Convex Analysis

    • Deepak Kumar, presenter, University of British Columbia
    • Yves Lucet, University of British Columbia
    • Anuj Bajaj, University of British Columbia
    • Warren Hare, University of British Columbia

    Computational convex analysis focuses on the computation of convex operators that routinely appear in convex analysis, e.g., the Legendre-Fenchel conjugate and the Moreau envelope. After recalling which computations the (open-source) CCA numerical library supports, we will shift the focus from convex transforms such as the Legendre-Fenchel transform to convex operators such as the subdifferential. We will present results on efficiently computing the epsilon-subdifferential of a convex univariate function.
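
    As a concrete illustration (a numerical approximation on a bounded grid, not the talk's algorithm), the epsilon-subdifferential of a convex univariate f at a point xbar is the interval of slopes g with f(y) >= f(xbar) + g*(y - xbar) - eps for all y, and it can be bracketed directly from that definition; the helper below and its grid parameters are assumptions for illustration:

        import numpy as np

        def eps_subdifferential(f, xbar, eps, lo=-10.0, hi=10.0, m=100001):
            """Bracket the epsilon-subdifferential of convex univariate f at xbar.

            g lies in the eps-subdifferential iff
                f(y) >= f(xbar) + g * (y - xbar) - eps   for all y,
            so on a grid of y values the admissible slopes form an interval.
            """
            y = np.linspace(lo, hi, m)
            y = y[np.abs(y - xbar) > 1e-12]            # exclude y == xbar
            slopes = (f(y) - f(xbar) + eps) / (y - xbar)
            g_hi = slopes[y > xbar].min()              # tightest bound from y > xbar
            g_lo = slopes[y < xbar].max()              # tightest bound from y < xbar
            return g_lo, g_hi

        # Example: f(x) = |x| at xbar = 1 with eps = 0.5. The exact interval is
        # [0.5, 1]; the grid estimate of the upper endpoint tightens toward 1 as
        # hi grows, since the infimum over y > xbar is approached as y -> inf.
        print(eps_subdifferential(np.abs, 1.0, 0.5))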

  • 16:20 – 16:45

    Recent Developments on Augmented Lagrangian Methods in Finite and Infinite Dimensions

    • Daniel Steck, presenter
    • Christian Kanzow, JMU Würzburg
    • Daniel Wachsmuth, JMU Würzburg

    The augmented Lagrangian method (ALM) is a classical technique for nonlinear programming which has strong connections to convex analysis. In this talk, we describe some recent developments on ALMs in finite dimensions, and discuss how such methods can be applied to generic (non-convex) optimization problems in Banach and Hilbert spaces. It turns out that the resulting algorithm is fairly general and can deal with many classes of constraint functions. To illustrate this, we give a variety of examples, including standard problems from variational analysis as well as optimal control problems in infinite dimensions.
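
    As background for the finite-dimensional case, a minimal sketch of the classical first-order augmented Lagrangian iteration for equality constraints; the inner solver, update constants, and function names are illustrative assumptions, not the authors' infinite-dimensional algorithm:

        import numpy as np
        from scipy.optimize import minimize

        def augmented_lagrangian(f, c, x0, lam0, rho=10.0, outer=30, tol=1e-8):
            """Textbook ALM for  min f(x)  s.t.  c(x) = 0.

            Each outer iteration approximately minimizes the augmented Lagrangian
                L(x) = f(x) + lam . c(x) + (rho / 2) ||c(x)||^2
            in x, then updates the multiplier and, if the constraint violation
            did not shrink enough, increases the penalty parameter rho.
            """
            x, lam = np.asarray(x0, float), np.asarray(lam0, float)
            viol = np.inf
            for _ in range(outer):
                L = lambda z: f(z) + lam @ c(z) + 0.5 * rho * (c(z) @ c(z))
                x = minimize(L, x, method="BFGS").x      # inner unconstrained solve
                cx = c(x)
                lam = lam + rho * cx                     # first-order multiplier update
                if np.linalg.norm(cx) > 0.25 * viol:     # insufficient progress:
                    rho *= 10.0                          # tighten the penalty
                viol = np.linalg.norm(cx)
                if viol < tol:
                    break
            return x, lam

        # Example: min x1^2 + x2^2  s.t.  x1 + x2 = 1; solution (0.5, 0.5), lam = -1.
        f = lambda x: x @ x
        c = lambda x: np.array([x[0] + x[1] - 1.0])
        print(augmented_lagrangian(f, c, np.zeros(2), np.zeros(1)))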

  • 16:45 – 17:10

    A First-Order Method for Nonconvex Optimization without Lipschitz Gradients

    • Damek Davis, presenter, Cornell University
    • Dmitriy Drusvyatskiy, University of Washington, Seattle

    First-order algorithms for nonconvex optimization problems travel along gradient directions in order to reach stationary points. The standard assumption imposed on the gradients is that they vary in a Lipschitz manner. This assumption leads to global convergence, but restricts the applicability of the convergence guarantees to functions that grow at most quadratically. In this talk, we introduce a method that lifts this quadratic growth restriction while maintaining global convergence guarantees. We furthermore provide sufficient, easy-to-check conditions that guarantee global convergence of the proposed algorithm, as well as a new approach, based on Ekeland's variational principle, for determining how quickly it converges.
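
    The abstract does not spell out the method, so for context only: a standard first-order baseline that avoids a global Lipschitz constant is gradient descent with Armijo backtracking, where the step size is found adaptively at each iterate (all names and constants below are illustrative):

        import numpy as np

        def gradient_descent_armijo(f, grad, x0, iters=500, t0=1.0,
                                    beta=0.5, sigma=1e-4, gtol=1e-8):
            """Gradient descent with Armijo backtracking line search.

            No global Lipschitz constant for grad f is used: the step t is
            halved until the sufficient-decrease condition
                f(x - t g) <= f(x) - sigma * t * ||g||^2
            holds at the current iterate.
            """
            x = np.asarray(x0, float)
            for _ in range(iters):
                g = grad(x)
                if np.linalg.norm(g) < gtol:             # approximate stationarity
                    break
                t = t0
                while f(x - t * g) > f(x) - sigma * t * (g @ g):
                    t *= beta                            # backtrack
                x = x - t * g
            return x

        # Example: a smooth function whose gradient x**3 - 1 is not globally
        # Lipschitz; the method still converges to the stationary point (1, 1).
        f = lambda x: 0.25 * np.sum(x**4) - np.sum(x)
        grad = lambda x: x**3 - 1.0
        print(gradient_descent_armijo(f, grad, np.array([5.0, -3.0])))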
