15th EUROPT Workshop on Advances in Continuous Optimization

Montréal, Canada, July 12–14, 2017

Augmented Lagrangian and Proximal Point Methods

July 12, 2017, 11:30 – 12:45

Room: TD Assurance Meloche Monnex

Chaired by Paulo Oliveira

3 presentations

  • 11:30 - 11:55

    A scaling proximal point algorithm in domains of positivity

    • Paulo Oliveira, presenter, Federal University of Rio de Janeiro
    • Ronaldo Gregorio, Federal Rural University of Rio de Janeiro
    • Charlan Alves, Federal University of Rio de Janeiro

    This paper presents a proximal point method for convex minimization problems on a special class of Hadamard manifolds called homogeneous domains of positivity. In a certain sense, the method is related to a particularization of the de Rham decomposition theorem for symmetric spaces, and it computes the minimum of Riemannian convex functions. The homogeneous domains of positivity considered in this work are those of reducible type. Computational experiments with Riemannian averages of symmetric positive definite matrices, such as those arising in DTI problems, show that the method is potentially useful in practical problems.
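
    The abstract above mentions Riemannian averages of symmetric positive definite (SPD) matrices as a test problem. As a rough illustration only, and not the scaling proximal point algorithm of the talk, the Python sketch below runs the classical fixed-point (Riemannian gradient) iteration for the Karcher mean with NumPy and SciPy; the function name, starting point, and step size are assumptions made here.

        import numpy as np
        from scipy.linalg import expm, inv, logm, sqrtm

        def karcher_mean(mats, step=1.0, tol=1e-10, max_iter=100):
            # Illustrative baseline: fixed-point / Riemannian gradient iteration
            # for the Karcher mean of SPD matrices (not the talk's algorithm).
            X = sum(mats) / len(mats)          # start from the arithmetic mean
            for _ in range(max_iter):
                S = np.real(sqrtm(X))
                S_inv = inv(S)
                # average of the logarithmic maps of the data points at X
                G = sum(np.real(logm(S_inv @ A @ S_inv)) for A in mats) / len(mats)
                X = S @ np.real(expm(step * G)) @ S
                if np.linalg.norm(G, "fro") < tol:
                    break
            return X

    For two matrices the Karcher mean coincides with the matrix geometric mean (the geodesic midpoint), which gives a quick sanity check for the output.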

  • 11:55 - 12:20

    Extragradient method in optimization: Convergence and complexity

    • Phong Nguyen, presenter

    We consider the extragradient method for minimizing the sum of two functions, the first one smooth and the second one convex. Under the Kurdyka-Łojasiewicz assumption, we prove that the sequence produced by the extragradient method converges to a critical point of the problem and has finite length. The analysis is extended to the case when both functions are convex. We provide a 1/k convergence rate, which is classical for gradient methods. Furthermore, we show that the recent small-prox complexity result can be applied to this method. Studying the extragradient method is also an occasion to describe exact line search for proximal decomposition methods. We provide implementation details for this scheme on the ℓ1-regularized least squares problem and give numerical results which suggest that combining non-accelerated methods with exact line search can be a competitive choice.
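
    As a hedged illustration of the kind of scheme discussed above, and not necessarily the exact variant or line search analyzed in the talk, the Python sketch below applies an extragradient-style proximal update to the ℓ1-regularized least squares problem; the function names and the fixed step size are assumptions made here.

        import numpy as np

        def soft_threshold(v, t):
            # proximal operator of t * ||.||_1
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def extragradient_lasso(A, b, lam, step, n_iter=500):
            # Extragradient-style scheme for  min_x 0.5*||A x - b||^2 + lam*||x||_1:
            # the gradient of the smooth term is re-evaluated at the intermediate point.
            grad = lambda z: A.T @ (A @ z - b)
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                y = soft_threshold(x - step * grad(x), step * lam)  # forward-backward step
                x = soft_threshold(x - step * grad(y), step * lam)  # extragradient correction
            return x

    A fixed step of the order of 1 / ||A||_2^2 (the inverse of the Lipschitz constant of the smooth gradient) is the usual safe choice here; an exact line search of the kind mentioned in the abstract would adapt this step instead.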

  • 12:20 - 12:45

    An augmented Lagrangian method for equality constrained optimization with rapid infeasibility detection capabilities

    • Paul Armand, Université de Limoges
    • Ngoc Nguyen Tran, presenter, Université de Limoges

    We present a primal-dual augmented Lagrangian method, with rapid infeasibility detection capabilities, for solving an equality constrained minimization problem. A parameter is introduced to scale the objective function and, in case of infeasibility, to force the iterates to converge to an infeasible stationary point. It is shown, under mild assumptions, that whenever the algorithm converges to an infeasible stationary point, the rate of convergence is quadratic. Numerical experiments show that our new approach is as good as the original one when the algorithm converges to a local minimum, but much more efficient in the infeasible case.
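
    The abstract describes a primal-dual augmented Lagrangian method with an objective-scaling parameter for infeasibility detection. That variant is not reproduced here; as background only, the Python sketch below shows the classical method of multipliers for equality constrained problems using SciPy, with function names and parameter values chosen purely for illustration.

        import numpy as np
        from scipy.optimize import minimize

        def method_of_multipliers(f, c, x0, rho=10.0, n_outer=20, tol=1e-8):
            # Classical augmented Lagrangian iteration for  min f(x)  s.t.  c(x) = 0,
            # where c returns a 1-D array of constraint values.
            x = np.asarray(x0, dtype=float)
            lam = np.zeros(len(np.atleast_1d(c(x))))
            for _ in range(n_outer):
                def aug_lag(z):
                    cz = np.atleast_1d(c(z))
                    return f(z) + lam @ cz + 0.5 * rho * cz @ cz
                x = minimize(aug_lag, x, method="BFGS").x   # approximate inner solve
                cx = np.atleast_1d(c(x))
                lam = lam + rho * cx                        # first-order multiplier update
                if np.linalg.norm(cx) < tol:
                    break
            return x, lam

    For instance, with f(x) = x[0]**2 + x[1]**2 and c(x) = np.array([x[0] + x[1] - 1.0]), the iterates approach (0.5, 0.5) with a multiplier close to -1. On an infeasible problem this basic scheme has no detection mechanism and its multipliers typically diverge; rapid infeasibility detection is precisely the gap the presented method addresses.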
