Optimization Days 2024

HEC Montréal, Québec, Canada, 6–8 May 2024


WA9 - Nonlinear optimization

May 8, 2024 10:30 AM – 12:10 PM

Location: Vilnius (green)

Chaired by Nathan Allaire

4 Presentations

  • 10:30 AM - 10:55 AM

    Non-smooth regularized optimization with inexact evaluations

    • Nathan Allaire, presenter, Polytechnique Montréal (GERAD)

    We consider the solution of nonsmooth regularized optimization problems, in which the objective is the sum of a smooth term $f$ and a lower semi-continuous regularizer $h$.
    Both $f$ and $h$ may be nonconvex.
    We devise a variant of the quadratic regularization method R2 of Aravkin, Baraldi and Orban (2020), which may be viewed as a proximal-gradient method with adaptive step size, in which $f$, $h$ and the gradient of $f$ may be evaluated inexactly.
    We establish an $O(\epsilon^{-2})$ worst-case evaluation complexity bound to bring a stationarity measure below $\epsilon \in (0, 1)$ under standard assumptions, matching the best-known complexity for smooth problems and for nonsmooth problems with exact evaluations.
    We report numerical results on regularized optimization problems.
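
    As a rough illustration of the kind of step described above, the following is a minimal Julia sketch of an adaptive quadratic-regularization (proximal-gradient) loop with exact evaluations; the parameter values, acceptance test, and update rules are illustrative assumptions, not the authors' algorithm, and the inexact-evaluation machinery of the talk is omitted.

      using LinearAlgebra

      # Minimal sketch of an R2-style adaptive quadratic-regularization loop.
      # prox_h(v, ν) is assumed to return argmin_s h(s) + ‖s - v‖² / (2ν).
      function r2_sketch(f, ∇f, h, prox_h, x; σ = 1.0, ϵ = 1e-4, maxiter = 1000)
          for _ in 1:maxiter
              g = ∇f(x)
              s = prox_h(x - g / σ, 1 / σ) - x   # proximal-gradient step
              σ * norm(s) ≤ ϵ && break           # crude stationarity test
              ΔM = -dot(g, s) - σ / 2 * norm(s)^2 + h(x) - h(x + s)  # predicted decrease
              ρ = (f(x) + h(x) - f(x + s) - h(x + s)) / ΔM           # actual / predicted
              if ρ ≥ 1e-4
                  x += s                # accept the step
                  σ = max(σ / 2, 1e-8)  # successful step: relax regularization
              else
                  σ *= 2                # unsuccessful step: strengthen regularization
              end
          end
          return x
      end

    For instance, with h = λ‖·‖₁ the proximal operator reduces to soft thresholding, prox_h(v, ν) = sign.(v) .* max.(abs.(v) .- λ*ν, 0).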

  • 10:55 AM - 11:20 AM

    A Proximal Modified Quasi-Newton Method for Nonsmooth Regularized Optimization

    • Youssef Diouane, ISAE-SUPAERO, Université de Toulouse, Toulouse, 31055 Cedex 4, France
    • Mohamed Laghdaf Habiboullah, presenter, GERAD - Polytechnique Montréal
    • Dominique Orban, GERAD - Polytechnique Montréal

    We develop R2N, a modified quasi-Newton method for minimizing the sum of a continuously differentiable function f and a lower semi-continuous, prox-bounded function h, subject to bound constraints.
    Both f and h may be nonconvex.
    At each iteration, our method computes a step by minimizing the sum of a convex quadratic model of f, a model of h, and an adaptive quadratic regularization term.
    A step may be computed using the R2 method (Aravkin, Baraldi and Orban, 2020), or the variant R2DH, in which the Hessian model of f is diagonal, when h is separable.
    We establish convergence of a first-order stationarity measure to zero for both bounded and unbounded Hessian approximations.
    Under a Lipschitz assumption, we derive a worst-case evaluation complexity bound that matches the best-known bound for bounded Hessian approximations, and that deteriorates with the growth of the approximations when the latter are unbounded, as in (Leconte and Orban, 2023).
    We describe our implementation in Julia and report numerical experience on inverse problems.
    Additionally, we discuss our findings on a minimum-rank matrix completion problem.
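
    In symbols (our notation; the abstract does not fix one), the step computation described above amounts to the subproblem

    $$ s_k \in \operatorname{argmin}_{s} \; \nabla f(x_k)^T s + \tfrac{1}{2} s^T B_k s + \psi(s; x_k) + \tfrac{\sigma_k}{2} \|s\|^2, $$

    where $B_k$ is the Hessian of the convex quadratic model of f (diagonal in the R2DH variant), $\psi(\cdot; x_k)$ is the model of h about $x_k$, and $\sigma_k > 0$ is the adaptive regularization parameter.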

  • 11:20 AM - 11:45 AM

    A nonsmooth exact penalty method for equality-constrained optimization: complexity and implementation

    • Youssef Diouane, presenter, Polytechnique Montréal
    • Maxence Gollier, Institut Polytechnique de Paris
    • Dominique Orban, GERAD - Polytechnique Montréal

    We describe implementations of nonsmooth L1- and L2-norm exact penalty methods for equality-constrained smooth optimization. Exact penalty methods are typically not implemented because they involve solving nonsmooth problems. We propose implementations based on proximal methods whose proximal operators depend on a first-order model of the constraints. We establish convergence to a KKT point under a constraint qualification when the objective and the constraints have a Lipschitz-continuous gradient and Jacobian, respectively. Under those standard assumptions, we establish a worst-case complexity bound that matches the best-known bounds for equality-constrained smooth optimization. Finally, we compute closed-form solutions for the proximal operators, describe our Julia implementations, and report on numerical experience.
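
    For concreteness, in our own notation: for minimizing f(x) subject to c(x) = 0, the exact penalty formulation reads

    $$ \min_x \; f(x) + \nu \|c(x)\|_p, \qquad p \in \{1, 2\}, $$

    where $\nu > 0$ is the penalty parameter; the proximal steps then act on the first-order model $\nu \|c(x_k) + J(x_k) s\|_p$ of the nonsmooth term, with $J$ the Jacobian of $c$.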

  • 11:45 AM - 12:10 PM

    Warped proximal iterations for solving nonmonotone inclusions and applications

    • Dimitri Papadimitriou, presenter, ULB

    In a real Hilbert space H, the monotone inclusion problem consists in finding a zero of a set-valued maximally monotone operator A. The term "warped proximal iteration" was recently introduced as a generalization of the proximal point algorithm for finding a zero of a maximally monotone operator A acting on H. Nevertheless, the maximal monotonicity of A restricts its applicability to the class of convex optimization problems as well as to operator splitting methods for composite monotone inclusions. Solving general nonmonotone inclusions, i.e., inclusions in which the operator A is nonmonotone, is an open and challenging research problem. For this purpose, the notion of r-(co)hypomonotonicity has been introduced to guarantee the convergence of the generated sequence. From this perspective, our first objective is to extend the definition of r-hypomonotonicity. The second is to investigate the weak convergence properties of the warped proximal iteration as well as its various applications to (constrained) nonconvex optimization problems. In particular, we focus on finding KKT points for a class of nonconvex quadratic programming problems with equality constraints.
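
    For reference, in one standard formulation from the literature (notation ours, simplified): the classical proximal point iteration is $x_{k+1} = (\mathrm{Id} + \gamma A)^{-1}(x_k)$, and the warped proximal iteration replaces the resolvent with a warped resolvent built from a kernel operator $K$,

    $$ x_{k+1} = (K + A)^{-1}(K x_k). $$

    In the same vein, one common definition states that $A$ is $r$-hypomonotone if $\langle x - y, u - v \rangle \geq -r \|x - y\|^2$ for all $(x, u), (y, v)$ in the graph of $A$, with $\|u - v\|^2$ in place of $\|x - y\|^2$ for $r$-cohypomonotonicity.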
