15th EUROPT Workshop on Advances in Continuous Optimization

Montréal, Canada, July 12–14, 2017

Advances in Large-scale Optimization

July 12, 2017, 13:30 – 15:10

Room: PWC

Chaired by Francesco Rinaldi

4 presentations

  • 13:30 – 13:55

    A two-phase projected gradient algorithm for quadratic programming problems with bounds on the variables and a single linear constraint

    • Daniela di Serafino, Dept. of Mathematics and Physics, University of Campania "Luigi Vanvitelli", Italy
    • Gerardo Toraldo, presenter, Dept. of Mathematics and Applications, University of Naples Federico II, Italy
    • Marco Viola, University of Rome La Sapienza, Italy

    Starting from the GPCG algorithm for Bound-constrained convex Quadratic Programming (BQP) problems by Moré and Toraldo [SIAM J. Optim., 1, 1991], we propose an algorithmic framework for the solution of BQP problems with a single linear equality constraint, which alternates between two phases: an identification phase, which performs gradient projection steps aimed at identifying a suitable face to be explored, and a minimization phase, which reduces the objective function on that face. In deciding when to terminate the minimization phase, we use a proportionality criterion based on the ratio between a measure of optimality within the reduced space and a measure of the violation of the KKT conditions, which extends the ideas of Dostál [SIAM J. Optim., 3, 1997] and Friedlander and Martínez [SIAM J. Optim., 4, 1994] for BQP.
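
    The projection onto the feasible set (a box intersected with one linear equality) that underlies the gradient-projection steps can be computed by bisection on the multiplier of the equality constraint. A minimal sketch of this projection inside a plain gradient-projection loop follows; this is a generic illustration under stated assumptions (positive weights `a`, convex quadratic), not the authors' two-phase method, and all names are illustrative:

```python
import numpy as np

def project(y, a, b, lo, hi, tol=1e-12):
    """Project y onto {x : lo <= x <= hi, a^T x = b} (all entries of a > 0)
    by bisection on the multiplier lam of the equality constraint."""
    phi = lambda lam: a @ np.clip(y - lam * a, lo, hi) - b   # decreasing in lam
    lam_lo, lam_hi = -1.0, 1.0
    while phi(lam_lo) < 0:                    # bracket the root from below
        lam_lo *= 2
    while phi(lam_hi) > 0:                    # bracket the root from above
        lam_hi *= 2
    while lam_hi - lam_lo > tol:
        mid = 0.5 * (lam_lo + lam_hi)
        if phi(mid) > 0:
            lam_lo = mid
        else:
            lam_hi = mid
    return np.clip(y - 0.5 * (lam_lo + lam_hi) * a, lo, hi)

def gradient_projection(Q, c, a, b, lo, hi, x0, iters=500):
    """Plain gradient projection for 0.5*x^T Q x - c^T x over the set above."""
    L = np.linalg.eigvalsh(Q)[-1]             # Lipschitz constant of the gradient
    x = project(x0, a, b, lo, hi)
    for _ in range(iters):
        g = Q @ x - c
        x = project(x - g / L, a, b, lo, hi)
    return x
```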

  • 13:55 – 14:20

    Let's Make Block Coordinate Descent Go Fast!

    • Julie Nutini, presenter, University of British Columbia
    • Issam Laradji, University of British Columbia
    • Mark Schmidt, University of British Columbia

    Block coordinate descent (BCD) methods are widely used for large-scale numerical optimization because of their cheap iteration costs, low memory requirements, and amenability to parallelization. Three main algorithmic choices influence the performance of BCD methods: the block partitioning strategy, the block selection rule, and the block update rule. We explore these three building blocks and propose variations for each that can lead to significantly faster BCD methods. We support all of our findings with numerical results for the classic machine learning problems of logistic regression, multi-class logistic regression, support vector machines, and label propagation.
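
    As an illustration of two of these building blocks, the sketch below contrasts a cyclic block selection rule with a greedy (Gauss-Southwell-style) rule on a convex quadratic, using exact block minimization as the update rule. It is a generic sketch, not the authors' implementation, and all names are illustrative:

```python
import numpy as np

def bcd_quadratic(A, b, blocks, iters=500, rule="gauss-southwell"):
    """Minimize 0.5*x^T A x - b^T x (A symmetric positive definite) by
    block coordinate descent with exact ('matrix') block updates."""
    x = np.zeros(len(b))
    for k in range(iters):
        g = A @ x - b                         # full gradient at the current point
        if rule == "gauss-southwell":
            # greedy selection: the block with the largest gradient norm
            i = max(range(len(blocks)), key=lambda j: np.linalg.norm(g[blocks[j]]))
        else:
            i = k % len(blocks)               # cyclic selection
        B = blocks[i]
        # exact minimization over the block: solve the block Newton system
        x[B] -= np.linalg.solve(A[np.ix_(B, B)], g[B])
    return x
```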

  • 14:20 – 14:45

    New active-set Frank-Wolfe variants for minimization over the simplex and the l1-ball

    • Andrea Cristofari, presenter, Sapienza University of Rome
    • Marianna De Santis, University of Padova
    • Stefano Lucidi, Sapienza University of Rome
    • Francesco Rinaldi, University of Padova

    An active-set algorithmic framework is proposed for minimizing a function over the simplex. It combines an active-set estimate (to identify the variables that are zero at the stationary point) with Frank-Wolfe-type directions. Two steps are performed at each iteration: first, a decrease in the objective function is obtained by setting the active variables to zero and updating one non-active variable; then, a search direction is computed in the subspace of the non-active variables. Global convergence is proved, and a linear convergence rate is guaranteed under additional assumptions. The algorithm is then extended to minimization problems over the l1-ball. Numerical results are provided.
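
    For context, the classical Frank-Wolfe iteration over the unit simplex has a trivial linear minimization oracle: it returns the vertex with the smallest gradient component. A minimal sketch follows (a plain FW loop without the paper's active-set estimate; names are illustrative):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters=5000):
    """Frank-Wolfe on the unit simplex {x >= 0, sum(x) = 1}."""
    x = x0.copy()
    for k in range(iters):
        g = grad(x)
        i = np.argmin(g)                      # linear minimization oracle
        s = np.zeros_like(x)
        s[i] = 1.0                            # the minimizing vertex e_i
        gamma = 2.0 / (k + 2)                 # standard diminishing step size
        x = (1 - gamma) * x + gamma * s       # stays feasible: convex combination
    return x
```

    The same oracle idea extends to the l1-ball, where the minimizing vertex is ±e_i for the gradient component of largest magnitude.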

  • 14:45 – 15:10

    Quasi-Newton-based preconditioning techniques for Nonlinear Conjugate Gradient methods

    • Andrea Caliciotti, presenter, University of Rome "La Sapienza"
    • Giovanni Fasano, University of Venice "Ca' Foscari"
    • Massimo Roma, University of Rome "La Sapienza"

    In this work we study new preconditioning strategies for Nonlinear Conjugate Gradient (NCG) methods in large-scale unconstrained optimization. We propose novel preconditioners constructed by collecting information from the NCG iterations, in order to generate an approximate inverse of the Hessian matrix. To this aim, we convey information from quasi-Newton updates. In particular, at each iteration of the NCG we consider preconditioners based on new low-rank quasi-Newton symmetric updating formulae, obtained as a by-product of the NCG iterations. Extensive numerical experience is reported on a large selection of CUTEst test problems.
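
    One standard way to build an inverse-Hessian approximation from quantities the iterations already produce is a memoryless BFGS update from the latest step/gradient-change pair (s, y). The sketch below applies such an update as a preconditioner inside a Polak-Ribière NCG loop; it is a generic illustration of the idea, not the authors' updating formulae, and all names are illustrative:

```python
import numpy as np

def memoryless_bfgs(v, s, y):
    """Apply H*v, where H = (I - rho*s*y^T)(I - rho*y*s^T) + rho*s*s^T
    is the BFGS inverse-Hessian update of the identity (memoryless BFGS)."""
    sy = s @ y
    if sy <= 1e-12:
        return v                      # skip the update if curvature is not positive
    rho = 1.0 / sy
    u = v - rho * (s @ v) * y
    w = u - rho * (y @ u) * s
    return w + rho * (s @ v) * s

def precond_ncg(f, grad, x0, iters=200, tol=1e-10):
    """Polak-Ribiere NCG preconditioned by a memoryless BFGS matrix
    built from the most recent step/gradient-change pair."""
    x = x0.astype(float).copy()
    g = grad(x)
    p = g.copy()                      # preconditioned gradient (identity at start)
    d = -p
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                # safeguard: restart with preconditioned steepest descent
            d = -p
        # Armijo backtracking line search
        alpha, f0, slope = 1.0, f(x), g @ d
        while alpha > 1e-12 and f(x + alpha * d) > f0 + 1e-4 * alpha * slope:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        p_new = memoryless_bfgs(g_new, s, y)
        beta = max(0.0, y @ p_new / (g @ p))   # preconditioned PR+ coefficient
        d = -p_new + beta * d
        x, g, p = x_new, g_new, p_new
    return x
```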