15th EUROPT Workshop on Advances in Continuous Optimization
Montréal, Canada, July 12–14, 2017
Advances in Large-scale Optimization
July 12, 2017, 13:30 – 15:10
Room: PWC
Chaired by Francesco Rinaldi
4 presentations

13:30 – 13:55
A two-phase projected gradient algorithm for quadratic programming problems with bounds on the variables and a single linear constraint
Starting from the GPCG algorithm for Bound-constrained convex Quadratic Programming (BQP) problems by Moré and Toraldo [SIAM J. Optim., 1, 1991], we propose an algorithmic framework for the solution of BQP problems with a single linear equality constraint, which alternates between two phases: an identification phase, which performs gradient projection steps aimed at identifying a suitable face to be explored, and a minimization phase, which reduces the objective function on that face. In deciding when to terminate the minimization phase, we use a proportionality criterion based on the ratio between a measure of optimality within the reduced space and a measure of the violation of the KKT conditions, which extends the ideas of Dostál [SIAM J. Optim., 3, 1997] and Friedlander and Martínez [SIAM J. Optim., 4, 1994] for BQP.
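The key primitive of the identification phase is the projection onto a box intersected with a single hyperplane. A minimal illustrative sketch of this projection, via bisection on the Lagrange multiplier of the equality constraint, follows; the function names and the bisection approach are our own choices, not the authors' implementation:

```python
import numpy as np

def project_box_hyperplane(y, a, b, lo, hi, tol=1e-10):
    """Project y onto {x : lo <= x <= hi, a @ x = b}.

    Uses bisection on the multiplier lam of the equality constraint:
    x(lam) = clip(y - lam * a, lo, hi), and a @ x(lam) is nonincreasing
    in lam, so a root-bracketing bisection recovers the projection.
    (Illustrative sketch, not the authors' code.)
    """
    phi = lambda lam: a @ np.clip(y - lam * a, lo, hi) - b
    left, right = -1.0, 1.0
    while phi(left) < 0:     # expand bracket until phi(left) >= 0
        left *= 2
    while phi(right) > 0:    # expand bracket until phi(right) <= 0
        right *= 2
    while right - left > tol:
        mid = 0.5 * (left + right)
        if phi(mid) > 0:
            left = mid
        else:
            right = mid
    return np.clip(y - 0.5 * (left + right) * a, lo, hi)

def projected_gradient_step(x, grad, alpha, a, b, lo, hi):
    """One gradient projection step of the kind used in an identification phase."""
    return project_box_hyperplane(x - alpha * grad, a, b, lo, hi)
```

For example, projecting onto the unit simplex is the special case a = (1, ..., 1), b = 1, lo = 0, hi = +inf.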

13:55 – 14:20
Let's Make Block Coordinate Descent Go Fast!
Block coordinate descent (BCD) methods are widely used for large-scale numerical optimization because of their cheap iteration costs, low memory requirements, and amenability to parallelization. Three main algorithmic choices influence the performance of BCD methods: the block partitioning strategy, the block selection rule, and the block update rule. We explore these three building blocks and propose variations for each that can lead to significantly faster BCD methods. We support all of our findings with numerical results for the classic machine learning problems of logistic regression, multi-class logistic regression, support vector machines, and label propagation.
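To make the block selection rule concrete, here is a minimal single-coordinate (block size 1) descent sketch on a quadratic, contrasting a greedy Gauss-Southwell rule with a cyclic rule. This is an illustrative sketch under our own naming, not the paper's implementation:

```python
import numpy as np

def bcd_quadratic(A, b, x0, n_iter=2000, rule="greedy"):
    """Minimize 0.5 * x^T A x - b^T x by single-coordinate descent.

    rule="greedy": Gauss-Southwell selection (largest |gradient| entry);
    otherwise a cyclic sweep. Each update exactly minimizes along the
    chosen coordinate: x_i <- x_i - g_i / A_ii.
    (Illustrative sketch, not the authors' code.)
    """
    x = x0.astype(float).copy()
    g = A @ x - b                     # gradient, maintained incrementally
    n = len(x)
    for k in range(n_iter):
        i = int(np.argmax(np.abs(g))) if rule == "greedy" else k % n
        step = g[i] / A[i, i]
        x[i] -= step
        g -= step * A[:, i]           # O(n) gradient refresh per update
    return x
```

The incremental gradient refresh is what keeps the per-iteration cost cheap, which is the property the abstract highlights.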

14:20 – 14:45
New active-set Frank-Wolfe variants for minimization over the simplex and the ℓ1-ball
An active-set algorithmic framework is proposed for minimizing a function over the simplex. It combines an active-set estimate (to identify the zero variables at a stationary point) with Frank-Wolfe-type directions. Two steps are performed at each iteration: first, a decrease in the objective function is obtained by setting the active variables to zero and updating one non-active variable; then, a search direction is computed in the subspace of the non-active variables. Global convergence is proved, and a linear convergence rate is guaranteed under additional assumptions. The algorithm is then extended to minimization problems over the ℓ1-ball. Numerical results are provided.
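For reference, the classic Frank-Wolfe iteration over the unit simplex, on top of which active-set estimates of the kind described above are layered, can be sketched as follows (illustrative only; names are ours and no active-set logic is included):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iter=2000):
    """Classic Frank-Wolfe over the unit simplex {x >= 0, sum(x) = 1}.

    The linear minimization oracle over the simplex is trivial: it returns
    the vertex e_i with i = argmin_i grad(x)_i. Uses the standard
    diminishing step size 2 / (k + 2).
    (Illustrative sketch of the base method, not the paper's variant.)
    """
    x = x0.copy()
    for k in range(n_iter):
        g = grad(x)
        i = int(np.argmin(g))         # LMO: best simplex vertex
        d = -x
        d[i] += 1.0                   # direction toward vertex e_i
        gamma = 2.0 / (k + 2)
        x = x + gamma * d             # stays feasible: convex combination
    return x
```

Every iterate is a convex combination of simplex vertices, so feasibility is maintained without any projection, which is what makes the simplex LMO so cheap.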

14:45 – 15:10
Quasi-Newton based preconditioning techniques for Nonlinear Conjugate Gradient methods
In this work we study new preconditioning strategies for Nonlinear Conjugate Gradient (NCG) methods in large-scale unconstrained optimization. We propose novel preconditioners constructed by collecting information from the NCG iterations, in order to build an approximate inverse of the Hessian matrix. To this aim, we exploit information from quasi-Newton updates. In particular, at any iteration of the NCG method we consider preconditioners based on new low-rank quasi-Newton symmetric updating formulae, obtained as a by-product of the NCG iterations. Extensive numerical experience on a large selection of CUTEst test problems is reported.
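To illustrate the general idea of quasi-Newton based preconditioning of NCG, here is a hedged sketch using a memoryless BFGS inverse-Hessian approximation as the preconditioner (one standard quasi-Newton construction, not necessarily the authors' new low-rank formulae; all names and the Armijo line search are our own choices):

```python
import numpy as np

def memoryless_bfgs_apply(g, s, y):
    """Apply the memoryless BFGS inverse-Hessian approximation, built from
    the last step s = x_{k+1} - x_k and gradient change y = g_{k+1} - g_k,
    to the vector g. (Illustrative quasi-Newton preconditioner.)"""
    rho = 1.0 / (y @ s)
    t = g - rho * y * (s @ g)                    # (I - rho * y s^T) g
    return t - rho * s * (y @ t) + rho * s * (s @ g)

def preconditioned_ncg(f, grad, x0, n_iter=200):
    """Polak-Ribiere+ NCG preconditioned as above, with a simple Armijo
    backtracking line search. (Illustrative sketch, not the paper's method.)"""
    x = x0.astype(float).copy()
    g = grad(x)
    z = g.copy()                                 # preconditioned gradient (H = I at start)
    d = -z
    for _ in range(n_iter):
        alpha, fx, gd = 1.0, f(x), g @ d
        while f(x + alpha * d) > fx + 1e-4 * alpha * gd and alpha > 1e-12:
            alpha *= 0.5                         # Armijo backtracking
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        z_new = memoryless_bfgs_apply(g_new, s, y) if y @ s > 1e-12 else g_new.copy()
        beta = max(0.0, (y @ z_new) / (g @ z))   # preconditioned PR+ coefficient
        d_new = -z_new + beta * d
        if g_new @ d_new >= 0:                   # safeguard: restart on non-descent
            d_new = -z_new
        x, g, z, d = x_new, g_new, z_new, d_new
        if np.linalg.norm(g) < 1e-10:
            break
    return x
```

The curvature check `y @ s > 1e-12` keeps the approximate inverse Hessian positive definite, so the preconditioned gradient is always a descent direction when the restart safeguard triggers.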