15th EUROPT Workshop on Advances in Continuous Optimization
Montréal, Canada, July 12–14, 2017
Advances in Large-Scale Optimization
Jul 12, 2017 01:30 PM – 03:10 PM
Location: PWC
Chaired by Francesco Rinaldi
4 Presentations

01:30 PM – 01:55 PM
A two-phase projected gradient algorithm for quadratic programming problems with bounds on the variables and a single linear constraint
Moving from the GPCG algorithm for Bound-constrained convex Quadratic Programming (BQP) problems by Moré and Toraldo [SIAM J. Optim., 1, 1991], we propose an algorithmic framework for the solution of BQP problems with a single linear equality constraint, which alternates between two phases: an identification phase, which performs gradient projection steps aimed at identifying a suitable face to be explored, and a minimization phase, which reduces the objective function on that face. In deciding when to terminate the minimization phase, we use a proportionality criterion based on the ratio between a measure of optimality within the reduced space and a measure of the violation of the KKT conditions, which extends the ideas of Dostál [SIAM J. Optim., 3, 1997] and Friedlander and Martínez [SIAM J. Optim., 4, 1994] for BQP.
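The workhorse of the identification phase is the projection onto the feasible set {x : l ≤ x ≤ u, aᵀx = b}. A minimal sketch of such a projection (illustrative code under assumed names, not the authors' implementation) uses bisection on the Lagrange multiplier of the equality constraint:

```python
import numpy as np

def project_box_equality(y, lo_b, up_b, a, b):
    """Project y onto {x : lo_b <= x <= up_b, a @ x = b}.

    Illustrative helper, not the authors' code. With
    x(lam) = clip(y - lam * a, lo_b, up_b), the map
    g(lam) = a @ x(lam) is non-increasing in lam, so the
    multiplier solving g(lam) = b is found by bisection.
    """
    def g(lam):
        return a @ np.clip(y - lam * a, lo_b, up_b)

    lam_lo, lam_hi = -1.0, 1.0
    while g(lam_lo) < b:           # expand bracket until g(lam_lo) >= b
        lam_lo *= 2.0
    while g(lam_hi) > b:           # ... and g(lam_hi) <= b
        lam_hi *= 2.0
    for _ in range(200):           # bisection to machine precision
        mid = 0.5 * (lam_lo + lam_hi)
        if g(mid) > b:
            lam_lo = mid
        else:
            lam_hi = mid
    return np.clip(y - 0.5 * (lam_lo + lam_hi) * a, lo_b, up_b)

def projected_gradient_step(x, Q, c, lo_b, up_b, a, b, alpha):
    # One gradient projection step for 0.5 x'Qx + c'x, in the
    # flavour of the identification phase described above.
    return project_box_equality(x - alpha * (Q @ x + c), lo_b, up_b, a, b)
```

With l = 0, u = 1, a = e and b = 1 this reduces to projection onto the unit simplex, a convenient sanity check.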

01:55 PM – 02:20 PM
Let's Make Block Coordinate Descent Go Fast!
Block coordinate descent (BCD) methods are widely used for large-scale numerical optimization because of their cheap iteration costs, low memory requirements, and amenability to parallelization. Three main algorithmic choices influence the performance of BCD methods: the block partitioning strategy, the block selection rule, and the block update rule. We explore these three building blocks and propose variations for each that can lead to significantly faster BCD methods. We support all of our findings with numerical results for the classic machine learning problems of logistic regression, multi-class logistic regression, support vector machines, and label propagation.
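To make the three choices concrete, here is a toy BCD sketch on a quadratic (assumed names, not the authors' code or their ML problems): `blocks` fixes the partitioning, `rule` the selection (cyclic vs. greedy Gauss-Southwell), and the update rule is exact minimization over the chosen block:

```python
import numpy as np

def bcd_quadratic(A, b, blocks, rule="greedy", iters=300):
    """Minimize 0.5 x'Ax - b'x (A symmetric positive definite) by BCD.

    Toy sketch only. The three building blocks of the abstract map to:
    blocks = partitioning, rule = selection, and the update below
    (exact solve of the block subproblem) = update rule.
    """
    x = np.zeros(len(b))
    for t in range(iters):
        g = A @ x - b                      # full gradient
        if rule == "greedy":               # Gauss-Southwell: largest block gradient
            k = max(range(len(blocks)),
                    key=lambda j: np.linalg.norm(g[blocks[j]]))
        else:                              # cyclic selection
            k = t % len(blocks)
        B = blocks[k]
        # exact block update: solve A[B,B] d = -g[B]
        x[B] += np.linalg.solve(A[np.ix_(B, B)], -g[B])
    return x
```

On a well-conditioned problem, swapping `rule` between `"cyclic"` and `"greedy"` already shows how much the selection rule alone can change the iteration count.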

02:20 PM – 02:45 PM
New active-set Frank-Wolfe variants for minimization over the simplex and the l1-ball
An active-set algorithmic framework is proposed for minimizing a function over the simplex. It combines an active-set estimate (to identify the zero variables at the stationary point) with Frank-Wolfe-type directions. Two steps are performed at each iteration: first, a decrease in the objective function is obtained by setting the active variables to zero and updating one non-active variable; then, a search direction is computed in the subspace of the non-active variables. Global convergence is proved and a linear convergence rate is guaranteed under additional assumptions. Then, the algorithm is extended to solve minimization problems over the l1-ball. Numerical results are provided.
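For reference, the classical Frank-Wolfe baseline over the unit simplex (this sketch deliberately omits the active-set estimate that the proposed variants add) is only a few lines, since the linear minimization oracle reduces to picking the vertex with the most negative gradient component:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters=2000):
    """Classical Frank-Wolfe over the unit simplex (baseline only;
    the variants in the talk additionally use an active-set estimate).
    """
    x = x0.astype(float).copy()
    for t in range(iters):
        g = grad(x)
        i = int(np.argmin(g))        # LMO: best vertex e_i of the simplex
        d = -x
        d[i] += 1.0                  # direction x -> e_i
        x += (2.0 / (t + 2.0)) * d   # standard diminishing step size
    return x
```

Every iterate remains a convex combination of vertices, so feasibility (nonnegativity and unit sum) is preserved automatically; the O(1/t) rate of this baseline is precisely what the active-set machinery is meant to improve to a linear rate.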

02:45 PM – 03:10 PM
Quasi-Newton-based preconditioning techniques for Nonlinear Conjugate Gradient methods
In this work we study new preconditioning strategies for Nonlinear Conjugate Gradient (NCG) methods in large-scale unconstrained optimization. We propose novel preconditioners constructed by collecting information from the NCG iterations, in order to generate an approximate inverse of the Hessian matrix. To this aim, we convey information from quasi-Newton updates. In particular, at any iteration of the NCG we consider preconditioners based on new low-rank quasi-Newton symmetric updating formulae, obtained as a by-product of the NCG iterations. Extensive numerical experiments are provided on a large selection of CUTEst test problems.
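A minimal preconditioned NCG sketch (a PR+ variant with Armijo backtracking and a *fixed* approximate inverse Hessian passed in; the talk instead builds this matrix from quasi-Newton updates collected along the run, so this is not the authors' scheme):

```python
import numpy as np

def pncg(f, grad, x0, Minv=None, iters=500, tol=1e-10):
    """Preconditioned Polak-Ribiere(+) nonlinear CG, sketch only.

    Minv plays the role of the approximate inverse Hessian; here it is
    fixed, whereas the proposed method updates it via low-rank
    quasi-Newton formulae as the NCG iterations proceed.
    """
    x = np.asarray(x0, dtype=float).copy()
    Minv = np.eye(len(x)) if Minv is None else Minv
    g = grad(x)
    y = Minv @ g                   # preconditioned gradient
    d = -y
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0.0:           # safeguard: restart with preconditioned steepest descent
            d = -y
        alpha, fx, gd = 1.0, f(x), g @ d
        while f(x + alpha * d) > fx + 1e-4 * alpha * gd:   # Armijo backtracking
            alpha *= 0.5
        x = x + alpha * d
        g_new = grad(x)
        y_new = Minv @ g_new
        beta = max(0.0, g_new @ (y_new - y) / (g @ y))     # preconditioned PR+
        d = -y_new + beta * d
        g, y = g_new, y_new
    return x
```

On a quadratic with Minv equal to the exact inverse Hessian, the first direction is the Newton step and the method terminates immediately, which is the limiting behaviour the quasi-Newton preconditioners aim to approach.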