Journées de l'optimisation 2022

HEC Montréal, Québec, Canada, May 16–18, 2022


TB5 - Numerical optimization and linear algebra with Julia I

May 17, 2022, 15:30 – 17:10

Room: Louis-Laberge (red)

Chaired by Dominique Monnet

4 presentations

  • 15:30 – 15:55

    Improved univariate differentiable optimization algorithms and line search computation

    • Jean-Pierre Dussault, presenter, Université de Sherbrooke
    • Samuel Goyette

    We revisit Newton/secant algorithms and accelerations for scalar optimization, together with two globalization techniques: safeguards and (1D) trust regions. By explicitly using the objective function, we obtain faster-converging iterations than the classic root-finding (on the derivative) counterparts of these algorithms. Accelerations and globalizations combine into several one-dimensional solvers, which we compare using an efficiency index. We then adapt the solvers to compute (strong) Wolfe step sizes. We report Julia code using the JSO ecosystem and extensive numerical experimentation.
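
    A minimal sketch of the classic approach such objective-aware solvers improve upon: a safeguarded Newton iteration on the derivative, where a bracket [lo, hi] with f'(lo) < 0 < f'(hi) acts as the safeguard. Names and tolerances are illustrative assumptions, not the solvers compared in the talk.

    ```julia
    # Safeguarded 1D Newton iteration on f' for minimizing a smooth scalar f.
    # A Newton step that leaves the bracket, or is computed with nonpositive
    # curvature, falls back on bisection.
    function safeguarded_newton(df, d2f, lo, hi; tol = 1e-8, maxit = 100)
        t = (lo + hi) / 2
        for _ in 1:maxit
            g, h = df(t), d2f(t)
            abs(g) ≤ tol && return t
            s = h > 0 ? t - g / h : NaN          # Newton step on the derivative
            t = (isfinite(s) && lo < s < hi) ? s : (lo + hi) / 2
            df(t) > 0 ? (hi = t) : (lo = t)      # keep f'(lo) < 0 < f'(hi)
        end
        return t
    end

    # Example: minimize f(t) = t^4 - 3t^2 + t on [0.5, 2].
    df(t)  = 4t^3 - 6t + 1
    d2f(t) = 12t^2 - 6
    tstar = safeguarded_newton(df, d2f, 0.5, 2.0)
    ```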

  • 15:55 – 16:20

    The impact of numerical errors on quasi-Newton methods

    • Tania Belabbas, presenter
    • Jean-Pierre Dussault, Université de Sherbrooke

    We revisit the implementation of quasi-Newton methods, in particular the BFGS formula. To mitigate the impact of updating the inverse of an ill-conditioned matrix, Gill & Murray suggested updating a Cholesky factorization of the matrix instead. In a high-dimensional context, the concept of limited-memory updating was introduced, culminating in the L-BFGS algorithm (Liu & Nocedal), whose two-loop recursion is sketched after this abstract. Another limited-memory implementation uses the so-called "compact matrix" form (Byrd et al.). Note that these limited-memory formulations may also be used with "infinite" memory.

    In this presentation, we recall the formulae and the computational complexity of each of these four updates and propose an empirical study of the impact of conditioning on these different formulae. Recall that these four formulae are all mathematically equivalent when the available memory is unlimited.

    We propose two sets of tests for our empirical comparison: a generator of quadratic functions with prescribed conditioning, and a sample of nonlinear problems from the CUTEst collection. We will see that none of the four variants significantly dominates the others with respect to robustness to numerical errors. Indeed, we will provide numerous examples displaying the impact of ill-conditioning on each variant of the quasi-Newton update.
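
    As referenced above, here is a minimal sketch of the L-BFGS two-loop recursion, which applies the limited-memory inverse-Hessian approximation to the gradient without forming any matrix. Variable names are illustrative and not taken from a specific JSO package.

    ```julia
    using LinearAlgebra

    # Two-loop recursion: compute the quasi-Newton direction -H*g from the m
    # most recent pairs S = [s_1, ..., s_m] and Y = [y_1, ..., y_m], oldest first.
    function lbfgs_direction(g, S, Y)
        q = copy(g)
        m = length(S)
        ρ = [1 / dot(Y[i], S[i]) for i in 1:m]
        α = zeros(m)
        for i in m:-1:1                          # backward pass
            α[i] = ρ[i] * dot(S[i], q)
            q .-= α[i] .* Y[i]
        end
        γ = m > 0 ? dot(S[end], Y[end]) / dot(Y[end], Y[end]) : 1.0
        r = γ .* q                               # initial approximation H0 = γ I
        for i in 1:m                             # forward pass
            β = ρ[i] * dot(Y[i], r)
            r .+= (α[i] - β) .* S[i]
        end
        return -r
    end
    ```

    With unlimited memory, i.e. keeping every pair, this recursion is mathematically equivalent to the full BFGS inverse update, which is the equivalence underlying the comparison described above.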

  • 16:20 – 16:45

    Exploiting the partially separable structure in quasi-Newton optimization

    • Paul Raynaud, presenter, GERAD and Polytechnique Montréal
    • Dominique Orban, GERAD and Polytechnique Montréal
    • Jean Bigeon, Univ. Grenoble Alpes, CNRS, Grenoble INP, G-SCOP, F-38000 Grenoble, France

    In this talk, we revisit how partially separable structure can be exploited to improve quasi-Newton methods in large-scale continuous optimization. The partially separable structure, i.e., the decomposition of the objective as a sum of element functions, can be deduced automatically from the objective's expression graph. Partitioned quasi-Newton methods approximate the Hessian of each element function and preserve the sparsity of the assembled Hessian. Our method is matrix- and factorization-free, and allows several element functions to be combined. By updating several element approximations at each iteration, we obtain a finer approximation than with unstructured quasi-Newton updates. Our numerical results illustrate the fast convergence induced by high-rank updates.
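
    A minimal sketch of the underlying idea, with illustrative names rather than the authors' actual package: each element function depends on only a few variables, its small dense Hessian approximation is updated separately, and products with the assembled approximation are formed matrix-free.

    ```julia
    using LinearAlgebra

    # One element function, acting on the variables listed in idx, with its own
    # small quasi-Newton approximation B of the element Hessian.
    struct Element
        idx::Vector{Int}
        B::Matrix{Float64}
    end

    # Matrix-free product of the assembled Hessian approximation with v: no
    # n-by-n matrix is ever formed or factorized.
    function hprod(elements::Vector{Element}, v::Vector{Float64})
        hv = zeros(length(v))
        for e in elements
            hv[e.idx] .+= e.B * v[e.idx]
        end
        return hv
    end

    # BFGS update of one element from its restricted step si and the difference
    # yi of its element gradients.  Updating every element at each iteration
    # amounts to a high-rank update of the assembled approximation.
    function bfgs_update!(e::Element, si::Vector{Float64}, yi::Vector{Float64})
        Bs = e.B * si
        if dot(si, yi) > 1e-10                   # curvature safeguard
            e.B .+= yi * yi' / dot(yi, si) .- Bs * Bs' / dot(si, Bs)
        end
    end
    ```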

  • 16:45 – 17:10

    Multi-precision quadratic regularization algorithm with inexact evaluations

    • Dominique Monnet, presenter, GERAD and Polytechnique Montréal
    • Dominique Orban, GERAD and Polytechnique Montréal
    • Farhad Rabarnia, GERAD and Polytechnique Montréal

    This talk presents a multi-precision quadratic regularization algorithm named MR2 and studies its convergence properties in finite-precision arithmetic. MR2 is a steepest-descent algorithm that allows inexact evaluations of the objective and its gradient. When computations can be performed at several levels of precision (e.g., half, single, double, or higher), MR2 chooses a suitable machine-precision level at each step in order to limit computational effort and energy consumption while ensuring convergence. We prove the convergence of the algorithm under inexact evaluations and propose an implementation of MR2 in the Julia programming language. Julia enables an accurate implementation, since dedicated libraries provide guaranteed evaluations of the objective function and gradient via interval arithmetic at arbitrary precision levels. The behaviour of MR2 is illustrated on numerical examples.
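
    A minimal sketch in the spirit of this approach: the quadratic-regularization step minimizes the regularized linear model, i.e. s = -g/σ, and the floating-point format is raised when the predicted decrease becomes comparable to the roundoff in the objective. The precision-switch rule and the parameters below are illustrative assumptions, not the actual MR2 rules, and no interval arithmetic is used.

    ```julia
    using LinearAlgebra

    # Steepest descent with quadratic regularization over (Float16, Float32,
    # Float64); σ is the regularization parameter.
    function mr2_sketch(f, grad, x0; σ = 1.0, maxit = 100, gtol = 1e-6)
        formats = (Float16, Float32, Float64)
        p = 1                                     # current precision level
        x = formats[p].(x0)
        for _ in 1:maxit
            T = formats[p]
            g = T.(grad(x))
            norm(g) ≤ gtol && break
            s = -g ./ T(σ)
            Δm = dot(g, g) / (2σ)                 # predicted (model) decrease
            # When the predicted decrease is of the order of the roundoff in f,
            # the current format is no longer trustworthy: move to a finer one.
            if Δm ≤ 10 * eps(T) * abs(f(x)) && p < length(formats)
                p += 1
                x = formats[p].(x)
                continue
            end
            ρ = (f(x) - f(x + s)) / Δm            # actual vs predicted decrease
            if ρ ≥ 0.25
                x += s                            # accept the step
                σ = max(σ / 2, 1e-3)
            else
                σ *= 2                            # reject and regularize more
            end
        end
        return x
    end

    # Example: a simple quadratic, started in half precision.
    xsol = mr2_sketch(x -> sum(abs2, x), x -> 2x, [1.0, -2.0])
    ```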
