Journées de l'optimisation 2018

HEC Montréal, Québec, Canada, May 7 — 9, 2018

TA4 Blackbox and derivative-free optimization II

May 8, 2018, 10:30 – 12:10

Room: Hélène Desmarais (48)

Chaired by Charles Audet

4 presentations

  • 10:30 - 10:55

    Opportunism and ordering strategies in derivative-free optimization

    • Loïc Anthony Sarrazin-Mc Cann, presenter, Polytechnique Montréal
    • Charles Audet, GERAD - Polytechnique Montréal
    • Sébastien Le Digabel, GERAD, Polytechnique Montréal
    • Christophe Tribes, Polytechnique Montréal

    We consider the opportunistic strategy present in some direct-search methods for derivative-free optimization, and more specifically blackbox optimization. This strategy interrupts the evaluations as soon as a successful point is found, without interfering with the convergence analysis of the method. The opportunistic strategy is tested computationally on a range of algorithms (including CS, GPS, GSS, MADS, IMFIL), with different strategies to order the candidates.
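
    A minimal, hedged sketch of such an opportunistic poll loop is given below in Python; the names (opportunistic_poll, order_key, and so on) are illustrative and do not come from any of the tested codes.

        # Sketch of an opportunistic poll step with a pluggable ordering of the
        # candidate points (illustration only, not the CS/GPS/GSS/MADS/IMFIL code).
        def opportunistic_poll(f, incumbent, f_incumbent, candidates, order_key):
            """Evaluate candidates in the order given by order_key and stop at
            the first point that improves on the incumbent."""
            n_eval = 0
            for x in sorted(candidates, key=order_key):
                fx = f(x)                    # one expensive blackbox evaluation
                n_eval += 1
                if fx < f_incumbent:         # success: skip the remaining candidates
                    return x, fx, n_eval
            return incumbent, f_incumbent, n_eval   # no candidate improved the incumbent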

  • 10:55 - 11:20

    Different ordering strategies in blackbox optimisation with no models

    • Charles Audet, GERAD - Polytechnique Montréal
    • Gilles Caporossi, GERAD, HEC Montréal
    • Stéphane Jacquet, presenter

    In blackbox optimisation algorithms, the opportunistic strategy has been used to preserve the budget of evaluations. When no models are available, an algorithm like MADS orders its candidate points according to the direction of last success. Two other strategies are considered here. The first uses regression methods from supervised classification to evaluate first the candidates that are most likely to be feasible. The second explores less-known regions by evaluating first the candidates furthest from those already evaluated.
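
    As a hedged illustration of the second strategy (not the authors' implementation), candidate points could be ranked by their distance to the closest already evaluated point; the helper below is hypothetical.

        # Illustrative ordering: candidates farthest from the evaluated points first.
        import math

        def farthest_first(candidates, evaluated_points):
            """Sort candidates by decreasing distance to the set of evaluated points."""
            def distance_to_evaluated(x):
                return min(math.dist(x, y) for y in evaluated_points)
            return sorted(candidates, key=distance_to_evaluated, reverse=True)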

  • 11:20 - 11:45

    The complex barycenter method for direct optimization

    • Felipe Pait, presenter, USP

    A randomized version of the recently developed barycenter method for derivative-free optimization has desirable properties of a gradient search. We develop a complex version to avoid evaluations at high-gradient points. The method is naturally parallelizable, robust under noisy measurements, and has applications to control design.
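
    As a rough, assumption-laden sketch of the underlying idea (the real-valued, randomized variant only; the complex weights and the control applications are described in the talk), the search point can be maintained as a barycenter of random query points weighted by a decreasing function of their objective values.

        # Sketch of a randomized barycenter search (illustration of the general idea;
        # the parameter names and the weighting exp(-mu*f) are assumptions, not the talk's code).
        import numpy as np

        def barycenter_search(f, x0, n_iter=200, step=0.5, mu=5.0, rng=None):
            rng = np.random.default_rng() if rng is None else rng
            xbar = np.asarray(x0, dtype=float)
            num = np.zeros_like(xbar)        # weighted sum of query points
            den = 0.0                        # sum of weights
            for _ in range(n_iter):
                z = xbar + step * rng.standard_normal(xbar.shape)   # random query point
                w = np.exp(-mu * f(z))       # smaller objective value -> larger weight
                num += w * z
                den += w
                xbar = num / den             # barycenter of all queries so far
            return xbar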

  • 11:45 - 12:10

    Mesh-based Nelder-Mead algorithm for inequality constrained optimization

    • Charles Audet, presenter, GERAD - Polytechnique Montréal
    • Christophe Tribes, Polytechnique Montréal

    Despite the lack of theoretical and practical convergence support, the Nelder-Mead (NM) algorithm is widely used to solve unconstrained optimization problems. It is a derivative-free algorithm that iteratively attempts to replace the worst point of a simplex by a better one. The present paper proposes a search step for the Mesh Adaptive Direct Search (MADS) algorithm for inequality constrained optimization, inspired by the NM algorithm.
    The proposed algorithm does not suffer from NM's lack of convergence guarantees, but instead inherits the full MADS convergence analysis. Numerical experiments show an important improvement in the quality of the solutions produced with this search step.
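
    For readers unfamiliar with the search/poll structure, a heavily simplified, schematic sketch of a MADS-like iteration with a pluggable search step follows; the paper's NM-inspired search and its constraint handling are not reproduced here.

        # Schematic direct-search loop with a pluggable search step (simplified sketch;
        # the mesh management and constraint handling of the actual MADS algorithm are omitted).
        def mads_like(f, x, fx, delta, poll_directions, search_step=None, min_delta=1e-9):
            """Alternate search and poll steps, refining the frame size on failures."""
            while delta > min_delta:
                success = False
                # SEARCH step: any finite list of trial points (e.g. an NM-inspired one).
                if search_step is not None:
                    for t in search_step(x, delta):
                        ft = f(t)
                        if ft < fx:
                            x, fx, success = t, ft, True
                            break
                # POLL step: trial points around x along the poll directions.
                if not success:
                    for d in poll_directions(delta):
                        t = [xi + delta * di for xi, di in zip(x, d)]
                        ft = f(t)
                        if ft < fx:
                            x, fx, success = t, ft, True
                            break
                delta = 2 * delta if success else delta / 2   # enlarge on success, refine otherwise
            return x, fx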
