15th EUROPT Workshop on Advances in Continuous Optimization

Montréal, Canada, July 12–14, 2017

Derivative-Free Optimization 2

July 13, 2017, 13:30 – 15:10

Room: Nancy et Michel-Gaucher

Chaired by Warren Hare

4 presentations

  • 13:30 – 13:55

    MultiGLODS: Global and Local Multiobjective Optimization using Direct Search

    • Ana Luisa Custodio, presenter, Universidade Nova de Lisboa
    • Jose Aguilar Madeira, Universidade de Lisboa and ADM-ISEL

    The optimization of multimodal functions is a challenging task, in particular when derivatives are not available. Recently, in a directional direct search framework, a clever multistart strategy was proposed for the global derivative-free optimization of single-objective functions. The goal of the current work is to generalize this approach to the computation of global Pareto fronts for multiobjective, multimodal, derivative-free optimization problems. The proposed algorithm alternates between initializing new searches, using a multistart strategy, and exploring promising subregions, resorting to directional direct search. Components of the objective function are not aggregated, and new points are accepted using the concept of Pareto dominance. Not all initialized searches are conducted to the end; they merge when they start to approach each other. We will describe the algorithmic structure, present the main associated theoretical results, and report numerical experience that demonstrates the quality of the final solutions generated by the new algorithm and its ability to identify approximations to the global and local Pareto fronts of a given problem.
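
    A minimal sketch of the Pareto-dominance acceptance rule mentioned in the abstract (illustrative only; this is not the authors' MultiGLODS code, and all names below are hypothetical):

```python
# Illustrative sketch: accepting trial points via Pareto dominance
# instead of aggregating the objective components.
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb
    (no worse in every component, strictly better in at least one)."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return np.all(fa <= fb) and np.any(fa < fb)

def update_front(front, candidate):
    """Insert a (point, objective-vector) pair into the current
    nondominated set, discarding any stored points it dominates."""
    x_new, f_new = candidate
    # Reject the candidate if any stored point dominates it.
    if any(dominates(f, f_new) for _, f in front):
        return front
    # Otherwise keep only the stored points it does not dominate.
    kept = [(x, f) for x, f in front if not dominates(f_new, f)]
    kept.append((x_new, f_new))
    return kept

# Tiny usage example on the bi-objective function f(x) = (x^2, (x-2)^2),
# whose Pareto set is the interval [0, 2]:
f = lambda x: (x**2, (x - 2.0)**2)
front = []
for x in np.linspace(-1.0, 3.0, 9):   # stand-in for poll/search points
    front = update_front(front, (x, f(x)))
print(sorted(x for x, _ in front))     # only points with 0 <= x <= 2 survive
```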

  • 13:55 – 14:20

    Adaptive Interpolation Strategies in Derivative-Free Optimization: A Case Study

    • Majid Jaberipour, presenter, UBC Okanagan
    • Warren Hare, UBC

    Derivative-free optimization (DFO) focuses on designing methods to solve optimization problems without analytical knowledge of the gradient of the objective function. There are two main families of DFO methods: model-based methods and direct search methods. In model-based DFO methods, a model of the objective function is constructed using only objective function values, and the model is used to guide the computation of the next iterate. Natural questions in this class of algorithms include: how many function evaluations should be used to construct the model, and should this number be fixed or adaptively selected by the algorithm? In this paper, we numerically examine these questions, using Hare and Lucet's Derivative-Free Proximal Point (DFPP) algorithm [14] as a case study. Results suggest that the number of function evaluations used to construct the model has a huge impact on algorithm performance, and that adaptive strategies can both improve and hinder algorithm performance.
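
    As background, the following sketch shows a generic model-construction step of the kind used in model-based DFO: fitting a linear interpolation model through objective values only. This is a textbook construction under stated assumptions, not the DFPP algorithm from the talk, and all names are hypothetical:

```python
# Sketch: build a linear model m(y) = c + g.(y - x) from n+1 function
# values, with no gradient information.
import numpy as np

def linear_model(fun, x, delta):
    """Fit a linear interpolation model of `fun` around `x` using n+1
    sample points: x itself plus coordinate offsets of radius delta."""
    n = len(x)
    pts = [np.array(x, dtype=float)]
    pts += [pts[0] + delta * e for e in np.eye(n)]
    vals = np.array([fun(p) for p in pts])        # n+1 evaluations
    # Interpolation conditions m(p_i) = f(p_i) give a square linear
    # system for the coefficients (c, g).
    A = np.hstack([np.ones((n + 1, 1)), np.array(pts) - pts[0]])
    coeff = np.linalg.solve(A, vals)
    return coeff[0], coeff[1:]                    # model value at x, gradient estimate

# Usage: for f(x) = x0^2 + 3*x1 at (1, 2), the gradient estimate is
# close to the true gradient (2, 3) for small delta.
c, g = linear_model(lambda z: z[0]**2 + 3*z[1], [1.0, 2.0], 1e-3)
print(c, g)    # ~7.0, approximately [2.001, 3.0]
```

    The number of sample points (here fixed at n+1) is exactly the quantity the talk studies: more points can give a better model at the cost of more expensive function evaluations.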

  • 14:20 – 14:45

    Handling infeasibility in blackbox optimization using supervised classification

    • Stéphane Jacquet, presenter

    Blackbox optimization problems, in which the objective function and the constraints have unknown analytic expressions, present multiple difficulties, such as the absence of gradients and long CPU times. Moreover, since the functions are sometimes given by simulations or experiments, some of the computations can crash and return unreliable results. The MADS algorithm deals with constrained blackbox optimization problems. Since its introduction in 2006, it has seen several improvements in how it manages constraints. However, binary constraints are currently managed in the same way as other constraints. Given the limited information provided by binary constraints, they would benefit from a specific treatment.
    This presentation proposes a way to manage binary constraints using tools from supervised classification. Our work includes the case of a single constraint, which will be binary, since it offers a way to handle the case in which simulations or experiments crash.
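
    A hedged sketch of the general idea (not the method presented in the talk): train a classifier on the history of blackbox evaluations and use its prediction to decide whether a trial point is worth an expensive evaluation. The constraint, the data, and the choice of classifier below are all illustrative assumptions:

```python
# Sketch: predict a binary constraint (e.g., "did the simulation run
# without crashing?") from past evaluations before paying for new ones.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def blackbox_ok(x):
    """Stand-in binary constraint: pretend the simulation crashes
    whenever ||x|| > 1 (the optimizer observes only 0/1, no gradient)."""
    return float(np.linalg.norm(x) <= 1.0)

# History cache: points already sent to the blackbox and their outcomes.
X_hist = rng.uniform(-2, 2, size=(200, 2))
y_hist = np.array([blackbox_ok(x) for x in X_hist])

# Train a classifier on the cache; an RBF kernel can capture a
# nonlinear feasible region.
clf = SVC(kernel="rbf").fit(X_hist, y_hist)

# Before evaluating a new trial point, ask the classifier first and
# skip points predicted to crash.
trials = np.array([[0.2, 0.3], [1.8, 1.7]])
print(clf.predict(trials))   # expected: [1. 0.] -> evaluate the first only
```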

  • 14:45 – 15:10

    Compositions of Convex Functions and Fully Linear Models

    • Warren Hare, prés., UBC

    Derivative-free optimization (DFO) is the mathematical study of optimization algorithms that do not use derivatives. One branch of DFO focuses on model-based methods, in which an approximation of the objective function is used to guide the optimization algorithm. Proving convergence of such methods often relies on the assumption that the approximations form fully linear models, an assumption that requires the true objective function to be smooth. However, some recent methods have loosened this assumption and instead work with functions that are compositions of smooth functions with simple convex functions (the max function or the $\ell_1$ norm). In this paper, we examine the error bounds that result from composing a convex lower semicontinuous function with a smooth vector-valued function when it is possible to provide fully linear models for each component of the vector-valued function. We derive error bounds for the resulting function values and subgradient vectors.
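
    For context, the standard definition of a fully linear model from the DFO literature, which is the error-bound notion the abstract builds on (the constants $\kappa_f$, $\kappa_g$ depend on the function and on the geometry of the sample set, not on $\Delta$):

```latex
% m is a fully linear model of f on the ball B(x; \Delta) if there
% exist constants \kappa_f, \kappa_g > 0, independent of \Delta, such
% that for all y \in B(x; \Delta):
|f(y) - m(y)| \le \kappa_f \, \Delta^2,
\qquad
\|\nabla f(y) - \nabla m(y)\| \le \kappa_g \, \Delta .
```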
