15th EUROPT Workshop on Advances in Continuous Optimization
Montréal, Canada, July 12–14, 2017
Derivative Free Optimization 2
July 13, 2017, 13:30 – 15:10
Room: Nancy et Michel Gaucher
Chaired by Warren Hare
4 presentations

13:30 – 13:55
MultiGLODS: Global and Local Multiobjective Optimization using Direct Search
The optimization of multimodal functions is a challenging task, in particular when derivatives are not available. Recently, in a directional direct-search framework, a clever multistart strategy was proposed for global derivative-free optimization of single-objective functions. The goal of the current work is to generalize this approach to the computation of global Pareto fronts for multiobjective, multimodal derivative-free optimization problems. The proposed algorithm alternates between initializing new searches, using a multistart strategy, and exploring promising subregions, resorting to directional direct search. Components of the objective function are not aggregated, and new points are accepted using the concept of Pareto dominance. Not all initialized searches are conducted until the end; searches are merged when they start to be close to each other. We will describe the algorithmic structure, present the main associated theoretical results, and report numerical experience that evidences the quality of the final solutions generated by the new algorithm and its ability to identify approximations to the global and local Pareto fronts of a given problem.
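As an illustrative aside, acceptance by Pareto dominance, the selection rule mentioned in the abstract, can be sketched in a few lines. This is a minimal sketch for minimization problems; the function names are ours, not taken from the MultiGLODS implementation:

```python
def dominates(a, b):
    """Return True if point a Pareto-dominates point b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_filter(points):
    """Keep only objective vectors not dominated by any other in the list,
    i.e. the current approximation of the Pareto front."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

A new trial point is accepted when it survives this filter against the current list of nondominated points.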

13:55 – 14:20
Adaptive Interpolation Strategies in Derivative-Free Optimization: a case study
Derivative-free optimization (DFO) focuses on designing methods to solve optimization problems without analytical knowledge of the gradients of the objective function. There are two main families of DFO methods: model-based methods and direct-search methods. In model-based DFO methods, a model of the objective function is constructed using only objective function values, and the model is used to guide the computation of the next iterate. Natural questions in this class of algorithms include: how many function evaluations should be used to construct the model? And should this number be fixed, or adaptively selected by the algorithm? In this paper, we numerically examine these questions, using Hare and Lucet's Derivative-Free Proximal Point (DFPP) algorithm [14] as a case study. Results suggest that the number of function evaluations used to construct the model has a huge impact on algorithm performance, and that adaptive strategies can both improve and hinder algorithm performance.
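As a minimal illustration of the model-based ingredient discussed above (a generic sketch, not the DFPP algorithm itself), a linear interpolation model of f can be built from n+1 function evaluations; whether this sample count stays fixed or adapts between iterations is precisely the kind of question the abstract studies:

```python
def linear_model(f, x0, h=0.1):
    """Build a linear interpolation model of f around x0 using n+1 function
    evaluations: f(x0) plus one evaluation per coordinate offset of size h.
    Returns a callable model m(x) = f(x0) + g . (x - x0)."""
    n = len(x0)
    f0 = f(x0)
    # gradient estimate from interpolation along coordinate directions
    g = [(f(tuple(x0[j] + (h if j == i else 0.0) for j in range(n))) - f0) / h
         for i in range(n)]
    return lambda x: f0 + sum(gi * (xi - x0i) for gi, xi, x0i in zip(g, x, x0))
```

For a linear objective the model is exact; for a general smooth objective its accuracy depends on h and on the geometry of the sample set, which is what richer (quadratic or adaptively sized) models try to improve.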
14:20 – 14:45
Handling infeasibility in black-box optimization using supervised classification
Black-box optimization problems, where the objective function and the constraints have unknown analytic expressions, lead to multiple difficulties, such as no access to the gradient and long CPU times. Moreover, since the functions can be given by simulations or experiments, some of the computations can crash and give unreliable results. The MADS algorithm deals with constrained black-box optimization problems. Since its introduction in 2006, it has seen several improvements to manage constraints. However, binary constraints are currently managed in the same way as the other constraints. Considering the little information given by binary constraints, they would benefit from a specific treatment.
This presentation proposes a way to manage binary constraints using tools from supervised classification. Our work focuses on the case of a single binary constraint, since this setting also covers the situation where simulations or experiments crash.
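To make the idea concrete, here is one simple supervised-classification surrogate for a binary constraint: a k-nearest-neighbour vote over past evaluations. This is our own illustrative sketch under assumed names (`history`, `predict_feasible`), not the classifier used in the presented work:

```python
import math

def predict_feasible(history, x, k=3):
    """Predict whether the binary constraint holds at x (e.g. the simulation
    will not crash) by majority vote among the k nearest evaluated points.
    history: list of (point, feasible) pairs from past black-box evaluations."""
    nearest = sorted(history, key=lambda h: math.dist(h[0], x))[:k]
    votes = sum(1 for _, feasible in nearest if feasible)
    return votes * 2 > len(nearest)
```

Such a prediction could then be used to rank or filter candidate points before spending an expensive black-box evaluation on them.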
14:45 – 15:10
Compositions of Convex Functions and Fully Linear Models
Derivative-free optimization (DFO) is the mathematical study of optimization algorithms that do not use derivatives. One branch of DFO focuses on model-based methods, where an approximation of the objective function is used to guide the optimization algorithm. Proving convergence of such methods often relies on the assumption that the approximations form fully linear models, an assumption that requires the true objective function to be smooth. However, some recent methods have loosened this assumption and instead work with functions that are compositions of smooth functions with simple convex functions (the max function or the $\ell_1$ norm). In this paper, we examine the error bounds resulting from the composition of a convex lower semicontinuous function with a smooth vector-valued function when it is possible to provide fully linear models for each component of the vector-valued function. We derive error bounds for the resulting function values and subgradient vectors.
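For reference, the standard notion of a fully linear model used in this line of work can be stated as follows (constants and notation follow the usual model-based DFO convention):

```latex
% A model m is fully linear for f on the ball B(x;\Delta) if there exist
% constants \kappa_f, \kappa_g > 0, independent of x and \Delta, such that
\begin{align*}
  |f(y) - m(y)| &\le \kappa_f \, \Delta^2, \\
  \|\nabla f(y) - \nabla m(y)\| &\le \kappa_g \, \Delta,
  \qquad \text{for all } y \in B(x;\Delta).
\end{align*}
```

The abstract's contribution is to propagate bounds of this type through a composition $h \circ F$, where $h$ is convex and lower semicontinuous and each component of $F$ admits a fully linear model.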