Journées de l'optimisation 2022
HEC Montréal, Québec, Canada, May 16–18, 2022
MA8  Derivative-Free and Blackbox Optimization I
May 16, 2022, 10:30 – 12:10
Room: METRO INC. (yellow)
Chaired by Delphine Sinoquet
3 presentations

10:30 – 10:55
Reducing dimension in Bayesian Optimization
A determining factor in the utility of optimization algorithms is their cost. One strategy to contain this cost is to reduce the dimension of the search space by detecting the most important variables and optimizing over them only. Recently, sensitivity measures based on the Hilbert-Schmidt Independence Criterion (HSIC), adapted to optimization variables, have been proposed. In this work, the HSIC sensitivities are used within a new Bayesian global optimization algorithm in order to reduce the dimension of the problem. At each iteration, the activation of optimization variables is challenged in a deterministic or probabilistic manner. Several strategies for filling in the variables that are dropped out are proposed. Numerical tests carried out at a low number of function evaluations confirm the computational gains brought by the HSIC variable selection and point to the complementarity of the variable-selection and fill-in strategies.
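The variable ranking described above can be illustrated with a minimal sketch of the empirical HSIC estimator with Gaussian kernels. This is a generic biased estimator, not the authors' implementation; the toy function, sample size, and lengthscale are assumptions made for illustration:

```python
import numpy as np

def rbf_gram(x, lengthscale=1.0):
    """Gram matrix of a Gaussian (RBF) kernel on a 1-D sample."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def hsic(x, y, lengthscale=1.0):
    """Biased empirical HSIC estimate between two 1-D samples."""
    n = len(x)
    K = rbf_gram(np.asarray(x, float), lengthscale)
    L = rbf_gram(np.asarray(y, float), lengthscale)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Rank variables by sensitivity: a variable that actually drives the
# objective scores higher than an inert one, so it stays activated.
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 200)                  # influential variable
x2 = rng.uniform(-1, 1, 200)                  # inert variable
f = x1 ** 2 + 0.01 * rng.normal(size=200)     # toy objective values
assert hsic(x1, f) > hsic(x2, f)
```

In a dimension-reduction loop, variables whose HSIC score falls below a threshold would be frozen (filled in by one of the proposed strategies) while optimization proceeds over the remaining ones.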

10:55 – 11:20
Learning hidden constraints using a stepwise uncertainty reduction strategy with Gaussian Process Classifiers
Design optimization of engineering systems generally involves complex numerical models that take design variables and environmental variables as input. The environmental conditions are generally simulated in order to assess the reliability of the proposed designs, so the simulations can become computationally very expensive. Moreover, some input conditions can lead to simulation failures or instabilities, due, for instance, to instabilities in the numerical scheme of complex partial differential equations. Most of the time, the set of inputs corresponding to failures is not known a priori and corresponds to a hidden constraint, also called a crash constraint in this special case. Since observing a simulation failure might be as costly as a feasible simulation, we seek to learn the feasible set of inputs and thus target areas without simulation failures during the optimization process. We therefore propose a Gaussian Process Classifier (GPC) active learning method to learn the feasible domain. The proposed methodology adapts Stepwise Uncertainty Reduction strategies, usually used for inversion with Gaussian Process Regression, to the classification setting with GPC models. The performance of this strategy on different hidden-constraint problems will be presented.
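The idea of actively learning the feasible domain with a classifier can be sketched as follows. For simplicity this sketch replaces the Stepwise Uncertainty Reduction criterion with plain uncertainty sampling (query where the predicted class probability is closest to 0.5); the toy "failure disk", design sizes, and seed are all assumptions, not the authors' setup:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

rng = np.random.default_rng(1)

def simulator_ok(x):
    """Toy hidden constraint: the simulation 'crashes' inside a disk."""
    x = np.atleast_2d(x)
    return (x[:, 0] ** 2 + x[:, 1] ** 2 > 0.25).astype(int)

# Initial design on [-1, 1]^2; include the origin so both classes appear.
X = np.vstack([[0.0, 0.0], rng.uniform(-1, 1, size=(19, 2))])
y = simulator_ok(X)
cand = rng.uniform(-1, 1, size=(500, 2))      # candidate pool

for _ in range(15):
    gpc = GaussianProcessClassifier().fit(X, y)
    p = gpc.predict_proba(cand)[:, 1]         # P(feasible)
    i = np.argmin(np.abs(p - 0.5))            # most uncertain candidate
    X = np.vstack([X, cand[i]])               # run the 'simulation' there
    y = np.append(y, simulator_ok(cand[i]))

acc = (gpc.predict(cand) == simulator_ok(cand)).mean()
```

Each iteration spends one expensive simulation where the classifier is least certain, which concentrates evaluations near the unknown feasibility boundary; a SUR criterion would instead pick the point that most reduces a measure of residual uncertainty on the whole domain.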

11:20 – 11:45
Escaping local minima with a derivative-free trust-region method for mixed continuous and discrete problems
This work is motivated by optimization applications based on complex and expensive blackbox simulators. Our goal is therefore to obtain the best improvement in function minimization with the fewest simulations. We propose a new approach to escape local minima with the local Derivative-Free Optimization trust-region method for mixed continuous and discrete problems (DFOb). The latter method is based on two main steps: successive continuous quadratic subproblems and mixed-binary quadratic subproblems, both built on interpolation models defined for mixed variables and valid in an adaptive trust region. In order to force exploration over the binary variables, "no-good cut" constraints are added to push the algorithm outside the previously explored regions. A restart procedure is integrated into the DFOb method: it resets the size of the trust region and enriches the simulation dataset for exploration whenever the algorithm is no longer making sufficient progress. Specifically, our strategy for choosing these new simulations relies on a novel design-of-experiments method that uses kernel embeddings of probability distributions adapted to mixed variables, allowing any available prior information on the type of problem at hand to be taken into account.
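A "no-good cut" excluding a previously visited binary point can be written as a single linear inequality requiring Hamming distance at least one from that point. A minimal sketch (the helper names are illustrative, not taken from DFOb):

```python
import numpy as np

def no_good_cut(x_visited):
    """Return (a, b) such that a @ x >= b holds for every binary x
    except x_visited, i.e. it cuts off exactly that one point."""
    x0 = np.asarray(x_visited, int)
    a = 1 - 2 * x0        # +1 where x0_i = 0, -1 where x0_i = 1
    b = 1 - x0.sum()      # a @ x + sum(x0) equals the Hamming distance,
    return a, b           # so a @ x >= b enforces a distance of >= 1

def satisfies(a, b, x):
    """Check the cut for a candidate binary point."""
    return a @ np.asarray(x, int) >= b

a, b = no_good_cut([1, 0, 1])
assert not satisfies(a, b, [1, 0, 1])   # the visited point is excluded
assert all(satisfies(a, b, x)           # every other binary point remains
           for x in ([0, 0, 1], [1, 1, 1], [0, 0, 0], [1, 0, 0],
                     [0, 1, 1], [1, 1, 0], [0, 1, 0]))
```

Adding one such inequality per visited binary configuration to the mixed-binary quadratic subproblem forces each new iterate to differ from all previous ones in at least one binary coordinate.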