Optimization Days 2026

HEC Montréal, Québec, Canada

May 11–13, 2026

TB5 - Derivative-free and blackbox optimization 1

May 12, 2026, 15:30–17:10

Location: Lise Birikundavyi - Lionel Rey (blue)

Chaired by Valentin Dijon

4 Presentations

15:30 - 15:55

Benchmarking algorithms on large test sets

  • Charles Audet, speaker, Polytechnique Montréal
  • Warren Hare, University of British Columbia

Benchmarking is essential for assessing the effectiveness of optimization algorithms. This is especially true in derivative-free optimization, where target problems are often complex simulations that require extensive time to evaluate. This limits the number of evaluations that can be performed, making it critical to have a good understanding of the potential quality of various algorithms. This talk reviews standard benchmarking methods, including convergence plots, performance profiles, data profiles, and accuracy profiles, widely used to evaluate optimization algorithms. The primary contribution is a formal extension of these benchmarking techniques to three specific contexts: constrained optimization, multi-objective optimization, and surrogate-based optimization. Benchmarking codes are proposed and made available to the community.
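
As a minimal illustration of one of these tools, the sketch below computes data profiles in the spirit of Moré and Wild from stored convergence histories. It is an assumption-laden sketch, not the benchmarking code released with the talk; all names (histories, fL, f0, dims) are illustrative.

    # Sketch of a data profile (Moré & Wild style). Assumptions:
    # histories[s][p] = best objective value found by solver s on problem p
    #                   after each evaluation (a nonincreasing vector),
    # fL[p]   = best value reached by any solver on problem p,
    # f0[p]   = objective value at the common starting point,
    # dims[p] = dimension of problem p.
    function data_profile(histories, fL, f0, dims; tau = 1e-3)
        nsolvers, nprobs = length(histories), length(fL)
        # t[s, p] = groups of (n+1) evaluations solver s needs on problem p
        # to reach f(x) <= fL + tau * (f0 - fL); Inf if never reached.
        t = fill(Inf, nsolvers, nprobs)
        for s in 1:nsolvers, p in 1:nprobs
            target = fL[p] + tau * (f0[p] - fL[p])
            k = findfirst(<=(target), histories[s][p])
            k === nothing || (t[s, p] = k / (dims[p] + 1))
        end
        # d(s, alpha) = fraction of problems solver s solves within alpha
        # groups of (n+1) evaluations.
        d(s, alpha) = count(<=(alpha), t[s, :]) / nprobs
        return d
    end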

15:55 - 16:20

A Scalable Trust-Region Framework for High-Dimensional Bayesian Optimisation

  • Meisseme Kadri, speaker, Polytechnique Montréal
  • Youssef Diouane, Polytechnique Montréal
  • Amina Lamghari, Université du Québec à Trois-Rivières
  • Issmaïl El Hallaoui, Polytechnique Montréal, GERAD

Bayesian optimisation (BO) is known to be very effective for solving low-dimensional expensive black-box optimisation (BBO) problems; however, in high dimensions, it often scales poorly and becomes highly sensitive to the acquisition function. To address these issues, we introduce STREGO, a scalable trust-region BO method that combines global exploration with local exploitation refinements. STREGO adapts an existing trust-region BO framework (known as TREGO) to efficiently solve high-dimensional BBO problems. Similar to TREGO, the STREGO framework is organised into two phases: a global phase and a local phase. While the TREGO global phase performs mono-objective BO, STREGO solves a mean–variance surrogate-based bi-objective problem and clusters the resulting non-dominated set to select a diverse batch of candidate points. If this global batch fails to improve the objective function, STREGO switches to a local phase, where it fits and minimises a surrogate model using only evaluations within a trust-region. Experiments on scalable analytical BBO benchmarks, as well as on classification tasks with up to 300 dimensions, indicate that STREGO outperforms not only TREGO but also state-of-the-art high-dimensional BO methods.
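
The two-phase control logic can be pictured with the hedged sketch below; every helper (latin_hypercube, fit_gp, mean_variance_batch, local_tr_minimize, update_trust_region) is a placeholder standing in for components the talk will detail, so this is the switching skeleton only, not the STREGO implementation.

    # Hedged skeleton of the global/local switching described in the
    # abstract; all helpers are placeholders, not STREGO itself.
    function strego_like(f, lb, ub; budget = 100, batchsize = 5)
        X = latin_hypercube(lb, ub, 2 * length(lb))   # initial design
        y = [f(x) for x in eachcol(X)]
        tr = initial_trust_region(lb, ub)
        while length(y) < budget
            gp = fit_gp(X, y)
            # Global phase: solve a mean-variance bi-objective problem on
            # the surrogate, then cluster the non-dominated set into a
            # diverse batch of candidate points.
            cand = mean_variance_batch(gp, lb, ub, batchsize)
            ynew = [f(x) for x in eachcol(cand)]
            improved = minimum(ynew) < minimum(y)
            X, y = hcat(X, cand), vcat(y, ynew)
            if !improved
                # Local phase: fit and minimize a surrogate using only
                # the evaluations inside the current trust region.
                xloc = local_tr_minimize(X, y, tr)
                X, y = hcat(X, xloc), vcat(y, f(xloc))
                tr = update_trust_region(tr, y)
            end
        end
        return X[:, argmin(y)], minimum(y)
    end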

16:20 - 16:45

Maritime Search and Rescue Operations Planning via Model-and-Solve

  • Amirhossein Esmaeilpour, speaker, Ph.D. Candidate
  • Michael Morin, Université Laval
  • Irène Abi-Zeid, Université Laval

Optimization-based decision support systems, such as SAR Optimizer, can assist search mission coordinators in planning maritime search operations. Computing the probability of finding the search object requires a search simulation. We present a model-and-solve approach for optimizing maritime search plans that integrates this simulation.
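
As a hedged illustration of the role such a simulation plays, the sketch below estimates the probability of success (POS) of a candidate plan by Monte Carlo; the drift and detection models are deliberately crude placeholders and do not reflect SAR Optimizer's internals.

    # Illustrative Monte Carlo estimate of the probability of success
    # (POS) of a search plan. Assumptions: plan[t] is the cell searched
    # in period t, drift_sample() returns a sampled object trajectory
    # (one cell per period), and detect_prob is the per-look detection
    # probability. All of this is a placeholder model.
    function estimate_pos(plan, drift_sample, detect_prob; nsim = 10_000)
        found = 0
        for _ in 1:nsim
            traj = drift_sample()
            p_miss = 1.0
            for (t, cell) in enumerate(traj)
                if plan[t] == cell          # a look at the object's cell
                    p_miss *= 1 - detect_prob
                end
            end
            found += rand() < 1 - p_miss
        end
        return found / nsim
    end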

16:45 - 17:10

Benchmarking Techniques for Bilevel Derivative-free Optimization

  • Valentin Dijon, speaker, GERAD - Polytechnique Montréal
  • Youssef Diouane, Polytechnique Montréal
  • Charles Audet, GERAD - Polytechnique Montréal

Bilevel derivative-free optimization (BL-DFO) concerns problems in which a main blackbox optimization problem, called the upper-level problem, is constrained by a nested optimization problem, called the lower-level problem. To be admissible, solutions of BL-DFO problems must satisfy both the inequality constraints and lower-level optimality. Nevertheless, the lower-level optimality condition is often relaxed, yielding non-admissible points and thus misleading, unfair comparisons. For this reason, comparing BL-DFO algorithms is delicate and requires current benchmarking techniques to be generalized.

The proposed benchmarking technique for bilevel problems relies on a referee-based procedure that challenges the admissibility of the points generated by a BL-DFO algorithm. This post-processing scheme revokes points proven to be non-admissible, resulting in filtered histories that can be used for algorithm comparisons. Additionally, several works count either upper-level or lower-level function calls as the numerical cost metric, whereas both should be aggregated to properly represent the overall effort deployed to solve a bilevel problem. A scaled evaluation counter is therefore introduced to account for the possibly different numerical costs of the leader and the follower, and to reflect more faithfully the overall cost of a BL-DFO algorithm.
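
As a minimal illustration of such a counter, assuming a weight w expressing the cost of one lower-level call relative to one upper-level call (the actual weighting scheme is the talk's to define):

    # Scaled evaluation counter (sketch): aggregate upper- and lower-level
    # blackbox calls into a single cost; the weight w is an assumption.
    scaled_evals(n_upper, n_lower; w = 0.1) = n_upper + w * n_lower

    scaled_evals(40, 600)   # 40 + 0.1 * 600 = 100.0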

The generalized benchmarking strategies are tested through data profiles on BOLIB problems using the open-source Julia programming language. The results illustrate the impact of both the refereeing procedure and the scaled evaluation counter on algorithmic performance.