WA5 - Derivative-free and blackbox optimization 2
May 13 2026 09:00 – 10:40
Location: Lise Birikundavyi - Lionel Rey (blue)
Chaired by Xavier Lebeuf
4 Presentations
A Riemannian Framework for Feasible Derivative-free Optimization with Equality Constraints
Solving derivative-free optimization problems with equality constraints is especially challenging when infeasible iterates may cause objective function evaluations to fail. When these constraints are known beforehand, knowledge of the geometry of the feasible set can help guide the optimization in a parametrization-like approach. When this set is a Riemannian manifold, subproblems can be formulated in its tangent spaces and the iterates brought back to the manifold by a so-called retraction. This approach implicitly enforces dimensionality reduction while yielding only feasible iterates. While this fact is known in the literature, the main contribution of this work is a formal Riemannian framework, compatible with any derivative-free solver, that automates the computation of tangent spaces and the solution of the subproblems. When used with a Mesh Adaptive Direct Search solver, we prove that this framework yields theoretical guarantees in terms of Clarke's generalized derivatives in the tangent spaces.
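As an illustration of the retraction idea (not the paper's framework, whose details are not given in the abstract), here is a minimal sketch on the simplest manifold, the unit sphere: poll steps are taken in the tangent space and mapped back by a retraction, so every iterate is feasible. The solver below is a plain direct search, not MADS, and all names are invented.

```python
import numpy as np

def retract(x, v):
    """Metric-projection retraction for the unit sphere: map the tangent
    vector v at x back onto the manifold {x : ||x|| = 1}."""
    y = x + v
    return y / np.linalg.norm(y)

def tangent_basis(x):
    """Orthonormal basis of the tangent space at x (the subspace orthogonal
    to x), read off the SVD of the projector I - x x^T."""
    n = x.size
    U, _, _ = np.linalg.svd(np.eye(n) - np.outer(x, x))
    return U[:, : n - 1]

def sphere_direct_search(f, x0, step=0.5, tol=1e-6, max_iter=500):
    """Direct search constrained to the sphere: poll along +/- tangent basis
    directions, retract every trial point, halve the step when no poll
    point improves. All iterates stay exactly feasible."""
    x = np.asarray(x0, float)
    x /= np.linalg.norm(x)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        B = tangent_basis(x)
        improved = False
        for d in np.hstack([B, -B]).T:
            y = retract(x, step * d)
            fy = f(y)
            if fy < fx:
                x, fx, improved = y, fy, True
                break
        if not improved:
            step *= 0.5
    return x, fx

# Minimize the linear function <c, x> on the unit sphere; the exact
# minimizer is -c/||c|| with value -||c|| = -3.
c = np.array([1.0, 2.0, 2.0])
x_star, f_star = sphere_direct_search(lambda x: c @ x, np.array([1.0, 0.0, 0.0]))
```

Because the retraction renormalizes every trial point, the equality constraint ||x|| = 1 holds exactly at all iterates, which is the feasibility property the abstract emphasizes.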
Adaptive direct search algorithms with relaxable and quantifiable constraints
This talk introduces ADS-PB, an extension of the Adaptive Direct Search (ADS) framework to constrained blackbox optimization problems with quantifiable and relaxable constraints. The problems considered involve minimizing an objective function subject to inequality constraints, where both the objective and constraint functions are available only through blackbox evaluations. In contrast with the extreme barrier approach used in ADS, the proposed framework incorporates a progressive barrier mechanism, allowing infeasible points to be evaluated and exploited through a measure of constraint violation. ADS-PB is thus suited to problems where constraint violations can be quantified and progressively reduced. The resulting algorithm does not rely on mesh structures or sufficient decrease conditions, and a convergence analysis is provided under standard assumptions of derivative-free optimization. Its performance is assessed on constrained test problems and compared with existing methods. The integration within the NOMAD software is also discussed.
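The progressive-barrier idea can be sketched independently of ADS-PB (whose internals are not given in the abstract). The toy below, with invented names and a single incumbent rather than the usual feasible/infeasible pair, evaluates infeasible points and exploits them through an aggregate violation measure h that is progressively driven to zero.

```python
import numpy as np

def violation(g_values):
    """Constraint violation h(x): sum of squared positive parts of the
    inequality constraints g_i(x) <= 0; h(x) = 0 iff x is feasible."""
    return float(sum(max(0.0, g) ** 2 for g in g_values))

def pb_direct_search(f, g, x0, h_max=1e3, step=1.0, tol=1e-6, max_iter=300):
    """Toy direct search with a progressive barrier: infeasible trial points
    are allowed as long as their violation stays below the threshold h_max,
    which is tightened to the incumbent's violation after each success."""
    x = np.asarray(x0, float)
    fx, hx = f(x), violation(g(x))
    n = x.size
    while step >= tol and max_iter > 0:
        max_iter -= 1
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            y = x + step * d
            fy, hy = f(y), violation(g(y))
            # Accept y if it respects the barrier and reduces either the
            # violation or, at equal violation, the objective.
            if hy <= h_max and (hy < hx or (hy == hx and fy < fx)):
                x, fx, hx, improved = y, fy, hy, True
                break
        if improved:
            h_max = hx            # progressively tighten the barrier
        else:
            step *= 0.5           # no improvement: refine the step
    return x, fx, hx

# Minimize x1 + x2 subject to x1^2 + x2^2 <= 1, starting far outside the
# feasible region: early iterates are infeasible but progressively less so.
x, fx, hx = pb_direct_search(lambda x: x[0] + x[1],
                             lambda x: [x[0] ** 2 + x[1] ** 2 - 1.0],
                             [2.0, 2.0])
```

In this run the early iterates are infeasible (h > 0) and the threshold shrinks as better points are found; with only coordinate poll directions the toy stalls at the boundary point (-1, 0) rather than the constrained optimum, one reason practical progressive-barrier methods use richer direction sets.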
AI-Driven Decision Support Tools for Optimal Maintenance Scheduling in Industrial Processes
Industrial systems are increasingly complex and interconnected, such that local equipment degradation can propagate through the plant, leading to increased energy consumption, unplanned downtime, and production losses. This paper presents an AI-driven decision support tool for proactive maintenance scheduling, based on a hybrid Machine Learning–Operational Research (ML–OR) approach. The proposed framework integrates condition-based monitoring with prognosis to predict future asset performance indicators, including efficiency loss, energy penalties, and remaining useful life. A key contribution is the explicit modeling of asset performance interdependencies, enabling the quantification of global system impacts resulting from local asset degradation.
To address the computational challenges posed by non-linear, non-differentiable model behavior, machine-learning surrogate models are developed to approximate the energy and operational cost functions. These surrogates are embedded within derivative-free optimization techniques, allowing efficient exploration of complex, combinatorial maintenance scheduling spaces under realistic industrial constraints such as limited maintenance resources, production interruptions, and economic trade-offs. Maintenance actions, including cleaning, are represented as distinct decision variables and coordinated across assets over a planning horizon.
The approach is demonstrated on a representative heat recovery network, where heat exchanger fouling drives increased steam consumption. Results show that the proposed ML–OR framework outperforms rule-based and condition-based strategies, achieving measurable reductions in total energy and maintenance costs.
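The surrogate-in-the-loop pattern described above can be sketched in a few lines. Everything below is invented for illustration: a four-asset toy cost with one pairwise interdependency, a least-squares quadratic surrogate, and brute-force enumeration standing in for a DFO solver over the binary cleaning schedule.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plant: 4 assets, each cleaned (1) or not (0) this period.
# "True" cost = maintenance cost minus recovered energy, with one coupling
# term between assets 0 and 1 (all coefficients invented).
energy_penalty = np.array([5.0, 3.0, 4.0, 2.0])   # energy saved if cleaned
maint_cost = np.array([1.5, 1.0, 2.0, 0.5])       # cost of cleaning
coupling = 0.8                                     # extra saving if 0 and 1 are both cleaned

def true_cost(schedule):
    s = np.asarray(schedule, float)
    return float(maint_cost @ s - energy_penalty @ s - coupling * s[0] * s[1])

# Fit a linear-plus-pairwise surrogate to a handful of sampled schedules.
def features(s):
    s = np.asarray(s, float)
    pairs = [s[i] * s[j] for i in range(len(s)) for j in range(i + 1, len(s))]
    return np.concatenate([[1.0], s, pairs])

samples = rng.integers(0, 2, size=(12, 4))
X = np.array([features(s) for s in samples])
y = np.array([true_cost(s) for s in samples])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def surrogate(s):
    return float(features(s) @ coef)

# Search the (small) combinatorial schedule space using only the cheap
# surrogate; in a real setting a DFO solver explores this space instead.
best = min(itertools.product([0, 1], repeat=4), key=surrogate)
```

In practice the candidate schedule found on the surrogate would be verified with a handful of true (expensive) evaluations before being committed.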
Two-Stage Blackbox Optimization with Optimal Transport-Based Sub-Sampling
This work emerges from an asset management problem at Hydro-Québec, in which a stochastic blackbox optimization (BBO) problem has prohibitive function evaluation costs. A structural property of the blackbox problem is exploited to decompose function evaluations into a fast-to-evaluate stochastic blackbox (stage 1) and an expensive deterministic blackbox (stage 2). We propose an optimal transport-based sub-sampling strategy that leverages this decomposition to obtain accurate function approximations with few calls to the stage 2 blackbox. This strategy can be integrated into various BBO solvers for stochastic problems.
To test this method, we introduce MicroPRIAD, a new stochastic benchmark problem that replicates the properties of Hydro-Québec's asset management problem. Experimental results show that the proposed sub-sampling technique significantly improves optimization efficiency.
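The abstract does not detail the transport formulation, but in one dimension optimal transport between empirical distributions reduces to matching sorted values, which suggests a simple quantile-based sub-sampling sketch (all names and the toy stage-2 map below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def w2_to_reps(samples, reps):
    """Squared 2-Wasserstein distance between the empirical law of `samples`
    and the uniform law on `reps`: 1-D optimal transport matches sorted
    blocks of samples to sorted representatives (len(reps) must divide
    len(samples))."""
    s, r = np.sort(samples), np.sort(reps)
    blocks = s.reshape(len(r), -1)
    return float(np.mean((blocks - r[:, None]) ** 2))

def quantile_subsample(samples, k):
    """Pick k representatives at evenly spaced quantiles of the stage-1
    output; in 1-D these are near-optimal transport representatives."""
    qs = (np.arange(k) + 0.5) / k
    return np.quantile(samples, qs)

def two_stage_estimate(stage1_draws, stage2, k):
    """Approximate the expectation of stage2 over stage-1 scenarios with
    only k expensive stage-2 calls, applied to the sub-sample."""
    reps = quantile_subsample(stage1_draws, k)
    return float(np.mean([stage2(r) for r in reps]))

# Toy stand-in: stage 1 draws scenarios cheaply, stage 2 is an 'expensive' map.
draws = rng.normal(loc=2.0, scale=1.0, size=2000)
expensive_stage2 = lambda s: s ** 2        # E[S^2] = 1 + 2^2 = 5 for N(2, 1)

approx = two_stage_estimate(draws, expensive_stage2, k=20)   # 20 stage-2 calls
full = float(np.mean(expensive_stage2(draws)))               # 2000 stage-2 calls
```

The 20 quantile representatives stand in for the 2000 stage-1 scenarios, so the expensive stage-2 blackbox is called 100 times less often while the estimate stays close to the full average.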
