18th INFORMS Computing Society (ICS) Conference
Toronto, Canada, March 14 – 16, 2025

Inverse Optimization: Methods and Applications II
March 16, 2025, 16:45 – 18:15
Room: Music Room
Chaired by Archis Ghate
4 presentations
-
16:45 – 17:07
Convex Inverse Approximate Mixed-Integer Linear Programming
The goal in inverse mixed-integer linear programming (I-MILP) is to infer the objective of a forward mixed-integer linear program (MILP) from one of its optimal solutions and an initial estimate of the objective. The standard modeling framework for I-MILP has a few limitations: the data presumably used to construct the initial estimate of the objective is not exploited downstream, the given solution to the forward MILP is often not optimal in practice, and I-MILPs are computationally challenging. We propose a new convex inverse approximate MILP (CIA-MILP) and develop a precise understanding of the potential advantages of CIA-MILP over I-MILP. By studying the geometry and solution structure of both problems and properties of their estimates, we show that CIA-MILPs can be formulated as convex programs and develop an algorithm for solving large-scale instances of the problem.
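As a point of reference for the abstract above, the sketch below illustrates the classical inverse-LP idea on a toy continuous problem: given a forward LP, an observed solution, and a prior estimate of the objective, a cost vector close to the prior is imputed so that the observed solution is optimal, with optimality certified through LP duality. This is a generic illustration in cvxpy, not the authors' CIA-MILP formulation; the data (A, b, x0, c_hat) are invented for the example.

```python
# Illustrative sketch (not the authors' CIA-MILP): classical inverse linear
# programming with cvxpy.  Forward LP:  min c^T x  s.t.  A x >= b, x >= 0.
# Given an observed solution x0 and a prior estimate c_hat, we look for a
# cost vector c close to c_hat under which x0 is optimal.  Optimality is
# encoded via LP duality: a dual y >= 0 with A^T y <= c and c^T x0 = b^T y.
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])
x0 = np.array([1.6, 1.2])          # observed (assumed optimal) solution
c_hat = np.array([1.0, 1.0])       # prior estimate of the objective

c = cp.Variable(2)
y = cp.Variable(2, nonneg=True)

constraints = [
    A.T @ y <= c,                  # dual feasibility
    c @ x0 == b @ y,               # strong duality at x0
]
prob = cp.Problem(cp.Minimize(cp.norm(c - c_hat, 1)), constraints)
prob.solve()
print("imputed objective:", c.value)
```

In the integer setting the same idea requires certifying optimality of x0 over the integer feasible set, which is what makes I-MILPs hard and motivates the convex approximation studied in the talk.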
-
17:07 – 17:29
Integer Optimization Through Transformers: A Case Study on Knapsack Problems with Unknown Reward Functions
We present an end-to-end framework for generating solutions to integer optimization problems with unknown components using transformer-based generative neural networks. Our framework learns directly from past solutions and incorporates the known components, such as hard constraints, via a constraint reasoning module, yielding a constrained learning scheme. The trained model generates new solutions that are similar to past solutions and are guaranteed to respect the characteristics of the known components. As a case study, we apply our approach to knapsack problems with unknown reward functions. Our experimental evaluation shows that transformer models perform remarkably well and often produce near-optimal solutions in a fraction of a second. Compared with an LSTM-based alternative and with inverse optimization, they are particularly effective when the underlying objective functions are more complex.
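The constraint-masking idea, generating only solutions that respect the known hard constraints, can be sketched as follows. The `item_scores` function is a hypothetical stand-in for a trained generative model's next-item logits (a real model would condition on the partial solution); the masking step is what guarantees feasibility of every generated knapsack solution. This is an illustrative sketch, not the authors' implementation.

```python
# Minimal sketch of sequential, constraint-masked generation for a knapsack.
# Items whose weight exceeds the remaining capacity are masked out at each
# step, so every generated solution is feasible by construction.
import numpy as np

rng = np.random.default_rng(0)
weights = np.array([4.0, 3.0, 2.0, 5.0, 1.0])
capacity = 7.0

def item_scores(selected):
    """Hypothetical stand-in for the learned model's next-item logits;
    a real model would condition on the partial solution `selected`."""
    return rng.normal(size=len(weights))

selected = np.zeros(len(weights))
remaining = capacity
while True:
    # Constraint reasoning: only unselected items that still fit are allowed.
    feasible = (selected == 0) & (weights <= remaining)
    if not feasible.any():
        break
    logits = np.where(feasible, item_scores(selected), -np.inf)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    pick = rng.choice(len(weights), p=probs)
    selected[pick] = 1.0
    remaining -= weights[pick]

print("generated feasible solution:", selected,
      "used capacity:", capacity - remaining)
```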
-
17:29 – 17:51
Expert-Guided Inverse Optimization for Convex Constraint Inference
Conventional inverse optimization inputs a solution and finds the parameters of an optimization model that render that solution optimal. The literature mostly focuses on inferring the objective function in linear problems when accepted solutions are provided as input. We propose an inverse optimization model that inputs several accepted and rejected solutions and recovers the underlying convex optimization model that can be used to generate such solutions. The novelty of our model is twofold. First, we focus on inferring the parameters of the underlying convex feasible region. Second, the proposed model learns the convex constraint set from a set of past observations that are either accepted or rejected by an expert. The resulting inverse model is a mixed-integer nonlinear problem that is difficult to solve. To mitigate this complexity, we employ variational inequalities and the theoretical properties of the solutions to derive a reduced formulation that retains the complexity of its forward counterpart. Using realistic breast cancer patient data, we demonstrate that our inverse model can use a subset of past accepted and rejected treatment plans to infer clinical criteria that lead to nearly guaranteed acceptable treatment plans for future patients.
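One simple way to make the accepted/rejected setup concrete, though not the authors' formulation, is to fit a convex region as the sublevel set of a convex quadratic: the membership constraints are linear in the parameters (Q, a, b), so the fit is a small semidefinite program. The synthetic accepted and rejected points below are invented for illustration.

```python
# Illustrative sketch (not the authors' model): recover a convex region
# {x : x^T Q x + a^T x + b <= 0} that contains the accepted points and
# excludes the rejected points by a margin.  The constraints are linear in
# (Q, a, b), so this is a small SDP in cvxpy.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
accepted = rng.normal(0.0, 0.5, size=(20, 2))         # inside the true region
rejected = rng.normal(0.0, 0.5, size=(20, 2)) + 3.0   # outside the true region

Q = cp.Variable((2, 2), PSD=True)
a = cp.Variable(2)
b = cp.Variable()

def score(x):
    # Affine in (Q, a, b) because x is a constant data point.
    return x @ Q @ x + a @ x + b

constraints = [score(x) <= 0 for x in accepted] + \
              [score(x) >= 1 for x in rejected]        # margin for rejected
# Keep the quadratic term small: a flatter region is preferred when it suffices.
prob = cp.Problem(cp.Minimize(cp.trace(Q)), constraints)
prob.solve()
print("inferred constraint parameters:", Q.value, a.value, b.value)
```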
-
17:51 – 18:13
Inverse Optimization in Countable-State Markov Decision Processes
Inverse optimization involves finding values of problem parameters that would render given values of the decision variables optimal. This is in contrast with the usual forward optimization, where decision variables are computed from given parameter values. In Markov decision processes (MDPs), the literature has focused on imputing two types of inputs that would make a given policy optimal: (i) cost parameters or (ii) transition probabilities. All existing work considers only finite-state MDPs. We will present an extension to countable-state MDPs. We will rely on conditions under which the inverse formulations can be written as mathematical programs with countably infinitely many variables and constraints. We will present computational methods to solve these mathematical programs.
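For intuition, the finite-state version of the cost-imputation problem can be written as a linear program: impute costs close to a prior under which a given policy satisfies the Bellman optimality conditions, which are linear in the costs and the value function. The sketch below, with made-up data, covers this finite-state case only; the talk concerns the extension to countable-state MDPs.

```python
# Finite-state illustration of inverse MDP cost imputation: find costs c(s, a)
# close to a prior c_hat under which a given deterministic policy pi is
# optimal in a discounted MDP.  The Bellman conditions are linear in (c, v).
import cvxpy as cp
import numpy as np

nS, nA, gamma = 3, 2, 0.9
rng = np.random.default_rng(2)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = transition distribution
c_hat = rng.uniform(0.0, 1.0, size=(nS, nA))    # prior cost estimate
pi = np.array([0, 1, 0])                        # given policy to rationalize

c = cp.Variable((nS, nA))
v = cp.Variable(nS)

constraints = []
for s in range(nS):
    for a in range(nA):
        q_sa = c[s, a] + gamma * (P[s, a] @ v)
        if a == pi[s]:
            constraints.append(v[s] == q_sa)    # v is the value of pi
        else:
            constraints.append(v[s] <= q_sa)    # pi's action is (weakly) best

prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(c - c_hat))), constraints)
prob.solve()
print("imputed costs:\n", c.value)
```

In the countable-state setting discussed in the talk, the analogous programs have countably infinitely many variables and constraints, which is what necessitates the specialized computational methods the speaker will present.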