18th INFORMS Computing Society (ICS) Conference
Toronto, Canada, March 14–16, 2025

Machine Learning Meets Discrete Optimization
March 14, 2025, 14:45 – 16:15
Room: East Common
Chaired by Erfan Rafieikia
4 presentations
-
14:45 - 15:07
Neural Heuristics for Mathematical Optimization via Value Function Approximation
Mathematical optimization is an invaluable framework for modeling and solving decision-making problems, with many successes in single-level deterministic settings (e.g., mixed-integer linear or nonlinear optimization). However, many real-world problems require accounting for uncertainty or the reaction of another agent. Paradigms such as stochastic programming, bilevel optimization, and robust optimization can model these situations but are much slower to solve than their deterministic counterparts, especially when discrete decisions must be made. This work demonstrates how a single framework based on value function approximation and optimizing over trained neural networks can be adapted to all three domains. Empirically, we find solutions of similar, and in some cases significantly better, quality than state-of-the-art algorithms in each field, often within a fraction of the running time. The datasets and three frameworks, Neur2SP (NeurIPS'22), Neur2RO (ICLR'24), and Neur2BiLO (NeurIPS'24), are open-sourced for further research.
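
To make the core idea concrete, here is a minimal sketch (not the authors' released code) of optimizing over a trained network: a one-hidden-layer ReLU surrogate for the value function is embedded into a MILP via the standard big-M encoding. All names, the cost vector c, and the big-M value are illustrative placeholders.

    # A minimal sketch, assuming a trained one-hidden-layer ReLU surrogate
    # (weights W1, b1, w2, b2 as numpy arrays) for the expected second-stage
    # value; names and the big-M value are illustrative, not the released code.
    import numpy as np
    import gurobipy as gp
    from gurobipy import GRB

    def minimize_over_surrogate(W1, b1, w2, b2, c, M=1e3):
        """Minimize c'x + NN(x) over binary first-stage decisions x."""
        n_hid, n_in = W1.shape
        m = gp.Model("value_fn_sketch")
        x = m.addVars(n_in, vtype=GRB.BINARY, name="x")    # first-stage decisions
        h = m.addVars(n_hid, lb=0.0, name="h")             # ReLU activations
        z = m.addVars(n_hid, vtype=GRB.BINARY, name="z")   # ReLU on/off indicators
        for j in range(n_hid):
            pre = gp.quicksum(W1[j, i] * x[i] for i in range(n_in)) + b1[j]
            m.addConstr(h[j] >= pre)                       # h matches pre-activation...
            m.addConstr(h[j] <= pre + M * (1 - z[j]))      # ...exactly when z[j] = 1
            m.addConstr(h[j] <= M * z[j])                  # h forced to 0 when z[j] = 0
        value = gp.quicksum(w2[j] * h[j] for j in range(n_hid)) + b2
        m.setObjective(gp.quicksum(c[i] * x[i] for i in range(n_in)) + value, GRB.MINIMIZE)
        m.optimize()
        return np.array([x[i].X for i in range(n_in)])

The big-M constant must upper-bound the magnitude of every pre-activation over the feasible domain; otherwise the encoding cuts off valid activation patterns.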
-
15:07 - 15:29
Learning to Optimize for Mixed-Integer Non-linear Programming
Mixed-integer nonlinear programs (MINLPs) are essential in fields like energy and transportation but are notoriously difficult to solve. Recent advances in learning to optimize have led to remarkable successes, including direct solution prediction for problems with continuous decision variables, which avoids the need for computationally expensive optimization algorithms. However, applying learning to MINLPs remains challenging, primarily due to the presence of integer decision variables, which complicate gradient-based learning. To overcome this, we introduce differentiable corrections that enforce integrality while preserving gradients, complemented by a soft penalty for constraint violations. This framework addresses both integrality and nonlinearity, enabling efficient learning-based solutions for parametric MINLPs. Experiments demonstrate that our approach outperforms traditional solvers and heuristics, especially as problem sizes grow, delivering high-quality solutions in milliseconds. By extending learning to optimize to MINLPs, we unlock new potential for integrating integer constraints into deep learning. Our code is available at https://github.com/pnnl/L2O-pMINLP.
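
As a hedged illustration of the general recipe (the exact correction layers in the paper and repository differ), a straight-through rounding operator keeps outputs integral in the forward pass while passing gradients through unchanged, and a penalty term softly enforces constraints:

    # A hedged sketch of the general recipe, not the exact L2O-pMINLP layers.
    import torch
    import torch.nn as nn

    class RoundSTE(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            return torch.round(x)      # integral forward output
        @staticmethod
        def backward(ctx, grad_out):
            return grad_out            # identity (straight-through) gradient

    class MINLPPolicy(nn.Module):
        """Maps instance parameters p to a mixed-integer solution (assumed sizes)."""
        def __init__(self, n_param, n_int, n_cont):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(n_param, 64), nn.ReLU(),
                                     nn.Linear(64, n_int + n_cont))
            self.n_int = n_int
        def forward(self, p):
            out = self.net(p)
            x_int = RoundSTE.apply(out[:, :self.n_int])    # corrected integer part
            return x_int, out[:, self.n_int:]

    def penalized_loss(obj_fn, g_fn, p, x_int, x_cont, rho=100.0):
        # objective value plus a soft penalty on violations of g(.) <= 0
        viol = torch.relu(g_fn(p, x_int, x_cont))
        return (obj_fn(p, x_int, x_cont) + rho * viol.sum(dim=-1)).mean()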
-
15:29 - 15:51
Optimization Over Trained Neural Networks: Taking a Relaxing Walk
Besides training, mathematical optimization is also used in deep learning to model and solve formulations over trained neural networks for purposes such as verification, compression, and optimization with learned constraints. However, solving these formulations quickly becomes difficult as the network size grows, due to the weak linear relaxation and the dense constraint matrix. Recent years have brought improvements through cutting-plane algorithms, reformulations, and a heuristic based on Mixed-Integer Linear Programming (MILP). In this work, we propose a more scalable heuristic based on exploring global and local linear relaxations of the neural network model. Our heuristic is competitive with a state-of-the-art MILP solver and the prior heuristic, and it produces better solutions as the input dimension, depth, and number of neurons increase.
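
A simplified sketch of the walking idea for a one-hidden-layer ReLU network (an assumed reduction of the actual heuristic, which also exploits the global relaxation and restarts): within a fixed activation pattern the network is linear, so each step solves an LP over the current linear region and moves to its optimum. All array names are illustrative.

    # Alternate between fixing the activation pattern at the current point
    # and solving the LP over that linear region to maximize the output.
    import numpy as np
    from scipy.optimize import linprog

    def local_walk(W1, b1, w2, b2, lb, ub, x0, max_steps=20):
        x = x0.copy()
        for _ in range(max_steps):
            z = (W1 @ x + b1 > 0).astype(float)        # active neurons at x
            # Inside this region the net is linear: f(x) = (w2*z) @ (W1 x + b1) + b2.
            c = -(w2 * z) @ W1                         # maximize f -> minimize -f
            # Region: (W1 x + b1)_j >= 0 where z_j = 1, and <= 0 where z_j = 0.
            A_ub = np.where(z[:, None] == 1.0, -W1, W1)
            b_ub = np.where(z == 1.0, b1, -b1)
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=list(zip(lb, ub)))
            if not res.success or np.allclose(res.x, x):
                break
            x = res.x                                  # walk to the region's optimum
        return x, w2 @ np.maximum(W1 @ x + b1, 0.0) + b2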
-
15:51 - 16:13
Learning-Based Cut Generation for Convex MINLP
In this talk, we present a novel approach to address the computational challenges of solving convex mixed-integer nonlinear programming (convex MINLP) problems, particularly for large-scale instances. Traditional methods such as outer approximation and cutting-plane techniques often require solving multiple subproblems at each iteration to generate the necessary cuts, which can be computationally prohibitive. Our approach leverages machine learning to generate approximate versions of these cuts without solving the subproblems directly, significantly improving efficiency. By embedding machine learning model training into the optimization process, we incorporate general subproblem features into the master problem to generate approximate cuts that effectively guide the search. Additionally, we prove theoretical bounds on the optimality gap of the resulting solutions, ensuring that the decisions remain robust with respect to the original objective function that accounts for exact cuts. We validate the efficacy of our method on real-world problem instances from public transportation, demonstrating its potential to tackle complex, large-scale optimization problems.
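
To illustrate the mechanics with an assumed interface (not the authors' implementation): in outer approximation, a cut linearizes a convex constraint g(x) <= 0 at a subproblem solution; here a learned regressor predicts the linearization point from features, so the subproblem is never solved exactly.

    # A hedged sketch: a scikit-learn-style regressor (assumed) predicts the
    # linearization point x_hat, and the outer-approximation cut is built
    # from g and its gradient at x_hat.
    import numpy as np

    def approximate_oa_cut(g, grad_g, predictor, features):
        """Return (a, b) for the cut a'x <= b approximating g(x) <= 0."""
        x_hat = predictor.predict(features.reshape(1, -1))[0]   # learned point
        grad = grad_g(x_hat)
        # Convexity gives g(x) >= g(x_hat) + grad'(x - x_hat), so enforcing the
        # linearization <= 0 yields the valid cut grad'x <= grad'x_hat - g(x_hat).
        return grad, grad @ x_hat - g(x_hat)

The returned pair (a, b) would then be added to the MILP master problem as the linear constraint a'x <= b, in place of a cut obtained from an exact subproblem solve.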