18th INFORMS Computing Society (ICS) Conference

Toronto, Canada, 14–16 March 2025


The Intersection of Modeling and Optimization with Machine Learning

Mar 16, 2025 01:00 PM – 02:30 PM

Location: South Sitting

Chaired by Joaquim Dias Garcia

3 Presentations

  • 01:00 PM - 01:22 PM

    PyEPO: A Predict-then-Optimize Library for Linear and Integer Programming

    • Bo Tang, presenter, University of Toronto
    • Elias Khalil, Polytechnique Montreal / University of Toronto

    In deterministic optimization, it is typically assumed that all problem parameters are fixed and known. In practice, however, some parameters may be a priori unknown but can be estimated from contextual information. A typical predict-then-optimize approach separates prediction and optimization into two distinct stages. Recently, end-to-end predict-then-optimize has emerged as an attractive alternative. This work introduces the PyEPO package, a PyTorch-based end-to-end predict-then-optimize library in Python. To the best of our knowledge, PyEPO is the first such generic tool for linear and integer programming with predicted objective function coefficients. It includes various algorithms such as surrogate decision losses, black-box solvers, and perturbation-based methods. PyEPO offers a user-friendly interface for defining new optimization problems, applying state-of-the-art algorithms, and using custom neural network architectures. We conducted experiments comparing various methods on problems such as the Shortest Path, the Multiple Knapsack, and the Traveling Salesperson Problem, discussing empirical insights that may guide future research. PyEPO and its documentation are available at https://github.com/khalil-research/PyEPO.
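
    As a sketch of the workflow described above, the snippet below follows the usage pattern in the PyEPO documentation for the Shortest Path problem with the SPO+ surrogate loss; the hyperparameters and the linear predictor are illustrative choices, and the Gurobi-backed model class assumes a gurobipy installation.

    ```python
    # Minimal sketch (assumes pyepo, torch, and gurobipy are installed;
    # the hyperparameters below are illustrative, not recommendations).
    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    import pyepo

    grid = (5, 5)   # 5x5 shortest-path grid
    num_feat = 5    # contextual features per instance

    # Synthetic features x and true cost vectors c.
    x, c = pyepo.data.shortestpath.genData(1000, num_feat, grid, deg=4, noise_width=0.5)

    # Optimization model whose objective coefficients will be predicted.
    optmodel = pyepo.model.grb.shortestPathModel(grid)

    # optDataset solves each instance once to cache true solutions and objectives.
    dataset = pyepo.data.dataset.optDataset(optmodel, x, c)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)

    # SPO+ decision loss (one of the surrogate losses mentioned above).
    spop = pyepo.func.SPOPlus(optmodel, processes=1)

    # Simple linear predictor from features to the 40 arc costs of the 5x5 grid.
    num_arcs = (grid[0] - 1) * grid[1] + (grid[1] - 1) * grid[0]
    predmodel = nn.Linear(num_feat, num_arcs)
    optimizer = torch.optim.Adam(predmodel.parameters(), lr=1e-2)

    # End-to-end training: gradients flow back through the decision loss.
    for epoch in range(5):
        for xb, cb, wb, zb in loader:  # features, costs, solutions, objectives
            loss = spop(predmodel(xb), cb, wb, zb)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    ```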

  • 01:22 PM - 01:44 PM

    Extending DiffOpt.jl for Non-Convex Differentiable Optimization

    • Andrew Rosemberg, presenter, Georgia Tech
    • Joaquim Dias Garcia, PSR
    • Robert Parker, Los Alamos National Laboratory
    • Pascal Van Hentenryck, Georgia Tech
    • Russell Bent, Los Alamos National Laboratory
    • Kaarthik Sundar, Los Alamos National Laboratory
    • François Pacaud, Mines Paris

    The growing integration of optimization with gradient-based learning highlights the need for differentiable optimization techniques that can handle real-world complexity. Existing tools like DiffOpt.jl excel at differentiating convex problems but struggle with the non-convex problems encountered in more intricate applications. This work bridges that gap by extending DiffOpt.jl to support non-convex optimization, broadening its applicability.
    By leveraging the implicit function theorem, we differentiate through the Karush-Kuhn-Tucker (KKT) conditions to compute sensitivities with respect to input parameters. Our approach benefits from DiffOpt.jl's integration with cutting-edge solvers, including GPU-accelerated options, and its seamless compatibility with Julia's high-performance ecosystem. These advancements make it ideal for fields such as machine learning, engineering, and finance, where decision-making often involves non-convex optimizations.
    Our results demonstrate the robustness and flexibility of this method, highlighting its utility in differentiable programming pipelines. This presentation will cover the theoretical foundations, implementation challenges, and performance benchmarks of the proposed approach.
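
    For context, the sensitivity computation named in the abstract can be summarized by the standard implicit-function-theorem argument below (a textbook statement of the technique, not DiffOpt.jl's internal notation). For a parametric program, stack the KKT conditions into a residual F and differentiate the identity F(z*(p); p) = 0 with respect to the parameters p:

    ```latex
    % Parametric program and its KKT residual (z collects primal and dual variables)
    \min_{x}\ f(x;p) \quad \text{s.t.} \quad g(x;p) \le 0, \quad h(x;p) = 0

    F(z;p) =
    \begin{pmatrix}
    \nabla_x f(x;p) + \nabla_x g(x;p)^{\top}\lambda + \nabla_x h(x;p)^{\top}\nu \\
    \operatorname{diag}(\lambda)\, g(x;p) \\
    h(x;p)
    \end{pmatrix},
    \qquad z = (x, \lambda, \nu)

    % Implicit function theorem applied to F(z^{\star}(p); p) = 0:
    \frac{\partial z^{\star}}{\partial p}
      = -\left(\frac{\partial F}{\partial z}\right)^{-1} \frac{\partial F}{\partial p}
    ```

    The formula is valid when the KKT Jacobian \(\partial F / \partial z\) is nonsingular at the solution, which holds under standard regularity assumptions (e.g., LICQ, strict complementarity, and second-order sufficiency).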

  • 01:44 PM - 02:06 PM

    ApplicationDrivenLearning.jl: A High-Performance Library for Training Predictive Models Based on the Application Cost

    • Joaquim Dias Garcia, presenter, PSR

    Application Driven Learning is a "closed-loop" framework that integrates predictive-model training directly with the decision-making process, optimizing models specifically for the application context.
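
    Stated generically (the notation here is illustrative, not the package's exact API), closed-loop training chooses model parameters by the realized application cost of the decisions the forecast induces, rather than by a forecast-error metric:

    ```latex
    \min_{\theta}\ \mathbb{E}_{(x,y)}\!\left[\, C\big(z^{\star}(f_{\theta}(x)),\, y\big) \right],
    \qquad
    z^{\star}(\hat{y}) \in \arg\min_{z \in \mathcal{Z}}\ c(z; \hat{y})
    ```

    Here \(f_{\theta}\) is the predictive model, \(z^{\star}(\hat{y})\) the decision taken under forecast \(\hat{y}\), and \(C\) the cost realized once the true outcome \(y\) is observed; a standard two-stage approach would instead minimize a forecast loss such as \(\mathbb{E}\,\lVert f_{\theta}(x) - y \rVert^{2}\).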

    We present ApplicationDrivenLearning.jl, a high-performance Julia package that enables efficient experimentation with and implementation of the closed-loop framework, particularly for large-scale decision-making problems. The package allows users to apply a novel gradient-based heuristic as well as the two original methods: a Nelder-Mead-based heuristic and bilevel optimization. Moreover, the heuristics have been parallelized, allowing users to optimize their models on high-performance computing (HPC) clusters.

    To demonstrate the usage of the package, we present a case study contrasting the multiple implementations available to users.
