18th INFORMS Computing Society (ICS) Conference

Toronto, Canada, March 14–16, 2025

Inverse Optimization: Methods and Applications I

March 15, 2025, 16:45 – 18:15

Room: East Common

Chaired by Kimia Ghobadi

4 presentations

  • 16:45 – 17:07

    Approximations of the Inverse-Feasible Region For Discrete Optimization Problems

    • Jerry Chua, presenter, National University of Singapore
    • Ian Zhu, NUS Business School

    Given a feasible solution to an optimization problem, the inverse-feasible region (IFR) describes the set of objective function parameters that render this solution optimal. For linear and convex optimization problems, the inverse-feasible region of a solution can be defined using standard duality-based reformulations. However, the inverse-feasible region of discrete optimization problems is much more difficult to characterize.

    In this talk, we propose a novel approach for constructing inner approximations of the IFR that bypasses the computational bottlenecks of exact and outer approximation methods. Our approach, based on separating hyperplanes and continuous relaxations, leads to meaningful and improved inner approximations. We demonstrate our findings in a set of numerical experiments for solving inverse binary optimization problems.
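To make the duality-based reformulation mentioned in the abstract concrete, here is the standard characterization for a linear program; the notation is illustrative, not taken from the talk. For a forward problem $\min_x \{\, c^\top x : A x \ge b \,\}$ and a feasible point $x^{*}$, the IFR collects the cost vectors admitting a dual certificate:

```latex
\mathrm{IFR}(x^{*}) \;=\; \left\{\, c \;:\; \exists\, y \ge 0,\;\; A^{\top} y = c,\;\; y^{\top}\!\left(A x^{*} - b\right) = 0 \,\right\}
```

Dual feasibility together with complementary slackness certifies optimality of $x^{*}$, so the IFR of an LP is a polyhedral cone; once $x$ is restricted to be integer, no such compact description is available, which is the difficulty the talk addresses.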

  • 17:07 – 17:29

    From Inverse Optimization to Feasibility to ERM

    • Sharan Vaswani, presenter, Simon Fraser University
    • Saurabh Mishra, Simon Fraser University
    • Anant Raj, Indian Institute of Science

    Inverse optimization involves inferring unknown parameters of an optimization problem from known solutions and is widely used in fields such as transportation, power systems, and healthcare. We study the contextual inverse optimization setting that utilizes additional contextual information to better predict the unknown problem parameters. We focus on contextual inverse linear programming (CILP), addressing the challenges posed by the non-differentiable nature of LPs. For a linear prediction model, we reduce CILP to a convex feasibility problem allowing the use of standard algorithms such as alternating projections. The resulting algorithm for CILP is equipped with theoretical convergence guarantees without additional assumptions such as degeneracy or interpolation. Next, we reduce CILP to empirical risk minimization (ERM) on a smooth, convex loss that satisfies the Polyak-Lojasiewicz condition. This reduction enables the use of scalable first-order optimization methods to solve large non-convex problems while maintaining theoretical guarantees in the convex setting. Subsequently, we use the reduction to ERM to quantify the generalization performance of the proposed algorithm on previously unseen instances. Finally, we experimentally validate our approach on synthetic and real-world problems and demonstrate improved performance compared to existing methods.
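The alternating-projections step mentioned above can be sketched in a few lines. The two convex sets below (a hyperplane and the nonnegative orthant) are illustrative stand-ins, not the actual feasibility problem constructed in the talk.

```python
import numpy as np

def project_hyperplane(x, a, b):
    """Euclidean projection of x onto the hyperplane {z : a @ z = b}."""
    return x - ((a @ x - b) / (a @ a)) * a

def project_nonneg(x):
    """Euclidean projection of x onto the nonnegative orthant."""
    return np.maximum(x, 0.0)

def alternating_projections(a, b, x0, iters=500):
    """Alternate projections between the two convex sets; for closed
    convex sets with nonempty intersection, the iterates converge to a
    point in the intersection."""
    x = x0
    for _ in range(iters):
        x = project_nonneg(project_hyperplane(x, a, b))
    return x

# Toy instance: find x >= 0 with a @ x = b.
a = np.array([1.0, 2.0, 3.0])
b = 6.0
x = alternating_projections(a, b, np.array([-1.0, 0.5, 2.0]))
```

Each iteration is a cheap closed-form step, which is what makes the reduction to convex feasibility attractive for large instances.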

  • 17:29 – 17:51

    Inverse Constrained Markov Decision Processes with Unknown Rewards

    • Qi Luo, University of Iowa
    • Archis Ghate, University of Minnesota
    • Lu Liu, presenter, Clemson University

    Constrained Markov Decision Processes (CMDP) extend traditional MDPs by introducing additional constraints, typically evaluated through expectations. We focus on Inverse-CMDP, which estimates reward functions based on observed value functions or policies. Inverse-CMDP has a wide range of applications, such as resource allocation with equity constraints and imitation learning with safety constraints. We propose efficient algorithms to solve Inverse-CMDP problems, depending on the type of available information. Our numerical experiments demonstrate the effectiveness of these algorithms in solving inventory control and budget-constrained multi-armed bandit problems.

  • 17:51 – 18:13

    Inverse optimization with noisy data

    • Kimia Ghobadi, presenter, Johns Hopkins University

    For inverse optimization with noisy observations, models often balance a loss function against optimality-condition constraints to recover the unknown parameters of the forward problem. In this work, we instead develop non-asymptotic estimation error bounds. We address the challenges posed by non-convexity and the coupling of variables through penalty methods. Our results provide insights into the trade-offs between model fidelity and computational tractability.
