18th INFORMS Computing Society (ICS) Conference
Toronto, Canada, 14–16 March 2025

Integrated Learning and Optimization
Mar 14, 2025 08:30 AM – 10:00 AM
Location: East Common
Chaired by Sebastien Andre-Sloan
3 Presentations
-
08:30 AM - 08:52 AM
Generative Differentiable Discrete Event Simulation: End-to-End Inference and Optimization
Discrete Event Simulation (DES) is a fundamental tool for modeling event-driven systems, including queuing networks, communication systems, and supply chains. Parameter estimation and calibration of DES are crucial tasks, traditionally relying on mathematical analysis and stationarity assumptions. In this work, we introduce a Generative Differentiable DES Framework that incorporates gradient-based optimization for parameter inference and control. Through case studies in queuing theory, we demonstrate its effectiveness by inferring service time distributions in M/M/1 and M/G/1 queues using gradient-based estimation techniques.
-
08:52 AM - 09:14 AM
Integrated Learning and Optimization for Congestion Management and Profit Maximization in Real-Time Electricity Market
We develop novel integrated learning and optimization (ILO) methods to solve economic dispatch (ED) and DC optimal power flow (DCOPF) problems for improved economic operation. ED optimization treats load as an unknown parameter, while DCOPF involves load and the power transfer distribution factor (PTDF) matrix as unknowns. PTDF reflects incremental variations in power flows due to transfers between regions, offering a linearized approximation of line power flows. Our ILO approach addresses post-hoc penalties in electricity markets and line congestion through ED and DCOPF formulations. By capturing real-time market and congestion behavior, the regret function trains unknown bus loads and PTDF matrices to achieve post-hoc goals. Compared to sequential learning and optimization (SLO), which prioritizes forecast accuracy, ILO emphasizes economic performance. Experiments demonstrate ILO’s effectiveness in minimizing penalties and congestion, significantly enhancing economic operation.
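The contrast the abstract draws between SLO (train for forecast accuracy, then optimize) and ILO (train directly on the decision's regret) can be seen in a deliberately tiny single-bus toy, far simpler than the ED/DCOPF formulations studied in the talk. Below, a scalar "forecast" is the dispatch decision, shortage is penalized ten times more than surplus, and ILO trains the forecast by subgradient descent on the post-hoc penalty; all penalty values and variable names are my own assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
loads = rng.normal(100.0, 10.0, 10000)   # realized bus loads (MW)

UNDER, OVER = 10.0, 1.0                  # post-hoc $/MW penalties: shortage vs surplus

def penalty(dispatch, load):
    """Asymmetric post-hoc penalty for dispatching `dispatch` against `load`."""
    short = np.maximum(load - dispatch, 0.0)
    surplus = np.maximum(dispatch - load, 0.0)
    return UNDER * short + OVER * surplus

# SLO: the forecast minimizing squared error is the sample mean;
# the dispatch then simply follows that forecast.
slo_dispatch = loads.mean()

# ILO: train the same scalar directly on mean regret (the penalty)
# by subgradient descent, ignoring forecast accuracy.
x = loads.mean()                          # warm start from the SLO forecast
for _ in range(3000):
    g = np.where(loads > x, -UNDER, OVER).mean()   # subgradient of mean penalty
    x -= 0.02 * g
ilo_dispatch = x

slo_cost = penalty(slo_dispatch, loads).mean()
ilo_cost = penalty(ilo_dispatch, loads).mean()
```

The ILO solution drifts above the mean toward the UNDER/(UNDER+OVER) quantile of the load distribution, trading forecast accuracy for a much lower expected penalty — the same economic-performance-over-accuracy trade the abstract describes at the scale of full ED and DCOPF problems.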
-
09:14 AM - 09:36 AM
Bigger PINNs Are Needed for Noisy PDE Training
Physics-Informed Neural Networks (PINNs) are increasingly used to solve various partial differential equations (PDEs), especially in high dimensions. In real-world applications, data samples are noisy, making it essential to understand the conditions under which a predictor can achieve a small empirical risk. In this work, we present a first-of-its-kind lower bound on the size of neural networks required for the supervised PINN empirical risk to fall below the variance of the noisy supervision labels. Specifically, we show that to achieve low training error, the number of trainable parameters must be bounded below by a quantity only slightly smaller than one parameter per training sample. Consequently, adding more noisy training data alone does not provide a “free lunch” in reducing empirical risk. We investigate PINNs applied to the Hamilton–Jacobi–Bellman (HJB) PDE as a case study. Our findings lay the groundwork for a program of rigorously quantifying parameter requirements for effective PINN training under noisy conditions.