18th INFORMS Computing Society (ICS) Conference
Toronto, Canada, March 14–16, 2025

Distributionally Robust Optimization Approaches
March 15, 2025, 16:45 – 18:15
Room: Debates
Chaired by Beste Basciftci
4 presentations
-
16:45 - 17:07
Distributionally Robust Performative Optimization
In this work, we propose a general distributionally robust framework for performative optimization, where the selected decision can influence the probabilistic distribution of uncertain parameters. Our framework facilitates safe decision-making in scenarios with incomplete information about the underlying decision-dependent distributions, relying instead on accessible reference distributions. To tackle the challenge of decision-dependent uncertainty, we introduce an algorithm named repeated robust risk minimization. This algorithm decouples the decision variables associated with the ambiguity set from the expected loss, optimizing the latter at each iteration while keeping the former fixed to the previous decision. By leveraging the strong connection between distributionally robust optimization and regularization, we establish a linear convergence rate to a performatively stable point and provide a suboptimality performance guarantee for the proposed algorithm. Finally, we examine the performance of our proposed model through an experimental study in strategic classification.
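As a rough illustration of the repeated robust risk minimization loop described above, the sketch below fixes the decision-dependent reference distribution at the previous decision, re-optimizes a regularized empirical loss as a stand-in for the worst-case expectation (using the DRO/regularization connection), and repeats. The quadratic loss, the Gaussian reference distribution `sample_reference`, and the norm penalty are illustrative assumptions, not the authors' model.

```python
# Minimal sketch of a repeated-robust-risk-minimization-style iteration:
# the reference distribution is anchored at the previous decision, and only
# the expected-loss decision is re-optimized at each step.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def sample_reference(x_prev, n=2000):
    """Reference distribution of the uncertain parameter, shifted by the previous decision."""
    return rng.normal(loc=1.0 + 0.5 * x_prev, scale=1.0, size=n)

def robust_risk(x, xi, eps=0.1):
    """Empirical loss plus a norm penalty standing in for the worst-case expectation."""
    x = np.atleast_1d(x)[0]
    return np.mean((xi - x) ** 2) + eps * abs(x)

x = 0.0
for t in range(30):                                    # iterates converge linearly toward a
    xi = sample_reference(x)                           # performatively stable point
    res = minimize(robust_risk, x0=np.array([x]), args=(xi,))
    x = float(res.x[0])

print("approximate performatively stable decision:", round(x, 3))
```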
-
17:07 - 17:29
Fractional Distributionally Robust Optimization with Moment-Based and Wasserstein-Based Ambiguity Sets
We study a novel class of fractional distributionally robust optimization problems, namely P-DRO, which maximizes the worst-case probability of exceeding a reward-risk ratio. We consider two types of ambiguity sets, moment-based and Wasserstein-based, and two types of support: uncertain probabilities and a continuum of realizations. Under specific conditions on the reward-risk functions, we develop new convex and biconvex reformulations for each type of ambiguity set. We derive a new variant of Dinkelbach's algorithm to solve the biconvex reformulations and illustrate the efficiency and scalability of the proposed method on a portfolio optimization problem.
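For readers unfamiliar with Dinkelbach-type schemes, the sketch below shows only the classical Dinkelbach iteration for a generic reward-risk ratio maximization, not the paper's new variant or its DRO reformulations: repeatedly solve the parametric problem max_x f(x) - lam*g(x) and update lam with the achieved ratio until it stabilizes. The sample-mean reward, mean-absolute-deviation risk, and simplex feasible set are illustrative assumptions.

```python
# Classical Dinkelbach iteration for max_w reward(w) / risk(w) over a simplex.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
R = rng.normal(0.05, 0.1, size=(200, 4))            # sampled asset returns

def reward(w):                                      # f(w): expected portfolio return
    return R.mean(axis=0) @ w

def risk(w):                                        # g(w) > 0: mean absolute deviation
    return np.mean(np.abs(R @ w - R.mean(axis=0) @ w)) + 1e-8

simplex = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
bounds = [(0.0, 1.0)] * 4
w = np.full(4, 0.25)
lam = reward(w) / risk(w)

for _ in range(30):
    # parametric subproblem: max_w reward(w) - lam * risk(w)
    res = minimize(lambda v: -(reward(v) - lam * risk(v)), w,
                   bounds=bounds, constraints=simplex)
    w = res.x
    lam_new = reward(w) / risk(w)
    if abs(lam_new - lam) < 1e-8:                   # Dinkelbach stopping rule
        break
    lam = lam_new

print("optimal reward-risk ratio (approx.):", round(lam, 4))
```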
-
17:29 - 17:51
Residuals-Based Contextual Distributionally Robust Optimization with Decision-Dependent Uncertainty
We consider a residuals-based distributionally robust optimization model, where the uncertainty depends on covariate information and our decisions. We adopt regression models to learn the decision dependency and construct a nominal distribution, and thereby ambiguity sets, around the learned model using the empirical residuals from the regressions. Ambiguity sets can be formed via the Wasserstein distance, a sample robust approach, or with the same support as the nominal empirical distribution (e.g., phi-divergences), where the nominal distribution and the radii of the ambiguity sets can be decision- and covariate-dependent. We provide conditions under which desired statistical properties, such as asymptotic optimality, rates of convergence, and finite sample guarantees, are satisfied. Via cross-validation, we devise data-driven approaches to find the best radii for different ambiguity sets, which can be decision-(in)dependent and covariate-(in)dependent. Through numerical experiments, we illustrate the effectiveness of our approach and the benefits of integrating decision dependency into a residuals-based DRO framework.
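The residuals-based construction and the cross-validated choice of radius can be sketched as follows, under illustrative assumptions: a linear regression of demand on covariates, a newsvendor loss, and a simple "shift every residual scenario by the radius" surrogate for the worst case. The paper's ambiguity sets, decision dependency, and statistical guarantees are not reproduced here.

```python
# Sketch: build a nominal scenario set from regression residuals and pick the
# ambiguity radius by cross-validated out-of-sample cost.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(300, 3))               # covariates
y = 20 + 10 * X[:, 0] + rng.normal(0.0, 2.0, 300)      # uncertain demand

def cost(q, demand, c=1.0, p=3.0):                     # newsvendor loss
    return c * q + p * np.maximum(demand - q, 0.0)

def robust_decision(x0, model, residuals, radius):
    # nominal distribution: prediction at x0 plus empirical residuals, with each
    # scenario shifted adversarially by the radius (a crude worst-case surrogate)
    scenarios = model.predict(x0.reshape(1, -1))[0] + residuals + radius
    grid = np.linspace(scenarios.min() - 1, scenarios.max() + 1, 200)
    return grid[np.argmin([cost(q, scenarios).mean() for q in grid])]

def cv_score(radius, n_splits=5):
    out = []
    for tr, te in KFold(n_splits, shuffle=True, random_state=0).split(X):
        model = LinearRegression().fit(X[tr], y[tr])
        residuals = y[tr] - model.predict(X[tr])
        for i in te:
            q = robust_decision(X[i], model, residuals, radius)
            out.append(cost(q, y[i]))                  # realized out-of-sample cost
    return np.mean(out)

radii = [0.0, 0.5, 1.0, 2.0, 4.0]
print("radius selected by cross-validation:", min(radii, key=cv_score))
```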
-
17:51 - 18:13
What's hidden in the tails? Revealing and reducing optimistic bias in entropic risk estimation and optimization
The entropic risk measure is commonly used in high-stakes decision-making to account for tail risks. The empirical entropic risk estimator, which replaces the expectation in the entropic risk measure with a sample average, underestimates the true risk. To correct this bias, a strongly asymptotically consistent bootstrapping procedure is proposed that fits a distribution to the data and then estimates the bias of the empirical estimator via bootstrapping. Two methods are proposed for fitting the distribution: a computationally intensive one that fits the distribution of the empirical entropic risk, and a simpler one that fits the tail of the empirical distribution. The approach is applied to a distributionally robust entropic risk minimization problem with a type-∞ Wasserstein ambiguity set, where using debiased validation performance yields a better-calibrated size of the ambiguity set. In an insurance contract design problem, the proposed estimators reduce out-of-sample risk for insurers since they suggest more accurate premiums.
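The debiasing idea can be illustrated with a minimal sketch: compute the naive sample-average entropic risk, fit a distribution to the data, measure how much the empirical estimator undershoots on bootstrap samples drawn from that fit, and add the gap back. The Gaussian fit and the simulated losses are illustrative assumptions; the paper instead fits the distribution of the empirical entropic risk, or just the tail of the empirical distribution.

```python
# Sketch of bias correction for the empirical entropic risk estimator.
import numpy as np

def empirical_entropic_risk(x, theta):
    """(1/theta) * log of the sample average of exp(theta * x)."""
    return np.log(np.mean(np.exp(theta * x))) / theta

rng = np.random.default_rng(3)
theta, n = 1.0, 100
losses = rng.normal(0.0, 1.0, n)                   # observed losses (illustrative)

naive = empirical_entropic_risk(losses, theta)

# Step 1: fit a distribution to the data (here: a Gaussian via moments).
mu, sigma = losses.mean(), losses.std(ddof=1)
fitted_risk = mu + theta * sigma**2 / 2            # closed-form entropic risk of N(mu, sigma^2)

# Step 2: estimate the downward bias of the empirical estimator by bootstrapping
# samples of the same size n from the fitted distribution.
boot = [empirical_entropic_risk(rng.normal(mu, sigma, n), theta) for _ in range(2000)]
bias = fitted_risk - np.mean(boot)                 # positive: the sample average undershoots

print("naive estimate:   ", round(naive, 4))
print("debiased estimate:", round(naive + bias, 4))
```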