Optimization Days 2024
HEC Montréal, Québec, Canada, 6 — 8 May 2024

MB8 - Stochastic optimization II
May 6, 2024 03:30 PM – 05:10 PM
Location: Bamako (green)
Chaired by Janosch Ortmann
4 Presentations
-
03:30 PM - 03:55 PM
A Two-Stage Stochastic Programming Model for the Multi-Period Blending Problem Considering Demand Uncertainty
This research presents mathematical programming formulations for the multi-period blending problem from an integrated production planning perspective. The decisions concern both the procurement of components and the production (i.e., blending) of end products over multiple periods. The proposed approach advances existing deterministic models by incorporating demand uncertainty via a two-stage stochastic programming model with recourse. The goal is to minimize overall costs, including setup, production, purchase, inventory, and lost-sales costs, while satisfying constraints on inventory balance, machine capacity, and quality requirements. Numerical tests on test problems validate the methodology and also assess enhancements to the optimization routines, including adaptive strategies that iterate from candidate solutions, decomposition techniques, and specialized algorithms.
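The two-stage structure described above can be illustrated with a minimal sketch: a toy single-component, single-product setting in which the first-stage decision (purchase quantity) is fixed before demand is observed, and the second stage charges lost-sales and holding costs as recourse. All numbers and the brute-force grid search are illustrative assumptions, not the talk's model, which would be solved as a mathematical program.

```python
# Toy two-stage stochastic program with recourse (illustrative values).
# First stage: choose the purchase quantity x before demand is known.
# Second stage: observe demand d; unmet demand incurs a lost-sales
# penalty, leftover stock incurs a holding cost.

scenarios = [(0.3, 40.0), (0.5, 60.0), (0.2, 90.0)]  # (probability, demand)
purchase_cost = 2.0
lost_sales_penalty = 5.0
holding_cost = 0.5

def expected_cost(x):
    """First-stage cost plus expected second-stage (recourse) cost."""
    cost = purchase_cost * x
    for prob, demand in scenarios:
        sold = min(x, demand)
        cost += prob * (lost_sales_penalty * (demand - sold)
                        + holding_cost * (x - sold))
    return cost

# Enumerate candidate first-stage decisions on a grid; a real model
# would solve the deterministic equivalent with an LP/MILP solver.
best_x = min(range(0, 101), key=expected_cost)
print(best_x, round(expected_cost(best_x), 2))  # → 60 153.0
```

Note that the optimal purchase hedges between scenarios: it covers the middle-demand scenario exactly, accepting some expected lost sales in the high-demand scenario and some holding cost in the low-demand one.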
-
03:55 PM - 04:20 PM
Adaptive Partitioning for Chance-Constrained Problems with Finite Support
This presentation considers chance-constrained stochastic problems (CCSPs) with finite support. We present an iterative algorithm that solves a reduced-size chance-constrained model. This reduced model is obtained by partitioning the scenario set and, when solved to optimality, yields a bound on the optimal objective value of the original CCSP. Moreover, the algorithm ensures finite termination at an optimal solution of the original CCSP. At the heart of the algorithm lie two fundamental operations: refinement and merging. Refinement splits a subset of the partition, while merging combines two subsets. These operations are executed to enhance the bound obtained in each step of the algorithm while minimally increasing the size of the reduced model to be solved. Under mild conditions, we prove that, for specific refinement and merge operations, the bound obtained after solving each reduced model strictly improves throughout the iterative process. Furthermore, the algorithm's structure allows for the seamless integration of various computational enhancements, significantly reducing the computational time required to solve the reduced chance-constrained problems. Based on theoretical arguments, we also provide strategies to initialize the partition considered in the algorithm. The efficiency of the algorithm is assessed through numerical experiments. We study the impact of every component of the algorithm and compare its performance against state-of-the-art methods on chance-constrained multidimensional knapsack problems.
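The two partition operations at the heart of the algorithm can be sketched in isolation. This is a hedged illustration assuming scenarios are plain indices and the splitting rule is an arbitrary predicate; the reduced chance-constrained model built from the partition, and the bound-driven choice of which subsets to refine or merge, are not shown.

```python
# Sketch of the refinement and merging operations on a partition of a
# finite scenario set (scenarios are indices; the predicate used to
# split a subset is an illustrative stand-in for the algorithm's rule).

def refine(partition, subset, splitter):
    """Split `subset` into the scenarios satisfying `splitter` and the rest."""
    inside = {s for s in subset if splitter(s)}
    outside = subset - inside
    new = [cell for cell in partition if cell != subset]
    new += [cell for cell in (inside, outside) if cell]  # drop empty halves
    return new

def merge(partition, a, b):
    """Combine two subsets of the partition into one."""
    new = [cell for cell in partition if cell not in (a, b)]
    new.append(a | b)
    return new

partition = [{0, 1, 2, 3}, {4, 5}]
partition = refine(partition, {0, 1, 2, 3}, lambda s: s % 2 == 0)
# partition now contains {0, 2}, {1, 3}, {4, 5} (in some order)
partition = merge(partition, {1, 3}, {4, 5})
print(sorted(sorted(cell) for cell in partition))  # → [[0, 2], [1, 3, 4, 5]]
```

In the algorithm itself, these operations trade off bound quality against the size of the reduced model: refinement tightens the bound at the cost of a larger model, merging keeps the model small.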
-
04:20 PM - 04:45 PM
Dynamic Pricing with Rewards Programs
We consider a single-item dynamic pricing problem for a firm that hosts a rewards program. We model demand as a general linear function of price in which both the intercept and the price-sensitivity coefficient depend on the customers' current point balance. The model accounts for different classes of customers, distinguished by the heterogeneous inter-temporal dynamics of their redemption behavior. We model the retailer's decision problem as a dynamic program (DP), where the state is the vector of average point balances for each class. As the number of classes increases, however, the DP suffers from the curse of dimensionality. We therefore propose approximating the optimal price by a convex combination of the optimal prices computed for each class separately. Under some assumptions on the demand parameters, we show that a unique optimal threshold discounting policy exists: the store discounts a product to a particular price only if the average customer point balance is above a certain threshold. Moreover, the solve time of the approximated DP drops from exponential to linear in the number of classes, and synthetic experiments show that the algorithm has a small optimality gap.
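The convex-combination approximation can be sketched for a single period. This is an illustrative assumption-laden example: demand for class k is linear in price, d_k(p) = a_k - b_k·p, so the per-class revenue-maximizing price (at zero cost) is a_k / (2·b_k); in the talk's model, a_k and b_k would depend on the class's average point balance, and the weights would come from the DP, not from fixed class shares.

```python
# Hedged sketch: approximate the price as a convex combination of the
# per-class optimal prices. All parameter values are illustrative.

classes = [
    {"a": 100.0, "b": 2.0, "weight": 0.6},  # intercept, price sensitivity, class share
    {"a": 80.0,  "b": 4.0, "weight": 0.4},
]

def optimal_price(a, b):
    """Revenue-maximizing price for linear demand a - b*p (zero marginal cost)."""
    return a / (2.0 * b)

# Convex combination weighted by each class's share of the customer base.
approx_price = sum(c["weight"] * optimal_price(c["a"], c["b"]) for c in classes)
print(approx_price)  # → 19.0  (0.6 * 25.0 + 0.4 * 10.0)
```

Because each per-class price is computed independently, the work grows linearly in the number of classes, which is the source of the speed-up claimed for the approximated DP.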
-
04:45 PM - 05:10 PM
Test Sets to Compute Opportunity Cost Matrices
In stochastic programming, scenarios approximate distributions of unknown parameters, but in many applications the number of scenarios required to model the uncertainty realistically makes the optimisation problem numerically intractable. This motivates the scenario reduction problem: by finding a smaller subset of scenarios, reduce the numerical complexity while keeping the error at an acceptable level. Recently, problem-based scenario reduction methods based on opportunity cost dissimilarities have been shown to be effective. In this setting, the opportunity cost is the cost of wrongly predicting scenario 1 when scenario 2 actually occurs; it can be viewed as a measure of how different the two scenarios are with respect to the optimisation problem at hand.
In this talk, I will explain how test sets can be used to efficiently compute such opportunity cost matrices and describe new algorithms, based on ideas from algebraic geometry, to efficiently compute these test sets.
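The opportunity cost matrix described above can be illustrated by brute force on a toy problem. Here we assume a newsvendor-style setting where the scenario-i optimal decision is simply to order demand_i; entry (i, j) is the extra cost of committing to scenario i's optimal solution when scenario j actually occurs. The test-set machinery that is the subject of the talk would replace this naive re-optimisation; everything below is an illustrative stand-in.

```python
# Toy opportunity cost matrix for a newsvendor-style problem
# (illustrative demands and per-unit overage/underage costs).

demands = [10.0, 20.0, 40.0]  # one demand value per scenario
over, under = 1.0, 3.0        # per-unit overage / underage costs

def cost(x, d):
    """Cost of ordering x when demand turns out to be d."""
    return over * max(x - d, 0.0) + under * max(d - x, 0.0)

# opportunity[i][j] = cost of scenario i's optimal order under scenario j,
# minus the optimal cost under scenario j (zero here, since x* = demand).
n = len(demands)
opportunity = [
    [cost(demands[i], demands[j]) - cost(demands[j], demands[j]) for j in range(n)]
    for i in range(n)
]
for row in opportunity:
    print(row)
```

The matrix has a zero diagonal but is generally asymmetric (under-ordering is penalised more heavily than over-ordering here), which is why opportunity costs are dissimilarities rather than distances.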