2022 Optimization Days
HEC Montréal, Québec, Canada, 16 — 18 May 2022
WC2  Stochastic optimization
May 18, 2022 03:30 PM – 05:10 PM
Location: Trudeau Corporation (green)
Chaired by Romain Couderc
2 Presentations

03:30 PM – 03:55 PM
Regularized smoothing for solution mappings of convex problems, with applications to two-stage stochastic programming and some hierarchical problems
Many modern optimization problems involve, in the objective function, solution mappings or optimal-value functions of other optimization problems. In many cases, those solution mappings and optimal-value functions are nonsmooth, and the optimal-value function is also possibly nonconvex (even if the defining data is smooth and convex). Moreover, because they stem from solving optimization problems, those solution mappings and value functions are usually not known explicitly through any closed formula. We present an approach to regularize and approximate solution mappings of fully parametrized convex optimization problems that combines an interior penalty (log-barrier) with Tikhonov regularization. Because the regularized solution mappings are single-valued and smooth under reasonable conditions, they can also be used to build a computationally practical smoothing for the associated optimal-value function and/or solution mapping. Applications are presented to two-stage (possibly nonconvex) stochastic programming, and to a certain class of hierarchical decision problems that can be viewed as single-leader multi-follower games.
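As a toy illustration of this kind of smoothing (not the construction of the talk itself), consider the value function v(p) = min over x in [-1, 1] of p*x, which equals -|p| and is nonsmooth at p = 0. Adding a log-barrier for the box constraint and a Tikhonov term produces a smooth approximation with a unique minimizer; the one-dimensional problem and the weights `mu` and `eps` below are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def smoothed_value(p, mu=1e-2, eps=1e-3):
    """Log-barrier + Tikhonov smoothing of the nonsmooth value function
    v(p) = min_{x in [-1, 1]} p*x = -|p|  (illustrative toy problem).

    mu  : interior-penalty (log-barrier) weight
    eps : Tikhonov regularization weight
    """
    def obj(x):
        # original objective + barrier keeping x in (-1, 1) + Tikhonov term
        return p * x - mu * (np.log(1.0 - x) + np.log(1.0 + x)) + 0.5 * eps * x**2

    res = minimize_scalar(obj, bounds=(-1.0 + 1e-9, 1.0 - 1e-9), method="bounded")
    return res.fun, res.x  # smoothed value and its (unique) minimizer

# away from p = 0 the smoothed value is close to -|p|; at p = 0 it is smooth
v1, x1 = smoothed_value(1.0)   # true value -1, true minimizer -1
v0, x0 = smoothed_value(0.0)   # true value 0, minimizer 0 by symmetry
```

Driving `mu` and `eps` to zero recovers the original value function in the limit, which is the usual trade-off between approximation accuracy and smoothness.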
03:55 PM – 04:20 PM
Risk-averse optimization by Conditional Value-at-Risk and stochastic approximation
Engineering design often faces uncertainties, which makes it difficult to determine an optimal design. In an unconstrained context, this amounts to choosing the desired trade-off between risk and performance. In this paper, an optimization problem with an adaptive risk level is stated using the Conditional Value-at-Risk (CVaR). Under mild conditions on the objective function, and taking advantage of the noise, the CVaR allows the problem to be smoothed. A specific algorithm based on a stochastic approximation scheme is then developed to solve the problem. This algorithm has two appealing properties. First, it does not use any quantile estimation to compute the minimum of the CVaR of the noisy objective function. Second, it uses only two function evaluations per iteration, regardless of the problem dimension. A proof of convergence to a minimum of the CVaR of the objective function is established. This proof is based on martingale theory and does not require any information about the differentiability or continuity of the function. Finally, test problems from the literature are combined into a benchmark set to compare our algorithm with risk-neutral and worst-case optimization algorithms. These tests demonstrate the algorithm's efficiency in both settings, especially in high dimensions.
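The two-evaluations-per-iteration idea can be sketched with a generic simultaneous-perturbation (SPSA-style) scheme on a noisy objective. This is a minimal illustration of two-point stochastic approximation, not the CVaR algorithm of the talk; the step-size and perturbation constants, the box projection, and the test objective are all illustrative choices.

```python
import numpy as np

def two_point_sa(f, x0, n_iter=4000, a=0.3, c=0.1, seed=0):
    """Two-point stochastic approximation (SPSA-style sketch).

    Each iteration uses exactly two evaluations of the noisy objective f,
    regardless of the dimension of x.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        a_k = a / k                  # decaying step size
        c_k = c / k**0.25            # decaying perturbation radius
        delta = rng.choice([-1.0, 1.0], size=x.shape)  # random +/-1 direction
        # two function evaluations -> directional gradient estimate
        g = (f(x + c_k * delta) - f(x - c_k * delta)) / (2.0 * c_k) * delta
        x = np.clip(x - a_k * g, -5.0, 5.0)  # projected SA step
    return x

# noisy quadratic centered at 1 in dimension 5
rng = np.random.default_rng(1)
noisy = lambda x: np.sum((x - 1.0) ** 2) + 0.01 * rng.normal()
x_star = two_point_sa(noisy, np.zeros(5))
```

The cost per iteration is independent of the dimension because the random direction perturbs all coordinates simultaneously, which is what makes such schemes attractive for derivative-free optimization under noise.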