03:30 PM - 03:55 PM
Regularized smoothing for solution mappings of convex problems, with applications to two-stage stochastic programming and some hierarchical problems
Many modern optimization problems involve, in the objective function, solution mappings or optimal-value functions of other optimization problems. In most cases these solution mappings and optimal-value functions are nonsmooth, and the optimal-value function may also be nonconvex (even when the defining data is smooth and convex). Moreover, since they stem from solving optimization problems, these solution mappings and value functions are usually not known explicitly through any closed formula. We present an approach to regularizing and approximating solution mappings of fully parametrized convex optimization problems that combines an interior penalty (log-barrier) with Tikhonov regularization. Because the regularized solution mappings are single-valued and smooth under reasonable conditions, they can be used to build a computationally practical smoothing of the associated optimal-value function and/or solution mapping. Applications are presented to two-stage (possibly nonconvex) stochastic programming and to a class of hierarchical decision problems that can be viewed as single-leader multi-follower games.
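To illustrate the idea on a minimal example: for the one-dimensional parametric problem min_y (y - x)^2 subject to y >= 0, the exact solution map y*(x) = max(x, 0) is nonsmooth at x = 0, but adding a log-barrier and a Tikhonov term yields a smooth single-valued approximation. The sketch below is purely illustrative; the toy problem, the solver, and all parameter values are assumptions, not the construction of the talk.

```python
def regularized_solution(x, mu=1e-3, eps=1e-3, iters=100):
    """Smoothed solution map for the toy parametric problem
        min_y (y - x)^2  s.t.  y >= 0,
    whose exact solution map y*(x) = max(x, 0) is nonsmooth at x = 0.
    The log-barrier -mu*log(y) replaces the constraint, and the Tikhonov
    term (eps/2)*y^2 makes the regularized minimizer unique; the barrier
    and regularization weights here are illustrative choices."""
    y = 1.0  # strictly feasible starting point
    for _ in range(iters):
        # gradient and Hessian of (y-x)^2 - mu*log(y) + (eps/2)*y^2
        grad = 2.0 * (y - x) - mu / y + eps * y
        hess = 2.0 + mu / y ** 2 + eps
        step = grad / hess
        while y - step <= 0.0:  # damp the Newton step to stay feasible
            step *= 0.5
        y -= step
    return y
```

Away from the kink the regularized map nearly matches max(x, 0) (e.g. regularized_solution(2.0) is close to 2, and for x < 0 the output is a small positive value), while near x = 0 it is smooth in x; letting mu and eps tend to 0 recovers the exact map.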
03:55 PM - 04:20 PM
Risk-averse optimization by conditional value-at-risk and stochastic approximation
Engineering design often faces uncertainties that make it difficult to determine an optimal design. In an unconstrained context, this amounts to choosing the desired trade-off between risk and performance. In this paper, an optimization problem with an adaptive risk level is stated using the Conditional Value-at-Risk (CVaR). Under mild conditions on the objective function, and by taking advantage of the noise, CVaR smooths the problem. A specific algorithm based on a stochastic approximation scheme is then developed to solve it. This algorithm has two appealing properties. First, it computes the minimum of the CVaR of the noisy objective function without any quantile estimation. Second, it uses only two function evaluations per iteration, regardless of the problem dimension. A proof of convergence to a minimum of the CVaR of the objective function is established; it is based on martingale theory and requires no differentiability or continuity assumptions on the function. Finally, test problems from the literature are combined into a benchmark set to compare our algorithm with a risk-neutral and a worst-case optimization algorithm. These tests show that the algorithm is efficient in both settings, especially in high dimension.
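The two properties highlighted above (no quantile estimation, two evaluations per iteration) can be sketched generically by combining the Rockafellar-Uryasev reformulation of CVaR with an SPSA-style two-point gradient estimate. This is only a sketch of those standard ingredients, not the algorithm of the talk; the toy objective, step-size schedules, and noise model are all assumptions.

```python
import numpy as np

def cvar_sa(dim=10, alpha=0.8, iters=20000, a=0.01, c=0.1, seed=0):
    """Two-point stochastic approximation for min_x CVaR_alpha[f(x, xi)],
    via the Rockafellar-Uryasev reformulation
        min_{x,t}  t + E[max(f(x, xi) - t, 0)] / (1 - alpha),
    with an SPSA-style simultaneous-perturbation estimate in x, so each
    iteration needs only two evaluations of f regardless of dim.
    Illustrative sketch only; not the authors' method."""
    rng = np.random.default_rng(seed)
    f = lambda x, xi: np.sum(x ** 2) + xi   # noisy toy objective
    x, t = 0.5 * np.ones(dim), 0.0
    for k in range(1, iters + 1):
        ak, ck = a / k ** 0.7, c / k ** 0.2        # decaying step sizes
        delta = rng.choice([-1.0, 1.0], size=dim)  # Rademacher directions
        xi = rng.normal(scale=0.1)                 # common noise for both evals
        fp, fm = f(x + ck * delta, xi), f(x - ck * delta, xi)
        # two-point estimate of the x-subgradient of the reformulated objective
        gx = (max(fp - t, 0.0) - max(fm - t, 0.0)) / (2 * ck * (1 - alpha)) * delta
        # stochastic subgradient in the auxiliary variable t
        gt = 1.0 - (fp > t) / (1 - alpha)
        x, t = x - ak * gx, t - ak * gt
    return x, t
```

For the toy objective the CVaR minimizer is x = 0, and the auxiliary variable t drifts toward the alpha-quantile (VaR) of the residual noise; note that no quantile is ever estimated explicitly, and each iteration evaluates f exactly twice whatever the dimension.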