10th International Conference on Computational Management

HEC Montréal, 1 — 3 May 2013


WC2 Robust Optimization II

May 1, 2013 04:00 PM – 05:30 PM

Location: St-Hubert

Chaired by Vishal Gupta

3 Presentations

  • 04:00 PM - 04:30 PM

    On the Power of Randomization in Robust Optimization and Network Interdiction

    • Ebrahim Nasrabadi, presenter, Massachusetts Institute of Technology
    • Dimitris Bertsimas, Massachusetts Institute of Technology
    • James Orlin, Sloan School of Management, MIT

    Decision makers often face uncertainty. In robust optimization, the decision maker chooses a solution, assuming that nature adaptively selects a response after observing that choice so as to maximize the decision maker's cost. Here we adopt the notion of mixed strategies from two-person game theory: rather than committing to a single decision, the decision maker assigns a probability to each solution and selects one at random according to those probabilities. We provide insights into the modeling power, performance bounds, tractability, and complexity of robust optimization problems when mixed strategies are permitted. We apply our approach to network interdiction problems, in which the goal is to remove a fixed number of arcs so as to optimally reduce the maximum possible flow in the network.
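
A minimal brute-force sketch of the interdiction problem described above (illustrative only, not the paper's method): compute the maximum flow with Edmonds-Karp, then enumerate every set of k arcs to find the removal that reduces the flow the most. The example network and all names are made up.

```python
from collections import deque
from itertools import combinations

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; cap maps arc (u, v) -> capacity."""
    res = dict(cap)                        # residual capacities
    for (u, v) in cap:
        res.setdefault((v, u), 0)          # reverse arcs to undo flow
    total = 0
    while True:
        parent = {s: None}                 # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for (a, b), c in res.items():
                if a == u and b not in parent and c > 0:
                    parent[b] = a
                    q.append(b)
        if t not in parent:
            return total
        path, v = [], t                    # recover the path and its bottleneck
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[e] for e in path)
        for (u, v) in path:
            res[(u, v)] -= aug
            res[(v, u)] += aug
        total += aug

def best_interdiction(cap, s, t, k):
    """Enumerate all k-arc removals; return (reduced max flow, removed arcs)."""
    return min(
        (max_flow({e: c for e, c in cap.items() if e not in rm}, s, t), rm)
        for rm in combinations(cap, k)
    )

# Toy network: removing the single arc ('s','a') or ('b','t') drops the flow to 3.
cap = {('s','a'): 4, ('s','b'): 3, ('a','t'): 3, ('b','t'): 5, ('a','b'): 2}
print(max_flow(cap, 's', 't'))             # 7
print(best_interdiction(cap, 's', 't', 1))
```

A mixed strategy, in the sense of the talk, would randomize over several such removals rather than committing to the single best one.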

  • 04:30 PM - 05:00 PM

    Big Data and Robust Optimization

    • Vishal Gupta, presenter, Massachusetts Institute of Technology (MIT)
    • Dimitris Bertsimas, Massachusetts Institute of Technology
    • Nathan Kallus, Massachusetts Institute of Technology (MIT)

    Traditional proposals for constructing uncertainty sets for robust optimization are well-suited to "data-poor" environments, i.e., when very little is known about the probability distribution generating the uncertainty. On the other hand, the last decade has seen an explosion in the availability of data as part of the "big data revolution." In other words, many applications are now in "data-rich" environments, not data-poor ones. We propose new techniques to tailor robust optimization to this new paradigm.

    Specifically, we propose a novel approach to constructing uncertainty sets for robust optimization from data using statistical hypothesis tests. The approach is flexible and widely applicable, and the constructed sets are computationally tractable both theoretically and practically. Each of our new sets enjoys strong, finite-sample, probabilistic guarantees with respect to the unknown distribution that generated the data. Most importantly, these sets perform well in real-world applications. We illustrate the flexibility of the method through applications from supply-chain management and the market-clearing mechanism of New England's electricity market. Using real and simulated data, we show that our new data-driven sets significantly outperform conventional robust optimization techniques whenever data is available.
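
A rough sketch of the data-driven idea (an illustration only, simpler than the hypothesis-test constructions of the talk): build a per-coordinate confidence-interval box for an uncertain vector from samples, then check a robust linear constraint against the worst case over that box. The z-approximation and all names here are assumptions.

```python
import numpy as np

def box_from_data(samples, z=1.96):
    """Per-coordinate confidence box [lo, hi] for the uncertain vector,
    via a normal (z-test) approximation -- a stand-in for the more general
    hypothesis-test constructions described in the abstract."""
    samples = np.asarray(samples, dtype=float)
    n = samples.shape[0]
    mu = samples.mean(axis=0)
    se = samples.std(axis=0, ddof=1) / np.sqrt(n)
    return mu - z * se, mu + z * se

def robust_feasible(x, lo, hi, b):
    """Does a @ x <= b hold for every a in the box?  The worst case
    takes hi_i where x_i >= 0 and lo_i where x_i < 0."""
    x = np.asarray(x, dtype=float)
    worst = np.where(x >= 0, hi, lo) @ x
    return bool(worst <= b)

rng = np.random.default_rng(0)
samples = rng.normal([1.0, 2.0], 0.1, size=(100, 2))
lo, hi = box_from_data(samples)
print(robust_feasible([1.0, 1.0], lo, hi, 4.0))   # True: slack even in the worst case
```

The key property this toy shares with the talk's sets is that the box shrinks as more data arrives, so the robust constraint becomes less conservative in data-rich settings.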

  • 05:00 PM - 05:30 PM

    Robust Maximum Likelihood Estimators

    • Omid Nohadani, presenter, Purdue University
    • Dimitris Bertsimas, Massachusetts Institute of Technology
    • Apostolos Fertis, ETH Zurich

    We use robust optimization principles to construct robust maximum likelihood estimators that are immune to data errors. We model two types of errors: (a) adversarial errors, modeled using uncertainty sets, and (b) stochastic errors, modeled via random perturbations.

    We provide efficient local and global search algorithms to compute the robust estimators and discuss them in detail for the multivariate normal distribution. Using computer simulations, we demonstrate that the proposed algorithms yield estimators that are robust against both types of data uncertainty and improve on classical maximum likelihood estimators, which degrade significantly when errors are encountered. We compare the local and global algorithms and show that for smaller errors the two are comparable in performance. For larger errors, however, the simulated-annealing-based global algorithm outperforms the local algorithm. We also probe the sensitivity of the estimators with respect to the sources of errors and show that they remain robust even when the errors follow a different distribution than anticipated.
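
A 1-D toy version of the adversarial error model (my own illustration, not the paper's algorithm): if each observation of a unit-variance Gaussian may be shifted by at most rho, the worst-case log-likelihood of a candidate mean m is governed by sum_i (|x_i - m| + rho)^2, which blends the squared-error (mean) and absolute-error (median) criteria. A coarse grid search stands in for the local and global algorithms.

```python
import numpy as np

def robust_mle_mean(data, rho, grid_points=2001):
    """Maximize the worst-case Gaussian log-likelihood over the mean:
    each observation x_i may be perturbed adversarially by up to rho, so
    the worst-case objective to minimize is sum_i (|x_i - m| + rho)^2.
    Expanding it gives sum (x_i - m)^2 + 2*rho*sum |x_i - m| + n*rho^2,
    i.e. a mean/median blend, which is what makes the estimator robust."""
    data = np.asarray(data, dtype=float)
    grid = np.linspace(data.min(), data.max(), grid_points)
    costs = [np.sum((np.abs(data - m) + rho) ** 2) for m in grid]
    return float(grid[int(np.argmin(costs))])

# One gross outlier: the classical MLE (the sample mean) is dragged toward
# it, while the robust estimate stays near the bulk of the data.
data = [0.1, -0.2, 0.05, 0.15, -0.1, 10.0]
print(np.mean(data))                    # classical MLE, pulled to ~1.67
print(robust_mle_mean(data, rho=5.0))   # stays near 0
```

Larger rho weights the absolute-error term more heavily, pushing the estimate toward the median, which mirrors the abstract's observation that robustness matters most when errors are large.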
