Journées de l'optimisation 2022

HEC Montréal, Québec, Canada, May 16–18, 2022


TB8 - Bayesian optimization

May 17, 2022, 15:30 – 17:10

Room: METRO INC. (yellow)

Chaired by Carolina Osorio

4 presentations

  • 15:30 - 15:55

    Random Hypervolume Scalarizations for Provable Multi-Objective Black Box Optimization

    • Richard Zhang, presenter, Google

    Single-objective black box optimization (also known as zeroth-order optimization) is the process of minimizing a scalar objective f(x), given evaluations at adaptively chosen inputs x. In this paper, we consider multi-objective optimization, where f(x) outputs a vector of possibly competing objectives and the goal is to converge to the Pareto frontier. Quantitatively, we wish to maximize the standard hypervolume indicator metric, which measures the dominated hypervolume of the entire set of chosen inputs. We introduce a novel scalarization function, which we term the hypervolume scalarization, and show that drawing random scalarizations from an appropriately chosen distribution can be used to efficiently approximate the hypervolume indicator metric. We use this connection to show that Bayesian optimization with our scalarization, via common acquisition functions such as Thompson Sampling or Upper Confidence Bound, provably converges to the whole Pareto frontier, by deriving tight hypervolume regret bounds on the order of sqrt(T). Furthermore, we highlight the general utility of our scalarization framework by showing that any provably convergent single-objective optimization process can be effortlessly converted to a multi-objective optimization process with provable convergence guarantees.
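
    To make the connection concrete, the sketch below numerically checks the key identity that averaging a scalarization over random directions recovers the hypervolume. The specific scalarization form s_lambda(y) = min_i max(0, (y_i - z_i)/lambda_i)^k, the draw of lambda from the positive unit sphere, and the normalizing constant c_k are assumptions based on one published formulation of this idea, not details stated in the abstract.

    ```python
    import math
    import numpy as np

    rng = np.random.default_rng(0)

    def hypervolume_2d(points, ref):
        """Exact dominated hypervolume in 2D (maximization, reference point `ref`)."""
        pts = np.array([p for p in points if np.all(p > ref)])
        if len(pts) == 0:
            return 0.0
        pts = pts[np.argsort(-pts[:, 0])]        # sweep the staircase left to right
        hv, y_prev = 0.0, ref[1]
        for x, y in pts:
            if y > y_prev:
                hv += (x - ref[0]) * (y - y_prev)
                y_prev = y
        return hv

    def hv_scalarization(Y, lam, ref, k):
        """Assumed form: s_lam(Y) = max_y min_i max(0, (y_i - ref_i)/lam_i)^k."""
        ratios = np.maximum(Y - ref, 0.0) / lam
        return np.max(np.min(ratios, axis=1) ** k)

    def mc_hypervolume(Y, ref, n_dirs=20000, k=2):
        """Monte Carlo estimate c_k * E_lam[s_lam(Y)], lam uniform on the positive unit sphere."""
        dirs = np.abs(rng.standard_normal((n_dirs, k)))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        c_k = math.pi ** (k / 2) / (2 ** k * math.gamma(k / 2 + 1))  # pi/4 when k = 2
        return c_k * np.mean([hv_scalarization(Y, lam, ref, k) for lam in dirs])

    ref = np.zeros(2)
    Y = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0]])   # small 2-objective point set
    print("exact hypervolume:", hypervolume_2d(Y, ref))  # 6.0
    print("MC estimate      :", mc_hypervolume(Y, ref))  # close to 6.0
    ```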

  • 15:55 - 16:20

    Prior learning for Bayesian optimization

    • Zi Wang, presenter, Google Research

    Optimization is one of the fundamental pillars of machine learning, yet machine learning can also contribute to better optimization methods; Bayesian optimization is an instance of the latter. In this talk, we dive into a regime where the benefit of machine learning to optimization is even greater: we use prior learning to fix the critical drawback of poorly specified priors in Bayesian optimization. I will present both a theoretical understanding of Bayesian optimization with prior learning and empirical successes.
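
    As a rough illustration of what prior learning can mean here (a generic sketch, not the specific method from the talk), one can estimate a Gaussian process prior, for instance its lengthscale, by maximizing the marginal likelihood pooled over datasets from related tasks, and then reuse that learned prior when running Bayesian optimization on a new task instead of guessing it:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)

    def rbf(X1, X2, ls):
        """Squared-exponential kernel with lengthscale `ls`."""
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ls ** 2)

    def gp_nll(ls, X, y, noise=1e-3):
        """Negative log marginal likelihood of a zero-mean GP (constant term dropped)."""
        K = rbf(X, X, ls) + noise * np.eye(len(X))
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

    # Offline data from several related objectives stands in for the prior-learning corpus.
    tasks = []
    for _ in range(5):
        X = rng.uniform(0, 1, (30, 1))
        y = np.sin(8 * X[:, 0]) + 0.05 * rng.standard_normal(30)
        tasks.append((X, y))

    # Learn the prior: pick the lengthscale that explains all related tasks jointly.
    pooled_nll = lambda ls: sum(gp_nll(ls, X, y) for X, y in tasks)
    learned_ls = minimize_scalar(pooled_nll, bounds=(0.01, 2.0), method="bounded").x
    print("learned lengthscale:", round(learned_ls, 3))

    # A Bayesian optimization run on a new, related objective would now build its
    # GP surrogate with `learned_ls` instead of a hand-picked default.
    ```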

  • 16:20 - 16:45

    Enhancing the scalability of Bayesian optimization for high-dimensional urban traffic problems

    • Carolina Osorio, presenter, HEC Montréal & GERAD

    In this talk, we consider high-dimensional traffic signal control problems that arise in congested metropolitan areas. We focus on the use of high-resolution urban mobility stochastic simulators and formulate the control problems as high-dimensional continuous simulation-based optimization (SO) problems. We discuss the opportunities and challenges of designing SO algorithms for these problems. An important component in high-dimensional problems is the exploration-exploitation tradeoff. We discuss work that has focused on improving the exploitation capabilities of SO algorithms. We then present novel exploration techniques suitable for high-dimensional spaces. We consider a Bayesian optimization setting, and propose the use of a simple analytical traffic model to specify the covariance function of a Gaussian process. We show how this enables the Bayesian optimization method to more efficiently sample in high-dimensional spaces. We present validation experiments on synthetic low-dimensional problems. We then apply the method to a high-dimensional traffic control problem for Midtown Manhattan, in New York City.
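
    One plausible way an analytical model can enter the covariance is sketched below: similarity between candidate signal plans is measured through the model's low-dimensional output (for example, predicted aggregate flows) rather than through the raw high-dimensional decision vector. This is a hedged illustration only; the construction used in this work may differ, and `analytical_model` is a toy stand-in, not a real traffic model.

    ```python
    import numpy as np

    def analytical_model(x):
        """Toy stand-in for an analytical traffic model: maps a high-dimensional
        signal-plan vector x to a short vector of predicted aggregate flows."""
        w = np.linspace(0.1, 1.0, x.shape[-1])              # fixed "network" weights
        return np.array([w @ x, w @ np.sqrt(np.abs(x))])    # two flow-like features

    def model_informed_kernel(X1, X2, lengthscale=1.0, variance=1.0):
        """k(x, x') = variance * exp(-||m(x) - m(x')||^2 / (2 * lengthscale^2))."""
        M1 = np.array([analytical_model(x) for x in X1])
        M2 = np.array([analytical_model(x) for x in X2])
        d2 = ((M1[:, None, :] - M2[None, :, :]) ** 2).sum(-1)
        return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

    # Five candidate 100-dimensional signal plans; the resulting 5x5 covariance
    # reflects closeness of their analytical-model outputs, not of the raw plans.
    X = np.random.default_rng(2).uniform(0, 1, (5, 100))
    print(model_informed_kernel(X, X, lengthscale=5.0).round(3))
    ```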

  • 16:45 - 17:10

    Selection of the most probable best

    • Eunhye Song, presenter, Penn State University
    • Taeho Kim, Penn State University
    • Kyoung-kuk Kim, KAIST

    Ranking and selection (R&S) is a simulation optimization framework for choosing the solution with the largest simulation output mean among finitely many candidate solutions. A classical R&S problem assumes that the only source of uncertainty is stochasticity in the simulation outputs. In practice, however, a simulation model may be subject to model uncertainty when its key parameter values are estimated from finite data. The most probable best (MPB) is a new Bayesian notion of optimality for an R&S problem under model uncertainty. Capturing the uncertainty about the parameter with a posterior distribution given the data, the MPB is defined as the solution whose posterior probability of being the best is the largest. Finding the MPB is equivalent to solving a nested R&S problem: given each realized parameter value, the inner-level R&S problem is solved to find the conditional optimum at that parameter; at the outer level, we select the solution that most frequently wins the inner-level R&S problems. To design a sequential sampling policy that finds the MPB as efficiently as possible, we characterize the asymptotically rate-optimal sampling strategy, and then design several sequential sampling algorithms that achieve the optimality conditions asymptotically. An application to a supply chain example will be demonstrated.
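
    The nested definition of the MPB can be illustrated with plain Monte Carlo over posterior draws of the parameter. This is a sketch of the definition only, not of the sequential sampling policies from the talk: the conditional-mean model below is made up, and the conditional means are evaluated exactly rather than estimated from noisy simulation replications.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def conditional_mean(i, theta):
        """Toy stand-in for the simulation output mean of solution i under parameter theta."""
        return -(i - 5 * theta) ** 2               # larger is better

    solutions = np.arange(10)                              # finite candidate set
    theta_posterior = rng.normal(1.0, 0.3, size=5000)      # posterior draws given data

    # Inner level: for each realized parameter value, find the conditional optimum.
    wins = np.zeros(len(solutions))
    for theta in theta_posterior:
        means = np.array([conditional_mean(i, theta) for i in solutions])
        wins[np.argmax(means)] += 1

    # Outer level: the MPB is the solution whose posterior probability of being
    # the conditional best is largest.
    posterior_prob_best = wins / wins.sum()
    print("posterior P(best):", posterior_prob_best.round(3))
    print("most probable best:", int(np.argmax(posterior_prob_best)))
    ```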
