Journées de l'optimisation 2022
HEC Montréal, Québec, Canada, May 16-18, 2022
WB8 - Derivative-free and Blackbox Optimization V
May 18, 2022, 13:30 - 15:10
Room: METRO INC. (yellow)
Chaired by Edward Hallé-Hannan
4 presentations
-
13:30 - 13:55
Towards a unified Gaussian process kernel-based correlation matrix representation for mixed-categorical variables
Recently, there has been growing interest in mixed-categorical meta-models based on Gaussian process (GP) surrogates. In this setting, existing approaches rely on different strategies. Among the recently developed methods are continuous relaxation of the variables, Gower-distance-based models, and GP models based on direct estimation of the correlation matrix (e.g., using the homoscedastic hypersphere model).
In this paper, we present a kernel-based approach that unifies many existing GP approximation methods. It extends the paradigm used to construct correlation matrices for continuous inputs to the mixed-categorical case. The approach is presented for the Gaussian kernel but extends easily to other kernel choices. The potential of the proposed framework is demonstrated on several analytical test cases, where our homogeneous model gives results similar to those of the existing approaches.
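To make the construction concrete, here is a minimal sketch (not the authors' implementation) of a Gaussian kernel over mixed inputs, where each categorical variable contributes an entry of a learned correlation matrix in place of a squared distance; the correlation values and variable names below are illustrative placeholders.

```python
# Illustrative sketch: anisotropic Gaussian kernel on the continuous part,
# multiplied by a learned correlation between observed categorical levels.
import numpy as np

def mixed_gaussian_kernel(x, y, theta, cat_corrs):
    """x, y: (continuous array, list of category indices).
    theta: length-scale parameters for the continuous variables.
    cat_corrs: one correlation matrix per categorical variable
    (e.g., built from a hypersphere parameterization)."""
    (xc, xcat), (yc, ycat) = x, y
    # Continuous part: standard anisotropic Gaussian kernel.
    k = np.exp(-np.sum(theta * (xc - yc) ** 2))
    # Categorical part: multiply by the correlation between
    # the two observed levels of each categorical variable.
    for corr, i, j in zip(cat_corrs, xcat, ycat):
        k *= corr[i, j]
    return k

# Example: one continuous variable, one 3-level categorical variable.
corr = np.array([[1.0, 0.5, 0.2],
                 [0.5, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])
k = mixed_gaussian_kernel((np.array([0.3]), [0]),
                          (np.array([0.7]), [2]),
                          theta=np.array([2.0]), cat_corrs=[corr])
```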
-
13:55 - 14:20
Tabu search hyperparameter tuning with blackbox optimization
Hyperparameters have a great impact on the performance of metaheuristics. When the search space is large, the best values for these hyperparameters cannot be identified by trial and error or an empirical approach. Research shows that hyperparameter tuning is a nontrivial task and that efficient methods are required to obtain the best possible results. In this study, we investigate how blackbox optimization can help choose the tabu search hyperparameters efficiently in a physician scheduling problem.
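A minimal sketch of this setup, under assumed hyperparameter names (tabu_tenure, max_iterations, neighborhood_size): the tabu search hyperparameters are the inputs of a blackbox whose output is the cost of the resulting physician schedule. Plain random search stands in here for the BBO solver used in practice.

```python
import random

def blackbox(tabu_tenure: int, max_iterations: int, neighborhood_size: int) -> float:
    """Placeholder for the expensive evaluation: run tabu search on the
    scheduling instance with these hyperparameters and return the best
    schedule cost found."""
    local = random.Random(tabu_tenure * 1_000_003 + max_iterations * 97 + neighborhood_size)
    return local.uniform(0.0, 100.0)  # stands in for the true schedule cost

# Stand-in optimizer: random search over the hyperparameter box,
# where a dedicated BBO solver would be used in practice.
rng = random.Random(0)
best_params, best_cost = None, float("inf")
for _ in range(50):
    params = (rng.randint(5, 50), rng.randint(100, 5000), rng.randint(5, 100))
    cost = blackbox(*params)
    if cost < best_cost:
        best_params, best_cost = params, cost
print("incumbent hyperparameters:", best_params, "cost:", best_cost)
```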
-
14:20 - 14:45
Hyperparameter optimization of machine learning models for COVID-19 forecasting
COVID-19 forecasting is challenging due to the large amount of variability and uncertainty surrounding its transmission. We present several machine learning models for forecasting the pandemic trajectory using a viral load measurement from a cross-sectional sample of patients. The models were tuned using a combination of grid search and the stochastic mesh adaptive direct search (StoMADS) algorithm. Due to the heuristic nature of the training algorithms of machine learning models, the hyperparameter optimization problem is typical of stochastic derivative-free optimization. The problem involves a computationally expensive, noisy blackbox that requires substantial computational resources and parallel computing to evaluate. The model development strategy shown in this work can set a precedent for the use and development of stochastic derivative-free optimization algorithms on problems involving similar blackboxes.
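A hedged sketch of the outer structure of such a tuning loop, with toy names and values: the noisy blackbox wraps model training and returns a validation error, grid search explores a coarse lattice, and repeated evaluations are averaged to damp the noise. In the work described above, the continuous hyperparameters would then be refined with StoMADS rather than this simple averaging.

```python
import itertools, random

def noisy_blackbox(learning_rate: float, n_estimators: int) -> float:
    """Placeholder for training the forecasting model and returning a
    validation error; stochastic training makes repeated calls differ."""
    rng = random.Random()
    base = (learning_rate - 0.05) ** 2 + (n_estimators - 200) ** 2 * 1e-6
    return base + rng.gauss(0, 0.01)  # toy surface plus noise

grid = itertools.product([0.01, 0.05, 0.1], [100, 200, 400])
# Average several replications per grid point to damp the noise.
scores = {p: sum(noisy_blackbox(*p) for _ in range(5)) / 5 for p in grid}
best = min(scores, key=scores.get)
print("grid-search incumbent:", best, "mean error:", scores[best])
```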
-
14:45 - 15:10
Notation framework for hyperparameter optimization in deep learning
The rise of deep learning has brought new challenges to the blackbox optimization (BBO) community. Notably, tuning the hyperparameters of a deep model is a mixed-variable BBO problem with an unfixed structure. For instance, the number of hidden layers, which is itself a hyperparameter, determines how many architectural hyperparameters the model has. Moreover, the hyperparameter optimization (HPO) problem may simultaneously contain categorical, integer and continuous variables. Together, the diversity of variable types and the unfixed structure of the HPO problem create a substantial challenge in a BBO context. To tackle the HPO problem, we developed a notation framework that properly models this type of problem. The framework outlines many algorithmic subtleties and implications, which eases the development of optimization methods.
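The unfixed structure can be illustrated with a small sketch (a hypothetical encoding, not the framework itself): the value of one hyperparameter (n_layers) determines how many other hyperparameters exist, and the variables mix categorical, integer and continuous types.

```python
import random

def sample_configuration(rng: random.Random) -> dict:
    n_layers = rng.randint(1, 5)  # integer hyperparameter
    return {
        "optimizer": rng.choice(["sgd", "adam"]),    # categorical
        "learning_rate": 10 ** rng.uniform(-5, -1),  # continuous
        "n_layers": n_layers,
        # One width per hidden layer: the length of this sub-vector
        # depends on n_layers, so the search space has no fixed size.
        "units": [rng.choice([64, 128, 256]) for _ in range(n_layers)],
    }

rng = random.Random(42)
print(sample_configuration(rng))
```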