10h30 - 10h55
Sequential Model-Based Optimization for General Algorithm Configuration
State-of-the-art algorithms for hard computational problems often expose many parameters that can be modified to improve empirical performance. However, manually exploring the resulting combinatorial space of parameter settings is tedious and tends to lead to unsatisfactory outcomes. Recently, automated approaches for solving this algorithm configuration problem have led to substantial improvements in the state of the art for solving various problems. One promising approach constructs explicit regression models to describe the dependence of target algorithm performance on parameter settings; however, this approach has so far been limited to the optimization of a few numerical algorithm parameters on single instances. In this talk, we discuss an extension of this paradigm to general algorithm configuration problems: we allow many categorical parameters, handle large non-Gaussian noise, and optimize across sets of problem instances. We experimentally validate our new algorithm configuration procedure by optimizing a local search solver and a tree search solver for the propositional satisfiability problem (SAT), as well as the commercial mixed integer programming (MIP) solver CPLEX. In these experiments, our procedure yielded state-of-the-art performance and in many cases outperformed the previous best configuration approach.
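The model-based configuration loop described in the abstract can be sketched as follows. This is a deliberately minimal illustration, not the talk's actual procedure: a 1-nearest-neighbour predictor stands in for the regression models used in practice, the acquisition step is a greedy choice over random candidates, and `evaluate` and `sample_config` are hypothetical user-supplied callbacks.

```python
import random

def smbo_configure(evaluate, sample_config, n_init=5, n_iter=20, n_candidates=50):
    """Minimal sequential model-based optimization loop (illustrative only).

    evaluate(config) -> observed cost, lower is better.
    sample_config()  -> a tuple of numerical parameter values.
    """
    # Initial design: evaluate a few random configurations.
    history = []  # list of (config, cost) pairs
    for _ in range(n_init):
        c = sample_config()
        history.append((c, evaluate(c)))

    def predict(c):
        # Toy surrogate: predicted cost is that of the nearest evaluated config.
        nearest = min(history,
                      key=lambda h: sum((a - b) ** 2 for a, b in zip(h[0], c)))
        return nearest[1]

    for _ in range(n_iter):
        # Greedy acquisition: among random candidates, pick the one the
        # surrogate predicts to be cheapest, then evaluate it for real.
        cand = min((sample_config() for _ in range(n_candidates)), key=predict)
        history.append((cand, evaluate(cand)))

    return min(history, key=lambda h: h[1])  # incumbent (config, cost)
```

For example, "configuring" a toy target whose cost surface is `(x-0.3)**2 + (y+0.1)**2` over `[-1, 1]^2` quickly drives the incumbent cost toward zero.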
10h55 - 11h20
Taking Advantage of Parallelism in a Parameter Optimization Framework
OPAL is an algorithm optimization framework in which the identification of good algorithmic parameters is formulated as a blackbox optimization problem and solved by a direct search optimizer. The quality of a trial set of algorithmic parameters is evaluated by launching the algorithm on a collection of test problems, which is usually a time-consuming task. The talk focuses on various ways to use parallelism to reduce the overall computational time, motivated by the fact that each evaluation of trial parameters requires launching a potentially large number of independent tasks. Parallelism can be used during the evaluation of the trial parameters, within the blackbox solver (such as the Mesh Adaptive Direct Search solver), or in both. Our methodology is illustrated on five algorithmic parameters of a trust-region method for unconstrained optimization.
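The first level of parallelism mentioned above, running the independent per-problem tasks of one trial-parameter evaluation concurrently, can be sketched as follows. This is not OPAL's API: `run_algorithm` is a hypothetical user-supplied function, and a thread pool is used here for simplicity where a process pool or a compute cluster would typically be used in practice.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_parameters(params, problems, run_algorithm, max_workers=4):
    """Evaluate one trial parameter setting by launching the target algorithm
    on every test problem in parallel and aggregating the measured costs.

    run_algorithm(params, problem) -> cost of one independent run.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Each test problem is an independent task, so all runs can proceed
        # concurrently; map preserves the order of `problems`.
        costs = list(pool.map(lambda p: run_algorithm(params, p), problems))
    return sum(costs) / len(costs)  # mean cost over the test collection
```

The second level (parallelism inside the direct search solver itself, e.g. evaluating several poll points of a MADS iteration at once) would wrap calls to this function in another pool.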
11h20 - 11h45
Templating for Automatic Algorithm Optimization
With the optimization of the performance of linear algebra kernels in mind, we propose a Python tool chain that allows one to write low-level kernels in the C programming language while explicitly exposing parameters that may influence performance. Code is automatically generated by specializing templates with specific parameter values, and is automatically compiled into a shared library that is immediately available from within the Python code. The OPAL framework conducts the optimization by searching the parameter space.
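The template-specialization step can be illustrated with a toy C kernel whose loop-unrolling depth is the exposed performance parameter. This is a sketch, not the tool chain's actual template syntax; the subsequent compile-and-load step (e.g. `cc -O3 -shared -fPIC` followed by `ctypes.CDLL`) is only indicated in the comments and not executed here.

```python
import string

# C template for a dot product; $unroll and $body are the parameter holes.
C_TEMPLATE = string.Template("""\
/* dot product unrolled by a factor of $unroll (generated code) */
double dot(const double *x, const double *y, int n) {
    double acc = 0.0;
    int i = 0;
    for (; i + $unroll <= n; i += $unroll) {
$body    }
    for (; i < n; i++)  /* scalar tail for the remaining elements */
        acc += x[i] * y[i];
    return acc;
}
""")

def specialize(unroll):
    """Generate C source for a given value of the unrolling parameter."""
    body = "".join(f"        acc += x[i+{k}] * y[i+{k}];\n" for k in range(unroll))
    return C_TEMPLATE.substitute(unroll=unroll, body=body)

# In the tool chain described in the talk, the generated source would now be
# compiled into a shared library and loaded into Python, so that the blackbox
# optimizer can time `dot` for each candidate value of `unroll`.
```

Each point visited by the parameter search thus corresponds to one generated, compiled, and timed variant of the kernel.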
11h45 - 12h10
New PI Controller Tuning Methods using Multi-Objective Optimization
New tuning methods are developed for a PI controller for processes represented by a First-Order Plus Dead Time (FOPDT) transfer function. The methods involve approximating the Pareto domain associated with the minimization of three performance criteria. Two tuning methods are developed, achieving optimal controller performance by specifying either one of the controller input parameters or the desired values of the performance criteria. The developed tuning methods are compared to several previously published controller correlations. Finally, the tuning methods are applied to a fourth-order process subjected to a set-point change and a disturbance, and demonstrate excellent performance.
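At the core of the Pareto-domain approximation above is the non-domination test over the three performance criteria (the abstract does not name them; criteria such as integrated error, overshoot, and settling time are typical). A minimal filter for extracting the non-dominated settings from a set of evaluated candidates might look like this:

```python
def pareto_front(points):
    """Return the non-dominated subset of `points`, where each point is a
    tuple of criterion values and all criteria are minimized.

    A point p dominates q if p is no worse than q in every criterion and
    strictly better in at least one.
    """
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))

    # Keep a point only if no other point dominates it.
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

In a tuning workflow of the kind described, each candidate PI setting would be simulated against the FOPDT model, scored on the three criteria, and the resulting points filtered this way to approximate the Pareto domain from which a setting is then selected.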