[00891] Derivative-Free Optimization Theory, Methods, and Software
Session Date & Time :
00891 (1/2) : 5B (Aug.25, 10:40-12:20)
00891 (2/2) : 5C (Aug.25, 13:20-15:00)
Type : Proposal of Minisymposium
Abstract : Derivative-free optimization methods aim to solve optimization problems using only function values, without derivatives or other first-order information. They are motivated by problems where first-order information is expensive or impossible to obtain. Such problems emerge frequently from industrial and engineering applications, including integrated circuit design, aircraft design, and hyperparameter tuning in artificial intelligence. This minisymposium will provide a platform for researchers and practitioners in derivative-free optimization to discuss recent advances in theory, methods, and software in this area.
Serge Gratton (INPT-ENSEEIHT, University of Toulouse)
Stefan Wild (Applied Math & Computational Research Division, Berkeley Lab)
Tom Ragonneau (The Hong Kong Polytechnic University)
Pengcheng Xie (Academy of Mathematics and Systems Science (AMSS), Chinese Academy of Sciences (CAS))
Zaikun Zhang (The Hong Kong Polytechnic University)
Talks in Minisymposium :
[01341] COBYQA — A Derivative-Free Trust-Region SQP Method for Nonlinearly Constrained Optimization
Author(s) :
Tom M. Ragonneau (The Hong Kong Polytechnic University)
Zaikun Zhang (The Hong Kong Polytechnic University)
Abstract : This talk introduces COBYQA, a derivative-free trust-region SQP method for nonlinear optimization. The method builds trust-region quadratic models using the derivative-free symmetric Broyden update. An important feature of COBYQA is that it always respects bound constraints. COBYQA is competitive with NEWUOA, BOBYQA, and LINCOA while being able to handle more general problems. Most importantly, COBYQA evidently outperforms COBYLA on all types of problems.
COBYQA is implemented in Python and is publicly available at https://www.cobyqa.com/.
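For illustration, here is a minimal usage sketch of the Python package, assuming a SciPy-style interface with a top-level minimize function that accepts bounds and constraints objects; the exact signature should be checked against the documentation at the URL above.

import numpy as np
from scipy.optimize import Bounds, NonlinearConstraint
from cobyqa import minimize  # assumed entry point; see https://www.cobyqa.com/

# Derivative-free minimization of the 2-D Rosenbrock function subject to
# bound constraints (always respected) and one nonlinear inequality constraint.
def rosen(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

x0 = np.array([0.0, 0.0])
bounds = Bounds([-2.0, -2.0], [2.0, 2.0])
ball = NonlinearConstraint(lambda x: x[0] ** 2 + x[1] ** 2, -np.inf, 1.5)

res = minimize(rosen, x0, bounds=bounds, constraints=[ball])
print(res.x, res.fun)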
[01372] DFO with Transformed Objectives and a Model-based Trust-region Method
Author(s) :
Pengcheng Xie (Academy of Mathematics and Systems Science (AMSS), Chinese Academy of Sciences (CAS))
Abstract : Derivative-free optimization (DFO) refers to optimization in which derivative information is unavailable. The least Frobenius norm updating quadratic model is an essential underdetermined model for derivative-free trust-region methods. We propose DFO with transformed objectives and give a model-based method using the least Frobenius norm model. We prove the existence of model optimality-preserving transformations, establish a necessary and sufficient condition for them, and analyze the model, the interpolation error, and the convergence properties. Numerical results support our model and method.
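For readers unfamiliar with the model mentioned above, the least Frobenius norm updating subproblem takes the standard Powell-style form sketched below; this is background notation only and does not reproduce the specific transformations analyzed in the talk.

% Least Frobenius norm updating quadratic model: given the previous model
% Q_{k-1} and an interpolation set {y^1, ..., y^m}, the new quadratic Q_k solves
\begin{equation*}
  \min_{Q \ \text{quadratic}} \ \tfrac{1}{2}\, \lVert \nabla^2 Q - \nabla^2 Q_{k-1} \rVert_F^2
  \quad \text{s.t.} \quad Q(y^i) = f(y^i), \quad i = 1, \dots, m.
\end{equation*}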
[01570] PRIMA: Reference Implementation for Powell's methods with Modernization and Amelioration
Author(s) :
Zaikun Zhang (The Hong Kong Polytechnic University)
Abstract : Powell developed five widely used DFO solvers, namely COBYLA, UOBYQA, NEWUOA, BOBYQA, and LINCOA. They were coded in Fortran 77 in a unique style, which poses a significant obstacle to maintaining, exploiting, or extending them. PRIMA is a project providing the reference implementation of these solvers in modern languages. We will present the current status of PRIMA, including the bugs we have spotted in the Fortran 77 code and the improvements we have achieved.
[03192] A General Blackbox Optimization Framework for Hyperparameter Optimization in Deep Learning
Author(s) :
Edward Hallé-Hannan (Polytechnique Montréal)
Sébastien Le Digabel (Polytechnique Montréal)
Charles Audet (Polytechnique Montréal)
Abstract : Tuning the hyperparameters of a deep model is a mixed-variable blackbox optimization (BBO) problem with an unfixed structure. For instance, the number of layers (a hyperparameter) affects the number of architectural hyperparameters: meta variables are introduced to model this unfixed structure. Moreover, the hyperparameter optimization (HPO) problem may also simultaneously contain categorical, integer, and continuous variables. A mathematical framework, compatible with direct-search methods and Bayesian optimization, is proposed to model and tackle the HPO problem.
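To make the notion of a meta variable concrete, the hypothetical sketch below encodes a search space in which the number of layers determines how many per-layer hyperparameters exist; all names and ranges are illustrative and are not taken from the proposed framework.

import random

def sample_configuration(max_layers=5):
    # Meta variable: its value changes which other variables exist.
    n_layers = random.randint(1, max_layers)
    config = {
        "n_layers": n_layers,
        "optimizer": random.choice(["sgd", "adam"]),    # categorical variable
        "learning_rate": 10 ** random.uniform(-4, -1),  # continuous variable
    }
    # These architectural variables exist only for the chosen number of layers,
    # which is what makes the structure of the problem "unfixed".
    for i in range(n_layers):
        config[f"units_layer_{i}"] = random.choice([32, 64, 128, 256])  # integer variable
    return config

print(sample_configuration())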
[03615] Stochastic Average Model Methods
Author(s) :
Matt Menickelly (Argonne)
Stefan M. Wild (Lawrence Berkeley National Laboratory)
Abstract : We consider finite-sum minimization problems in which the summand functions are computationally expensive, making it undesirable to evaluate every summand at every iteration. We present stochastic average model methods, which sample component functions according to a probability distribution chosen to minimize an upper bound on the variance of the resulting stochastic model. We present promising numerical results for a corresponding extension of the derivative-free model-based trust-region solver POUNDERS.
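For orientation, the finite-sum setting and a generic importance-sampled model estimator are written out below; the particular variance bound and the distribution that minimizes it are the subject of the talk and are not reproduced here.

% Finite-sum objective and an unbiased sampled model: each f_i has a local
% model m_{i,k}, and a batch S_k is drawn with per-component probabilities p_i.
\begin{gather*}
  f(x) = \frac{1}{N} \sum_{i=1}^{N} f_i(x), \\
  m_k(x) = \frac{1}{N\,\lvert S_k \rvert} \sum_{i \in S_k} \frac{m_{i,k}(x)}{p_i},
  \qquad
  \mathbb{E}\bigl[m_k(x)\bigr] = \frac{1}{N} \sum_{i=1}^{N} m_{i,k}(x).
\end{gather*}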