Registered Data
Contents
- 1 [CT165]
- 1.1 [02410] An efficient method for finding the solution of physical distribution problems
- 1.2 [00403] Exact Penalization at Stationary Points of Sparse Constrained Problem
- 1.3 [00995] Convergence of a Normal Map-Based Prox-SGD Method for Stochastic Composite Optimization
- 1.4 [02116] Generalized Polyak Step Size for First Order Optimization with Momentum
- 1.5 [02421] An approximation scheme for Multistage Stochastic Variational Inequalities
[CT165]
[02410] An efficient method for finding the solution of physical distribution problems
- Session Date & Time : 1E (Aug.21, 17:40-19:20)
- Type : Contributed Talk
- Abstract : The transportation problem is an important optimization problem in which given amounts of a product are shipped from sources to demand points at minimum total cost. The aim of this paper is to introduce a new method that quickly finds a good initial basic feasible solution for both balanced and unbalanced transportation problems. The proposed method always gives an optimal or near-optimal solution, which is illustrated with two examples together with comparisons of results (the standard model is sketched after this entry).
- Classification : 90C05, 90C90
- Author(s) :
- Gourav Gupta (Lovely Professional University, Punjab)
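For orientation, a minimal sketch of the standard balanced transportation model the abstract refers to, with supplies s_i, demands d_j, and unit costs c_{ij}; the authors' specific initialization procedure is not reproduced here.

```latex
% Balanced transportation problem: ship x_{ij} >= 0 units from source i to demand point j
\min_{x \ge 0} \ \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij} x_{ij}
\quad \text{s.t.} \quad
\sum_{j=1}^{n} x_{ij} = s_i \ (i = 1, \dots, m), \qquad
\sum_{i=1}^{m} x_{ij} = d_j \ (j = 1, \dots, n).
```

In the balanced case total supply equals total demand; an unbalanced instance is usually balanced first by adding a dummy source or demand point, and an initial basic feasible solution occupies at most m + n - 1 cells of the transportation tableau.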
[00403] Exact Penalization at Stationary Points of Sparse Constrained Problem
- Session Date & Time : 1E (Aug.21, 17:40-19:20)
- Type : Contributed Talk
- Abstract : Nonconvex sparse optimization problems with the trimmed l1 norm or the truncated nuclear norm, which serve as penalty functions for cardinality or rank constraints, have been actively studied. A unified framework that includes all existing trimmed-l1-penalized problems is introduced. We show that, under mild conditions, any d-stationary point of the penalized problem satisfies the corresponding constraint. Our result improves on almost all existing results, especially from a practical viewpoint (the trimmed l1 norm is recalled after this entry).
- Classification : 90C06, 90C26, 90C30, 90C46, 90C90
- Author(s) :
- Shotaro Yagishita (Chuo University)
- Jun-ya Gotoh (Chuo University)
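As background (not taken from the abstract itself), the trimmed l1 norm used as an exact penalty for the cardinality constraint ||x||_0 <= K is the sum of the n - K smallest absolute entries of x; the symbol tau_K below is introduced only for illustration.

```latex
% Trimmed l1 norm: sum of the n-K smallest absolute components of x in R^n,
% where |x|_{(1)} >= |x|_{(2)} >= ... >= |x|_{(n)} are the sorted absolute values
\tau_K(x) \;=\; \sum_{i = K+1}^{n} |x|_{(i)} \;=\; \|x\|_1 \;-\; \max_{|I| \le K} \sum_{i \in I} |x_i|.
```

In particular, tau_K(x) = 0 exactly when ||x||_0 <= K, which is why the trimmed l1 norm can replace the cardinality constraint as a penalty; the truncated nuclear norm plays the analogous role for rank constraints.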
[00995] Convergence of a Normal Map-Based Prox-SGD Method for Stochastic Composite Optimization
- Session Date & Time : 1E (Aug.21, 17:40-19:20)
- Type : Contributed Talk
- Abstract : In this talk, we present a novel stochastic normal map-based algorithm (nor-SGD) for nonconvex composite-type optimization problems and discuss its asymptotic convergence properties. We first analyze the global convergence behavior of nor-SGD and show that every accumulation point of the generated sequence of iterates is a stationary point almost surely and in an expectation sense. The obtained results hold under standard assumptions and extend the more limited convergence guarantees of nonconvex prox-SGD. In addition, based on the Kurdyka-Lojasiewicz (KL) framework and utilizing an adaptive time window mechanism, we establish almost sure convergence of the iterates and derive convergence rates that depend on the KL exponent and the step size dynamics. The techniques studied in this work can potentially be applied to other families of stochastic and simulation-based algorithms (the normal map is recalled after this entry).
- Classification : 90C06, 90C15, 90C26
- Author(s) :
- Andre Milzarek (The Chinese University of Hong Kong, Shenzhen)
- Junwen Qiu (The Chinese University of Hong Kong, Shenzhen)
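For context, a brief recap of the normal map for a composite problem min_x f(x) + phi(x) with smooth f and proximable phi; nor-SGD itself, which works with stochastic estimates of the gradient, is not reproduced here.

```latex
% Normal map associated with f, phi and a parameter lambda > 0
F_{\mathrm{nor}}(z) \;=\; \nabla f\big(\operatorname{prox}_{\lambda \varphi}(z)\big)
\;+\; \tfrac{1}{\lambda}\big(z - \operatorname{prox}_{\lambda \varphi}(z)\big),
\qquad
\operatorname{prox}_{\lambda \varphi}(z) \;=\; \arg\min_{x}\; \varphi(x) + \tfrac{1}{2\lambda}\|x - z\|^2 .
```

A point x = prox_{lambda phi}(z) is stationary for the composite problem exactly when F_nor(z) = 0, which is what makes normal map residuals a natural measure of stationarity.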
[02116] Generalized Polyak Step Size for First Order Optimization with Momentum
- Session Date & Time : 1E (Aug.21, 17:40-19:20)
- Type : Contributed Talk
- Abstract : This paper presents a general framework for setting the learning rate adaptively in first-order optimization methods with momentum, motivated by the derivation of the Polyak step size. It is shown that the resulting methods are much less sensitive to the choice of the momentum parameter and may avoid the oscillation of the heavy-ball method on ill-conditioned problems. These adaptive step sizes are further extended to the stochastic setting, where they are attractive choices for stochastic gradient descent with momentum. Our methods are demonstrated to be more effective for stochastic gradient methods than prior adaptive step-size algorithms in large-scale machine learning tasks (the classical Polyak step size is recalled after this entry).
- Classification : 90C15, 65K05, 90C06
- Author(s) :
- Xiaoyu Wang (Hong Kong University of Science and Technology)
- Mikael Johansson (KTH Royal Institute of Technology)
- Tong Zhang (Hong Kong University of Science and Technology)
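As a reference point, the classical Polyak step size for plain gradient descent on a convex function f with known optimal value f^* is recalled below; the generalized momentum variants proposed in the talk are not reproduced here.

```latex
% Classical Polyak step size for the gradient step x_{k+1} = x_k - alpha_k * grad f(x_k)
\alpha_k \;=\; \frac{f(x_k) - f^*}{\|\nabla f(x_k)\|^2}.
```

For convex f this choice maximizes the guaranteed decrease of the distance to a minimizer per gradient step, and this derivation is the stated motivation for the generalized step sizes in the talk.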
[02421] An approximation scheme for Multistage Stochastic Variational Inequalities
- Session Date & Time : 1E (Aug.21, 17:40-19:20)
- Type : Contributed Talk
- Abstract : We address the numerical solution of multistage stochastic variational inequalities in the formulation of Rockafellar and Wets. Based on hybrid proximal point methods, we introduce an inexact progressive hedging algorithm that allows the generated subproblems to be solved approximately under an implementable tolerance condition. Our scheme generalizes a well-known exact algorithm introduced by Rockafellar and Sun and also provides stronger convergence results. We present numerical experiments on two-stage Nash games (the classical progressive hedging iteration is sketched after this entry).
- Classification : 90C15, 65K15, 49M29
- Author(s) :
- Emelin Liliana Buscaglia (Universidad Nacional de Rosario - CONICET)
- Pablo Andrés Lotito (Universidad Nacional del Centro de la Provincia de Buenos Aires - CONICET)
- Lisandro Armando Parente (Universidad Nacional de Rosario - CONICET)
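For orientation, a minimal sketch of the classical (exact) progressive hedging iteration in its standard two-stage stochastic programming form, with scenarios s of probability p_s, penalty parameter r > 0, and multipliers w_s; the talk's inexact scheme, which solves the subproblems only approximately under an implementable tolerance, and its variational inequality setting are not reproduced here.

```latex
% One classical progressive hedging iteration over the first-stage variables
x_s^{k+1} \;=\; \arg\min_{x} \; f_s(x) + \langle w_s^{k}, x \rangle + \tfrac{r}{2}\|x - \hat{x}^{k}\|^2
\quad \text{for each scenario } s,
\qquad
\hat{x}^{k+1} \;=\; \sum_{s} p_s\, x_s^{k+1},
\qquad
w_s^{k+1} \;=\; w_s^{k} + r\big(x_s^{k+1} - \hat{x}^{k+1}\big).
```

Each scenario subproblem is solved separately and the aggregation step restores nonanticipativity; in the variational inequality setting of Rockafellar and Sun, the per-scenario minimization is replaced by a proximal variational inequality subproblem.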