Abstract : This minisymposium focuses on new optimization techniques for application problems. Various application scenarios, such as machine learning and signal processing, will be covered.
[01318] A Unified Single-loop Alternating Gradient Projection Algorithm for Nonconvex-Concave and Convex-Nonconcave Minimax Problems
Format : Talk at Waseda University
Author(s) :
Zi Xu (Shanghai University)
Huiling Zhang (Shanghai University)
Guanghui Lan (Georgia Institute of Technology)
Abstract : Much recent research effort has been directed to the development of efficient algorithms for solving minimax problems with theoretical convergence guarantees, due to the relevance of these problems to a number of emerging applications.
In this paper, we propose a unified single-loop alternating gradient projection $($AGP$)$ algorithm for solving smooth nonconvex-$($strongly$)$ concave and $($strongly$)$ convex-nonconcave minimax problems. AGP employs simple gradient projection steps to update the primal and dual variables alternately at each iteration. We show that it can find an $\varepsilon$-stationary point of the objective function in $O(\varepsilon^{-2})$ $($resp. $O(\varepsilon^{-4})$$)$ iterations under the nonconvex-strongly concave $($resp. nonconvex-concave$)$ setting. Moreover, its gradient complexity to obtain an $\varepsilon$-stationary point of the objective function is bounded by $O(\varepsilon^{-2})$ $($resp. $O(\varepsilon^{-4})$$)$ under the strongly convex-nonconcave $($resp. convex-nonconcave$)$ setting. To the best of our knowledge, this is the first time that a simple and unified single-loop algorithm has been developed for solving both nonconvex-$($strongly$)$ concave and $($strongly$)$ convex-nonconcave minimax problems. Moreover, complexity results for the latter $($strongly$)$ convex-nonconcave minimax problems had not previously been obtained in the literature. Numerical results show the efficiency of the proposed AGP algorithm.
Furthermore, we extend the AGP algorithm by presenting a block alternating proximal gradient $($BAPG$)$ algorithm for solving more general multi-block nonsmooth nonconvex-$($strongly$)$ concave and $($strongly$)$ convex-nonconcave minimax problems. We can similarly establish the gradient complexity of the proposed algorithm under these four different settings.
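To make the single-loop alternating structure concrete, the following is a minimal sketch of the alternating gradient projection pattern on a toy saddle problem. The stepsizes, projection operators, and toy objective are illustrative assumptions; the authors' AGP additionally involves regularization terms and stepsize schedules that this sketch omits.

```python
import numpy as np

def agp_sketch(grad_x, grad_y, proj_x, proj_y, x0, y0,
               alpha=0.05, beta=0.05, iters=1000):
    """Single-loop alternating gradient projection for min_x max_y f(x, y):
    one projected descent step in x, then one projected ascent step in y,
    per iteration."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x = proj_x(x - alpha * grad_x(x, y))  # primal (descent) step
        y = proj_y(y + beta * grad_y(x, y))   # dual (ascent) step, uses new x
    return x, y

# Toy saddle problem f(x, y) = 0.5||x||^2 + x^T y - 0.5||y||^2 over unit
# balls; the unique saddle point is (0, 0).
proj_ball = lambda z: z / max(1.0, np.linalg.norm(z))
x, y = agp_sketch(grad_x=lambda x, y: x + y,
                  grad_y=lambda x, y: x - y,
                  proj_x=proj_ball, proj_y=proj_ball,
                  x0=np.ones(5), y0=np.ones(5))
print(np.linalg.norm(x), np.linalg.norm(y))  # both should be near 0
```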
[01287] Accelerated ADMM-type Methods for Convex and Nonconvex Optimization Problems
Format : Talk at Waseda University
Author(s) :
Jianchao Bai (Northwestern Polytechnical University)
Abstract : In this talk, several accelerated stochastic and deterministic Alternating Direction Methods of Multipliers, abbreviated as ADMM, are presented for solving separable convex optimization problems whose objective function is the sum of a possibly nonsmooth convex function and a smooth function that is an average of many component convex functions. We also present an advanced inexact accelerated ADMM for solving separable nonsmooth and nonconvex optimization problems. The convergence and complexity of these algorithms are discussed briefly, and a number of large-scale examples verify the effectiveness of our algorithms compared with state-of-the-art first-order methods.
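For reference, here is a minimal sketch of plain two-block ADMM on a lasso instance, the kind of separable problem (nonsmooth convex plus smooth data-fitting term) the talk targets. The splitting, penalty parameter, and problem data are illustrative; the accelerated stochastic and deterministic variants discussed in the talk modify these basic updates with extrapolation and sampling.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Plain two-block ADMM for min 0.5||Ax - b||^2 + lam ||z||_1
    subject to x - z = 0 (a standard lasso splitting)."""
    n = A.shape[1]
    z, u = np.zeros(n), np.zeros(n)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached for the x-update
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(iters):
        x = Q @ (A.T @ b + rho * (z - u))  # smooth x-update (ridge solve)
        z = soft(x + u, lam / rho)         # proximal z-update (soft threshold)
        u = u + x - z                      # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b, lam=1.0), 2))  # sparse support, with shrinkage
```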
[01284] On a special discrete phase constrained complex-field optimization problem
Format : Talk at Waseda University
Author(s) :
Cong Sun (Beijing University of Posts and Telecommunications)
Abstract : A wireless communication network aided by a reconfigurable intelligent surface (RIS) with discrete phase shifts is considered. The resource allocation problem of sum-rate maximization with power constraints and discrete phase shifts is solved. The complicated objective function is approximated by its upper bound. The difficult orthogonality constraint is penalized into the objective function through the Courant penalty function technique. Two algorithms are proposed for the approximated problem, based on the alternating direction method of multipliers and the proximal gradient method, respectively. Simulations show that the two proposed algorithms achieve a higher sum rate with significantly lower computational cost than the state of the art.
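As a rough illustration of the proximal gradient approach over discrete phases, the sketch below minimizes a toy smooth surrogate by a gradient step followed by projection onto the discrete phase alphabet (the prox of its indicator). The surrogate objective, alphabet size, and stepsize are assumptions for illustration; the actual sum-rate upper bound and Courant-penalized orthogonality constraint from the talk are not reproduced here.

```python
import numpy as np

def project_discrete_phase(theta, B=2):
    """Round each entry of a complex vector to the nearest point of the
    2^B-element phase alphabet {exp(2*pi*1j*k / 2^B)}."""
    grid = np.exp(2j * np.pi * np.arange(2 ** B) / (2 ** B))
    return grid[np.argmin(np.abs(theta[:, None] - grid[None, :]), axis=1)]

def proximal_gradient(grad, theta0, step=1.0, iters=20, B=2):
    """Gradient step on a smooth surrogate, then projection onto the
    (nonconvex) discrete phase set -- the prox of its indicator."""
    theta = theta0
    for _ in range(iters):
        theta = project_discrete_phase(theta - step * grad(theta), B)
    return theta

# Toy surrogate: align the RIS phases with a channel direction h/|h|,
# i.e. minimize 0.5 * ||theta - h/|h|||^2 over discrete phases.
rng = np.random.default_rng(1)
h = rng.standard_normal(8) + 1j * rng.standard_normal(8)
theta = proximal_gradient(lambda t: t - h / np.abs(h),
                          theta0=np.ones(8, dtype=complex))
print(np.round(np.angle(theta) / np.pi, 2))  # quantized phases, in units of pi
```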
[01368] Stochastic regularized Newton methods for nonlinear equations
Format : Talk at Waseda University
Author(s) :
Jiani Wang (Chinese Academy of Sciences)
Xiao Wang (Peng Cheng Laboratory)
Liwei Zhang (Dalian University of Technology)
Abstract : In this talk, we study finding zeros of nonlinear equations whose exact function information is typically expensive to calculate but whose approximations can easily be accessed via calls to stochastic oracles. Based on an inexact line search, we propose a stochastic regularized Newton method and investigate its global convergence and local convergence rate. We also propose a variant of the algorithm incorporating a variance reduction scheme and establish its sample complexity. Finally, we report some numerical results.
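A deterministic skeleton of a regularized Newton iteration with an inexact (Armijo-type) line search might look as follows; in the stochastic setting of the talk, $F$ and its Jacobian would be replaced by mini-batch oracle estimates. The regularization parameter, merit function, and toy system are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def regularized_newton(F, J, x0, lam=1.0, iters=50, tol=1e-10):
    """Levenberg-Marquardt-style regularized Newton sketch for F(x) = 0:
    solve (J^T J + lam I) d = -J^T F, then backtrack on the merit
    function m(x) = 0.5 ||F(x)||^2 (inexact Armijo line search)."""
    x = x0.copy()
    for _ in range(iters):
        Fx, Jx = F(x), J(x)
        m = 0.5 * Fx @ Fx
        if m < tol:
            break
        g = Jx.T @ Fx                                   # gradient of the merit
        d = np.linalg.solve(Jx.T @ Jx + lam * np.eye(len(x)), -g)
        t = 1.0
        while 0.5 * np.sum(F(x + t * d) ** 2) > m + 1e-4 * t * g @ d and t > 1e-12:
            t *= 0.5                                    # backtracking step
        x = x + t * d
    return x

# Toy system with a root at (1, 2): F(x) = (x1^2 + x2 - 3, x1 + x2^2 - 5).
F = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
print(regularized_newton(F, J, np.array([1.5, 2.5])))  # approx (1, 2)
```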
[01475] An Efficient Quadratic-Programming Relaxation Based Algorithm for Large-Scale MIMO Detection
Format : Talk at Waseda University
Author(s) :
Ping-Fan ZHAO (Beijing Institute of Technology)
Qing-Na LI (Beijing Institute of Technology)
Wei-Kun CHEN (Beijing Institute of Technology)
Ya-Feng LIU (Academy of Mathematics and Systems Science, Chinese Academy of Sciences)
Abstract : Massive MIMO has been recognized as a key technology in 5G and beyond communication networks; it can significantly improve communication performance, but it also poses new challenges for solving the corresponding optimization problems due to their large size. In this talk, we propose an efficient sparse QP-relaxation-based algorithm for solving the large-scale MIMO detection problem. With guaranteed exact recovery, the algorithm achieves better detection performance than state-of-the-art algorithms.
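The relax-and-round idea behind QP-relaxation detection can be sketched as follows for BPSK symbols: relax the discrete constraint to a box, solve the resulting QP by projected gradient, and round. The talk's sparse QP relaxation is tighter and comes with exact recovery guarantees; the plain box relaxation below is only a simplified stand-in.

```python
import numpy as np

def mimo_qp_detect(H, y, iters=300):
    """Box-QP relaxation sketch for BPSK MIMO detection: relax
    min ||Hx - y||^2 over x in {-1, +1}^n to the box [-1, 1]^n,
    run projected gradient descent, then round to the symbol set."""
    n = H.shape[1]
    step = 1.0 / np.linalg.norm(H.T @ H, 2)  # 1/L for the smooth quadratic
    x = np.zeros(n)
    for _ in range(iters):
        x = np.clip(x - step * (H.T @ (H @ x - y)), -1.0, 1.0)
    return np.sign(x)

rng = np.random.default_rng(2)
H = rng.standard_normal((64, 32))
x_true = rng.choice([-1.0, 1.0], size=32)
y = H @ x_true + 0.1 * rng.standard_normal(64)
print((mimo_qp_detect(H, y) == x_true).mean())  # symbol recovery rate
```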
[01487] Uniform Framework of Convergence Analysis for Nesterov's Accelerated Gradient Method
Format : Talk at Waseda University
Author(s) :
Shuo Chen (Academy of Mathematics and Systems Science, Chinese Academy of Sciences)
Abstract : Nesterov's accelerated gradient method $(\texttt{NAG})$ is one of the most influential methods used in machine learning and other fields. In this talk, we give a new framework for analyzing the convergence properties of $\texttt{NAG}$, based on the high-resolution differential equation, the phase-space representation, and Lyapunov functions. A new convergence rate for the gradient norm is established in the convex case, while in the strongly convex case the stepsize requirement is relaxed and the convergence rate is improved.
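For concreteness, here is the standard NAG iteration whose trajectories the high-resolution ODE and Lyapunov analysis describe. The momentum schedule below is the classical convex-case choice, and the quadratic test problem is an illustrative assumption.

```python
import numpy as np

def nag(grad, x0, L, iters=100):
    """Nesterov's accelerated gradient for an L-smooth convex f:
    a gradient step at the extrapolated point y_k, followed by the
    momentum (extrapolation) update that couples x_k and y_k."""
    x, y = x0.copy(), x0.copy()
    t = 1.0
    for _ in range(iters):
        x_new = y - (1.0 / L) * grad(y)                 # gradient step at y_k
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # extrapolation
        x, t = x_new, t_new
    return x

# Quadratic test: f(x) = 0.5 * x^T A x with A positive definite, minimizer 0.
rng = np.random.default_rng(3)
M = rng.standard_normal((10, 10))
A = M.T @ M + np.eye(10)
x = nag(lambda z: A @ z, x0=np.ones(10), L=np.linalg.norm(A, 2))
print(np.linalg.norm(A @ x))  # gradient norm, the quantity the talk bounds
```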
[01285] Exact continuous relaxations and algorithms for regularized optimization problems
Author(s) :
Wei Bian (Harbin Institute of Technology)
Abstract : In this talk, we consider two classes of regularized optimization problems involving group sparsity. First, we give continuous relaxation models of the considered problems and establish the equivalence of the two problems in the sense of global minimizers. Then, we define a class of stationary points of the relaxation problem and prove that any such stationary point is a local minimizer of the original regularized problem and satisfies a desirable property of its global minimizers. Further, based on the difference-of-convex (DC) structure of the relaxation problem, we design corresponding algorithms and prove their convergence properties.
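A minimal sketch of the DC-algorithm idea on a capped group penalty, an illustrative stand-in for the exact continuous relaxations in the talk: split the penalty into a convex group-$\ell_1$ part minus another convex part, linearize the subtracted part at the current iterate, and solve the resulting convex group-lasso subproblem by proximal gradient. The penalty, parameters, and data below are assumptions.

```python
import numpy as np

def group_soft(x, groups, t):
    """Group soft-thresholding: prox of t * sum_g ||x_g||_2."""
    out = x.copy()
    for g in groups:
        nrm = np.linalg.norm(x[g])
        out[g] = 0.0 if nrm <= t else (1.0 - t / nrm) * x[g]
    return out

def dca_group_sparse(A, b, groups, lam=1.0, nu=0.5, outer=20, inner=200):
    """DC algorithm for the capped group penalty
    lam * sum_g min(||x_g||, nu)
      = lam * sum_g [ ||x_g|| - max(||x_g|| - nu, 0) ]:
    linearize the concave part at x_k, then solve the convex
    group-lasso subproblem by proximal gradient (warm-started)."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A.T @ A, 2)
    for _ in range(outer):
        s = np.zeros_like(x)                 # subgradient of subtracted part
        for g in groups:
            nrm = np.linalg.norm(x[g])
            if nrm > nu:
                s[g] = lam * x[g] / nrm
        for _ in range(inner):               # proximal gradient on subproblem
            grad = A.T @ (A @ x - b) - s
            x = group_soft(x - grad / L, groups, lam / L)
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((40, 12))
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
x_true = np.zeros(12); x_true[0:4] = 1.0    # one active group
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.round(dca_group_sparse(A, b, groups), 2))
```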