[00977] Recent advances on sparse optimization: algorithms and applications
Session Time & Room : 5C (Aug.25, 13:20-15:00) @A206
Type : Proposal of Minisymposium
Abstract : Sparse optimization arises in various application problems in statistics and machine learning. Over the past decades, the well-known Lasso model and its variants have been extensively studied, and many efficient methods have been developed for them. Nevertheless, efficient methods for increasingly difficult models involving sparse structures remain underexplored. Given the high dimensionality of these application problems, it is important to fully exploit their structure in order to obtain efficient algorithms. In this minisymposium, we focus on recent developments in algorithms and applications of modern sparse optimization.
[02162] Asymptotically Consistent Linear Convergence Rate of the Randomized Sparse Kaczmarz Method
Format : Talk at Waseda University
Author(s) :
Liang Chen (Hunan University)
Abstract : The sparse Kaczmarz method has drawn much attention from researchers in recent years, mainly because of its capability of producing sparse solutions to linear systems, a core problem in many big-data applications such as sparse signal recovery and image processing. The method was shown to be linearly convergent in 2019; however, that rate is not consistent with the well-known convergence rate of the randomized Kaczmarz method. In this work, we close this gap by proposing an asymptotically consistent linear convergence rate for the randomized sparse Kaczmarz method.
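For context, the sparse Kaczmarz iteration maintains an auxiliary variable z, projects it onto the sampled hyperplane, and applies soft-thresholding to obtain the sparse iterate x. Below is a minimal NumPy sketch of the randomized variant as it is usually stated in the literature, with rows sampled with probability proportional to their squared norms; the function names and the parameters lam and n_iters are illustrative assumptions, not the speaker's code.

import numpy as np

def soft_threshold(z, lam):
    # Soft-thresholding operator S_lam(z) = sign(z) * max(|z| - lam, 0).
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def randomized_sparse_kaczmarz(A, b, lam=1.0, n_iters=10000, seed=0):
    # Sketch of the randomized sparse Kaczmarz method for A x = b.
    # Each step: sample a row i with probability ||a_i||^2 / ||A||_F^2,
    # take a Kaczmarz projection step on the auxiliary iterate z,
    # then soft-threshold z to get the (sparse) primal iterate x.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.einsum('ij,ij->i', A, A)
    probs = row_norms_sq / row_norms_sq.sum()
    z = np.zeros(n)
    x = np.zeros(n)
    for _ in range(n_iters):
        i = rng.choice(m, p=probs)
        z -= (A[i] @ x - b[i]) / row_norms_sq[i] * A[i]
        x = soft_threshold(z, lam)
    return x

With lam = 0 the update reduces to the classical randomized Kaczmarz method, which is the rate comparison the abstract refers to.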
[02234] A difference-of-convex algorithm for sparse support vector machines in high dimensions
Format : Talk at Waseda University
Author(s) :
Ning Zhang (Dongguan University of Technology)
Abstract : The support vector machine (SVM) is a popular and powerful technique for binary classification. We consider the penalized SVM with a class of difference-of-convex penalties. We show that the difference-of-convex algorithm is guaranteed to produce an oracle estimator in two iterations if the solution to the L1-norm SVM is selected as the initial estimator. We further prove that the difference-of-convex algorithm for the SCAD/MCP-penalized SVM converges to a d-stationary point at a local linear rate.
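To make the two-stage scheme concrete, here is a hedged sketch of one common way to realize such a difference-of-convex (LLA-type) iteration for the SCAD-penalized SVM: initialize with the L1-norm SVM, then repeatedly solve weighted L1-SVM subproblems whose weights are the SCAD derivative at the current iterate. The LP formulation of the subproblem, the parameter C, and all function names are illustrative assumptions; the talk's exact subproblem and stopping rules may differ.

import numpy as np
from scipy.optimize import linprog

def scad_derivative(t, lam, a=3.7):
    # Derivative p'_lam(t) of the SCAD penalty for t >= 0.
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))

def weighted_l1_svm(X, y, weights, C=1.0):
    # Weighted L1-norm SVM with hinge loss, written as an LP:
    # beta = u - v (u, v >= 0), intercept b = bp - bm, slacks xi >= 0,
    # minimize sum_j w_j (u_j + v_j) + C * sum_i xi_i
    # subject to y_i (x_i^T beta + b) + xi_i >= 1.
    n, p = X.shape
    c = np.concatenate([weights, weights, [0.0, 0.0], C * np.ones(n)])
    Yx = y[:, None] * X
    A_ub = np.hstack([-Yx, Yx, -y[:, None], y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    z = res.x
    beta = z[:p] - z[p:2 * p]
    b = z[2 * p] - z[2 * p + 1]
    return beta, b

def dca_scad_svm(X, y, lam, n_outer=2, C=1.0):
    # Start from the L1-SVM solution (weights all equal to lam), then
    # re-solve weighted L1-SVMs with weights p'_lam(|beta_j|); the
    # abstract's oracle result corresponds to two such outer iterations.
    p = X.shape[1]
    beta, b = weighted_l1_svm(X, y, lam * np.ones(p), C)
    for _ in range(n_outer):
        w = scad_derivative(np.abs(beta), lam)
        beta, b = weighted_l1_svm(X, y, w, C)
    return beta, b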
[01567] Frank-Wolfe type methods for a class of nonconvex inequality-constrained problems
Author(s) :
Liaoyuan Zeng (Zhejiang University of Technology)
Yongle Zhang (Sichuan Normal University)
Guoyin Li (University of New South Wales)
Ting Kei Pong (The Hong Kong Polytechnic University)
Abstract : The Frank-Wolfe method and its variants, which exploit efficient linear oracles to minimize smooth functions over compact convex sets, form a prominent class of projection-free first-order methods. In this talk, we extend the Frank-Wolfe method and its away-step variant to minimize a smooth function over a possibly nonconvex compact set, based on our new generalized linear oracles. We discuss convergence and present the numerical performance of our nonconvex Frank-Wolfe-type methods on matrix completion problems.
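As background for the generalized oracles discussed in the talk, the sketch below shows the classical Frank-Wolfe method in the convex special case of matrix completion over a nuclear-norm ball, where the linear oracle reduces to a rank-one SVD of the gradient. The talk's nonconvex oracles are not reproduced here; function names, tau, and the step-size rule are illustrative assumptions.

import numpy as np
from scipy.sparse.linalg import svds

def frank_wolfe_matcomp(M_obs, mask, tau, n_iters=200):
    # Classical Frank-Wolfe for min 0.5 * ||P_Omega(X - M)||_F^2
    # over the nuclear-norm ball {X : ||X||_* <= tau}.
    # The linear oracle argmin_{||S||_* <= tau} <grad, S> is the
    # rank-one extreme point -tau * u1 v1^T built from the top
    # singular pair of the gradient.
    X = np.zeros_like(M_obs)
    for k in range(n_iters):
        grad = mask * (X - M_obs)             # gradient of f at X
        u, s, vt = svds(grad, k=1)            # top singular pair
        S = -tau * np.outer(u[:, 0], vt[0])   # linear-oracle answer
        gamma = 2.0 / (k + 2.0)               # standard open-loop step
        X = (1 - gamma) * X + gamma * S
    return X

Because each step needs only a leading singular pair rather than a full projection onto the feasible set, the method stays cheap in high dimensions, which is the appeal that the talk's nonconvex extension aims to preserve.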