Session Time & Room : 5B (Aug.25, 10:40-12:20) @A206
Type : Proposal of Minisymposium
Abstract : Optimization is a powerful tool for harnessing the power of big data in statistics, machine learning, compressed sensing, and other fields. Many modern optimization problems involve nonconvexity and nonsmoothness, which create a major gap between the solutions actually computed and the global optimizers that traditional analysis investigates. Such challenges present new opportunities for researchers to make fundamental contributions to analytical and numerical methods for optimization. This mini-symposium aims to gather researchers with shared interests in optimization and to foster in-depth discussion.
Abstract : Motivated by re-weighted $\ell_1$ approaches for sparse recovery, we propose a lifted $\ell_1$ (LL1) regularization that generalizes several popular regularizations in the literature. In the course of recasting existing methods into our framework, we identify two types of lifting functions that guarantee the proposed approach is equivalent to $\ell_0$ minimization. Computationally, we design an efficient algorithm via the alternating direction method of multipliers (ADMM) and establish its convergence for an unconstrained formulation. Experimental results demonstrate how this generalization improves sparse recovery over the state of the art.
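For background only, the following is a minimal sketch of a generic ADMM iteration for the plain $\ell_1$-regularized least-squares problem (LASSO); it is not the authors' LL1 algorithm, and the matrix A, vector b, and parameters lam and rho are illustrative placeholders.

    import numpy as np

    # Illustrative sketch only; not the talk's exact LL1 method.
    def soft_threshold(v, t):
        # Proximal operator of t*||.||_1: elementwise shrinkage toward zero.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
        # ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
        # obtained by splitting x = z and alternating three updates.
        m, n = A.shape
        x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
        AtA = A.T @ A + rho * np.eye(n)  # reused by every x-update
        Atb = A.T @ b
        for _ in range(n_iter):
            x = np.linalg.solve(AtA, Atb + rho * (z - u))  # least-squares step
            z = soft_threshold(x + u, lam / rho)           # shrinkage step
            u = u + x - z                                  # dual update
        return z

Each iteration alternates a linear-system solve, an elementwise shrinkage, and a dual update; lifted or re-weighted variants typically replace the uniform shrinkage with a coefficient-dependent one.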
[02344] A generalized formulation for group selection via ADMM
Format : Talk at Waseda University
Author(s) :
Sunyoung Shin (Pohang University of Science and Technology)
Chengyu Ke (Southern Methodist University)
Yifei Lou (University of Texas at Dallas)
Miju Ahn (Southern Methodist University)
Abstract : The talk considers a statistical learning model in which the model coefficients have a pre-determined group sparsity structure. A loss function is combined with a regularizer that promotes this group sparsity. We analyze the stationary solutions of the formulation, obtaining a sufficient condition under which a stationary solution achieves optimality. We also develop an efficient ADMM algorithm and show that its iterates converge to a stationary solution under certain conditions. With the algorithm implemented for generalized linear models (GLMs), we perform numerical experiments.
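As context for group-sparse regularization, here is a minimal sketch of block soft-thresholding, the proximal operator behind group-LASSO-type selection; the group partition and threshold t are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    # Illustrative sketch only; not the talk's exact regularizer.
    def group_soft_threshold(x, groups, t):
        # Block soft-thresholding: proximal operator of t * sum_g ||x_g||_2.
        # `groups` is a list of index arrays partitioning the coefficients.
        z = np.zeros_like(x)
        for g in groups:
            norm_g = np.linalg.norm(x[g])
            if norm_g > t:
                z[g] = (1.0 - t / norm_g) * x[g]  # shrink the whole block
            # else: the entire group is set to zero (deselected)
        return z

    # Toy example with a pre-determined two-group structure:
    x = np.array([0.1, -0.2, 3.0, 4.0])
    groups = [np.array([0, 1]), np.array([2, 3])]
    print(group_soft_threshold(x, groups, t=0.5))  # -> [0. 0. 2.7 3.6]

The operator either shrinks an entire group toward zero or removes it, which is what enforces selection at the group level rather than at individual coefficients.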
[02395] A novel tensor regularization of nuclear over Frobenius norms for low-rank tensor recovery
Format : Talk at Waseda University
Author(s) :
Huiwen Zheng (Southern University of Science and Technology)
Yifei Lou (University of Texas at Dallas)
Guoliang Tian (Southern University of Science and Technology)
Chao Wang (Southern University of Science and Technology)
Abstract : In this talk, we consider low-rank tensor recovery (LRTR) problems, which include the low-rank tensor completion (LRTC) problem and the tensor robust principal component analysis (TRPCA) problem. Based on the tensor singular value decomposition (t-SVD), we use the ratio of the tensor nuclear norm to the tensor Frobenius norm as a new nonconvex surrogate of tensor rank in our models. We adopt the alternating direction method of multipliers (ADMM) to solve the resulting models and analyze its convergence. Extensive experiments demonstrate the superiority of the proposed models.
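To make the surrogate concrete, the sketch below evaluates a nuclear-over-Frobenius ratio under the t-SVD for a third-order tensor; the 1/n3 normalization follows one common tensor nuclear norm convention and is an assumption rather than the authors' exact definition.

    import numpy as np

    # Illustrative sketch only; conventions may differ from the talk's model.
    def tnn_over_fro(X):
        # Ratio of the tensor nuclear norm (via t-SVD) to the Frobenius norm.
        # t-SVD: FFT along the third mode, then one matrix SVD per frontal slice.
        n1, n2, n3 = X.shape
        Xf = np.fft.fft(X, axis=2)
        tnn = 0.0
        for k in range(n3):
            s = np.linalg.svd(Xf[:, :, k], compute_uv=False)
            tnn += s.sum()
        tnn /= n3                       # 1/n3 normalization (assumed convention)
        return tnn / np.linalg.norm(X)  # np.linalg.norm gives the Frobenius norm

    rng = np.random.default_rng(0)
    X = rng.standard_normal((20, 30, 5))
    print(tnn_over_fro(X))

Because the ratio is scale-invariant, it can serve as a sharper nonconvex proxy for tubal rank than the nuclear norm alone.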
[02672] Tractable continuous approximations for a constraint selection problem
Abstract : This presentation introduces a constraint selection problem in which the decision-maker solves an optimization problem with a set of soft constraints that are preferred, but not required, to be satisfied. We formulate the problem as a cardinality minimization problem (CMP) that penalizes the number of violated soft constraints via an indicator function. Our approach reformulates the discrete CMP as continuous problems: an equivalent mathematical program with complementarity constraints and an approximation as a difference-of-convex program. We investigate the stationary solutions of these alternative formulations, emphasizing the recovery of local solutions of the CMP. Numerical results demonstrate the method's effectiveness in enforcing the desired conditions in several applications.
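Schematically, with hypothetical notation (the talk's exact model is not given here), a cardinality penalty on violated soft constraints $g_i(x) \le 0$ and a standard capped-$\ell_1$ difference-of-convex approximation read:

\[
\min_{x}\; f(x) + \lambda \sum_{i=1}^{m} \mathbf{1}\{ g_i(x) > 0 \}
\;\approx\;
\min_{x}\; f(x) + \frac{\lambda}{\epsilon} \sum_{i=1}^{m} \big( \max\{ g_i(x), 0 \} - \max\{ g_i(x) - \epsilon, 0 \} \big),
\]

where the right-hand penalty equals $\lambda \min\{\max\{g_i(x),0\}/\epsilon,\, 1\}$ summed over $i$, and is a difference of two convex functions whenever each $g_i$ is convex.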