# Registered Data

## [02012] Splitting Optimization: Theory, Methodology and Application

**Session Date & Time**:
- 02012 (1/4) : 2E (Aug.22, 17:40-19:20)
- 02012 (2/4) : 3C (Aug.23, 13:20-15:00)
- 02012 (3/4) : 3D (Aug.23, 15:30-17:10)
- 02012 (4/4) : 3E (Aug.23, 17:40-19:20)

**Type**: Proposal of Minisymposium

**Abstract**: With the development of artificial intelligence, big data, and other applications, large-scale optimization problems have received increasing attention. The advantage of splitting methods is that they exploit problem structure to decompose a large-scale optimization problem into a series of subproblems, which are then solved efficiently through a "decomposition-integration" architecture. This minisymposium introduces splitting optimization through a series of talks covering theory, methods, and applications.

**Organizer(s)**: Deren Han, Xingju Cai, Xiangfeng Wang

**Classification**: __90C30__, __65K05__

**Speakers Info**:
- Fan Jiang (Nanjing University of Information Science and Technology)
- Deren Han (Beihang University)
- Wenxing Zhang (University of Electronic Science and Technology of China)
- Junfeng Yang (Nanjing University)
- Guoyong Gu (Nanjing University)
- Lingling Xu (Nanjing Normal University)
- Xingju Cai (Nanjing Normal University)
- Liping Wang (Nanjing University of Aeronautics and Astronautics)
- Zehui Jia (Nanjing University of Information Science and Technology)
- Xiangfeng Wang (East China Normal University)

**Talks in Minisymposium**:

**[02878] Decentralized Entropic Optimal Transport for Privacy-preserving Distributed Distribution Comparison**

**Author(s)**:
- **Xiangfeng Wang** (East China Normal University)

**Abstract**: Privacy-preserving distributed distribution comparison measures the distance between the distributions whose data are scattered across different agents in a distributed system and cannot be shared among the agents. In this study, we propose a novel decentralized entropic optimal transport (EOT) method, which provides a privacy-preserving and communication-efficient solution to this problem with theoretical guarantees. In particular, we design a mini-batch randomized block-coordinate descent (MRBCD) scheme to optimize the decentralized EOT distance in its dual form. The dual variables are scattered across different agents and updated locally and iteratively with limited communications among partial agents. The kernel matrix involved in the gradients of the dual variables is estimated by a distributed kernel approximation method, and each agent only needs to approximate and store a sub-kernel matrix by one-shot communication and without sharing raw data. We analyze our method's communication complexity and provide a theoretical bound for the approximation error caused by the convergence error, the approximated kernel, and the mismatch between the storage and communication protocols. Experiments on synthetic data and real-world distributed domain adaptation tasks demonstrate the effectiveness of our method.
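As a point of reference for the dual block-coordinate structure that the decentralized MRBCD scheme randomizes and distributes, the following is a minimal centralized sketch of entropic OT solved by alternating closed-form updates of the two dual blocks (classical Sinkhorn iterations). It is not the authors' decentralized method; the cost matrix, marginals, and regularization level are illustrative.

```python
import numpy as np

def entropic_ot(a, b, C, eps=0.5, iters=500):
    """Entropic OT via block-coordinate ascent on the dual (Sinkhorn).
    Each sweep updates one dual block in closed form -- the structure that
    block-coordinate schemes such as MRBCD randomize and decentralize."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(iters):
        v = b / (K.T @ u)                # closed-form update of one dual block
        u = a / (K @ v)                  # closed-form update of the other
    return u[:, None] * K * v[None, :]   # transport plan

rng = np.random.default_rng(0)
a = np.full(5, 0.2)
b = np.full(5, 0.2)
C = rng.random((5, 5))                   # illustrative cost matrix
P = entropic_ot(a, b, C)
# the plan's marginals match a and b up to solver tolerance
```

In the decentralized setting of the talk, the dual blocks live on different agents and the kernel matrix is only approximated locally, but each local update has the same closed-form flavor.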

**[02881] An alternative extrapolation scheme of PDHGM for saddle point problem with nonlinear function**

**Author(s)**:
- **Wenxing Zhang** (University of Electronic Science and Technology of China)

**Abstract**: The primal-dual hybrid gradient method (PDHG) is a canonical and popular prototype for solving saddle point problems (SPPs). However, a nonlinear coupling term in the SPP precludes the application of PDHG to many far-reaching real-world problems. In this talk, we devise a variant iterative scheme for solving SPPs with a nonlinear function by applying an alternative extrapolation procedure. Under a metric regularity assumption on the KKT mapping, we establish the local convergence of the proposed method from a contractive perspective. Numerical simulations on a PDE-constrained nonlinear inverse problem and on parametric blind deconvolution demonstrate the compelling performance of the proposed method.
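For reference, the classical PDHG iteration for a saddle point problem with a *linear* coupling term (the setting the talk generalizes) can be sketched as follows; the test problem, step sizes, and iteration count are illustrative.

```python
import numpy as np

def pdhg(K, z, lam=0.5, iters=3000):
    """Chambolle-Pock primal-dual hybrid gradient for
    min_x 0.5*||x - z||^2 + lam*||Kx||_1, written as the saddle problem
    min_x max_{|y_i| <= lam} 0.5*||x - z||^2 + <Kx, y>."""
    L = np.linalg.norm(K, 2)
    tau = sigma = 0.9 / L                 # step sizes with tau*sigma*||K||^2 < 1
    x = np.zeros(K.shape[1])
    y = np.zeros(K.shape[0])
    xbar = x.copy()
    for _ in range(iters):
        y = np.clip(y + sigma * (K @ xbar), -lam, lam)  # dual step + projection
        x_new = (x + tau * (z - K.T @ y)) / (1 + tau)   # prox of 0.5*||.-z||^2
        xbar = 2 * x_new - x                            # extrapolation step
        x = x_new
    return x

z = np.array([2.0, -0.3, 1.0])
x = pdhg(np.eye(3), z)
# for K = I the minimizer is soft-thresholding of z at lam = 0.5: [1.5, 0, 0.5]
```

The extrapolation `xbar = 2*x_new - x` is exactly the step the talk replaces with an alternative procedure to cope with nonlinear coupling.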

**[02882] A balanced Douglas-Rachford splitting algorithm for convex minimization**

**Author(s)**:
- **Xingju Cai** (Nanjing Normal University)

**Abstract**: The Douglas-Rachford algorithm is a classical and effective splitting method for solving inclusion problems. Recently, an adaptive Douglas-Rachford splitting method (ADRSM) was proposed for monotone inclusions, which allows one operator to be weakly monotone. We apply the idea of the ADRSM to differentiable convex optimization problems with abstract constraints, where more attractive results can be obtained. We propose exact and inexact versions of the algorithm and prove their global convergence. We extend these results to two separable convex optimization problems with linear constraints. In numerical experiments, we compare our algorithms with other commonly used algorithms, and the results verify their effectiveness.
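The basic (non-adaptive) Douglas-Rachford iteration on a toy two-term problem can be sketched as follows; the objective and parameters are illustrative, and this is not the balanced ADRSM variant of the talk.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def douglas_rachford(z, t=1.0, iters=200):
    """Douglas-Rachford splitting for min_x ||x||_1 + 0.5*||x - z||^2:
    alternate the two proximal maps on a governing sequence s."""
    s = np.zeros_like(z)
    for _ in range(iters):
        x = soft(s, t)                        # prox of t*||.||_1 at s
        y = (2 * x - s + t * z) / (1 + t)     # prox of t*0.5*||.-z||^2 at 2x - s
        s = s + y - x                         # update the governing sequence
    return soft(s, t)

z = np.array([3.0, 0.4, -2.0])
x = douglas_rachford(z)
# the closed-form minimizer is soft(z, 1) = [2, 0, -1]
```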

**[02904] A projection-like method for quasimonotone variational inequalities without Lipschitz continuity**

**Author(s)**:
- **Lingling Xu** (Nanjing Normal University)
- Xiaoxi Jia (Institute of Mathematics, University of Würzburg)

**Abstract**: For most projection methods, the operator of a variational inequality problem is assumed to be monotone (or pseudomonotone) and Lipschitz continuous. In this paper, we present a projection-like method to solve quasimonotone variational inequality problems without Lipschitz continuity. Under some mild assumptions, we prove that the sequence generated by the proposed algorithm converges to a solution. Numerical experiments are provided to show the effectiveness of the method.
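A generic projection-type iteration with an Armijo-style backtracking line search, which avoids any Lipschitz constant, can be sketched as follows. This is an extragradient-flavored illustration on a monotone affine operator, not the quasimonotone method of the talk; the operator, box, and parameters are illustrative.

```python
import numpy as np

def vi_projection_armijo(F, x0, lo, hi, iters=300, beta=0.5, mu=0.1):
    """Projection method with Armijo backtracking for VI(F, C) on a box
    C = [lo, hi]^n; no Lipschitz constant of F is required."""
    x = x0.copy()
    for _ in range(iters):
        t = 1.0
        while True:
            y = np.clip(x - t * F(x), lo, hi)          # trial projection step
            r = np.linalg.norm(x - y)                  # projected residual
            if r < 1e-12 or t * np.linalg.norm(F(x) - F(y)) <= mu * r:
                break
            t *= beta                                  # backtrack the step size
        x = np.clip(x - t * F(y), lo, hi)              # corrector step
    return x

F = lambda x: x - np.array([2.0, -1.0])                # monotone affine operator
x = vi_projection_armijo(F, np.zeros(2), 0.0, 5.0)
# the VI solution on the box [0, 5]^2 is [2, 0]
```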

**[02918] A trust-region-based splitting method for linear constrained programs**

**Author(s)**:
- **Deren Han** (Beihang University)

**Abstract**: Augmented-Lagrangian-based splitting methods have found more and more applications in scientific and engineering computation, such as compressive sensing, covariance selection, image processing, and transportation research. A basic difficulty in such algorithms is the selection of the penalty parameter in the augmented Lagrangian function: a larger one may make the primal progress too small, while a smaller one may slow down the dual progress. To overcome this difficulty, we propose to solve the splitting subproblems in a trust-region manner, so that the radius can be adjusted adaptively. Under the same mild conditions as for classical augmented-Lagrangian-based splitting methods, we prove the global convergence of the proposed algorithm. Moreover, the $\mathcal{O}(1/\epsilon)$ convergence rate is analyzed in an ergodic sense. We present preliminary numerical experiments on medical image recovery and traffic assignment, which show that the trust-region-based splitting method is efficient and promising.
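The role of the augmented-Lagrangian penalty parameter described above can be seen in a classical fixed-parameter ADMM sketch on a tiny consensus problem; the problem and `beta` are illustrative, and the talk's trust-region mechanism is not implemented here.

```python
import numpy as np

def admm_consensus(a, b, beta=1.0, iters=100):
    """Classical ADMM for min 0.5*||x-a||^2 + 0.5*||z-b||^2 s.t. x = z.
    beta is the augmented-Lagrangian penalty the abstract discusses: too
    large slows primal progress, too small slows the dual update."""
    x = np.zeros_like(a)
    z = np.zeros_like(b)
    u = np.zeros_like(a)                        # scaled dual variable
    for _ in range(iters):
        x = (a + beta * (z - u)) / (1 + beta)   # x-subproblem, closed form
        z = (b + beta * (x + u)) / (1 + beta)   # z-subproblem, closed form
        u = u + x - z                           # dual ascent step
    return x, z

a = np.array([1.0, 3.0])
b = np.array([3.0, 1.0])
x, z = admm_consensus(a, b)
# both blocks reach the consensus solution (a + b) / 2 = [2, 2]
```

The talk's proposal replaces the fixed `beta` trade-off with trust-region subproblems whose radius is adjusted adaptively.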

**[02995] A Decentralized Second-Order Multiplier Algorithm with Quasi-Newton Tracking**

**Author(s)**:
- **Liping Wang** (Nanjing University of Aeronautics and Astronautics)

**Abstract**: In this talk, we consider a decentralized optimization problem of minimizing a finite sum of strongly convex, twice continuously differentiable functions over a fixed connected undirected network. We propose a fully decentralized second-order multiplier algorithm (DSOM) in which second-order information is approximated by simple algebraic calculations and matrix-vector products. This is the first work to study the second-order multiplier method in decentralized optimization; the method is equivalent to a primal-dual method with two primal quasi-Newton steps and one dual BB step per iteration. Additionally, in DSOM the local direction on each node asymptotically approximates the global quasi-Newton direction. Under some mild assumptions, the proposed algorithm is shown to have a global linear convergence rate. Numerical results are also reported to verify its effectiveness.

**[03066] Tight Convergence Rate in Subgradient Norm of the Proximal Point Algorithm**

**Author(s)**:
- **Guoyong Gu** (Nanjing University)
- Junfeng Yang (Nanjing University)

**Abstract**: The proximal point algorithm has found many applications, and it plays a fundamental role in the understanding, design, and analysis of many first-order methods. In this paper, we derive the tight convergence rate in subgradient norm of the proximal point algorithm, which was conjectured by Taylor, Hendrickx, and Glineur [SIAM J. Optim., 27 (2017), pp. 1283-1313].
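The quantity studied above can be sketched numerically: run the proximal point algorithm and record the norm of the subgradient $(x_k - x_{k+1})/\lambda \in \partial f(x_{k+1})$, which is nonincreasing along the iterations. The test function $f = \|\cdot\|_1$ and parameters are illustrative.

```python
import numpy as np

def proximal_point(prox, x0, lam=1.0, iters=50):
    """Proximal point algorithm x_{k+1} = prox_{lam*f}(x_k). The vector
    (x_k - x_{k+1})/lam is a subgradient of f at x_{k+1}, and its norm is
    the residual whose tight rate the talk establishes."""
    x = x0.copy()
    res = []
    for _ in range(iters):
        x_new = prox(x, lam)
        res.append(np.linalg.norm(x - x_new) / lam)   # subgradient norm
        x = x_new
    return x, res

# f(x) = ||x||_1, whose prox is componentwise soft-thresholding
prox_l1 = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
x, res = proximal_point(prox_l1, np.array([10.0, -3.0]))
# the recorded subgradient norms are nonincreasing
```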

**[03073] A fast PIRNN algorithm for nonconvex low-rank matrix minimization problems**

**Author(s)**:
- **Zhili Ge** (Nanjing Normal University of Special Education)

**Abstract**: In this paper, we propose a fast PIRNN algorithm with extrapolation for solving a class of nonconvex low-rank matrix minimization problems. The proposed method incorporates two different extrapolation steps, with respect to previous iterations, into the backward proximal step and the forward gradient step of the classic proximal iteratively reweighted method. We prove that the proposed method generates a convergent subsequence under general parameter constraints, and that any limit point is a stationary point of the problem. Furthermore, we prove that if the objective function satisfies the KL property, the algorithm converges globally to a stationary point of the considered problem. Finally, we perform numerical experiments on a practical matrix completion problem with both synthetic and real data, the results of which demonstrate the efficiency and superior performance of the proposed algorithm.
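The forward-backward structure with extrapolation described above can be illustrated by a standard FISTA-style accelerated proximal gradient sketch on a convex $\ell_1$-regularized least-squares problem; this is not the nonconvex PIRNN method itself, and the data are illustrative.

```python
import numpy as np

def apg(A, b, lam=0.1, iters=300):
    """Accelerated proximal gradient (FISTA-style) for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1: a forward gradient step on the
    smooth part, a backward (proximal) step on the l1 part, plus an
    extrapolation step -- echoing the structure the talk accelerates."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    x_prev = x.copy()
    t = 1.0
    for _ in range(iters):
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x + ((t - 1) / t_next) * (x - x_prev)  # extrapolation step
        v = y - A.T @ (A @ y - b) / L              # forward gradient step
        x_prev = x
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # backward prox
        t = t_next
    return x

A = np.diag([1.0, 2.0])                            # illustrative data
b = np.array([1.0, 1.0])
x = apg(A, b)
# componentwise optimality gives x = [0.9, 0.475]
```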

**[03085] Gradient methods using Householder transformation with application to hypergraph partitioning**

**Author(s)**:
- **Xin Zhang** (Suqian University)
- Jingya Chang (Guangdong University of Technology)
- Zhili Ge (Nanjing Normal University of Special Education)
- Zhou Sheng (Anhui University of Technology)

**Abstract**: In this paper, we propose a constraint-preserving algorithm for computing the smallest Z-eigenpair of the compact Laplacian tensor of an even-uniform hypergraph, in which a Householder transformation is employed and a family of modified conjugate directions with sufficient descent is determined. Moreover, we prove that there exists a positive step size in the new constraint-preserving update scheme such that the Wolfe conditions hold. Based on these properties, we prove the convergence of the new algorithm. Furthermore, we apply our algorithm to hypergraph partitioning and image segmentation; numerical results are reported to illustrate its efficiency.

**[03102] Solving saddle point problems: a landscape of primal-dual algorithm with larger stepsizes**

**Author(s)**:
- **Fan Jiang** (Nanjing University of Information Science and Technology)
- Hongjin He (Ningbo University)
- Zhiyuan Zhang (Xiamen University)

**Abstract**: We consider a class of saddle point problems arising frequently in image processing and machine learning. In this paper, we propose a simple primal-dual algorithm that embeds a general proximal term induced by a positive definite matrix into one subproblem. Remarkably, our algorithm enjoys larger stepsizes than many existing state-of-the-art primal-dual-like algorithms, thanks to our relaxed convergence-guaranteeing condition. Moreover, our algorithm includes the well-known primal-dual hybrid gradient method as a special case, and it can also be used to derive partially linearized primal-dual algorithms. Finally, we show that our algorithm can handle multi-block separable saddle point problems. In particular, an application to a multi-block separable minimization problem with linear constraints yields a parallel algorithm. Computational results support the promising improvement brought by our relaxed requirement.

**[03200] Inexact variable metric proximal incremental aggregated gradient algorithm for nonconvex nonsmooth optimization problem**

**Author(s)**:
- **Zehui Jia** (Nanjing University of Information Science and Technology)
- Junru Hou (Nanjing University of Information Science and Technology)
- Xingju Cai (Nanjing Normal University)

**Abstract**: This paper focuses on minimizing the sum of a nonconvex smooth function and a nonsmooth convex function, where the smooth term has a finite-sum form. To solve the problem efficiently, we introduce the idea of incremental aggregation and two different inexactness criteria into the variable metric proximal gradient (VMPG) algorithm, and propose two inexact variable metric proximal incremental aggregated gradient (iVMPIAG) algorithms, iVMPIAG-I and iVMPIAG-II. Under the Kurdyka-Łojasiewicz (KL) property, we show the global convergence of both. When the Łojasiewicz exponent is known, we prove the convergence rate of iVMPIAG-I with respect to the objective function value and that of iVMPIAG-II with respect to the iterate sequence. For the convergence analysis of iVMPIAG-I, a critical tool is introduced: the incremental aggregated forward-backward (FB) envelope, a continuously differentiable function that covers the FB envelope as a special case. Based on this tool, we define a continuously differentiable surrogate function that equals the objective function value at stationary points. Finally, we demonstrate the efficiency of the iVMPIAG methods on a large-scale image restoration problem.
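The incremental aggregation idea can be sketched in its most basic form: store one gradient per component, refresh a single stale gradient each iteration, and take a proximal step with the aggregate. The sketch below omits the variable metric and inexactness of the talk; the problem and step size are illustrative.

```python
import numpy as np

def piag(c, lam=1.5, step=0.05, iters=600):
    """Proximal incremental aggregated gradient for
    min_x sum_i 0.5*(x - c_i)^2 + lam*|x|: keep one stored gradient per
    component, refresh a single stale one each iteration, then take a
    proximal step using the aggregated (partly outdated) gradient."""
    n = len(c)
    x = 0.0
    grads = np.array([x - ci for ci in c])   # table of component gradients
    for k in range(iters):
        i = k % n                            # cyclic component selection
        grads[i] = x - c[i]                  # refresh one stale gradient
        v = x - step * grads.sum()           # forward step with the aggregate
        x = np.sign(v) * max(abs(v) - step * lam, 0.0)   # backward prox step
    return x

x = piag([1.0, 2.0, 3.0])
# the minimizer satisfies 3x - 6 + 1.5*sign(x) = 0, i.e. x = 1.5
```

Only one component gradient is recomputed per iteration, which is what makes the scheme attractive when the finite sum is large.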

**[03253] A Restricted Dual PRSM for a Strengthened DNN Relaxation for QAP**

**Author(s)**:
- Naomi Graham (University of British Columbia)
- Hao Hu (Clemson University)
- Jiyoung Im (University of Waterloo)
- **Xinxin Li** (Jilin University)
- Henry Wolkowicz (University of Waterloo)

**Abstract**: Splitting methods in optimization arise when one can divide an optimization problem into two or more simpler subproblems. They have proven particularly successful for relaxations of problems involving discrete variables. We revisit and strengthen splitting methods for solving doubly nonnegative (DNN) relaxations of the particularly difficult, NP-hard quadratic assignment problem (QAP). We use a modified restricted contractive Peaceman-Rachford splitting method (rPRSM) approach. In particular, we show how to exploit redundant constraints in the subproblems. Our strengthened bounds exploit these new subproblems, as well as new dual multiplier estimates, to improve on the bounds and convergence results in the literature.