# Registered Data

## [00289] Nonconvex nonlinear programming: Theory and algorithms

**Session Date & Time**:
- 00289 (1/2) : 1C (Aug.21, 13:20-15:00)
- 00289 (2/2) : 1D (Aug.21, 15:30-17:10)

**Type**: Proposal of Minisymposium

**Abstract**: Nonconvex nonlinear programming problems arise extensively in many important applications, including machine learning and image processing. This minisymposium presents the latest advances in nonconvex nonlinear programming, in both theory and algorithms. The talks focus particularly on large-scale problems, Lagrangian methods, gradient methods, and stochastic methods.

**Organizer(s)**: Xin-Wei Liu, Yakui Huang

**Classification**: __90C06__, __90C30__, __65K05__

**Speakers Info**:
- Kuang Bai (The Hong Kong Polytechnic University)
- Mengwei Xu (Hebei University of Technology)
- Bo Wen (Hebei University of Technology)
- Jun Fan (Hebei University of Technology)
- Yakui Huang (Hebei University of Technology)
- Xue Gao (Hebei University of Technology)
- Caixia Kou (Beijing University of Posts and Telecommunications)
- Xin-Wei Liu (Hebei University of Technology)

**Talks in Minisymposium**:

**[01352] Golden ratio Bregman proximal gradient algorithm for nonconvex optimization problems**

**Author(s)**:
- **Xue Gao** (Hebei University of Technology)
- Kai Wang (Nanjing University of Science and Technology)

**Abstract**: We focus on solving the nonconvex nonsmooth minimization problem over an abstract constraint set, whose objective function is the sum of a proper lower semicontinuous convex function and a smooth nonconvex function, where the differentiable part is freed from the restrictive assumption of globally Lipschitz continuous gradient. We design, analyze, and test a golden ratio Bregman proximal gradient algorithm (GBPG). The global convergence of GBPG is proved, and numerical simulations demonstrate its feasibility and effectiveness.
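The ingredients named in the abstract can be sketched on a toy instance. The snippet below is illustrative only, not the authors' GBPG: it minimizes a least-squares objective over the positive orthant using the Boltzmann-Shannon entropy as Bregman kernel (whose proximal step has a closed multiplicative form), combined with a golden-ratio averaging step in the style of Malitsky's golden ratio algorithm. The stepsize and the exact averaging rule are assumptions.

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # golden ratio

def golden_bregman_prox_grad(A, b, x0, step=0.1, iters=2000):
    """Illustrative sketch (not the talk's GBPG): Bregman proximal
    gradient with golden-ratio averaging for
        min 0.5 * ||A x - b||^2  over the positive orthant.

    The Boltzmann-Shannon entropy kernel h(x) = sum(x log x - x)
    gives the closed-form Bregman step x * exp(-step * grad).
    """
    x = x0.copy()
    y = x0.copy()  # averaged (anchor) point
    for _ in range(iters):
        # golden-ratio averaging between current iterate and anchor
        y = ((PHI - 1) * x + y) / PHI
        grad = A.T @ (A @ y - b)
        # entropic "mirror" update keeps the iterate positive automatically
        x = y * np.exp(-step * grad)
    return x
```

On a well-conditioned problem the multiplicative update converges to the positive solution without any explicit projection, which is the practical appeal of a Bregman kernel adapted to the constraint set.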

**[01520] Sensitivity analysis for value functions with application to bilevel programs**

**Author(s)**:
- **Kuang Bai** (The Hong Kong Polytechnic University)

**Abstract**: In this talk, we will study sensitivity analysis of value functions and optimality conditions of bilevel programs. First, building on recent work of the speaker on parametric nonlinear programs, we will study the directional sensitivity analysis of value functions for parametric set-constrained problems, which include many classical problems as special cases and can be nonsmooth and nonconvex. In particular, we will derive sufficient conditions for directional Lipschitz continuity, formulae for the directional derivative, and upper estimates for the directional limiting/Clarke subdifferential of value functions. Finally, based on recent developments on directional constraint qualifications and directional optimality conditions, and using the directional differential properties of value functions, we will derive sharp optimality conditions for general bilevel programs.

**[02076] On the quadratic termination property of the gradient method**

**Author(s)**:
- **Yakui Huang** (Hebei University of Technology)

**Abstract**: The gradient method is one of the most popular algorithms for solving large-scale unconstrained optimization problems. However, most existing gradient methods do not enjoy the quadratic termination property. In this talk, we will provide a summary account of recent and new results on how to equip gradient methods with the two-dimensional quadratic termination property. Moreover, a new mechanism for the gradient method to achieve three- and higher-dimensional quadratic termination will be presented.
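As background for the stepsize rules discussed in this line of work, the sketch below shows a gradient method with the classical Barzilai-Borwein (BB1) stepsize on a strongly convex quadratic. This is generic context, not the talk's specific quadratic-termination mechanism; the initial stepsize heuristic is an assumption.

```python
import numpy as np

def bb_gradient(Q, b, x0, iters=60):
    """Sketch of a gradient method with Barzilai-Borwein stepsizes
    for min 0.5 * x^T Q x - b^T x with Q symmetric positive definite.
    (Background for the talk, not its quadratic-termination rule.)"""
    x = x0.copy()
    g = Q @ x - b
    step = 1.0 / np.linalg.norm(Q)  # conservative initial stepsize (assumption)
    for _ in range(iters):
        x_new = x - step * g
        g_new = Q @ x_new - b
        s, y = x_new - x, g_new - g
        if s @ y > 0:
            step = (s @ s) / (s @ y)  # BB1 stepsize: least-squares secant fit
        x, g = x_new, g_new
    return x
```

The BB stepsize approximates the inverse of a Rayleigh quotient of the Hessian, which is why such methods behave far better on quadratics than a fixed-step gradient method while still using only gradient information.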

**[02633] A Stochastic Conjugate Gradient Algorithm with Variance Reduction**

**Author(s)**:
- **Caixia Kou** (Beijing University of Posts and Telecommunications)

**Abstract**: Stochastic gradient descent methods are popular for large-scale optimization but converge slowly in the asymptotic regime due to the inherent variance of the gradient estimates. To remedy this, we first propose a new stochastic conjugate gradient algorithm, called SCGA. In addition, by developing a new unbiased stochastic gradient estimate with minimized variance, we present two further stochastic conjugate gradient algorithms. Convergence theory is established, and experiments show that the new algorithms have satisfactory numerical performance.
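The variance-reduction idea behind such estimates can be sketched with the standard SVRG estimator on a least-squares model. This is generic background rather than the talk's SCGA (which builds conjugate-gradient directions on top of such estimates); the stepsize and epoch length are assumptions.

```python
import numpy as np

def svrg(A, b, x0, step=0.1, epochs=100):
    """Sketch of an SVRG-style variance-reduced gradient method for
    least squares min (1/2n) * sum_i (a_i^T x - b_i)^2.
    (Plain variance-reduced steps; the talk's SCGA additionally
    forms conjugate directions from such estimates.)"""
    rng = np.random.default_rng(0)
    n = A.shape[0]
    x = x0.copy()
    for _ in range(epochs):
        w = x.copy()                       # snapshot point
        full_grad = A.T @ (A @ w - b) / n  # full gradient at the snapshot
        for _ in range(n):
            i = rng.integers(n)
            gi_x = A[i] * (A[i] @ x - b[i])  # component gradient at x
            gi_w = A[i] * (A[i] @ w - b[i])  # component gradient at snapshot
            # unbiased estimate whose variance vanishes as x, w -> optimum
            g = gi_x - gi_w + full_grad
            x = x - step * g
    return x
```

Because the correction term `gi_w - full_grad` has zero mean, the estimate stays unbiased while its variance shrinks near the solution, restoring linear convergence where plain SGD stalls at a noise floor.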

**[03077] Relaxed constant positive linear dependence constraint qualification for disjunctive programs**

**Author(s)**:
- **Mengwei Xu** (Hebei University of Technology)
- Jane Ye (University of Victoria)

**Abstract**: The disjunctive program is a class of optimization problems in which the constraint involves a disjunctive set, i.e., the union of finitely many polyhedral convex sets. In this paper, we introduce a notion of the relaxed constant positive linear dependence constraint qualification (RCPLD) for the disjunctive program. Our notion is weaker than the one we recently introduced for a nonsmooth system including an abstract set constraint (J. Glob. Optim. 2020), and it is still a constraint qualification for the Mordukhovich stationarity condition of the disjunctive program. To obtain the error bound property for the disjunctive program, we introduce the piecewise RCPLD, under which the error bound property holds if all inequality constraint functions are subdifferentially regular and the remaining constraint functions are smooth. We then specialize our results to the ortho-disjunctive program, which includes the mathematical program with equilibrium constraints (MPEC), the mathematical program with vanishing constraints (MPVC), and the mathematical program with switching constraints (MPSC) as special cases. For MPEC, we recover MPEC-RCPLD, an MPEC variant of RCPLD, and propose the MPEC piecewise RCPLD to obtain the error bound property. For MPVC, we introduce MPVC-RCPLD as a constraint qualification and the piecewise RCPLD as a sufficient condition for the error bound property. For MPSC, we show that RCPLD and the piecewise RCPLD coincide, and hence each is not only a constraint qualification but also a sufficient condition for the error bound property.

**[03083] A novel augmented Lagrangian and its application in linear programming**

**Author(s)**:
- **Xinwei Liu** (Hebei University of Technology)

**Abstract**: We introduce a twice differentiable augmented Lagrangian for optimization with general inequality constraints. Our function combines the augmented Lagrangian with the logarithmic-barrier technique and generalizes the Hestenes-Powell augmented Lagrangian. The associated augmented Lagrangian method is proved to have strong global convergence, the capability of rapidly detecting possible infeasibility, and linear convergence to the KKT point. Preliminary numerical experiments on some small benchmark test problems support our theoretical results, and the application to linear programming shows its superiority.
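The Hestenes-Powell core that the abstract generalizes can be sketched as the classical method of multipliers on a tiny equality-constrained problem. The talk's function additionally blends in a logarithmic barrier for inequality constraints, which is not shown here; the penalty parameter and iteration count are assumptions.

```python
import numpy as np

def augmented_lagrangian(rho=10.0, iters=20):
    """Classical method-of-multipliers sketch (Hestenes-Powell core) for
        min ||x||^2   s.t.   a^T x = 1,   a = (1, 1).
    The augmented Lagrangian is
        L(x, lam) = ||x||^2 + lam*(a@x - 1) + (rho/2)*(a@x - 1)^2."""
    a = np.array([1.0, 1.0])
    lam = 0.0  # multiplier estimate
    for _ in range(iters):
        # L(., lam) is quadratic in x, so the inner minimization is exact:
        # stationarity gives (2I + rho*a a^T) x = (rho - lam) a
        H = 2 * np.eye(2) + rho * np.outer(a, a)
        x = np.linalg.solve(H, (rho - lam) * a)
        lam = lam + rho * (a @ x - 1)  # first-order multiplier update
    return x, lam
```

The known solution is x = (0.5, 0.5) with multiplier lam = -1; the multiplier error contracts by roughly 1/(1+rho) per outer iteration, illustrating the linear convergence the abstract refers to.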

**[03103] An Oracle Gradient Regularized Newton Method for Quadratic Measurements Regression**

**Author(s)**:
- **Jun Fan** (Hebei University of Technology)

**Abstract**: Recently, recovering an unknown signal from quadratic measurements has gained popularity because it includes as special cases many interesting applications such as phase retrieval, fusion frame phase retrieval, and positive operator-valued measures. In this paper, employing the least squares approach to reconstruct the signal, we establish a non-asymptotic statistical property showing that the gap between the estimator and the true signal vanishes in the noiseless case and, in the noisy case, is bounded by an error rate of $O(\sqrt{p\log(1+2n)/n})$, where $n$ and $p$ are the number of measurements and the dimension of the signal, respectively. We develop a gradient regularized Newton method (GRNM) to solve the least squares problem and prove that it converges to a unique local minimum at a superlinear rate under certain mild conditions. Beyond these deterministic results, GRNM reconstructs the true signal exactly in the noiseless case and achieves the above error rate with high probability in the noisy case. Numerical experiments demonstrate that GRNM performs well in terms of recovery accuracy, computational speed, and recovery capability.
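The gradient-regularization idea (shifting the Hessian by a multiple of the current gradient norm, so the regularization vanishes as the iterates approach a stationary point, enabling superlinear local convergence) can be sketched generically. This is not the paper's GRNM for quadratic measurements; the test problem, the regularization constant, and the stopping rule are assumptions.

```python
import numpy as np

def grad_reg_newton(grad, hess, x0, iters=50, c_reg=1.0):
    """Generic sketch of a gradient-regularized Newton iteration:
        x+ = x - (H(x) + c_reg * ||g(x)|| * I)^{-1} g(x).
    The shift c_reg * ||g|| keeps the linear system well posed away
    from the solution yet vanishes at a stationary point."""
    x = x0.copy()
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-12:
            break  # stationary to machine precision
        H = hess(x) + c_reg * np.linalg.norm(g) * np.eye(len(x))
        x = x - np.linalg.solve(H, g)
    return x
```

For example, applied to the separable function with gradient `x**3 + x - 1` per coordinate, the iteration converges rapidly from the origin; near the root the shift is tiny and the step is essentially a pure Newton step.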

**[03119] Extrapolated Bregman proximal difference-of-convex (DC) algorithm for structured DC optimization problems**

**Author(s)**:
- **Bo Wen** (Hebei University of Technology)

**Abstract**: In this talk, we consider a Bregman proximal DC method with extrapolation for solving structured DC optimization problems. We first present different extrapolation strategies that can accelerate the Bregman proximal DC algorithm, and then discuss the convergence behavior and convergence rates of the extrapolated Bregman proximal DC algorithms. Finally, numerical experiments are reported to illustrate the theoretical results.
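A plain (Euclidean, non-Bregman) proximal DC iteration with extrapolation can be sketched on L1-minus-L2 regularized least squares, a standard DC decomposition for sparse recovery. This is an illustrative instance only, not the talk's Bregman variant; the fixed extrapolation weight and the regularization parameter are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_dc_extrapolated(A, b, lam=0.1, beta=0.3, iters=500):
    """Sketch of an extrapolated proximal DC iteration for
        min 0.5*||A x - b||^2 + lam*||x||_1 - lam*||x||_2,
    a DC objective: (smooth fit + L1) minus the convex L2 norm.
    Each step linearizes the concave part via a subgradient."""
    n = A.shape[1]
    x = np.zeros(n)
    x_prev = x.copy()
    step = 1.0 / np.linalg.norm(A.T @ A, 2)  # 1/L for the smooth part
    for _ in range(iters):
        y = x + beta * (x - x_prev)              # extrapolated point
        nx = np.linalg.norm(x)
        xi = x / nx if nx > 0 else np.zeros(n)   # subgradient of ||x||_2
        grad = A.T @ (A @ y - b) - lam * xi      # smooth grad minus DC term
        x_prev, x = x, soft_threshold(y - step * grad, step * lam)
    return x
```

Since the L1 and L2 terms cancel on 1-sparse vectors, the penalty rewards sparsity without the bias a pure L1 term introduces, which is a common motivation for DC formulations of this kind.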