Abstract : Although the field of randomized numerical linear algebra has grown significantly, developments on accurate randomized solvers have only started to emerge in recent years. This minisymposium intends to bring together researchers to exchange ideas on producing fast and accurate randomized solvers, studying their performance, and exploring new applications. We will specifically focus on randomized methods for solving linear systems and eigenvalue problems and on randomized strategies that can produce reliable high-quality solutions or approximations. Topics include randomized iterative solvers, preconditioning, matrix approximations, low-rank compression, and eigenvalue detection. Applications to PDE solutions, machine learning, and data analysis will also be discussed.
[03670] Making the Nystrom method highly accurate for low-rank approximations
Format : Online Talk on Zoom
Author(s) :
Jianlin Xia (Purdue University)
Abstract : The Nystrom method is a convenient way to quickly obtain a low-rank approximation to a kernel matrix with low or modest accuracy. In this work, we propose a class of Nystrom methods that can reach high accuracy. The methods (called high-accuracy Nystrom methods) treat the Nystrom method and a skinny rank-revealing factorization as a fast pivoting strategy within a progressive alternating direction refinement process. A rank expansion strategy based on fast subset updates is further proposed and can quickly enlarge the basis matrices. A fast randomized accuracy control strategy is also given. Different versions of high-accuracy Nystrom methods are derived and can produce low-rank approximations with prespecified accuracies, sometimes with near-SVD quality.
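For orientation, a minimal sketch of the classical Nystrom approximation that serves as the starting point (not the high-accuracy variants of the talk), assuming a symmetric positive semidefinite kernel matrix and a given set of landmark indices:

```python
import numpy as np

def nystrom_approx(A, idx):
    """Classical Nystrom low-rank approximation of an SPSD matrix A.

    A   : (n, n) symmetric positive semidefinite kernel matrix
    idx : indices of the selected landmark columns

    Returns factors (C, W_pinv) so that A ~= C @ W_pinv @ C.T.
    """
    C = A[:, idx]                 # selected columns of the kernel matrix
    W = A[np.ix_(idx, idx)]       # core block on the selected rows/columns
    W_pinv = np.linalg.pinv(W)    # pseudoinverse of the core block
    return C, W_pinv

# Small usage example with a Gaussian kernel matrix
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
A = np.exp(-np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1))
idx = rng.choice(200, size=20, replace=False)
C, W_pinv = nystrom_approx(A, idx)
err = np.linalg.norm(A - C @ W_pinv @ C.T) / np.linalg.norm(A)
print(f"relative Frobenius error: {err:.2e}")
```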
[04016] Superfast iterative refinement of Low Rank Approximation of a Matrix
Format : Online Talk on Zoom
Author(s) :
Victor Pan (CUNY)
Abstract : Every superfast (aka sublinear cost) Low Rank Matrix Approximation (LRA) algorithm -- involving far fewer flops and memory cells than the matrix has entries -- must fail on some inputs, even with randomization, but our LRA algorithms are efficient or nearly optimal for a large class of inputs. Moreover, we propose, analyze, and test novel superfast algorithms for iterative refinement of any crude but sufficiently close LRA, supported by both formal study and numerical tests.
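As a point of reference, a dense (hence not superfast) illustration of one refinement step for a crude LRA: the dominant directions of the explicit residual are absorbed into the factors. A superfast method would instead sample the residual; the function below is only a conceptual sketch.

```python
import numpy as np

def refine_step(A, U, V, r=2):
    """One refinement step for a crude LRA A ~= U @ V.T: absorb the r
    dominant directions of the residual into the factors.  Forms the full
    residual, so this illustration is NOT sublinear cost."""
    R = A - U @ V.T                              # explicit residual (dense)
    Ur, s, Vrt = np.linalg.svd(R, full_matrices=False)
    U_new = np.hstack([U, Ur[:, :r] * s[:r]])    # expand the column factor
    V_new = np.hstack([V, Vrt[:r].T])            # expand the row factor
    return U_new, V_new
```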
[00881] Relaxation in low-rank updates of Schur complement preconditioners in fluid flow problems
Format : Talk at Waseda University
Author(s) :
Sabine Le Borne (Hamburg University of Technology)
Abstract : In the simulation of fluid dynamic problems we often have to solve large-scale saddle-point systems. Low-rank updates can adapt standard preconditioners and accelerate convergence. We consider a low-rank correction for pressure Schur complement preconditioners and introduce a relaxation of the initial preconditioner which can improve the update scheme. Numerical results for the linearized Navier-Stokes equations illustrate the action of the update scheme.
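For reference, the standard setting assumed here (a sketch of the general framework, not the specific relaxation proposed in the talk) is a saddle-point system with velocity block $A$ and divergence block $B$, whose pressure Schur complement a block preconditioner must approximate:
\[
\begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},
\qquad
S = -B A^{-1} B^T .
\]
The low-rank update then corrects a cheap initial approximation $\widehat{S} \approx S$ inside the block preconditioner.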
[00530] Randomized low-rank approximations beyond Gaussian random matrices
Format : Online Talk on Zoom
Author(s) :
Arvind Krishna Saibaba (North Carolina State University)
Agnieszka Miedlar (Virginia Tech)
Abstract : Randomized algorithms have been a revolutionary force in the field of low-rank approximations. A key step in these randomized algorithms is the randomized range finder, which involves products with random matrices. The prevalent approach is to take the random matrix to be standard Gaussian, which has favorable theoretical properties and is easy to implement in practice. Although several non-Gaussian random matrices have been used and analyzed, there are many open questions on what classes of random matrices are suitable for the randomized range finder. We analyze three different classes of random matrices -- independent subgaussian entries, independent subgaussian columns, and independent bounded columns -- and derive error bounds for each. These bounds provide a unified approach to studying various classes of random matrices and are supported by numerical experiments on test and real-world matrices.
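For context, a minimal sketch of the randomized range finder that the analysis targets, with the random test matrix left as a user-supplied generator so that Gaussian or non-Gaussian choices can be swapped in (the interface is an illustration, not the authors' code):

```python
import numpy as np

def range_finder(A, k, draw_test_matrix, p=5):
    """Randomized range finder: returns an orthonormal basis Q whose range
    approximates the dominant range of A.

    draw_test_matrix(n, l) returns an n-by-l random test matrix; pass a
    Gaussian, subgaussian, or bounded-column generator here.
    """
    n = A.shape[1]
    Omega = draw_test_matrix(n, k + p)   # random test matrix with oversampling p
    Y = A @ Omega                        # sample the range of A
    Q, _ = np.linalg.qr(Y)               # orthonormalize the samples
    return Q

# Example: standard Gaussian test matrix on a matrix with decaying column scales
gaussian = lambda n, l: np.random.default_rng(0).standard_normal((n, l))
rng = np.random.default_rng(1)
A = rng.standard_normal((300, 200)) * (0.9 ** np.arange(200))
Q = range_finder(A, k=20, draw_test_matrix=gaussian)
print(np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A))
```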
[04523] Are randomized NLA algorithms numerically stable?
Format : Talk at Waseda University
Author(s) :
Yuji Nakatsukasa (University of Oxford)
Joel A. Tropp (Caltech)
Abstract : We develop algorithms for linear systems and eigenvalue problems that apply fast randomized sketching to accelerate standard subspace projection methods, such as GMRES and Rayleigh-Ritz. This modification allows for incorporating nontraditional bases for the approximation subspace. When the basis is numerically full rank, these algorithms have accuracy similar to classic methods but run faster. We illustrate a 70x speedup over GMRES. Time (and progress) permitting, I will discuss recent developments in related topics.
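To make the sketching step concrete, here is a simplified sketch-and-solve illustration (not the authors' implementation): given a basis V of the approximation subspace, which need not be well conditioned, the reduced problem is solved through a random embedding rather than a full orthogonalization.

```python
import numpy as np

def sketched_projection_solve(A, b, V, s):
    """Approximately solve min_y ||b - A V y|| via a random sketch.

    V : n-by-m basis of the approximation subspace (need not be orthonormal)
    s : sketch dimension with m < s << n
    """
    n, m = V.shape
    rng = np.random.default_rng(0)
    S = rng.standard_normal((s, n)) / np.sqrt(s)   # dense sketch for illustration;
                                                   # a fast subsampled transform would be used in practice
    AV = A @ V
    y, *_ = np.linalg.lstsq(S @ AV, S @ b, rcond=None)   # small sketched least-squares problem
    return V @ y
```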
[05547] Randomized orthogonalization process
Format : Online Talk on Zoom
Author(s) :
Laura Grigori (EPFL and PSI)
Abstract : In this talk we will review recent progress on deriving algorithms for orthogonalizing a set of vectors. We will then discuss how these algorithms can be used to solve linear systems of equations and eigenvalue problems. We will conclude with numerical experiments that show the numerical stability of the proposed algorithms.
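One concrete instance of such a process is a sketched Gram-Schmidt loop, shown below as a simplified illustration (dense Gaussian sketch, no re-orthogonalization) rather than as the exact algorithms of the talk:

```python
import numpy as np

def randomized_gram_schmidt(W, s):
    """Sketched Gram-Schmidt: produce Q whose sketch S @ Q has orthonormal
    columns, so Q is well conditioned with high probability.

    W : n-by-m matrix whose columns are to be orthogonalized
    s : sketch dimension with m < s << n
    """
    n, m = W.shape
    rng = np.random.default_rng(0)
    S = rng.standard_normal((s, n)) / np.sqrt(s)   # dense sketch for illustration
    Q = np.zeros((n, m))
    P = np.zeros((s, m))                           # sketches of the computed columns
    for j in range(m):
        w = W[:, j]
        sw = S @ w
        c = P[:, :j].T @ sw                        # projection coefficients from sketched inner products
        q = w - Q[:, :j] @ c
        sq = S @ q
        nrm = np.linalg.norm(sq)                   # sketched norm used for normalization
        Q[:, j] = q / nrm
        P[:, j] = sq / nrm
    return Q
```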
[04952] A robust randomized indicator method for accurate symmetric eigenvalue detection
Format : Online Talk on Zoom
Author(s) :
Zhongyuan Chen (Medical College of Wisconsin)
Jiguang Sun (Michigan Technological University)
Jianlin Xia (Purdue University)
Abstract : We propose a robust randomized indicator method for accurate eigenvalue detection for symmetric matrices $A$, which gives a novel way to use randomization to design eigensolvers for finding interior eigenvalues. An indicator detects the existence of eigenvalues inside an interval based on some statistical norm estimators for a spectral projector. Previous work on eigenvalue indicators relies on a threshold which is only heuristically chosen, thus often resulting in spurious or missed eigenvalues. In this work, we use rigorous statistical analysis to guide the design of a robust indicator. Multiple randomized estimators for a contour integral operator in terms of $A$ are analyzed. In particular, when $A$ has eigenvalues inside a given interval, we show that the failure probability (for the estimators to return very small estimates) is extremely low. This enables the design of a robust rejection indicator based on the control of the failure probability. We then illustrate how the indicator method may potentially be used to develop new randomized symmetric eigensolvers, where fast indicator evaluation via shifted linear system solutions is employed in a bisection scheme. Unlike previous indicator methods that only produce eigenvalues, our method can conveniently reuse computations from indicator evaluations to find eigenvectors with little extra cost.
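A schematic illustration of the underlying indicator idea, with all the robustness machinery of the talk omitted: random probe vectors are passed through an approximate spectral projector built from a few shifted solves, and consistently tiny outputs suggest an interval free of eigenvalues. The dense solves below stand in for the fast shifted solvers mentioned in the abstract.

```python
import numpy as np

def projector_apply(A, z, a, b, n_quad=16):
    """Apply an approximate spectral projector onto eigenvalues of the
    symmetric matrix A in (a, b) to the vector z, using a contour integral
    discretized by n_quad shifted solves (dense solves for illustration)."""
    n = A.shape[0]
    c, r = (a + b) / 2.0, (b - a) / 2.0
    t = 2 * np.pi * (np.arange(n_quad) + 0.5) / n_quad
    pz = np.zeros(n, dtype=complex)
    for tk in t:
        zk = c + r * np.exp(1j * tk)                        # quadrature node on the circle
        y = np.linalg.solve(zk * np.eye(n) - A, z.astype(complex))
        pz += (r * np.exp(1j * tk) / n_quad) * y            # trapezoid-rule weight
    return pz.real

def has_eigenvalues(A, a, b, n_probe=5, tol=1e-2):
    """Crude indicator: largest norm of the filtered random probes."""
    rng = np.random.default_rng(0)
    n = A.shape[0]
    vals = [np.linalg.norm(projector_apply(A, rng.standard_normal(n), a, b))
            for _ in range(n_probe)]
    return max(vals) > tol
```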
[05321] The Adversarially Robust Generalized Eigenvalue Problem
Format : Online Talk on Zoom
Author(s) :
Ming Gu (UC Berkeley)
Jiaming Wang (UC Berkeley)
Abstract : In this talk, we will introduce novel algorithms for solving the adversarially robust generalized eigenvalue problem, a highly non-convex optimization problem that has long eluded traditional optimization solvers. We will first discuss the rank-one case and then move on to the general-rank case. We will also show the potential applications of our algorithms in robust adaptive beamforming.
[03799] Stochastic Gradient Descent with Conjugate Gradient-style Momentum
Format : Online Talk on Zoom
Author(s) :
Bao Wang (University of Utah)
Qiang Ye (University of Kentucky)
Abstract : Momentum can be crucial for accelerating the convergence of stochastic gradient-based optimization algorithms. The classical conjugate gradient algorithm may be viewed as gradient descent with momentum. In this talk, we introduce a stochastic gradient descent algorithm with a conjugate gradient-style momentum. We will discuss its convergence properties and present some numerical examples to demonstrate its effectiveness.
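As an illustration of what "conjugate gradient-style momentum" can mean, here is a hedged sketch in which the momentum coefficient is chosen by a Fletcher-Reeves-style ratio of stochastic gradient norms; this specific choice is an assumption made for illustration, not necessarily the rule analyzed in the talk.

```python
import numpy as np

def sgd_cg_momentum(grad, x0, lr=0.05, n_iter=200):
    """SGD where the search direction carries conjugate gradient-style
    momentum: d <- -g + beta * d, with a Fletcher-Reeves-style beta
    (illustrative choice, clipped for stability in the stochastic setting)."""
    x = x0.copy()
    g = grad(x)
    d = -g                                           # initial search direction
    for _ in range(n_iter):
        x = x + lr * d
        g_new = grad(x)
        beta = min((g_new @ g_new) / (g @ g), 1.0)   # clipped Fletcher-Reeves ratio
        d = -g_new + beta * d                        # conjugate gradient-style update
        g = g_new
    return x

# Usage on a noisy quadratic objective 0.5 * x.T @ H @ x
rng = np.random.default_rng(0)
H = np.diag(np.linspace(1.0, 10.0, 20))
noisy_grad = lambda x: H @ x + 0.1 * rng.standard_normal(20)
x_final = sgd_cg_momentum(noisy_grad, rng.standard_normal(20))
```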
[01342] Robust randomized preconditioning for kernel ridge regression
Format : Talk at Waseda University
Author(s) :
Mateo Diaz (Johns Hopkins University)
Ethan Epperly (Caltech)
Zachary Frangella (Stanford)
Joel Tropp (Caltech)
Robert Webber (California Institute of Technology)
Abstract : We advocate two randomized preconditioning approaches for applying kernel ridge regression (KRR) to a moderate or large number of data points ($N \geq 10^4$). RPCholesky preconditioning is guaranteed to solve the exact KRR equations involving the $N \times N$ kernel matrix in just $\mathcal{O}(N^2)$ operations, assuming eigenvalue decay. KRILL preconditioning is guaranteed to solve the restricted KRR equations involving an $N \times k$ kernel submatrix in just $\mathcal{O}((N + k^2) k \log k)$ operations, with no assumptions on the kernel matrix or the regularization parameter. Experiments with dozens of data sets validate the effectiveness of RPCholesky and KRILL. Additionally, our theoretical analysis shows that RPCholesky and KRILL have stronger robustness properties compared to other commonly used preconditioners.
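The first of these preconditioners builds on a randomly pivoted partial Cholesky factorization; a minimal dense sketch of that factorization, consistent with its published description but evaluating the full kernel matrix rather than only the required columns, is:

```python
import numpy as np

def rp_cholesky(A, k):
    """Randomly pivoted partial Cholesky: rank-k factor F with A ~= F @ F.T
    for an SPSD matrix A, with pivots sampled proportionally to the residual
    diagonal.  Dense sketch for illustration only."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    F = np.zeros((n, k))
    d = np.diag(A).astype(float).copy()           # residual diagonal
    for j in range(k):
        i = rng.choice(n, p=d / d.sum())          # sample pivot ~ residual diagonal
        col = A[:, i] - F[:, :j] @ F[i, :j]       # residual column at the pivot
        F[:, j] = col / np.sqrt(col[i])
        d = np.maximum(d - F[:, j]**2, 0.0)       # update residual diagonal
    return F
```

The resulting factor $F$ with $A \approx FF^T$ can then serve as the basis of a preconditioner for conjugate gradient on the regularized KRR system.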
[01305] Structured matrix recovery using randomized matrix-vector products
Format : Talk at Waseda University
Author(s) :
Diana Halikias (Cornell University)
Alex Townsend (Cornell University)
Abstract : Can one recover a matrix efficiently from only matrix-vector products? If so, how many are needed? We describe randomized algorithms of this type for various structured matrices. In particular, we recover an $N\times N$ hierarchical matrix with rank-$k$ off-diagonal blocks using $\mathcal O(k\log(N))$ matrix-vector products. While existing algorithms employ a recursive “peeling” procedure of elimination, we use recursive projection, which may be preferable when matrix-vector products are noisy, or for recovering the nearest hierarchical matrix.
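As a hedged illustration of the single-block primitive (not the recursive hierarchical scheme of the talk), a rank-$k$ matrix can be recovered exactly from $k + p$ products with $A$ and $A^T$:

```python
import numpy as np

def recover_low_rank(matvec, rmatvec, n, k, p=5):
    """Recover a rank-k n-by-n matrix from (k + p) products with A and A.T.

    matvec(X) = A @ X and rmatvec(X) = A.T @ X are the only access to A.
    This is the single-block building block; hierarchical recovery would
    apply such a primitive recursively to off-diagonal blocks.
    """
    rng = np.random.default_rng(0)
    Omega = rng.standard_normal((n, k + p))
    Y = matvec(Omega)                 # sketch of the column space
    Q, _ = np.linalg.qr(Y)            # orthonormal basis for the column space
    B = rmatvec(Q).T                  # B = Q.T @ A, obtained from products with A.T
    return Q @ B                      # exact for rank <= k (up to rounding)

# Usage: recover a random rank-5 matrix from matvecs only
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 100))
A_rec = recover_low_rank(lambda X: A @ X, lambda X: A.T @ X, 100, 5)
print(np.linalg.norm(A - A_rec) / np.linalg.norm(A))
```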