# Registered Data

## [02537] Structured Low-Rank Matrices and Their Applications

**Session Time & Room**:

**Type**: Proposal of Minisymposium

**Abstract**: Large dense matrices are ubiquitous in engineering and data science applications, e.g. preconditioners for iterative boundary integral solvers, frontal matrices in sparse multifrontal solvers, and computing the determinant of covariance matrices. Such dense matrices have a numerical low-rank structure, which can be exploited to reduce the complexity of matrix multiplication and factorization from cubic to (near-)linear. As mixed-precision and randomized linear algebra become commonplace, such approximations become increasingly important.

**Organizer(s)**: Rio Yokota, Hatem Ltaief

**Classification**: __65Rxx__, __15Axx__, __41Axx__, __65Yxx__, __65Nxx__

**Minisymposium Program**:

- 02537 (1/2) : __2D__ @ __A601__ [Chair: Rio Yokota]

**[05519] Hierarchical Lowrank Arithmetic with Binary Compression**

**Author(s)**: **Ronald Kriemann** (Max Planck Institute for Math. i.t.S.)

**Abstract**: With low-rank approximation, the storage requirement for dense data is reduced to linear levels. However, the low-rank factors are often stored in double precision. Newer approaches exploit the different IEEE 754 floating-point formats in a mixed-precision approach. Since these formats show a significant storage (and accuracy) gap, we look beyond the standard formats and use an adaptive precision scheme to further increase storage efficiency, and we investigate its effects on the arithmetic of H-matrices.

**[05545] Parallel Factorization of Hierarchical Matrices**

**Author(s)**: **Wagih Halim Boukaram** (Lawrence Berkeley National Lab)

**Abstract**: Hierarchical matrices allow for memory-efficient representation of the data-sparse matrices that often appear in scientific applications. The open-source H2Opus library provides distributed CPU and GPU implementations of several key operations using the H2-variant of hierarchical matrices, where nested row and column bases allow for asymptotically optimal memory storage requirements. In this talk, we discuss the details of a newly developed parallel hierarchical factorization algorithm using skeletonization.
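
The skeletonization idea behind such factorizations can be sketched with SciPy's interpolative decomposition; this is a generic illustration, not the H2Opus API:

```python
import numpy as np
import scipy.linalg.interpolative as sli

# A smooth off-diagonal kernel block between well-separated point sets is
# compressed by an interpolative decomposition (ID), which picks k
# "skeleton" columns whose span approximates the whole block.
x = np.linspace(0.0, 1.0, 200)               # source points
y = np.linspace(2.0, 3.0, 200)               # well-separated target points
A = 1.0 / np.abs(y[:, None] - x[None, :])    # 1/r kernel block

# Deterministic ID to a fixed tolerance: the rank k is chosen adaptively.
k, idx, proj = sli.interp_decomp(A, 1e-8, rand=False)
A_id = sli.reconstruct_matrix_from_id(A[:, idx[:k]], idx, proj)
rel_err = np.linalg.norm(A - A_id) / np.linalg.norm(A)
print(f"skeleton rank {k} of {A.shape[1]} columns, relative error {rel_err:.1e}")
```

Because the ID keeps actual matrix columns rather than abstract basis vectors, the skeletons can be reused up and down the hierarchy, which is what makes skeletonization attractive for nested-basis (H2) factorizations.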

**[05552] Parallel Low-Rank Approximation of High-Dimensional Multivariate Normal Probabilities on Manycore Systems**

**Author(s)**:
- Xiran Zhang (KAUST)
- **Sameh Abdulah** (KAUST)
- Hatem Ltaief (KAUST)
- Ying Sun (KAUST)
- Marc Genton (KAUST)
- David Keyes (KAUST)

**Abstract**: Multivariate normal (MVN) probabilities frequently appear in statistics to support applications requiring, for example, the computation of skewed probability density functions or Bayesian spatial probit problems. In the literature, the separation-of-variables (SOV) technique is commonly used to compute the MVN probability by converting the integration region to the unit hypercube, allowing a faster convergence rate. However, SOV requires the Cholesky factorization of an n x n matrix, with O(n^3) computation and O(n^2) space complexity, and computing this factorization in high dimensions is prohibitive for dense structures. Thus, several studies have proposed approximation techniques that perform the Cholesky factorization faster while preserving the required accuracy. Another direction is to rely on high-performance computing techniques to enable intensive computing on modern parallel systems. In this work, we couple this computing power with hierarchical approximation to allow faster computation of the MVN probability for high-dimensional problems. We rely on state-of-the-art parallel hierarchical linear algebra algorithms and runtime systems to provide high performance and scalability in computing the MVN probability. We also include a block reordering technique to achieve a faster convergence rate than the dense algorithm. Moreover, we assess the performance and accuracy of the proposed method using simulations and real air pollution data.
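
A minimal sketch of the SOV (Genz) transform for P(X < b) with X ~ N(0, cov), using plain Monte Carlo on the unit hypercube; the dense `np.linalg.cholesky` call is the O(n^3) step that the hierarchical approximation targets. Function name and sample count here are illustrative, not the authors' code:

```python
import numpy as np
from scipy.stats import norm

def mvn_prob_sov(cov, b, n_samples=20000, seed=0):
    """Monte Carlo estimate of P(X < b), X ~ N(0, cov), via the
    separation-of-variables transform to the unit hypercube."""
    L = np.linalg.cholesky(cov)              # O(n^3) time, O(n^2) space
    n = len(b)
    rng = np.random.default_rng(seed)
    total = 0.0
    for u in rng.random((n_samples, n - 1)):  # points in the unit hypercube
        e = norm.cdf(b[0] / L[0, 0])
        p = e
        y = np.empty(n - 1)
        for i in range(1, n):
            # Invert the conditional CDF, then update the running product.
            y[i - 1] = norm.ppf(u[i - 1] * e)
            e = norm.cdf((b[i] - L[i, :i] @ y[:i]) / L[i, i])
            p *= e
        total += p
    return total / n_samples
```

For a bivariate standard normal with correlation 0.5, the exact orthant probability P(X1 < 0, X2 < 0) is 1/4 + arcsin(0.5)/(2*pi) = 1/3, which the estimator should reproduce to Monte Carlo accuracy.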

**[05590] A geometry-oblivious H-matrix approximation scheme for rectangular matrices**

**Author(s)**: **George Biros** (The University of Texas at Austin)

**Abstract**: We propose a novel method for compressing dense matrices, based on a hierarchical-matrix (H-matrix) approximation. H-matrix approximations are popular in science and engineering applications; they combine the singular value decomposition (SVD) with appropriate block permutations and recursion. H-matrices are applicable to problems in which the matrix entries correspond to pairwise interactions between sets of points, as for example in kernel matrices. Here we generalize this approximation to arbitrary dense matrices. Our method combines a randomized low-rank approximation of permuted blocks with approximate leverage-score computations that are used to find such permutations. We present theoretical analysis, complexity analysis, and experimental results on kernel matrices.
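
The randomized low-rank building block can be sketched as a standard randomized SVD in the style of Halko, Martinsson, and Tropp; the leverage-score-driven block permutations of the actual method are omitted here:

```python
import numpy as np

def randomized_lowrank(A, k, oversample=10, seed=0):
    """Randomized SVD: sample the range of A with a Gaussian test matrix,
    orthogonalize, then solve a small dense SVD."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)           # approximate range basis of A
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]

# A numerically low-rank test matrix with geometric singular-value decay.
rng = np.random.default_rng(1)
G1 = np.linalg.qr(rng.standard_normal((300, 30)))[0]
G2 = np.linalg.qr(rng.standard_normal((300, 30)))[0]
A = (G1 * 2.0 ** -np.arange(30)) @ G2.T

U, s, Vt = randomized_lowrank(A, k=15)
rel_err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

Only two passes over A are needed (one multiply per side), which is what makes this primitive attractive inside a blockwise H-matrix compression loop.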

- 02537 (2/2) : __2E__ @ __A601__ [Chair: Hatem Ltaief]

**[05551] Dynamic Rupture Simulation Using FDP Method Accelerated by Lattice H-matrices**

**Author(s)**:
- **Takumi Miyajima** (The University of Tokyo)
- Akihiro Ida (Japan Agency for Marine-Earth Science and Technology)
- Ryosuke Ando (The University of Tokyo)

**Abstract**: Dynamic rupture simulation with the spatiotemporal boundary integral equation method requires N × N dense matrices in a naive implementation. To reduce memory consumption and computational cost, we propose a new approximation method, called the “FDP=LH matrices method”, which incorporates a travel-time approximation into H-matrices. In this talk, we present the algorithm and simulation results.
