# Registered Data

## [01445] Deep Learning, Preconditioning, and Linear Solvers

**Session Date & Time**:
- 01445 (1/2): 5C (Aug. 25, 13:20-15:00)
- 01445 (2/2): 5D (Aug. 25, 15:30-17:10)

**Type**: Proposal of Minisymposium

**Abstract**: The numerical solution of linear systems of equations is the computational bottleneck in a whole spectrum of applied mathematics and computational science problems. Recently, a number of works have investigated how deep learning can accelerate this critical solution process. This minisymposium will showcase cutting-edge innovations in using deep learning techniques to design and accelerate preconditioners and solvers for linear systems. Researchers will share recent work on topics such as combining neural networks with multigrid solvers or conjugate direction methods. A particular emphasis will be large, sparse linear systems that arise from discretized partial differential equations in computational physics and simulation problems.

**Organizer(s)**: David Hyde

**Classification**: 65F08, 65F10, 68T07, 65N22

**Speakers Info**:
- David Hyde (Vanderbilt University)
- Ayano Kaneda (Waseda University)
- Pouria Mistani (NVIDIA)
- Kai Jiang (Xiangtan University)
- Rachel Yovel (Ben-Gurion University of the Negev)
- Yihang Gao (The University of Hong Kong)

**Talks in Minisymposium**:

**[03141] On learning neural operators of PDEs with interfacial jump conditions for accelerating simulations of physical systems**

**Author(s)**:
- Pouria Akbari Mistani (NVIDIA Corp)
- Samira Pakravan (University of California Santa Barbara)
- Frederic Gibou (University of California Santa Barbara)

**Abstract**: Elliptic (free boundary) problems with jump conditions are commonly used to model multiscale physical systems. Despite the availability of optimal numerical solvers, obtaining solutions over large spatiotemporal scales remains challenging. Pre-trained compact neural operators offer fast inference oracles to accelerate simulations on modern hardware. In this talk we present our work on training accurate neural operators for this class of problems. We also introduce JAX-DIPS, a publicly available library, to promote research in this area.

**[03199] Accelerating multigrid solvers for the acoustic and elastic Helmholtz equation**

**Author(s)**:
- Rachel Yovel (Ben-Gurion University of the Negev)
- Eran Treister (Ben-Gurion University of the Negev)
- Bar Lerer (Ben-Gurion University of the Negev)

**Abstract**: We develop multigrid solvers for the acoustic and elastic Helmholtz equations and accelerate them using deep learning methods. Based on the shifted Laplacian approach, which is typically used for the acoustic version, we build a GPU-friendly geometric multigrid preconditioner for the elastic Helmholtz equation. Moreover, we present a block-acoustic preconditioner for the elastic version and utilize a trained CNN acoustic solver to solve the elastic Helmholtz equation through this reduction.
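The shifted Laplacian approach mentioned in the abstract can be sketched in a few lines: precondition the indefinite Helmholtz matrix with a complex-shifted copy of itself, which (unlike the Helmholtz matrix) a multigrid cycle can invert efficiently. The sketch below is ours, not the authors' code: it uses a 1D acoustic Helmholtz matrix, an exact factorization in place of a multigrid cycle, and an arbitrary shift of 0.5.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, k = 200, 20.0                                       # interior points, wavenumber
h = 1.0 / (n + 1)
lap = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)) / h**2
A = (lap - k**2 * sp.eye(n)).astype(complex).tocsc()   # Helmholtz: -u'' - k^2 u
M = (lap - (1 + 0.5j) * k**2 * sp.eye(n)).tocsc()      # shifted Laplacian, shift 0.5
M_lu = spla.splu(M)                                    # exact solve stands in for a multigrid cycle
prec = spla.LinearOperator(A.shape, matvec=M_lu.solve, dtype=complex)

b = np.ones(n, dtype=complex)
x, info = spla.gmres(A, b, M=prec, restart=50, maxiter=200)
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

The complex shift trades a small perturbation of the operator for a preconditioner whose multigrid hierarchy is stable, which is why it is the standard starting point for Helmholtz solvers.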

**[03251] Wasserstein GAN and Transfer Learning in physics-informed neural networks**

**Author(s)**:
- Yihang Gao (The University of Hong Kong)
- Michael Kwok-Po Ng (The University of Hong Kong)

**Abstract**: We study a physics-informed algorithm based on Wasserstein Generative Adversarial Networks (WGANs) for uncertainty quantification in solutions of PDEs. Using groupsort activation functions in the adversarial discriminators, the generators learn the uncertainty in solutions of PDEs observed from initial/boundary data. Under mild assumptions, we show the convergence of the obtained model. Moreover, we study an SVD-based transfer learning method that stabilizes training and reduces storage for PINNs.
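The groupsort activation mentioned in the abstract is simple to state: partition a layer's outputs into fixed-size groups and sort within each group. Unlike ReLU it preserves gradient norms, which is why it suits Lipschitz-constrained WGAN discriminators. A minimal NumPy sketch, where the group size and shapes are our illustrative choices:

```python
import numpy as np

def groupsort(x, group_size=2):
    """GroupSort activation: sort entries within each group of group_size
    along the last axis. It is norm-preserving, unlike ReLU, so networks
    built from it can be constrained to be 1-Lipschitz."""
    *batch, n = x.shape
    assert n % group_size == 0, "layer width must be divisible by group size"
    g = x.reshape(*batch, n // group_size, group_size)
    return np.sort(g, axis=-1).reshape(*batch, n)

print(groupsort(np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])))
# pairs (3,1), (4,1), (5,9) sorted in place: [1. 3. 1. 4. 5. 9.]
```

With group size 2 this is the "MaxMin" activation; larger groups interpolate toward a full sort of the layer.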

**[03314] Deep Learning, Preconditioning, and Linear Solvers**

**Author(s)**:
- David Hyde (Vanderbilt University)

**Abstract**: We will survey techniques for using learning to accelerate linear solvers and preconditioners. Some methods use learning to determine high-quality initial guesses for iterative solvers; other approaches learn parameters for classical preconditioners like algebraic multigrid; further techniques replace the entire role of a preconditioner with learning; and still other works replace entire linear solvers with neural network evaluations. After surveying these approaches, we will suggest avenues of research, open questions, and opportunities for collaboration.

**[03614] A Deep Conjugate Direction Method for Iteratively Solving Linear Systems**

**Author(s)**:
- Ayano Kaneda (Waseda University)
- David Hyde (Vanderbilt University)
- Osman Aker (University of California)
- Joseph Teran (University of California, Davis)

**Abstract**: We present a novel deep learning approach to approximate the solution of large, sparse, symmetric, positive-definite linear systems of equations. Motivated by the conjugate gradients algorithm that iteratively selects search directions for minimizing the matrix norm of the approximation error, we design an approach that utilizes a deep neural network to accelerate convergence via data-driven improvement of the search direction at each iteration. Our method leverages a carefully chosen convolutional network to approximate the action of the inverse of the linear operator up to an arbitrary constant. We demonstrate the efficacy of our approach on spatially discretized Poisson equations, which arise in computational fluid dynamics applications, with millions of degrees of freedom. Unlike state-of-the-art learning approaches, our algorithm is capable of reducing the linear system residual to a given tolerance in a small number of iterations, independent of the problem size. Moreover, our method generalizes effectively to various systems beyond those encountered during training.
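The iteration the abstract describes can be sketched generically: at each step, feed the current residual to a model that proposes a search direction, A-orthogonalize it against a few previous directions, and take the optimal step along it. The sketch below is ours, not the authors' code; a single Jacobi sweep stands in for the trained convolutional network, and the truncation depth is an illustrative choice.

```python
import numpy as np
import scipy.sparse as sp

def deep_cd_solve(A, b, model, tol=1e-6, max_iter=500, n_ortho=2):
    """Conjugate-direction iteration whose raw search direction comes from
    model(residual); each direction is A-orthogonalized against the most
    recent n_ortho directions before the optimal step is taken along it."""
    x = np.zeros_like(b)
    r = b - A @ x
    dirs = []
    for _ in range(max_iter):
        d = model(r)
        for p in dirs[-n_ortho:]:
            d = d - (p @ (A @ d)) / (p @ (A @ p)) * p   # A-orthogonalize
        Ad = A @ d
        alpha = (d @ r) / (d @ Ad)                       # optimal step length
        x = x + alpha * d
        r = r - alpha * Ad
        dirs.append(d)
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
    return x, r

# 1D Poisson test matrix; a Jacobi (diagonal-scaling) sweep stands in for the
# trained network that would map residuals to search directions.
n = 64
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)).tocsc()
model = lambda r: r / A.diagonal()
b = np.random.default_rng(0).standard_normal(n)
x, r = deep_cd_solve(A, b, model)
print(np.linalg.norm(r) / np.linalg.norm(b))
```

Because the model only proposes directions and the step lengths are still chosen optimally, each iteration is guaranteed not to increase the error in the A-norm, however imperfect the model is.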

**[04332] Fourier Neural Solver for Large Sparse Linear Algebraic Systems**

**Author(s)**:
- Kai Jiang (Xiangtan University)

**Abstract**: In this talk, we propose an interpretable neural solver, the Fourier neural solver (FNS), for sparse linear algebraic systems. Based on deep learning and the fast Fourier transform, FNS combines a stationary iterative method with a frequency-space correction to efficiently eliminate the different frequency components of the error. Local Fourier analysis shows that FNS can detect the error components in frequency space that stationary methods cannot eliminate effectively; which components those are is problem-dependent. Numerical experiments on several classical equations show that FNS is more efficient and more robust than existing neural solvers. If time permits, we will also report our latest progress in this area.
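The hybrid structure described in the abstract can be illustrated on a tiny periodic model problem: a damped Jacobi sweep damps high-frequency error, and a frequency-space step removes the low-frequency components the smoother barely touches. The sketch below is ours; in particular, the fixed low-pass mask is a hand-made stand-in for the learned frequency-space operator.

```python
import numpy as np

# Periodic 1D model problem -u'' + u = f: the matrix is circulant, so the FFT
# diagonalizes it with symbol (2 - 2 cos t)/h^2 + 1.
n = 256
h = 1.0 / n
t = 2 * np.pi * np.fft.fftfreq(n)
symbol = (2 - 2 * np.cos(t)) / h**2 + 1.0
diag = 2 / h**2 + 1.0                         # constant diagonal of the matrix

def apply_A(u):
    return (2 * u - np.roll(u, 1) - np.roll(u, -1)) / h**2 + u

f = np.random.default_rng(0).standard_normal(n)

# One hybrid sweep = damped Jacobi (damps high-frequency error) followed by a
# frequency-space correction (removes the low frequencies Jacobi leaves behind).
mask = np.abs(t) < np.pi / 2                  # stand-in for the learned operator
u = np.zeros(n)
for _ in range(40):
    u = u + (2 / 3) / diag * (f - apply_A(u))                     # damped Jacobi
    r_hat = np.fft.fft(f - apply_A(u))
    u = u + np.fft.ifft(np.where(mask, r_hat / symbol, 0)).real   # Fourier correction
print(np.linalg.norm(f - apply_A(u)) / np.linalg.norm(f))
```

On this circulant problem both steps act diagonally in the Fourier basis, so the split is exact: the correction zeroes the masked modes and Jacobi contracts the rest by at least a factor of three per sweep. For non-circulant systems no fixed mask works, which is where the learned frequency-space operator comes in.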