# Registered Data

## [01072] Data-Driven Methods in Scientific Machine Learning

**Session Time & Room**:
**Type**: Proposal of Minisymposium
**Abstract**: The ample availability of data for scientific problems, together with developments in hardware and software for machine and deep learning, has changed the way mathematicians approach problems, particularly in numerical analysis and scientific computing. Rather than relying strictly on the physics of the problem at hand for modeling and computing, data-driven methods incorporate observational data to inform their solutions. This session focuses on significant advances in data-driven methods and machine learning for a variety of problems in scientific computing, including but not limited to: function approximation, inverse problems, dynamical systems, dimensionality reduction, and scientific machine learning more generally.
**Organizer(s)**: Victor Churchill, Dongbin Xiu
**Classification**: 65Z05, 62R07, 68T07, 68T09, Scientific Machine Learning
**Minisymposium Program**:

- 01072 (1/2) :
__4D__ @ __E803__ [Chair: Victor Churchill]

**[05116] Acceleration of multiscale solvers via adjoint operator learning**
**Format**: Talk at Waseda University
**Author(s)**:
- **Emanuel Eld Ström** (KTH Royal Institute of Technology)
- Ozan Öktem (KTH Royal Institute of Technology)
- Anna-Karin Tornberg (KTH Royal Institute of Technology)

**Abstract**: We leverage recent advances in operator learning to accelerate multiscale solvers for laminar fluid flow over a rough boundary. We focus on the HMM method, which involves formulating the problem through a coupled system of microscopic and macroscopic subproblems. Solving microscopic problems can be viewed as a nonlinear operator mapping from the space of micro domains to the solution space. Our main contribution is to use an FNO-type architecture to perform this mapping.
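The mode-wise spectral multiplication at the heart of an FNO-type layer can be sketched as follows (a minimal 1D illustration of the general idea, not the authors' implementation; the function names, shapes, and identity-weight check are assumptions):

```python
import numpy as np

def fourier_layer(u, weights, n_modes):
    """One FNO-style spectral convolution layer for a real 1D signal.

    u:       (n,) input function sampled on a uniform grid
    weights: complex multipliers for the retained low frequencies
    n_modes: number of Fourier modes kept (truncation)
    """
    u_hat = np.fft.rfft(u)                          # forward FFT
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights[:n_modes] * u_hat[:n_modes]  # learned mode-wise multiply
    return np.fft.irfft(out_hat, n=len(u))          # back to physical space

# Sanity check: identity weights on all modes reproduce the input.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.5 * np.cos(3 * x)
w = np.ones(n // 2 + 1, dtype=complex)
v = fourier_layer(u, w, n_modes=n // 2 + 1)
assert np.allclose(u, v)
```

In a trained network the complex `weights` are learned parameters and several such layers are composed with pointwise nonlinearities.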

**[05635] A Stochastic Maximum Principle Approach for Reinforcement Learning with Parameterized Environment**
**Format**: Talk at Waseda University
**Author(s)**:
- **Feng Bao** (Florida State University)
- Richard Archibald (Oak Ridge National Lab)
- Jiongmin Yong (University of Central Florida)

**Abstract**: In this work, we introduce a stochastic maximum principle (SMP) approach for solving the reinforcement learning problem under the assumption that the unknowns in the environment can be parameterized based on physics knowledge. For the development of numerical algorithms, we apply an effective online parameter estimation method as our exploration technique to estimate the environment parameter during the training procedure, and the exploitation for the optimal policy is achieved by an efficient backward action learning method for policy improvement under the SMP framework. Numerical experiments demonstrate that the SMP approach for reinforcement learning can produce reliable control policies, and that the gradient descent type optimization in the SMP solver requires fewer training episodes than standard methods based on the dynamic programming principle.
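The online parameter estimation step can be illustrated with a recursive least-squares estimator on a toy scalar environment (an illustrative sketch of the exploration idea only, not the authors' SMP algorithm; the dynamics, noise levels, and names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameterized environment: x_{k+1} = theta * x_k + u_k + noise,
# where theta is the unknown physics parameter to be estimated online.
theta_true = 0.8

def env_step(x, u):
    return theta_true * x + u + 0.01 * rng.normal()

# Recursive least squares as an online estimator during exploration.
theta_hat, P = 0.0, 100.0          # initial guess and (large) prior variance
x = 1.0
for _ in range(200):
    u = 0.5 * rng.normal()          # exploratory control input
    x_next = env_step(x, u)
    y = x_next - u                  # residual linear in theta: y = theta*x + noise
    K = P * x / (1.0 + x * P * x)   # RLS gain
    theta_hat += K * (y - theta_hat * x)
    P *= (1.0 - K * x)
    x = x_next

assert abs(theta_hat - theta_true) < 0.05
```

Once `theta_hat` has converged, the estimated environment can be used for model-based policy improvement.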

**[05649] A pseudo-reversible normalizing flow for stochastic dynamical systems with various initial distributions**
**Format**: Talk at Waseda University
**Author(s)**:
- **Guannan Zhang** (Oak Ridge National Laboratory)

**Abstract**: We present a pseudo-reversible normalizing flow method for efficiently generating samples of the state of a stochastic differential equation (SDE) with various initial distributions. The primary objective is to construct an accurate and efficient sampler that can be used as a surrogate model for computationally expensive numerical integration of SDEs, such as those employed in particle simulation. After training, the normalizing flow model can directly generate samples of the SDE's final state without simulating trajectories. Existing normalizing flow models for SDEs depend on the initial distribution, meaning the model needs to be re-trained when the initial distribution changes. The main novelty of our normalizing flow model is that it learns the conditional distribution of the state, i.e., the distribution of the final state conditional on any initial state, so the model only needs to be trained once and can then handle various initial distributions. This feature can provide significant computational savings in studies of how the final state varies with the initial distribution. Additionally, we propose to use a pseudo-reversible network architecture to define the normalizing flow model, which has sufficient expressive power and training efficiency for a variety of SDEs in science and engineering, e.g., in particle physics. We provide a rigorous convergence analysis of the pseudo-reversible normalizing flow model to the target probability density function in the Kullback–Leibler divergence metric. Numerical experiments demonstrate the effectiveness of the proposed normalizing flow model.
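The change-of-variables identity underlying any normalizing flow can be checked on a minimal 1D example (an affine flow for illustration only, not the paper's pseudo-reversible architecture; the map and its parameters are assumptions):

```python
import numpy as np

# Change of variables: if x = f(z) with z ~ N(0, 1), then
# log p_X(x) = log p_Z(f^{-1}(x)) + log |d f^{-1}/dx|.
a, b = 2.0, 1.0

def forward(z):             # sample generation: z -> x
    return a * z + b

def log_density(x):         # exact density of x via change of variables
    z = (x - b) / a         # inverse map
    log_pz = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)
    return log_pz - np.log(abs(a))   # plus log |dz/dx| = -log|a|

# Sanity check: x = a*z + b is N(b, a^2), so compare with that density.
x = np.array([0.0, 1.0, 3.0])
ref = -0.5 * ((x - b) / a) ** 2 - 0.5 * np.log(2 * np.pi) - np.log(a)
assert np.allclose(log_density(x), ref)
```

In a conditional flow as described in the abstract, the map additionally takes the initial state as an input, so one trained model covers all initial distributions.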

- 01072 (2/2) :
__4E__ @ __E803__ [Chair: Victor Churchill]

**[05668] An Exponential Speedup in the Rigorous Operator Learning of Elliptic PDEs**
**Format**: Talk at Waseda University
**Author(s)**:
- **Florian Schaefer** (Georgia Institute of Technology)
- Houman Owhadi (California Institute of Technology)

**Abstract**: The so-called "operator learning" of solution operators of partial differential equations (PDEs) from solution pairs has attracted considerable attention. Prior to our work, methods for learning elliptic PDEs with rigorous convergence rates required $\mathrm{poly}(1/\epsilon)$ solution pairs to achieve an $\epsilon$-accurate approximation of the solution operator. In the present work, we achieve an exponential improvement by proposing an algorithm that can recover the discretized solution operators of general elliptic PDEs on a $d$-dimensional domain to accuracy $\epsilon$ from only $\mathcal{O}\left(\log\left(N\right) \log^{d}\left(N/\epsilon\right)\right)$ solution pairs selected a priori. Here, $N$ is the number of degrees of freedom of the discrete function space. By polynomial approximation, we can also approximate the continuous Green's function (in operator and Hilbert–Schmidt norm) to accuracy $\epsilon$ from $\mathcal{O}\left(\log^{1 + d}\left(\epsilon^{-1}\right)\right)$ solutions of the PDE. Our method has computational cost $\mathcal{O}\left(N \log^{2}\left(N\right) \log^{2d}\left(N/\epsilon\right)\right)$ and returns a sparse Cholesky factor with $\mathcal{O}\left(N \log\left(N\right) \log^{d}\left(N/\epsilon\right)\right)$ nonzero entries. This Cholesky factor can be interpreted as a transport map that maps a standard Gaussian vector to the Gaussian process associated with the PDE. Prior work on the conditional independence properties of these Gaussian processes allows us to prove the error-vs-complexity bounds mentioned above. We provide numerical experiments that show the practical applicability of the proposed method, including experiments on fractional-order elliptic PDEs. Finally, we present recent applications of the proposed work to closure modeling in turbulent flows.
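The "Cholesky factor as transport map" interpretation can be illustrated with a small dense example (a 1D Dirichlet Laplacian for illustration only; the paper's contribution is computing a *sparse* factor at near-linear cost, which this sketch does not attempt):

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)
# Precision operator: discretized -u'' with Dirichlet boundary conditions.
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
C = np.linalg.inv(A)                  # covariance = discretized Green's matrix
L = np.linalg.cholesky(C)             # lower-triangular transport map

rng = np.random.default_rng(0)
z = rng.standard_normal((n, 10000))   # standard Gaussian vectors
x = L @ z                             # samples of the PDE-associated Gaussian process

# Empirical covariance of the transported samples approximates C.
C_emp = x @ x.T / 10000
assert np.linalg.norm(C_emp - C) / np.linalg.norm(C) < 0.1
```

Because `L` maps independent standard normals to the PDE's Gaussian process, access to solution pairs constrains `L` directly, which is what makes the a-priori sample selection in the abstract possible.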

**[05663] Deep Operator Learning Lessens the Curse of Dimensionality for PDEs**
**Format**: Talk at Waseda University
**Author(s)**:
- **Haizhao Yang** (University of Maryland College Park)
- Chunmei Wang (University of Florida)
- Ke Chen (University of Maryland College Park)

**Abstract**: Deep neural networks (DNNs) have achieved remarkable success in numerous domains, and their application to PDE-related problems has been rapidly advancing. This paper provides an estimate for the generalization error of learning Lipschitz operators over Banach spaces using DNNs, with applications to various PDE solution operators. The goal is to specify the DNN width, depth, and number of training samples needed to guarantee a certain testing error. Under mild assumptions on data distributions or operator structures, our analysis shows that deep operator learning can have a relaxed dependence on the discretization resolution of PDEs and hence lessen the curse of dimensionality of solution operators in many PDE-related problems, including elliptic equations, parabolic equations, and Burgers' equation. Our results also give insights into discretization invariance in operator learning.

**[05632] Flow Map Learning for Unknown Dynamical Systems: Overview, Implementation, and Benchmarks**
**Format**: Talk at Waseda University
**Author(s)**:
- **Victor Churchill** (Trinity College)
- Dongbin Xiu (The Ohio State University)

**Abstract**: Flow map learning has shown promise for data-driven modeling of unknown dynamical systems. A remarkable feature is the capability of producing accurate predictive models for partially observed systems, even when their exact mathematical models do not exist. We present an overview of the framework, as well as the important computational details for its successful implementation. A set of well-defined benchmark problems is presented in full numerical detail to ensure accessibility for cross-examination and reproducibility.
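The basic flow-map-learning idea, fitting a map from snapshot pairs and iterating it for prediction, can be sketched on a linear toy system (an illustrative sketch only; the framework in the talk uses neural-network flow maps, and the dynamics here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown dynamics: a damped rotation. Its exact one-step flow map over
# time step dt is the matrix F_true below.
theta = 0.1
F_true = 0.99 * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])

# Generate snapshot pairs (x_n, x_{n+1}) from random initial states.
X = rng.standard_normal((2, 500))
Y = F_true @ X

# Fit a linear flow map by least squares: min_F ||Y - F X||_F.
F_hat = Y @ X.T @ np.linalg.inv(X @ X.T)
assert np.allclose(F_hat, F_true)

# Iterate the learned map to predict a trajectory from a new initial state.
x = np.array([1.0, 0.0])
for _ in range(50):
    x = F_hat @ x
```

Replacing the linear least-squares fit with a neural network gives the nonlinear setting, and memory terms (past states as extra inputs) handle partial observation.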
