Registered Data

[00455] Recent Development of Theory and Algorithms of Scientific Machine Learning

  • Session Date & Time :
    • 00455 (1/3) : 1E (Aug.21, 17:40-19:20)
    • 00455 (2/3) : 2C (Aug.22, 13:20-15:00)
    • 00455 (3/3) : 2D (Aug.22, 15:30-17:10)
  • Type : Proposal of Minisymposium
  • Abstract : The “unreasonable effectiveness” of deep learning on massive datasets has posed numerous mathematical and algorithmic challenges on the path toward a deeper understanding of new phenomena in machine learning. This minisymposium aims to bring together applied mathematicians interested in the mathematical aspects of deep learning, with diverse backgrounds and expertise in modeling high-dimensional scientific computing problems and nonlinear physical systems; the talks reflect the collaborative, multifaceted nature of the mathematical theory and applications of deep neural networks.
  • Organizer(s) : Chunmei Wang, Haizhao Yang
  • Classification : 68Q32, 68T20, 65N21, Machine Learning, Scientific Computing
  • Speakers Info :
    • Jinchao Xu (KAUST)
    • Chao Ma (Stanford University)
    • Zhengyu Huang (Caltech)
    • Senwei Liang (Lawrence Berkeley National Laboratory)
    • Yong Zheng Ong (National University of Singapore)
    • Arnulf Jentzen (University of Münster)
    • Lu Zhang (Columbia University)
    • Yiqi Gu (The University of Hong Kong)
    • Tao Luo (Shanghai Jiao Tong University)
    • Wei Cai (Southern Methodist University)
    • Qianxiao Li (National University of Singapore)
    • Chunmei Wang (University of Florida)
  • Talks in Minisymposium :
    • [01329] Deep adaptive basis Galerkin method for evolution equations
      • Author(s) :
        • Yiqi Gu (University of Electronic Science and Technology of China)
        • Michael K. Ng (The University of Hong Kong)
      • Abstract : We study deep neural networks (DNNs) for solving high-dimensional evolution equations. Unlike other existing methods (e.g., least-squares methods) that treat the time and space variables simultaneously, we propose a deep adaptive basis approximation structure. On the one hand, orthogonal polynomials are employed to form the temporal basis to achieve high accuracy in time. On the other hand, DNNs are employed to form the adaptive spatial basis for high dimensions in space. (A schematic sketch of this separable structure follows this entry.)
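      • Illustrative sketch : A minimal sketch (our illustration, not the authors' code) of the separable ansatz described above: u(x, t) ≈ Σ_k φ_k(x; θ) P_k(t), with Legendre polynomials as the temporal basis and a small random-feature network standing in for the trainable DNN spatial basis. All sizes and names below are illustrative assumptions.

```python
# Separable deep-adaptive-basis ansatz: u(x, t) ~ sum_k phi_k(x; theta) * P_k(t).
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
d, K, width = 4, 5, 32            # spatial dimension, temporal modes, net width

# Hypothetical "DNN" spatial basis: one tanh hidden layer with random weights;
# in practice theta = (W1, b1, W2) would be trained on the PDE residual.
W1 = rng.normal(size=(width, d)); b1 = rng.normal(size=width)
W2 = rng.normal(size=(K, width))

def spatial_basis(x):
    """phi(x; theta) in R^K for a batch of points x of shape (n, d)."""
    return np.tanh(x @ W1.T + b1) @ W2.T                  # shape (n, K)

def temporal_basis(t):
    """First K Legendre polynomials P_0, ..., P_{K-1} at times t in [-1, 1]."""
    return np.stack([legendre.legval(t, np.eye(K)[k]) for k in range(K)], axis=-1)

def u(x, t):
    """Approximate solution: sum over modes of phi_k(x) * P_k(t)."""
    return np.einsum('nk,nk->n', spatial_basis(x), temporal_basis(t))

x = rng.normal(size=(8, d)); t = np.linspace(-1.0, 1.0, 8)
print(u(x, t).shape)               # (8,): one solution value per (x, t) pair
```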
    • [01635] Identifying reaction channels via reinforcement learning
      • Author(s) :
        • Senwei Liang (Lawrence Berkeley National Laboratory)
      • Abstract : Reactive trajectories between metastable states are rare yet important in studying reactions. This talk introduces a new method to identify the reaction channels where reactive trajectories occur frequently via reinforcement learning (RL). The action function in RL learns to seek the connective configurations based on rewards from simulation. We characterize the reactive channels by data points sampled by shooting from the located connective configurations. These data points bridge the stable states and cover most transition regions of interest, enabling us to study reaction mechanisms on narrowed regions rather than the entire configuration space. (A toy sketch of the reward-driven search follows this entry.)
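      • Illustrative sketch : A toy illustration of the reward-driven idea (our own simplification, not the speaker's method): on the double well V(x) = (x^2 - 1)^2, shooting from a configuration is rewarded when two independent short Langevin trajectories end in different wells, and an epsilon-greedy rule, standing in for the RL action function, learns where the reaction channel lies. The potential, dynamics, and reward are illustrative assumptions.

```python
# Reward "connective" configurations: shots from them can reach either well.
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(-1.5, 1.5, 31)                  # candidate configurations

def end_well(x0, steps=300, dt=1e-3, beta=3.0):
    """Run overdamped Langevin dynamics from x0 and report the final well."""
    x = x0
    for _ in range(steps):
        force = -4.0 * x * (x * x - 1.0)           # -V'(x)
        x += force * dt + np.sqrt(2.0 * dt / beta) * rng.normal()
    return 'left' if x < 0.0 else 'right'

value = np.zeros_like(grid)                        # estimated connectivity
counts = np.zeros_like(grid)
for episode in range(1500):
    # Epsilon-greedy action: mostly shoot from the best-looking configuration.
    i = rng.integers(len(grid)) if rng.random() < 0.3 else int(np.argmax(value))
    reward = float(end_well(grid[i]) != end_well(grid[i]))
    counts[i] += 1.0
    value[i] += (reward - value[i]) / counts[i]    # running-average estimate

print('most connective configuration ~', grid[np.argmax(value)])  # near x = 0
```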
    • [03154] Finite Expression Method: A Symbolic Approach for Scientific Machine Learning
      • Author(s) :
        • Haizhao Yang (University of Maryland College Park)
      • Abstract : Machine learning has revolutionized computational science and engineering with impressive breakthroughs, e.g., making the efficient solution of high-dimensional computational tasks feasible and advancing domain knowledge via scientific data mining. This has led to an emerging field called scientific machine learning. In this talk, we introduce a symbolic approach to solving scientific machine learning problems. This method seeks interpretable learning outcomes in the space of functions with finitely many analytic expressions and, hence, is named the finite expression method (FEX). It is proved in approximation theory that FEX can avoid the curse of dimensionality in discovering high-dimensional complex systems. As a proof of concept, a deep reinforcement learning method is proposed to implement FEX for learning the solutions of high-dimensional PDEs and learning the governing equations of raw data. (A much-simplified sketch of expression search follows this entry.)
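      • Illustrative sketch : The sketch below conveys the flavor of searching a space of finitely many analytic expressions; it uses brute-force enumeration of depth-2 expressions in place of FEX's deep reinforcement learning search, and the operator set and target function are our own illustrative choices.

```python
# Toy finite-expression-style search: enumerate depth-2 candidate expressions
# binary(unary1(x), unary2(x)) over a fixed operator set and keep the best fit.
import itertools
import numpy as np

x = np.linspace(-2.0, 2.0, 200)
target = np.sin(x) + x**2                      # hidden function to recover

unary = {'id': lambda z: z, 'sin': np.sin, 'square': lambda z: z**2,
         'exp': np.exp}
binary = {'+': np.add, '*': np.multiply}

best = (np.inf, None)
for (u1, f1), (u2, f2) in itertools.product(unary.items(), repeat=2):
    for bname, b in binary.items():
        mse = np.mean((b(f1(x), f2(x)) - target) ** 2)
        if mse < best[0]:
            best = (mse, f'{bname}({u1}(x), {u2}(x))')

print('best expression:', best[1], '  mse:', best[0])   # +(sin(x), square(x))
```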
    • [03340] Finite Expression Methods for Discovering Physical Laws from Data
      • Author(s) :
        • Chunmei Wang (University of Florida)
      • Abstract : The speaker will present the finite expression method (FEX) for discovering the governing equations of data. By design, FEX can provide physically meaningful and interpretable formulas for physical laws, in contrast to black-box deep learning methods. FEX only requires a small number of predefined operators to automatically generate a large class of mathematical formulas. Therefore, compared to existing symbolic approaches, FEX enjoys favorable memory cost and can discover a larger range of governing equations where other methods fail, as shown by extensive numerical tests. (A simplified sketch of scoring candidate laws follows this entry.)
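      • Illustrative sketch : A much-simplified, library-based illustration of discovering a governing law from data (a SINDy-flavored stand-in, not the speaker's FEX implementation): candidate right-hand sides built from a few predefined operators are scored against finite-difference time derivatives of logistic trajectory data.

```python
# Score candidate right-hand sides f(u) against finite-difference derivatives
# of data u(t) generated by the logistic law du/dt = u * (1 - u).
import numpy as np

t = np.linspace(0.0, 2.0, 400)
u = 1.0 / (1.0 + np.exp(-t))                  # logistic trajectory data
dudt = np.gradient(u, t)                      # finite-difference derivative

candidates = {                                # a small predefined operator set
    'u': lambda u: u,
    'u^2': lambda u: u ** 2,
    'u(1 - u)': lambda u: u * (1.0 - u),
    'sin(u)': lambda u: np.sin(u),
}

scores = {name: np.mean((f(u) - dudt) ** 2) for name, f in candidates.items()}
best = min(scores, key=scores.get)
print('recovered law: du/dt =', best)         # expect u(1 - u)
```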
    • [03345] Approximation Theory for Sequence Modelling
      • Author(s) :
        • Qianxiao Li (National University of Singapore)
      • Abstract : In this talk, we present some recent results on the approximation theory of deep learning architectures for sequence modelling. In particular, we formulate a basic mathematical framework under which different popular architectures, such as recurrent neural networks, dilated convolutional networks (e.g., WaveNet), encoder-decoder structures, and transformers, can be rigorously compared. These analyses reveal some interesting connections between approximation, memory, sparsity, and low-rank phenomena that may guide the practical selection and design of these network architectures. (A schematic sketch of two such causal sequence models follows this entry.)
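      • Illustrative sketch : As a rough indication of the kind of objects such a framework compares (our assumption; the talk's actual formulation is not reproduced here), the sketch below builds two causal sequence-to-sequence maps: a linear RNN, whose memory of past inputs decays exponentially, and a WaveNet-style dilated causal convolution, whose memory is sparse but long-range.

```python
# Two causal sequence models acting on the same scalar input sequence.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=64)                        # input sequence x_0, ..., x_63

def linear_rnn(x, lam=0.9):
    """y_t = sum_{s <= t} lam**(t - s) * x_s via h_t = lam * h_{t-1} + x_t."""
    h, y = 0.0, np.empty_like(x)
    for t, xt in enumerate(x):
        h = lam * h + xt                       # exponentially decaying memory
        y[t] = h
    return y

def dilated_causal_conv(x, kernels):
    """WaveNet-style stack: causal convolutions with dilations 1, 2, 4, ..."""
    y = x
    for level, k in enumerate(kernels):
        d, out = 2 ** level, np.zeros_like(y)
        for t in range(len(y)):
            for j, kj in enumerate(k):         # sparse but long-range memory
                if t - j * d >= 0:
                    out[t] += kj * y[t - j * d]
        y = out
    return y

print(linear_rnn(x)[:3])
print(dilated_causal_conv(x, [np.ones(2)] * 3)[:3])
```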
    • [03467] Multi-scale Neural Networks for High Frequency Problems in Regressions and PDEs
      • Author(s) :
        • Wei Cai (Southern Methodist University)
        • Lizuo Liu (Southern Methodist University)
        • Bo Wang (LCSM(MOE), School of Mathematics and Statistics, Hunan Normal University, Changsha, Hunan, 410081, P. R. China.)
      • Abstract : In this talk, we will introduce multiscale deep neural networks (MscaleDNNs) to overcome the spectral bias of deep neural networks when approximating functions with wide-band frequency content. The MscaleDNN uses a radial scaling in the frequency domain, which converts the problem of learning high-frequency content in regression problems or solutions of PDEs into one of learning lower-frequency functions. As a result, the MscaleDNN achieves fast uniform convergence over multiple scales, as demonstrated in solving regression problems and highly oscillatory Navier-Stokes flows. Moreover, a diffusion equation model in the frequency domain is obtained based on the neural tangent kernel, which clearly shows how the multiple scales in the MscaleDNN improve the convergence of the training of neural networks over wider frequency ranges with more scales, compared with a traditional fully connected neural network. (A simplified sketch of the radial scaling idea follows this entry.)
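      • Illustrative sketch : A minimal sketch of the radial input-scaling idea (our simplification, with random Fourier features standing in for the trained subnetworks): parallel subnetworks receive inputs scaled by 1, 2, 4, ..., so high-frequency content of the target appears low-frequency to the large-scale subnetworks. With the same total feature budget, the multiscale model fits a two-frequency target far better than a single-scale one.

```python
# Radial input scaling: each subnetwork sees scale * x, so a fixed random
# feature distribution covers a much wider frequency band across subnetworks.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 512)[:, None]
target = np.sin(2 * np.pi * x[:, 0]) + np.sin(40 * np.pi * x[:, 0])  # two scales

def subnet_features(x, scale, width):
    """Random Fourier features of the radially scaled input scale * x."""
    W = rng.normal(size=(x.shape[1], width))
    b = rng.uniform(0.0, 2.0 * np.pi, width)
    return np.sin(scale * x @ W + b)

def fit_and_score(scales, total_width=240):
    """Least-squares fit of the output layer over all subnetworks combined."""
    width = total_width // len(scales)          # same feature budget per run
    Phi = np.hstack([subnet_features(x, s, width) for s in scales])
    coef, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    return np.mean((Phi @ coef - target) ** 2)

print('single-scale mse:', fit_and_score([1]))
print('multi-scale mse :', fit_and_score([1, 2, 4, 8, 16, 32, 64]))
```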
    • [05449] Implicit bias in deep learning based PDE solvers
      • Author(s) :
        • Tao Luo (Shanghai Jiao Tong University)
        • Qixuan Zhou (Shanghai Jiao Tong University)
      • Abstract : We will discuss some recent developments in the theory of deep-learning-based PDE solvers. We would like to mention some new ideas on the modeling and analysis of such algorithms, especially some related phenomena observed during the training process. For the theoretical part, both optimization and approximation will be considered.