Abstract : Many difficult PDE problems in science and engineering are now being solved by numerical methods based on deep learning. This minisymposium focuses on both analytic and numerical aspects of these new methods. The speakers will present their recent work on the mechanisms and further improvement of variational and/or physics-informed DNN-based solvers, with applications to scientific computing problems.
Abstract : Physics-informed neural networks (PINNs) have emerged as an effective technique for solving PDEs in a wide range of domains. It has been observed, however, that the performance of PINNs can vary dramatically with different sampling procedures. For instance, a fixed set of training points chosen a priori may fail to capture the effective solution region (especially for problems with singularities). To overcome this issue, we present in this work an adaptive strategy, termed failure-informed PINNs (FI-PINNs), which is inspired by the viewpoint of reliability analysis. The key idea is to define an effective failure probability based on the residual; then, with the aim of placing more samples in the failure region, FI-PINNs employ a failure-informed enrichment technique to adaptively add new collocation points to the training set, so that the numerical accuracy is dramatically improved. In short, in analogy with adaptive finite element methods, the proposed FI-PINNs adopt the failure probability as an a posteriori error indicator to generate new training points. We prove rigorous error bounds for FI-PINNs and illustrate their performance on several problems.
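The enrichment loop described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: the residual function, tolerance, and sample counts are all hypothetical stand-ins, with a sharp residual peak mimicking a solution singularity.

```python
import numpy as np

def residual(x):
    # Hypothetical PDE residual magnitude with a sharp peak near x = 0.5,
    # standing in for the trained network's residual on the domain (0, 1).
    return 1.0 / ((x - 0.5) ** 2 + 1e-2)

def failure_informed_enrichment(train_x, tol=20.0, n_candidates=2000, n_add=50, seed=0):
    """One enrichment step: estimate the failure probability
    P_F = P(|residual| > tol) by Monte Carlo sampling, then move new
    collocation points into the failure region."""
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(0.0, 1.0, n_candidates)
    r = residual(candidates)
    failure = candidates[r > tol]           # samples falling in the failure region
    p_fail = failure.size / n_candidates    # Monte Carlo failure probability
    if failure.size > 0:
        idx = rng.choice(failure.size, size=min(n_add, failure.size), replace=False)
        train_x = np.concatenate([train_x, failure[idx]])
    return train_x, p_fail

x0 = np.linspace(0.0, 1.0, 20)              # initial (a priori) training points
x1, p = failure_informed_enrichment(x0)
```

In the actual method the loop alternates with retraining, and stops once the estimated failure probability drops below a prescribed tolerance.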
[03007] Deep Learning for PDEs: Domain Decomposition and Adaptivity
Author(s) :
Qifeng Liao (ShanghaiTech University)
Abstract : Deep learning methods are currently attracting considerable interest for solving partial differential equations (PDEs). However, significant challenges remain for these new methods to achieve high accuracy, including properly defining loss functions and choosing effective collocation points and network structures. In our work, we propose domain decomposition and adaptive procedures to improve the accuracy and efficiency of deep-learning-based methods.
[03091] Bridging Traditional and Machine Learning-based Algorithms for Solving PDEs: The Random Feature Method
Author(s) :
JINGRUN CHEN (University of Science and Technology of China)
XURONG CHI (University of Science and Technology of China)
WEINAN E (AI for Science Institute, Beijing and Center for Machine Learning Research and School of Mathematical Sciences, Peking University)
ZHOUWANG YANG (School of Mathematical Sciences, University of Science and Technology of China)
Abstract : One of the oldest and most studied subjects in scientific computing is algorithms for solving partial differential equations (PDEs). A long list of numerical methods has been proposed and successfully used for various applications. In recent years, deep learning methods have shown their superiority for high-dimensional PDEs where traditional methods fail. However, for low-dimensional problems, it remains unclear whether these methods have a real advantage over traditional algorithms as direct solvers. In this work, we propose the random feature method (RFM) for solving PDEs, a natural bridge between traditional and machine-learning-based algorithms. RFM is based on a combination of well-known ideas: 1. representation of the approximate solution by random feature functions; 2. a collocation method to take care of the PDE; 3. a penalty method to treat the boundary conditions, which allows us to treat the boundary conditions and the PDE on the same footing. We find it crucial to add several additional components, including a multi-scale representation and adaptive weight rescaling in the loss function. We demonstrate that the method exhibits spectral accuracy and can compete with traditional solvers in terms of both accuracy and efficiency. In addition, we find that RFM is particularly suited for problems with complex geometry, where both traditional and machine-learning-based algorithms encounter difficulties.
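The three ingredients listed above (random features, collocation, boundary penalty) combine into a single linear least-squares problem. The sketch below solves -u'' = f on (0,1) with homogeneous Dirichlet data and exact solution sin(pi x); the feature count, weight ranges, and penalty strength are illustrative choices, not values from the talk, and the multi-scale and rescaling components are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100                                    # number of random features
w = rng.uniform(-10.0, 10.0, m)            # random weights (frozen, not trained)
b = rng.uniform(-10.0, 10.0, m)            # random biases

def features(x):
    # phi_j(x) = tanh(w_j x + b_j), one column per feature
    return np.tanh(np.outer(x, w) + b)

def features_xx(x):
    # phi_j''(x) = -2 w_j^2 tanh(z)(1 - tanh(z)^2),  z = w_j x + b_j
    t = np.tanh(np.outer(x, w) + b)
    return -2.0 * w**2 * t * (1.0 - t**2)

x = np.linspace(0.0, 1.0, 201)             # collocation points
f = np.pi**2 * np.sin(np.pi * x)           # right-hand side for u = sin(pi x)
lam = 100.0                                # boundary penalty weight

# PDE rows enforce -u''(x_i) = f(x_i); penalized rows enforce u(0) = u(1) = 0.
A = np.vstack([-features_xx(x),
               np.sqrt(lam) * features(np.array([0.0, 1.0]))])
rhs = np.concatenate([f, [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u = features(x) @ c
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

The boundary penalty places the boundary conditions in the same least-squares system as the PDE rows, which is the "same footing" idea mentioned in the abstract.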
[03011] On deep learning techniques for solving convection-dominated convection-diffusion equations
Author(s) :
Derk Frerichs-Mihov (Weierstrass Institute for Applied Analysis and Stochastics / Free University of Berlin)
Linus Henning (Free University of Berlin)
Abstract : Convection-diffusion equations model the distribution of a scalar quantity in fluids, e.g., the concentration of drugs in blood. Many classical numerical methods produce unphysical values in practical applications when convection is
much stronger than diffusion \([1,2]\).
Over the last decade, the popularity of deep learning methods has risen sharply due to many success stories, e.g., \([3,4]\). This talk brings together deep learning techniques and convection-diffusion equations, highlighting challenges and proposing ways to overcome them.
\([1]\) Augustin, M., Caiazzo, A., John, V. et al. An assessment of discretizations for convection-dominated convection-diffusion equations. Computer Methods in Applied Mechanics and Engineering, 200(47–48), pp. 3395–3409, 2011, https://www.doi.org/10.1016/j.cma.2011.08.012
\([2]\) Frerichs, D. and John, V. On reducing spurious oscillations in discontinuous Galerkin (DG) methods for steady-state convection-diffusion equations. Journal of Computational and Applied Mathematics, 393, pp. 113487/1–113487/20, 2021, https://www.doi.org/10.1016/j.cam.2021.113487
\([3]\) Raissi, M., Perdikaris, P. and Karniadakis, G. E. Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations. ArXiv, arXiv:1711.10561v1, 2017, https://www.doi.org/10.48550/arXiv.1711.10561
\([4]\) Karniadakis, G. E., Kevrekidis, I. G., Lu, L. et al. Physics-informed machine learning. Nature Reviews Physics, 3, pp. 422–440, 2021, https://www.doi.org/10.1038/s42254-021-00314-5
[04381] Learning Functional Priors and Posteriors from Data and Physics
Author(s) :
Xuhui Meng (Huazhong University of Science and Technology)
Abstract : We develop a new Bayesian framework based on deep generative models to quantify uncertainties arising from both noisy and gappy data in predictions of physics-informed neural networks (PINNs) as well as deep operator networks (DeepONets). We test the proposed method for (1) forward/inverse PDE problems; (2) PDE-agnostic physical problems, e.g., 100-dimensional Darcy problem. The results demonstrate that the proposed approach can provide accurate predictions as well as uncertainties given limited and noisy data.
[03134] AI for Combustion
Author(s) :
Zhiqin Xu (Shanghai Jiao Tong University)
Abstract : The development of detailed chemistry mechanisms for hydrocarbon fuels paves the way to realistic simulations of practical combustors. However, due to chemistry stiffness, the simulation of large detailed mechanisms becomes forbiddingly expensive, especially for very large-scale simulations. In this talk, I will introduce a deep-learning-based model reduction method for simplifying chemical kinetics. We also use a deep-learning-based method to overcome the limitation of using small step sizes in simulating combustion ODE systems.
[03476] DOSnet as a Non-Black-Box PDE Solver: When Deep Learning Meets Operator Splitting
Author(s) :
Yuan Lan (Huawei Theory Lab)
Zhen Li (Huawei Theory Lab)
Jie Sun (Huawei Theory Lab)
Yang Xiang (Hong Kong University of Science and Technology)
Abstract : Deep neural networks (DNNs) have recently emerged as a promising tool for analyzing and solving complex differential equations arising in science and engineering applications. As an alternative to traditional numerical schemes, learning-based solvers utilize the representation power of DNNs to approximate the input-output relations in an automated manner. However, the lack of physics-in-the-loop often makes it difficult to construct a neural network solver that simultaneously achieves high accuracy, low computational burden, and interpretability. In this work, focusing on a class of evolutionary PDEs characterized by decomposable operators, we show that the classical ``operator splitting'' technique can be adapted to design neural network architectures. This gives rise to a learning-based PDE solver, which we name the Deep Operator-Splitting Network (DOSnet). Such a non-black-box network design is constructed from the physical rules and operators governing the underlying dynamics, and is more efficient and flexible than classical numerical schemes and standard DNNs. To demonstrate the advantages of our new AI-enhanced PDE solver, we train and validate it on several types of operator-decomposable differential equations. We also apply DOSnet to nonlinear Schr\"odinger equations, which have important applications in signal processing for modern optical fiber transmission systems, and experimental results show that our model has better accuracy and lower computational complexity than numerical schemes and the baseline DNNs.
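For readers unfamiliar with the classical technique that DOSnet builds on, the sketch below shows Strang operator splitting (the split-step Fourier method) for the focusing nonlinear Schroedinger equation i u_t + u_xx/2 + |u|^2 u = 0, whose soliton solution sech(x) exp(i t/2) only rotates in phase. This illustrates the decomposable-operator idea only; it is a classical scheme, not the network described in the abstract, and the grid and step sizes are illustrative.

```python
import numpy as np

n, L = 256, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)      # angular wavenumbers
dt, steps = 1e-3, 1000                            # integrate to t = 1

u = 1.0 / np.cosh(x)                              # soliton initial data
half_linear = np.exp(-0.5j * k**2 * (dt / 2.0))   # exact flow of u_t = (i/2) u_xx over dt/2
for _ in range(steps):
    u = np.fft.ifft(half_linear * np.fft.fft(u))  # half step: linear (dispersive) part
    u = u * np.exp(1j * np.abs(u)**2 * dt)        # full step: nonlinear part (|u| is constant here)
    u = np.fft.ifft(half_linear * np.fft.fft(u))  # half step: linear part

# |u| should stay close to sech(x), since the soliton only acquires a phase.
err = np.max(np.abs(np.abs(u) - 1.0 / np.cosh(x)))
```

Each substep has a closed-form flow (a Fourier multiplier and a pointwise phase rotation); the DOSnet idea, as described above, is to replace such hand-derived substep flows with learned network layers while keeping the splitting structure.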
[04359] Residual Minimization for PDEs: Failure of PINN and Implicit Bias
Author(s) :
Qixuan Zhou (Shanghai Jiao Tong University)
Tao Luo (Shanghai Jiao Tong University)
Abstract : In this talk, we discuss the performance of PINN methods for problems with discontinuities. For linear elliptic PDEs with discontinuous coefficients, we show by experiments that PINN cannot approximate the true solution. We then prove this by introducing a modified equation, and we point out that there is still a pattern behind this failure, namely a type of implicit bias. Finally, we extend some of these results to quasilinear elliptic equations and systems.
[03475] Feature Flow Regularization: Improving Structured Sparsity in Deep Neural Networks
Author(s) :
YUE WU (The Hong Kong University of Science and Technology)
YUAN LAN (The Hong Kong University of Science and Technology)
Luchan Zhang (Shenzhen University)
Yang Xiang (Hong Kong University of Science and Technology)
Abstract : Pruning is a model compression method that removes redundant parameters and accelerates the inference of deep neural networks while maintaining accuracy. We propose a regularization strategy from a new perspective, the evolution of features through the layers: feature flow regularization (FFR) penalizes the length and total absolute curvature of the feature trajectories, which implicitly increases the structured sparsity of the parameters. The principle is that short and straight trajectories lead to an efficient network.
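The length and curvature penalties can be written down directly by treating the per-layer features of one input as a discrete trajectory. The snippet below is a minimal sketch under that reading of the abstract; the feature vectors and weights alpha, beta are stand-ins, and in FFR the features would come from a network's hidden layers and enter the training loss.

```python
import numpy as np

def trajectory_penalty(feats, alpha=1.0, beta=1.0):
    """feats: list of equal-size feature vectors f_0, ..., f_L, one per layer.
    Returns alpha * (trajectory length) + beta * (total absolute curvature)."""
    diffs = [feats[i + 1] - feats[i] for i in range(len(feats) - 1)]
    length = sum(np.linalg.norm(d) for d in diffs)
    # Discrete curvature: change of consecutive (unnormalized) direction vectors.
    curvature = sum(np.linalg.norm(diffs[i + 1] - diffs[i])
                    for i in range(len(diffs) - 1))
    return alpha * length + beta * curvature

# A straight, evenly spaced trajectory in R^4 has zero discrete curvature.
straight = [np.full(4, float(i)) for i in range(5)]
```

Since a short, straight trajectory minimizes both terms, adding this penalty to the training loss pushes layers whose features barely move toward redundancy, which is what makes them prunable.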
[04428] Asymptotic-Preserving Neural Networks for Multiscale Time-Dependent Linear Transport Equations
Author(s) :
Shi Jin (Shanghai Jiao Tong University)
Zheng Ma (Shanghai Jiao Tong University)
Keke Wu (Shanghai Jiao Tong University)
Abstract : In this paper, we develop a neural network for the numerical simulation of time-dependent linear transport equations with diffusive scaling and uncertainties. The goal of the network is to resolve the computational challenges of the curse of dimensionality and the multiple scales of the problem. We first show that a standard physics-informed neural network (PINN) fails to capture the multiscale nature of the problem, which justifies the need for asymptotic-preserving neural networks (APNNs). We show that not all classical AP formulations are directly fit for the neural network approach. We construct a micro-macro decomposition based neural network, and also build a mass conservation mechanism into the loss function, in order to capture the dynamic and multiscale nature of the solutions. Numerical examples demonstrate the effectiveness of the APNNs.