Registered Data

[00719] Recent Advances in Numerical PDE and Scientific Machine Learning

  • Session Time & Room :
    • 00719 (1/2) : 3E (Aug.23, 17:40-19:20) @E702
    • 00719 (2/2) : 4C (Aug.24, 13:20-15:00) @E702
  • Type : Proposal of Minisymposium
  • Abstract : Artificial intelligence is evolving rapidly, and researchers are applying deep neural networks to increasingly complex problems. To address the difficulties posed by these new settings, deep learning research is exploring new modeling tools, including differential equations, to improve the predictive capabilities of neural networks. These techniques have shown potential for speeding up scientific simulations and have achieved state-of-the-art performance in various fields. This minisymposium will focus on recent developments at the intersection of scientific computing and deep learning, highlighting their impact and applications across multiple disciplines.
  • Organizer(s) : Minseok Choi, Youngjoon Hong
  • Classification : 65M22, 68T07, 65K05, 68T09, Scientific Machine Learning
  • Minisymposium Program :
    • 00719 (1/2) : 3E @E702 [Chair: Youngjoon Hong]
      • [04237] Level set learning for nonlinear dimensionality reduction in function approximation
        • Format : Talk at Waseda University
        • Author(s) :
          • Zhu Wang (University of South Carolina)
        • Abstract : Approximating high-dimensional functions is challenging due to the curse of dimensionality. Inspired by the Nonlinear Level set Learning method, which uses a reversible residual network, we developed a new method, Dimension Reduction via Learning Level Sets, for function approximation. It contains two major components: a pseudo-reversible neural network module that effectively transforms high-dimensional input variables into low-dimensional active variables, and a synthesized regression module that approximates function values based on the transformed data in the low-dimensional space. Numerical experiments will be presented to demonstrate the proposed method.
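        • Illustrative sketch : A minimal PyTorch sketch (our own illustration, not the authors' code) of the two-component structure described in the abstract: a pseudo-reversible encoder/decoder pair that maps the input to a few active variables, plus a regression network fitted in the reduced space. The network sizes, reconstruction weight, and toy target are assumptions.

            import torch
            import torch.nn as nn

            d, m = 10, 2                                   # input dimension, number of active variables (assumed)
            encoder = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, m))
            decoder = nn.Sequential(nn.Linear(m, 64), nn.Tanh(), nn.Linear(64, d))    # approximate inverse of the encoder
            regressor = nn.Sequential(nn.Linear(m, 64), nn.Tanh(), nn.Linear(64, 1))  # regression in the reduced space

            def loss_fn(x, y, w_rec=1.0):
                z = encoder(x)                             # low-dimensional active variables
                x_rec = decoder(z)                         # pseudo-reversibility: the decoder should undo the encoder
                y_hat = regressor(z)                       # function approximation from the reduced variables
                return ((y_hat - y) ** 2).mean() + w_rec * ((x_rec - x) ** 2).mean()

            # toy usage: the target depends only on one "active" direction of the 10-dimensional input
            x = torch.rand(256, d)
            y = torch.sin(x[:, :2].sum(dim=1, keepdim=True))
            params = [*encoder.parameters(), *decoder.parameters(), *regressor.parameters()]
            opt = torch.optim.Adam(params, lr=1e-3)
            for _ in range(200):
                opt.zero_grad()
                loss_fn(x, y).backward()
                opt.step()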
      • [03271] Semi-analytic PINN methods for boundary layer problems on rectangular domains
        • Format : Talk at Waseda University
        • Author(s) :
          • Chang-Yeol Jung (UNIST)
          • Gung-Min Gie (University of Louisville)
          • Youngjoon Hong (Sungkyunkwan University)
          • Tselmuun Munkhjin (UNIST)
        • Abstract : Singularly perturbed boundary value problems exhibit sharp boundary layers in their solutions; the stiffness of these layers makes numerical approximation challenging and can lead to significant computational errors. Traditional numerical methods require extensive mesh refinement near the boundary to obtain accurate solutions, which is computationally costly. To address these challenges, we employ physics-informed neural networks (PINNs) to solve singularly perturbed problems. However, PINNs can struggle to resolve solutions that vary rapidly over a small region of the domain, which can lead to inaccurate results. To overcome this limitation, we consider semi-analytic methods that enrich the PINNs with so-called corrector functions. Our numerical experiments demonstrate significant improvements in accuracy and stability.
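        • Illustrative sketch : A minimal PyTorch sketch (our own illustration, not the authors' code) of enriching a PINN with a boundary-layer corrector, here for the 1D toy problem -eps*u'' + u' = 1 on (0,1) with u(0)=u(1)=0, whose layer near x = 1 is captured by exp(-(1-x)/eps). The talk treats rectangular domains; the network size and training setup below are assumptions.

            import torch
            import torch.nn as nn

            eps = 1e-3
            net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
            c = torch.nn.Parameter(torch.zeros(1))       # trainable amplitude of the corrector

            def corrector(x):
                return torch.exp(-(1.0 - x) / eps)       # theory-derived boundary-layer profile near x = 1

            def u(x):
                return net(x) + c * corrector(x)         # smooth part learned by the PINN + enriched layer part

            def residual(x):
                x = x.requires_grad_(True)
                ux = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
                uxx = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
                return -eps * uxx + ux - 1.0             # PDE residual of -eps*u'' + u' = 1

            opt = torch.optim.Adam(list(net.parameters()) + [c], lr=1e-3)
            xb = torch.tensor([[0.0], [1.0]])            # boundary points
            for _ in range(1000):
                opt.zero_grad()
                xi = torch.rand(128, 1)                  # interior collocation points
                loss = (residual(xi) ** 2).mean() + (u(xb) ** 2).mean()
                loss.backward()
                opt.step()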
      • [04847] Solving Wave Equations with Fourier Neural Operator
        • Format : Talk at Waseda University
        • Author(s) :
          • Bian Li (Lehigh University)
          • Hanchen Wang (Los Alamos National Lab)
          • Shihang Feng (Los Alamos National Lab)
          • Xiu Yang (Lehigh University)
          • Youzuo Lin (Los Alamos National Lab)
        • Abstract : In the study of subsurface seismic imaging, solving the acoustic wave equation is a pivotal component of existing models. Inspired by the idea of operator learning, this work leverages the Fourier neural operator (FNO) to effectively learn frequency-domain seismic wavefields in the context of variable velocity models. We also propose a new framework, the paralleled Fourier neural operator (PFNO), for efficiently training the FNO-based solver given multiple source locations and frequencies.
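        • Illustrative sketch : A minimal PyTorch sketch of the 1D spectral convolution that is the core building block of a Fourier neural operator: transform to the frequency domain, multiply a fixed number of low modes by learned complex weights, and transform back. Channel and mode counts are illustrative, and the PFNO parallelization from the talk is not shown.

            import torch
            import torch.nn as nn

            class SpectralConv1d(nn.Module):
                def __init__(self, in_ch, out_ch, modes):
                    super().__init__()
                    self.modes = modes                   # number of retained low Fourier modes
                    scale = 1.0 / (in_ch * out_ch)
                    self.weight = nn.Parameter(scale * torch.randn(in_ch, out_ch, modes, dtype=torch.cfloat))

                def forward(self, x):                    # x: (batch, in_ch, n_grid)
                    x_ft = torch.fft.rfft(x, dim=-1)     # go to the frequency domain
                    out_ft = torch.zeros(x.size(0), self.weight.size(1), x_ft.size(-1),
                                         dtype=torch.cfloat, device=x.device)
                    # multiply retained modes by learned complex weights, channel-mixing via einsum
                    out_ft[:, :, :self.modes] = torch.einsum("bim,iom->bom",
                                                             x_ft[:, :, :self.modes], self.weight)
                    return torch.fft.irfft(out_ft, n=x.size(-1), dim=-1)

            # usage: one FNO block = spectral convolution + pointwise linear path + nonlinearity
            layer = SpectralConv1d(in_ch=2, out_ch=2, modes=16)
            w = nn.Conv1d(2, 2, kernel_size=1)
            v = torch.randn(8, 2, 128)                   # e.g. (velocity model, source) channels on a 1D grid
            v = torch.relu(layer(v) + w(v))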
      • [04931] Physics-informed variational inference for stochastic differential equations
        • Format : Talk at Waseda University
        • Author(s) :
          • Hyomin Shin (POSTECH)
          • Minseok Choi (POSTECH)
        • Abstract : In this talk, we propose a physics-informed learning method based on the variational autoencoder (VAE) to solve data-driven stochastic differential equations. We adopt a VAE to extract the random state of the governing equation and train the model by maximizing an evidence lower bound that incorporates the given physical laws. We present numerical examples to demonstrate the effectiveness of the proposed method.
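        • Illustrative sketch : A minimal PyTorch sketch (our own illustration, not the authors' model) of a physics-informed VAE-type loss: an evidence-lower-bound objective (reconstruction plus KL term) augmented with the residual of an assumed governing equation, here the toy ODE u' = -k*u. The encoder/decoder sizes, the weighting, and the toy data are assumptions.

            import torch
            import torch.nn as nn

            latent_dim, n_t = 4, 50
            t = torch.linspace(0.0, 1.0, n_t).unsqueeze(1)                       # time grid (n_t, 1)
            enc = nn.Sequential(nn.Linear(n_t, 64), nn.Tanh(), nn.Linear(64, 2 * latent_dim))
            dec = nn.Sequential(nn.Linear(latent_dim + 1, 64), nn.Tanh(), nn.Linear(64, 1))  # (z, t) -> u(t; z)

            def loss_fn(u_obs, k=1.0, w_phys=1.0):
                mu, logvar = enc(u_obs).chunk(2, dim=1)
                z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)          # reparameterization trick
                tt = t.repeat(u_obs.size(0), 1, 1).requires_grad_(True)          # (batch, n_t, 1)
                zz = z.unsqueeze(1).expand(-1, n_t, -1)                          # (batch, n_t, latent_dim)
                u = dec(torch.cat([zz, tt], dim=-1))                             # decoded trajectories
                u_t = torch.autograd.grad(u.sum(), tt, create_graph=True)[0]
                recon = ((u.squeeze(-1) - u_obs) ** 2).mean()                    # data misfit
                kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).mean()         # KL to the standard normal prior
                phys = ((u_t + k * u) ** 2).mean()                               # residual of the assumed ODE u' = -k*u
                return recon + kl + w_phys * phys

            u_obs = torch.rand(16, 1) * torch.exp(-t.T)                          # toy trajectories of u' = -u
            print(loss_fn(u_obs).item())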
    • 00719 (2/2) : 4C @E702 [Chair: Minseok Choi]
      • [03076] Deep neural operator for learning transient response of composites subject to dynamic loading
        • Format : Online Talk on Zoom
        • Author(s) :
          • Zhen Li (Clemson University)
          • Minglei Lu (Clemson University)
          • Ali Mohammadi (Clemson University)
          • Zhaoxu Meng (Clemson University)
          • Gang Li (Clemson University)
        • Abstract : A deep neural operator (DNO) is used to learn the transient response of composites as a surrogate for physics-based finite element analysis (FEA). We consider a 3D composite beam formed by two metals with different Young's moduli, subject to dynamic loads. The DNO is trained using sequence-to-sequence learning with incremental learning methods on 5,000 FEA data points, leading to a 100X speedup. Results show that the DNO can predict the transient mechanical response of composites with an accuracy of 97%.
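        • Illustrative sketch : A minimal PyTorch sketch of a deep-neural-operator (DeepONet-style) surrogate: a branch network encodes the sampled dynamic load and a trunk network encodes the query time, combined by an inner product. The sensor count, widths, and the interpretation of the output as a displacement at time t are assumptions, not the trained model from the talk.

            import torch
            import torch.nn as nn

            m, p = 100, 64                               # number of load samples (sensors), latent width
            branch = nn.Sequential(nn.Linear(m, 128), nn.ReLU(), nn.Linear(128, p))
            trunk = nn.Sequential(nn.Linear(1, 128), nn.ReLU(), nn.Linear(128, p))

            def predict(load_samples, t_query):
                # load_samples: (batch, m) dynamic load sampled at m time points
                # t_query:      (batch, 1) time at which the response is evaluated
                b = branch(load_samples)
                tr = trunk(t_query)
                return (b * tr).sum(dim=1, keepdim=True)  # G(load)(t) ~ <branch, trunk>

            # usage: predicted response of the beam at time t under a given load history
            load = torch.randn(8, m)
            t = torch.rand(8, 1)
            u_pred = predict(load, t)                    # (8, 1)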
      • [05598] Analysis of the derivative-free method for solving PDEs using neural networks
        • Format : Talk at Waseda University
        • Author(s) :
          • Jihun Han (Dartmouth College)
          • Yoonsang Lee (Dartmouth College)
        • Abstract : The derivative-free loss method (DFLM) uses a stochastic (Feynman-Kac) formulation to solve a certain class of PDEs with neural networks. The method avoids computing derivatives of the neural network, instead using statistical information from local walkers to represent the solution. This work analyzes the effect of the time step and the number of walkers in DFLM. The analysis shows a lower bound on the time step that guarantees a certain accuracy, in contrast with standard numerical methods, which require an upper bound. We also show a linear dependence of the accuracy on the number of walkers.
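        • Illustrative sketch : A minimal PyTorch sketch (our own illustration, with signs and scalings as assumptions based on the expansion E[u(x + sqrt(dt) Z)] ~ u(x) + (dt/2) Lap u(x), not necessarily the exact DFLM formulation) of a derivative-free, Feynman-Kac style loss for the Poisson problem Lap(u) = f: each collocation point spawns Gaussian walkers over one step of size dt, and the network is regressed onto the walker average, so no derivatives of the network are needed. The time step and walker count are exactly the quantities whose effect the talk analyzes.

            import torch
            import torch.nn as nn

            d, dt, n_w = 2, 1e-3, 32                          # dimension, time step, walkers per point (assumed)
            net = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
            f = lambda x: -2.0 * d * torch.ones(x.size(0), 1) # e.g. u = -sum(x_i^2) solves Lap(u) = -2d

            def dflm_loss(x):
                # one Euler-Maruyama step of Brownian walkers started at each collocation point
                z = torch.randn(x.size(0), n_w, d)
                x_next = x.unsqueeze(1) + dt ** 0.5 * z       # (batch, n_w, d)
                with torch.no_grad():                         # bootstrapped target; no network derivatives
                    target = net(x_next).mean(dim=1) - 0.5 * dt * f(x)
                return ((net(x) - target) ** 2).mean()

            x = torch.rand(256, d)                            # interior collocation points
            loss = dflm_loss(x)                               # boundary data would be handled by a separate term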
      • [03762] Convergence analysis of unsupervised Legendre-Galerkin neural networks for linear second-order elliptic PDEs
        • Format : Talk at Waseda University
        • Author(s) :
          • Seungchan Ko (Inha University)
          • Seok-Bae Yun (Sungkyunkwan University)
          • Youngjoon Hong (Sungkyunkwan University)
        • Abstract : In this talk, I will discuss the convergence analysis of unsupervised Legendre-Galerkin neural networks (ULGNet), a deep-learning-based numerical method for solving partial differential equations (PDEs). Unlike existing deep-learning-based numerical methods for PDEs, ULGNet expresses the solution as a spectral expansion with respect to the Legendre basis and predicts the coefficients with deep neural networks by solving a variational residual minimization problem. Using the fact that the corresponding loss function is equivalent to the residual induced by the linear algebraic system determined by the choice of basis functions, we prove that the minimizer of the discrete loss function converges to the weak solution of the PDEs. Numerical evidence will also be provided to support the theoretical result. Key technical tools include a variant of the universal approximation theorem for bounded neural networks, the analysis of the stiffness and mass matrices, and the uniform law of large numbers in terms of Rademacher complexity.
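        • Illustrative sketch : A minimal PyTorch/NumPy sketch (our own illustration, not the authors' code) of the ULGNet idea for -u'' = f on (-1,1) with homogeneous Dirichlet conditions: a network maps samples of f to coefficients in the compact Legendre basis phi_k = L_k - L_{k+2}, and the loss is the residual of the resulting Galerkin linear system (for this basis the stiffness matrix of -u'' is diagonal with entries 4k+6). The truncation level and network size are assumptions.

            import numpy as np
            import torch
            import torch.nn as nn
            from numpy.polynomial import legendre as leg

            N, n_q = 16, 64
            xq, wq = leg.leggauss(n_q)                        # Gauss-Legendre quadrature nodes and weights
            phi = np.stack([leg.Legendre.basis(k)(xq) - leg.Legendre.basis(k + 2)(xq) for k in range(N)])
            phi = torch.tensor(phi, dtype=torch.float32)      # (N, n_q) basis values at the quadrature nodes
            wq_t = torch.tensor(wq, dtype=torch.float32)
            A_diag = torch.tensor([4.0 * k + 6.0 for k in range(N)])   # diagonal stiffness of phi_k for -u''

            net = nn.Sequential(nn.Linear(n_q, 128), nn.Tanh(), nn.Linear(128, N))  # f samples -> coefficients

            def loss_fn(f_samples):
                # f_samples: (batch, n_q) values of the forcing at the quadrature nodes
                c = net(f_samples)                            # predicted Legendre coefficients
                b = (f_samples * wq_t) @ phi.T                # b_k = (f, phi_k) by quadrature
                return ((A_diag * c - b) ** 2).mean()         # residual of the Galerkin system A c = b

            f_samples = torch.sin(np.pi * torch.tensor(xq, dtype=torch.float32)).repeat(32, 1)
            print(loss_fn(f_samples).item())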
      • [04540] Bi-orthogonal fPINN: A physics-informed neural network method for solving time-dependent stochastic fractional PDEs
        • Format : Talk at Waseda University
        • Author(s) :
          • Lei Ma (Shanghai Normal University)
        • Abstract : Mathematical models that account for nonlocal interactions with uncertainty quantification can be formulated as stochastic fractional partial differential equations (SFPDEs). There are many challenges in solving SFPDEs numerically, especially for long-time integration. Here, we combine the bi-orthogonal (BO) method for representing stochastic processes with physics-informed neural networks (PINNs) for solving partial differential equations to formulate the bi-orthogonal fPINN method (BO-fPINN) for solving time-dependent SFPDEs. We demonstrate the effectiveness of the BO-fPINN method on different benchmark problems.
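        • Illustrative sketch : A minimal NumPy sketch of one ingredient an fPINN-style method needs: the L1 discretization of a Caputo time-fractional derivative of order alpha in (0,1) on a uniform grid. The bi-orthogonal expansion of the stochastic solution is not shown, and the test function is illustrative.

            import numpy as np
            from math import gamma

            def caputo_l1(u, dt, alpha):
                # u: values u(t_0), ..., u(t_n) on a uniform grid with spacing dt
                n = len(u) - 1
                k = np.arange(n)
                b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)   # L1 weights
                du = np.diff(u)[::-1]                           # u(t_{n-k}) - u(t_{n-k-1}), k = 0..n-1
                return (b * du).sum() * dt ** (-alpha) / gamma(2 - alpha)

            # check against the exact Caputo derivative of u(t) = t^2: 2 t^(2-alpha) / Gamma(3-alpha)
            alpha, dt, T = 0.5, 1e-3, 1.0
            t = np.arange(0.0, T + dt / 2, dt)
            approx = caputo_l1(t ** 2, dt, alpha)
            exact = 2.0 * T ** (2 - alpha) / gamma(3 - alpha)
            print(approx, exact)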