Registered Data

[00703] Combining machine learning with domain decomposition and multilevel methods

  • Session Time & Room :
    • 00703 (1/2) : 4E (Aug.24, 17:40-19:20) @E812
    • 00703 (2/2) : 5B (Aug.25, 10:40-12:20) @E812
  • Type : Proposal of Minisymposium
  • Abstract : In this minisymposium, recent advances in using machine learning within domain decomposition and multilevel methods will be discussed, as well as the application of domain decomposition and multilevel techniques to improve different aspects of machine learning algorithms.
  • Organizer(s) : Victorita Dolean, Alexander Heinlein, Axel Klawonn, Rolf Krause
  • Classification : 68T07, 65M55, 65N55, 65K10
  • Minisymposium Program :
    • 00703 (1/2) : 4E @E812 [Chair: Axel Klawonn]
      • [03764] A Domain Decomposition-Based CNN-DNN Architecture for Model Parallel Training
        • Format : Talk at Waseda University
        • Author(s) :
          • Axel Klawonn (University of Cologne)
          • Martin Lanser (University of Cologne)
          • Janine Weber (University of Cologne)
        • Abstract : In this talk, a novel domain decomposition-based CNN-DNN (convolutional/deep neural network) architecture is presented that naturally supports a model parallel training strategy and that is loosely inspired by two-level domain decomposition methods. Experimental results are shown for different 2D image classification problems as well as for the classification of 3D computed tomography (CT) scans. The results show that the proposed approach can significantly accelerate the required training time without losing accuracy in most cases.
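        • Code sketch : A minimal, hypothetical PyTorch sketch of the idea described in the abstract: an image split into a 2x2 grid of subdomains, one small local CNN per subdomain, and a global DNN that combines the local outputs. All layer sizes, names, and the combination strategy are assumptions for illustration, not taken from the talk.
          ```python
          # Hypothetical illustration only: all names and sizes are assumptions.
          import torch
          import torch.nn as nn

          class LocalCNN(nn.Module):
              """Small CNN applied to one image subdomain (can live on its own GPU)."""
              def __init__(self, in_channels, num_classes):
                  super().__init__()
                  self.features = nn.Sequential(
                      nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
                      nn.MaxPool2d(2),
                      nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1),
                  )
                  self.head = nn.Linear(32, num_classes)

              def forward(self, x):
                  return self.head(self.features(x).flatten(1))

          class CNNDNN(nn.Module):
              """Local CNNs on image subdomains plus a global DNN combining them."""
              def __init__(self, in_channels=1, num_classes=10, grid=2):
                  super().__init__()
                  self.grid = grid
                  self.subnets = nn.ModuleList(
                      [LocalCNN(in_channels, num_classes) for _ in range(grid * grid)]
                  )
                  self.global_dnn = nn.Sequential(   # "coarse" combination network
                      nn.Linear(grid * grid * num_classes, 64), nn.ReLU(),
                      nn.Linear(64, num_classes),
                  )

              def forward(self, x):
                  _, _, H, W = x.shape
                  h, w = H // self.grid, W // self.grid
                  outs = []
                  for i in range(self.grid):
                      for j in range(self.grid):
                          patch = x[:, :, i * h:(i + 1) * h, j * w:(j + 1) * w]
                          outs.append(self.subnets[i * self.grid + j](patch))
                  return self.global_dnn(torch.cat(outs, dim=1))
          ```
          Since each local CNN only ever sees its own subdomain, the local networks can be placed on different devices and trained concurrently, which is where the model parallelism mentioned in the abstract comes from.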
      • [03312] DNN-MG: A Hybrid Neural Network/Finite Element Method
        • Format : Talk at Waseda University
        • Author(s) :
          • Nils Margenberg (Helmut Schmidt University Hamburg)
          • Robert Jendersie (Otto von Guericke University Magdeburg)
          • Christian Lessig (Otto von Guericke University Magdeburg)
          • Thomas Richter (Otto von Guericke University Magdeburg)
        • Abstract : The Deep Neural Network Multigrid Solver (DNN-MG) augments classical finite element simulations in fluid dynamics with deep neural networks to improve computational efficiency. To achieve this, it combines a geometric multigrid solver with a DNN that is used when a full resolution of the effects is not feasible or efficient. Our method's efficiency, generalizability, and scalability are demonstrated through applications to 3D benchmark simulations of the Navier-Stokes equations.
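        • Code sketch : A minimal, hypothetical sketch of the general DNN-MG idea on a 1D Poisson toy problem: a coarse-grid solve supplies the large-scale solution, and a small network predicts the fine-scale correction the coarse level cannot resolve. The actual method targets the Navier-Stokes equations with a full geometric multigrid cycle; the grids, network, and workflow below are illustrative assumptions only.
          ```python
          import numpy as np
          import torch
          import torch.nn as nn

          def poisson_matrix(n):
              """1D Poisson FD matrix, homogeneous Dirichlet BCs, mesh size h = 1/(n+1)."""
              h = 1.0 / (n + 1)
              return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
                      - np.diag(np.ones(n - 1), -1)) / h ** 2

          def prolongate(u_c):
              """Linear interpolation from a coarse grid (n points) to 2n+1 points."""
              u_f = np.zeros(2 * len(u_c) + 1)
              u_f[1::2] = u_c                                 # coincident points
              u_f[2:-1:2] = 0.5 * (u_c[:-1] + u_c[1:])        # interior midpoints
              u_f[0], u_f[-1] = 0.5 * u_c[0], 0.5 * u_c[-1]   # next to the boundary
              return u_f

          class CorrectionNet(nn.Module):
              """DNN mapping the fine-grid residual to a fine-scale correction."""
              def __init__(self, n_fine):
                  super().__init__()
                  self.net = nn.Sequential(nn.Linear(n_fine, 64), nn.Tanh(),
                                           nn.Linear(64, n_fine))
              def forward(self, r):
                  return self.net(r)

          def dnn_mg_step(f_fine, n_coarse, net):
              """Coarse solve -> prolongation -> DNN fine-scale correction."""
              n_fine = 2 * n_coarse + 1
              A_c, A_f = poisson_matrix(n_coarse), poisson_matrix(n_fine)
              u = prolongate(np.linalg.solve(A_c, f_fine[1::2]))  # coarse-scale part
              r = f_fine - A_f @ u                                # fine-grid residual
              with torch.no_grad():  # net assumed pre-trained; training omitted here
                  du = net(torch.tensor(r, dtype=torch.float32)).numpy()
              return u + du
          ```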
    • 00703 (2/2) : 5B @E812
      • [04367] Improved Accuracy of Physics-Informed Neural Networks Using a Two-Level Training Approach and Lagrange Multipliers
        • Format : Online Talk on Zoom
        • Author(s) :
          • Deok-Kyu Jang (Kyung Hee University)
          • Kyungsoo Kim (Kyung Hee University)
          • Hyea Hyun Kim (Kyung Hee University)
        • Abstract : In this talk, we introduce efficient techniques to enhance the accuracy of Physics-Informed Neural Networks (PINNs) for solving second-order elliptic problems. We first present a two-level training approach that incorporates a scaling process to capture high-frequency solution components more effectively in the first training stage, followed by a post-processing residual training step to resolve the remaining low-frequency components. We also introduce a non-overlapping domain decomposition method for PINNs in which we employ Lagrange multipliers to enforce suitable interface and boundary conditions so as to further improve the solution accuracy. We demonstrate the effectiveness of the proposed methods through numerical test examples.
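        • Code sketch : A minimal, hypothetical PyTorch sketch of the Lagrange-multiplier idea for two non-overlapping subdomain PINNs on the 1D problem -u'' = f on (0,1), split at x = 0.5. Only solution continuity at the interface is enforced here, via a min-max update (minimize over network weights, maximize over the multiplier); the talk's actual formulation, including flux conditions and the two-level training stage, may differ.
          ```python
          import torch
          import torch.nn as nn

          def mlp():
              return nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                                   nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))

          u1, u2 = mlp(), mlp()          # subdomain networks on (0, 0.5) and (0.5, 1)
          x_if = torch.tensor([[0.5]])   # interface point
          lam = torch.zeros(1, requires_grad=True)   # Lagrange multiplier

          def pde_residual(u, x):
              """Residual of -u'' = f with f = pi^2 sin(pi x) (exact u = sin(pi x))."""
              x = x.requires_grad_(True)
              ux = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
              uxx = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
              return -uxx - torch.pi ** 2 * torch.sin(torch.pi * x)

          opt_min = torch.optim.Adam(list(u1.parameters()) + list(u2.parameters()),
                                     lr=1e-3)
          opt_max = torch.optim.Adam([lam], lr=1e-2, maximize=True)  # ascent on lam

          for step in range(5000):
              x1 = 0.5 * torch.rand(64, 1)          # collocation points in (0, 0.5)
              x2 = 0.5 + 0.5 * torch.rand(64, 1)    # collocation points in (0.5, 1)
              jump = (u1(x_if) - u2(x_if)).sum()    # interface continuity gap
              bc = (u1(torch.zeros(1, 1)) ** 2 + u2(torch.ones(1, 1)) ** 2).sum()
              loss = (pde_residual(u1, x1) ** 2).mean() \
                   + (pde_residual(u2, x2) ** 2).mean() + bc + (lam * jump).sum()
              opt_min.zero_grad(); opt_max.zero_grad()
              loss.backward()
              opt_min.step(); opt_max.step()        # min over weights, max over lam
          ```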
      • [04378] A Splitting Approach of Multilevel Optimization with an Application to Physics Informed Neural Networks
        • Format : Talk at Waseda University
        • Author(s) :
          • Valentin Mercier (Université de Toulouse, IRIT, CERFACS, BRLi)
          • Serge Gratton (Université de Toulouse, INP-ENSEEIHT, IRIT, ANITI)
          • Philippe Toint (Namur Center for Complex Systems (naXys), University of Namur)
          • Elisa Riccietti (Université de Lyon, INRIA, EnsL, UCBL, CNRS)
        • Abstract : We propose a multilevel optimization algorithm, based on coordinate-block descent, to solve nonlinear problems while maintaining the advantages of multilevel methods. We demonstrate its effectiveness in solving complex Poisson problems with neural networks (NNs) trained via the PINN approach. We address the unique challenges posed by NNs, such as the F-principle, by employing frequency-aware network architectures. Overall, our approach offers a cost-effective solution for solving complex nonlinear optimization problems using neural networks.
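        • Code sketch : A hypothetical sketch of a block-coordinate, multilevel-style training loop: the network is split into a low-frequency block and a high-frequency (Fourier-feature style) block, and the blocks are updated alternately, with more of the cheaper low-frequency steps. The splitting, step counts, and architecture are assumptions for illustration only.
          ```python
          import torch
          import torch.nn as nn

          class FrequencyAwareNet(nn.Module):
              """Low-frequency block plus a high-frequency (Fourier-feature) block."""
              def __init__(self):
                  super().__init__()
                  self.low = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                                           nn.Linear(32, 1))
                  self.high = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                                            nn.Linear(32, 1))

              def forward(self, x):
                  feats = torch.cat([torch.sin(8 * x), torch.cos(8 * x)], dim=1)
                  return self.low(x) + self.high(feats)

          def train_multilevel(net, loss_fn, cycles=100, coarse_steps=10,
                               fine_steps=2):
              """Block-coordinate descent: alternate updates of the two blocks."""
              opt_low = torch.optim.Adam(net.low.parameters(), lr=1e-3)
              opt_high = torch.optim.Adam(net.high.parameters(), lr=1e-3)
              for _ in range(cycles):
                  for _ in range(coarse_steps):  # "coarse level": low-freq block
                      opt_low.zero_grad(); loss_fn(net).backward(); opt_low.step()
                  for _ in range(fine_steps):    # "fine level": high-freq block
                      opt_high.zero_grad(); loss_fn(net).backward(); opt_high.step()

          # toy usage: fit a target with one low and one high frequency
          x = torch.linspace(0, 1, 128).unsqueeze(1)
          y = torch.sin(2 * torch.pi * x) + 0.1 * torch.sin(16 * torch.pi * x)
          net = FrequencyAwareNet()
          train_multilevel(net, lambda m: ((m(x) - y) ** 2).mean())
          ```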
      • [05268] Combining physics-informed neural networks with multilevel domain decomposition
        • Format : Online Talk on Zoom
        • Author(s) :
          • Alexander Heinlein (Delft University of Technology (TU Delft))
          • Victorita Dolean Maini (University of Strathclyde)
          • Siddhartha Mishra (ETH Zurich)
          • Ben Moseley (ETH Zurich)
        • Abstract : Physics-informed neural networks (PINNs) are a powerful approach for solving problems related to differential equations. However, PINNs often struggle when the underlying equations have high-frequency and/or multi-scale solutions. In this work, we improve the performance of PINNs in this regime by combining them with domain decomposition. We build on the existing finite basis physics-informed neural networks (FBPINNs) framework and show that adding multilevel modelling to FBPINNs improves their performance.
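        • Code sketch : A hypothetical 1D sketch of the multilevel FBPINN ansatz: on each level, overlapping subdomains carry small local networks that are blended by window functions, with coarser levels using fewer, wider subdomains. Real FBPINNs use partition-of-unity windows and are trained against a PDE residual; the simple Gaussian blend below is an illustrative simplification.
          ```python
          import torch
          import torch.nn as nn

          def window(x, center, width):
              """Smooth window blending a local network into the global solution."""
              return torch.exp(-((x - center) / width) ** 2)

          class MultilevelFBPINN(nn.Module):
              """Sum over levels and subdomains of window(x) * local_network(x)."""
              def __init__(self, levels=(1, 2, 4)):  # subdomains per level on (0, 1)
                  super().__init__()
                  self.nets = nn.ModuleList()
                  self.centers, self.widths = [], []
                  for n_sub in levels:
                      for k in range(n_sub):
                          self.nets.append(nn.Sequential(
                              nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1)))
                          self.centers.append((k + 0.5) / n_sub)
                          self.widths.append(1.0 / n_sub)

              def forward(self, x):
                  out = torch.zeros_like(x)
                  for net, c, w in zip(self.nets, self.centers, self.widths):
                      out = out + window(x, c, w) * net((x - c) / w)  # local input
                  return out
          ```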
      • [04823] Enhancing training of scientific machine learning applications
        • Format : Talk at Waseda University
        • Author(s) :
          • Alena Kopanicakova (Brown University)
        • Abstract : Scientific machine learning has shown potential in creating efficient surrogates for complex multiscale and multiphysics problems. However, the computational cost of training these surrogates is prohibitively high. We propose a training procedure that utilizes the layer-wise decomposition of a deep neural network in order to construct a nonlinear preconditioner for the standard L-BFGS optimizer. The convergence properties of the novel training method will be analyzed by means of numerical experiments.
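        • Code sketch : A hypothetical PyTorch sketch of the layer-wise idea: before each global L-BFGS step, a sweep of cheap local updates is applied to one layer at a time, similar in spirit to a multiplicative Schwarz sweep in parameter space. The actual construction of the nonlinear preconditioner in the talk may differ; the problem, network, and step counts below are assumptions.
          ```python
          import torch
          import torch.nn as nn

          torch.manual_seed(0)
          net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                              nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
          x = torch.linspace(-1, 1, 256).unsqueeze(1)
          y = torch.sin(3 * x)

          def loss_fn():
              return ((net(x) - y) ** 2).mean()

          global_opt = torch.optim.LBFGS(net.parameters(), max_iter=5)

          def closure():
              """Closure required by the PyTorch L-BFGS optimizer."""
              global_opt.zero_grad()
              loss = loss_fn()
              loss.backward()
              return loss

          layers = [m for m in net if isinstance(m, nn.Linear)]

          for epoch in range(20):
              # 1) layer-wise "preconditioning" sweep: a few local steps per layer
              for layer in layers:
                  local_opt = torch.optim.SGD(layer.parameters(), lr=1e-2)
                  for _ in range(3):
                      local_opt.zero_grad()
                      loss_fn().backward()
                      local_opt.step()
              # 2) global step with the standard L-BFGS optimizer
              global_opt.step(closure)
          ```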