Registered Data

[00294] Machine Learning and Differential Equations

  • Session Date & Time :
    • 00294 (1/2) : 4C (Aug. 24, 13:20-15:00)
    • 00294 (2/2) : 4D (Aug. 24, 15:30-17:10)
  • Type : Proposal of Minisymposium
  • Abstract : This Minisymposium aims to explore the multiple relations between Machine Learning and Differential Equations. On the one hand, Machine Learning can be used to learn the solutions of challenging, high-dimensional, or parameterized Differential Equations. On the other hand, some network architectures, such as ResNet or Fractional-DNN, can be understood as time discretizations of Differential Equations (a schematic sketch of this correspondence is given below, after the speakers list). This interplay of research directions leads to exciting problem formulations and the opportunity to benefit from the respective expertise.
  • Organizer(s) : Roland Maier, Evelyn Herberg
  • Classification : 34A25, 49J15, 65N30, 68T07
  • Speakers Info :
    • Enrique Zuazua (FAU Erlangen)
    • Aras Bacho (LMU München)
    • Randy Price (George Mason University)
    • Daniel Peterseim (Universität Augsburg)
    • Axel Klawonn (Universität Köln)
    • Birgit Hillebrecht (Universität Stuttgart)
    • Sara Bicego (Imperial College London)
    • Evelyn Herberg (Universität Heidelberg)
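  • Illustration : The ResNet-as-time-discretization correspondence mentioned in the abstract, as a minimal PyTorch sketch (names and sizes are illustrative): each residual block x + h*f(x) is one explicit Euler step for the ODE x'(t) = f(x(t)).

    ```python
    import torch
    import torch.nn as nn

    class EulerResNet(nn.Module):
        """Residual blocks read as forward-Euler steps of x'(t) = f(x(t)):
        x_{k+1} = x_k + h * f_k(x_k), with step size h."""

        def __init__(self, dim: int, n_layers: int, h: float = 0.1):
            super().__init__()
            self.h = h  # step size of the underlying time discretization
            self.blocks = nn.ModuleList(
                [nn.Sequential(nn.Linear(dim, dim), nn.Tanh()) for _ in range(n_layers)]
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            for f in self.blocks:
                x = x + self.h * f(x)  # one explicit Euler step per block
            return x

    print(EulerResNet(dim=4, n_layers=10)(torch.randn(8, 4)).shape)  # (8, 4)
    ```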
  • Talks in Minisymposium :
    • [01809] Certified machine learning: Rigorous a posteriori error bounds for physics-informed neural networks
      • Author(s) :
        • Birgit Hillebrecht (SimTech, University of Stuttgart)
        • Benjamin Unger (SimTech, University of Stuttgart)
      • Abstract : Prediction error quantification has been left out of most methodological investigations of neural networks, for both purely data-driven and physics-informed approaches. Going beyond statistical investigations and generic a priori results on the approximation capabilities of neural networks, we present a rigorous upper bound on the prediction error of physics-informed neural networks applied to linear PDEs. Our bound can be calculated without knowledge of the true solution, using only the characteristic properties of the underlying dynamical system.
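      • Sketch : A plausible shape for such a residual-based bound, via a standard Grönwall-type argument (notation ours; the talk's exact bound may differ): the PINN residual is computable, and a characteristic property of the system, here the logarithmic norm, propagates it to an error bound.

        ```latex
        % Let \hat{x} be a PINN approximation to the linear system \dot{x} = Ax.
        % The error e(t) = \hat{x}(t) - x(t) satisfies \dot{e} = Ae + r with the
        % computable residual r(t) = \dot{\hat{x}}(t) - A\hat{x}(t).
        % A Groenwall argument with the logarithmic norm \mu(A) then gives
        \[
          \|e(t)\| \le e^{\mu(A)\,t}\,\|e(0)\|
            + \int_0^t e^{\mu(A)(t-s)}\,\|r(s)\|\,\mathrm{d}s ,
        \]
        % which requires only \hat{x} and properties of A, not the true solution.
        ```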
    • [04004] Control of kinetic collective dynamics by deep neural feedback laws
      • Author(s) :
        • Sara Bicego (Imperial College London)
        • Giacomo Albi (Università degli Studi di Verona)
        • Dante Kalise (Imperial College London)
      • Abstract : We address how to condition high-dimensional multi-agent systems towards designed cooperative goals via dynamic optimization. The problem reads as the minimization of a cost functional subject to individual-based dynamics; thus, its solution becomes infeasible as the number of agents grows. We propose an NN-accelerated Boltzmann scheme for approaching the solution from suboptimality. In the quasi-invariant limit of binary interactions, we approximate the mean-field PDE governing the dynamics of the agents' distribution.
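      • Sketch : A minimal NumPy sketch of a binary-interaction (Nanbu-type) Monte Carlo step with a neural feedback law, to illustrate the kind of scheme described above; the hand-coded control_net stands in for a trained network, and the interaction rule and all names are illustrative assumptions, not the authors' scheme.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)

        def control_net(vi, vj):
            # Placeholder for a trained feedback network (hypothetical): a
            # hand-coded law steering each pair's mean velocity toward zero.
            return -0.5 * (vi + vj)

        def nanbu_step(x, v, eps):
            """One binary-interaction Monte Carlo step (schematic). In the
            quasi-invariant limit eps -> 0, iterating such steps approximates
            the mean-field PDE for the agents' distribution."""
            n = len(x)
            perm = rng.permutation(n)
            i, j = perm[: n // 2], perm[n // 2 : 2 * (n // 2)]
            u = control_net(v[i], v[j])        # feedback control, one value per pair
            vi, vj = v[i].copy(), v[j].copy()
            v[i] = vi + eps * ((vj - vi) + u)  # alignment interaction + control
            v[j] = vj + eps * ((vi - vj) + u)
            x += eps * v                       # free transport of all agents
            return x, v

        x, v = rng.normal(size=100), rng.normal(size=100)
        for _ in range(200):
            x, v = nanbu_step(x, v, eps=0.05)
        print(abs(v.mean()))                   # mean velocity is driven toward 0
        ```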
    • [04705] An Operator-Learning Approach for Computational Fluid Dynamics
      • Author(s) :
        • Viktor Hermann Grimm (University of Cologne)
        • Axel Klawonn (University of Cologne)
        • Alexander Heinlein (Delft University of Technology (TU Delft))
      • Abstract : We present an operator-learning approach for Computational Fluid Dynamics using Convolutional Neural Networks (CNNs). We aim to approximate the solution operator of the incompressible Navier-Stokes equations in varying geometries using CNNs trained only on the underlying physics; no reference simulations are required for training. We show that our method predicts the flow field in various geometries sufficiently accurately, and we compare its performance to traditional numerical methods.
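      • Sketch : A minimal PyTorch sketch of the physics-only training idea (architecture, loss, and grid are illustrative assumptions, not the authors' setup); for brevity, only the continuity residual of the incompressible equations is penalized, whereas a full method would also include the momentum residuals and boundary conditions.

        ```python
        import torch
        import torch.nn as nn

        class FlowCNN(nn.Module):
            """Maps a binary geometry mask to a (u, v, p) field (illustrative)."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 3, 3, padding=1),  # output channels: u, v, p
                )
            def forward(self, mask):
                return self.net(mask)

        def ddx(f, h):  # central difference along x (drops boundary columns)
            return (f[..., :, 2:] - f[..., :, :-2]) / (2 * h)

        def ddy(f, h):  # central difference along y (drops boundary rows)
            return (f[..., 2:, :] - f[..., :-2, :]) / (2 * h)

        def physics_loss(fields, mask, h=1.0 / 64):
            """Data-free loss: penalize the continuity residual u_x + v_y on
            the fluid region; no reference simulation enters the training."""
            u, v = fields[:, 0:1], fields[:, 1:2]
            div = ddx(u, h)[..., 1:-1, :] + ddy(v, h)[..., :, 1:-1]
            fluid = mask[..., 1:-1, 1:-1]  # mask: 1 = fluid, 0 = solid
            return ((div * fluid) ** 2).mean()

        model = FlowCNN()
        mask = torch.ones(4, 1, 64, 64)       # toy geometry batch, all fluid
        loss = physics_loss(model(mask), mask)
        loss.backward()                        # gradients come from physics alone
        ```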
    • [04777] Dynamic Control in Machine Learning: Geometric Interpretation of Deep Neural Networks for Multi-Classification and Universal Approximation
      • Author(s) :
        • Martin Sebastian Hernandez Salinas (Friedrich-Alexander-Universität Erlangen-Nürnberg)
        • Enrique Zuazua (Friedrich-Alexander-Universität Erlangen-Nürnberg)
      • Abstract : In this talk, we present recent results on the interplay between control theory and Machine Learning. We analyze the Residual Neural Network (ResNet) architecture and the Multilayer Perceptron with minimal width. Adopting a dynamic control and geometric interpretation of these neural networks, we train them in a constructive manner to solve multi-classification problems and achieve simultaneous controllability. We also derive universal approximation theorems in $L^p$ spaces for both architectures.
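      • Sketch : A common control-theoretic reading of the ResNet dynamics, stated here for orientation (notation ours; the talk's precise formulation may differ):

        ```latex
        % Continuous-time ResNet as a control system: the weights act as controls.
        \[
          \dot{x}(t) = W(t)\,\sigma\bigl(A(t)\,x(t) + b(t)\bigr), \qquad t \in (0, T).
        \]
        % Multi-classification then amounts to simultaneous controllability: a
        % single control (W, A, b) steering every sample x_i(0) with label c_i
        % into its prescribed target set at time T.
        ```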
    • [05167] Fourier Neural Poisson Reconstruction
      • Author(s) :
        • Aras Bacho (Ludwig-Maximilians-Universität München)
        • Héctor Andrade Loarca (Ludwig-Maximilians-Universität München)
        • Julius Hege (Ludwig-Maximilians-Universität München)
        • Gitta Kutyniok (Ludwig-Maximilians-Universität München)
      • Abstract : 3D shape Poisson reconstruction is a method for recovering a 3D mesh from an oriented point cloud by solving the Poisson equation. It is widely used in industrial and academic 3D reconstruction applications, but typically requires a large number of points for a reasonable reconstruction. In this talk, I will present a new approach that utilizes Fourier Neural Operators to improve Poisson reconstruction in the low- and mid-sampling regimes. This method outperforms existing methods in reconstructing fine details and is also resolution-agnostic, which allows for training the network at lower resolutions with less memory usage and evaluating it at higher resolutions with similar performance and far fewer data points. Furthermore, we demonstrate that the Poisson reconstruction problem is well-posed on a theoretical level by providing a universal approximation theorem for the Poisson problem with distributional data using the Fourier Neural Operator, which underpins our practical findings.
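      • Sketch : A minimal PyTorch sketch of one Fourier layer, the building block of a Fourier Neural Operator (simplified: only the lowest corner of frequency modes is retained, and all sizes are illustrative); because the learned weights live in frequency space, the layer can be evaluated on grids of a different resolution than it was trained on, which is the source of the resolution-agnostic behavior mentioned above.

        ```python
        import torch
        import torch.nn as nn

        class SpectralConv2d(nn.Module):
            """One Fourier layer: FFT -> learned linear map on the lowest
            `modes` frequencies -> inverse FFT (schematic)."""
            def __init__(self, channels: int, modes: int):
                super().__init__()
                self.modes = modes
                scale = 1.0 / channels
                self.w = nn.Parameter(
                    scale * torch.randn(channels, channels, modes, modes,
                                        dtype=torch.cfloat)
                )

            def forward(self, x):                 # x: (batch, channels, h, w)
                x_ft = torch.fft.rfft2(x)         # (batch, channels, h, w//2+1)
                out = torch.zeros_like(x_ft)
                m = self.modes
                # mix channels on the retained low-frequency modes only
                out[:, :, :m, :m] = torch.einsum(
                    "bixy,ioxy->boxy", x_ft[:, :, :m, :m], self.w
                )
                return torch.fft.irfft2(out, s=x.shape[-2:])

        layer = SpectralConv2d(channels=8, modes=12)
        print(layer(torch.randn(2, 8, 64, 64)).shape)   # works on 128x128 too
        ```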
    • [05373] Adaptive Time Stepping in Deep Neural Networks
      • Author(s) :
        • Harbir Antil (George Mason University)
        • Hugo Diaz (University of Delaware)
        • Evelyn Herberg (Universität Heidelberg)
      • Abstract : We highlight the common features of optimal control problems with partial differential equations and deep learning problems. Furthermore, we introduce a new variable in the neural network architecture, which can be interpreted as a time step size. The proposed framework can be applied to any of the existing networks, such as ResNet or Fractional-DNN, and is shown to help overcome the vanishing and exploding gradient issues. The proposed approach is applied to an ill-posed 3D Maxwell's equation.
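      • Sketch : A minimal PyTorch sketch of a ResNet with one trainable step size per layer, illustrating the idea of the abstract (initialization, sizes, and activation are assumptions, not the authors' construction):

        ```python
        import torch
        import torch.nn as nn

        class AdaptiveStepResNet(nn.Module):
            """ResNet with a learnable step size per layer:
            x_{k+1} = x_k + tau_k * f_k(x_k), where the tau_k are optimized
            jointly with the network weights."""

            def __init__(self, dim: int, n_layers: int):
                super().__init__()
                self.blocks = nn.ModuleList(
                    [nn.Sequential(nn.Linear(dim, dim), nn.Tanh())
                     for _ in range(n_layers)]
                )
                # one trainable step size tau_k per layer, init. at 1/n_layers
                self.tau = nn.Parameter(torch.full((n_layers,), 1.0 / n_layers))

            def forward(self, x):
                for k, f in enumerate(self.blocks):
                    x = x + self.tau[k] * f(x)  # Euler step with learned tau_k
                return x

        model = AdaptiveStepResNet(dim=4, n_layers=16)
        print(model(torch.randn(8, 4)).shape)   # torch.Size([8, 4])
        ```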