Registered Data

[00825] Numerical Time Integration Algorithms and Software for Machine Learning

  • Session Time & Room : 4E (Aug.24, 17:40-19:20) @E508
  • Type : Proposal of Minisymposium
  • Abstract : Decades of research and development in numerical time integration algorithms and software have focused on solving time-dependent differential equations that arise from mathematical models of physical phenomena. Recently, a clear nexus between time integration and machine learning (ML) has been established, giving rise to many new opportunities for novel time-integration algorithm research and software development. The goal of this minisymposium is to shine a light on the ML application area by featuring talks that demonstrate numerical time integration algorithms and software benefiting ML, or that discuss how they can be geared towards ML. Prepared by LLNL under Contract DE-AC52-07NA27344. LLNL-ABS-843420.
  • Organizer(s) : Cody Balos, Richard Archibald
  • Classification : 65L99, 65M99, 68T07, 65C99
  • Minisymposium Program :
    • 00825 (1/1) : 4E @E508 [Chair: Cody Balos]
      • [04991] The Roles of Numerical Time Integration Algorithms and Software in the Machine Learning Revolution
        • Format : Talk at Waseda University
        • Author(s) :
          • Cody Balos (Lawrence Livermore National Lab)
        • Abstract : Recently, large language models such as OpenAI’s ChatGPT have sparked mainstream discussion of Artificial Intelligence and Machine Learning. Meanwhile, the scientific community has shown increased interest in Scientific Machine Learning (SciML), and funding opportunities have shifted substantially towards work with at least some ML component. In this talk, I will explore some examples of how numerical time integration algorithms and software, which have been critical to scientific computing for decades, are playing a part in this ML revolution. As part of this exploration, I will also discuss what we are doing in the SUNDIALS time integration library to enable ML applications. LLNL-ABS-847841.
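        • Illustration : As an editorial, hedged sketch of where a time integrator sits inside an ML model (this is generic, not the SUNDIALS interface discussed in the talk), the snippet below implements a minimal neural-ODE-style forward pass in which a small network defines the right-hand side and the prediction is one ODE solve; all weights and names are hypothetical.
          ```python
          # Hedged sketch: a time integrator inside an ML model (neural-ODE style).
          # Hypothetical toy weights; NOT the SUNDIALS API from the talk.
          import numpy as np
          from scipy.integrate import solve_ivp

          rng = np.random.default_rng(0)
          W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)   # hidden-layer weights
          W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)   # output-layer weights

          def neural_rhs(t, u):
              """Right-hand side f(u) given by a small neural network."""
              return W2 @ np.tanh(W1 @ u + b1) + b2

          def forward(u0, T=1.0):
              """The model's forward pass is a single ODE solve from 0 to T."""
              sol = solve_ivp(neural_rhs, (0.0, T), u0, method="BDF",
                              rtol=1e-6, atol=1e-9)
              return sol.y[:, -1]

          print(forward(np.array([1.0, 0.0])))
          ```
          Training such a model requires differentiating through (or around) the ODE solve, which is where adjoint-capable time integration libraries enter the picture.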
      • [03318] TransNet: Transferable Neural Networks for Partial Differential Equations
        • Format : Online Talk on Zoom
        • Author(s) :
          • Zezhong Zhang (Florida State University)
          • Feng Bao (Florida State University)
          • Lili Ju (University of South Carolina)
          • Guannan Zhang (Oak Ridge National Laboratory)
        • Abstract : Transfer learning for partial differential equations (PDEs) aims to develop a pre-trained neural network that can be used to solve a wide class of PDEs. Existing transfer learning approaches require substantial information about the target PDE, such as its formulation and/or data on its solution, for pre-training. In this work, we propose to construct transferable neural feature spaces from a pure function-approximation perspective, without using PDE information. The construction of the feature space involves a re-parameterization of the hidden neurons and uses auxiliary functions to tune the resulting feature space. Theoretical analysis shows the high quality of the produced feature space, i.e., uniformly distributed neurons. Extensive numerical experiments verify the outstanding performance of our method, including significantly improved transferability, e.g., using the same feature space for various PDEs with different domains and boundary conditions, and superior accuracy, e.g., a mean squared error several orders of magnitude smaller than that of state-of-the-art methods.
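        • Illustration : A hedged sketch of the feature-space idea the abstract suggests: fixed hidden neurons built from unit-sphere directions, uniform offsets, and a shared shape parameter, with only the linear output layer refit per task. The construction details below are illustrative assumptions, not the paper's exact method.
          ```python
          # Hedged sketch of a transferable random-feature space: hidden neurons
          # tanh(gamma * (a_i . x + r_i)) are fixed once; only the output layer
          # is fit per task by least squares. Details are assumptions.
          import numpy as np

          rng = np.random.default_rng(1)
          d, n_neurons, gamma = 2, 200, 2.0   # gamma would be tuned via auxiliary functions

          A = rng.normal(size=(n_neurons, d))
          A /= np.linalg.norm(A, axis=1, keepdims=True)   # directions uniform on the sphere
          r = rng.uniform(-1.0, 1.0, size=n_neurons)      # offsets spread over the domain

          def features(X):
              """Evaluate the fixed (transferable) feature space at points X."""
              return np.tanh(gamma * (X @ A.T + r))

          # Per-task step: fit only the output layer by least squares.
          X = rng.uniform(-1.0, 1.0, size=(500, d))
          y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])   # toy target
          c, *_ = np.linalg.lstsq(features(X), y, rcond=None)
          print("train RMSE:", np.sqrt(np.mean((features(X) @ c - y) ** 2)))
          ```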
      • [04020] Dissipative residual layers for unsupervised implicit parameterization of data manifolds
        • Format : Online Talk on Zoom
        • Author(s) :
          • Viktor Reshniak (Oak Ridge National Laboratory)
        • Abstract : We propose an unsupervised technique for the implicit parameterization of data manifolds. In our approach, the data are assumed to belong to a lower-dimensional manifold in a higher-dimensional space, and the data points are viewed as the endpoints of trajectories originating outside the manifold. Under this assumption, the data manifold is an attractive manifold of a dynamical system to be estimated. We parameterize such a dynamical system with a residual neural network and propose a spectral localization technique to ensure it is locally attractive in the vicinity of the data. We also present an initialization scheme and discuss the regularization of the proposed residual layers, which we call dissipative bottlenecks.
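        • Illustration : One hedged way to realize a locally attractive residual layer, sketched below, is a negative gradient flow of a learned potential, so that forward iterations are drawn toward the potential's minima; the paper's spectral localization and dissipative-bottleneck constructions are not reproduced here, and all names are hypothetical.
          ```python
          # Hedged sketch: a residual layer x_{k+1} = x_k - h * grad V(x_k), i.e. a
          # negative gradient flow of a learned potential V, one simple dissipative
          # construction. Not the paper's spectral-localization method.
          import torch

          class GradientFlowResidual(torch.nn.Module):
              def __init__(self, dim, hidden=32, step=0.1):
                  super().__init__()
                  self.step = step
                  self.V = torch.nn.Sequential(   # scalar potential V(x)
                      torch.nn.Linear(dim, hidden), torch.nn.Tanh(),
                      torch.nn.Linear(hidden, 1),
                  )

              def forward(self, x):
                  x = x.detach().requires_grad_(True)   # sketch: treat x as a fresh leaf
                  v = self.V(x).sum()
                  (grad,) = torch.autograd.grad(v, x, create_graph=True)
                  return x - self.step * grad           # flows toward minima of V

          layer = GradientFlowResidual(dim=2)
          print(layer(torch.randn(4, 2)).shape)
          ```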
      • [03338] Improved Parallelism and Memory Performance for Differentiating Stiff Differential Equations
        • Format : Online Talk on Zoom
        • Author(s) :
          • Christopher Vincent Rackauckas (JuliaHub, Pumas-AI, MIT)
        • Abstract : Previous work demonstrated trade-offs in performance, numerical stability, and memory usage for ODE solving and the differentiation of solutions. Our new time-stepping methods expose more parallelism and are shown to accelerate small ODE solves, while new GPU-based ODE solvers demonstrate a 10x performance improvement over JAX- and PyTorch-based solvers. New adjoint methods achieve linear cost scaling with respect to the number of parameters in stiff ODEs, as opposed to the cubic scaling of JAX/PyTorch, while limiting memory growth.
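        • Illustration : The scaling claims concern adjoint sensitivity analysis. As a generic, hedged sketch (not the Julia SciML implementation), the snippet below computes dL/dp for a scalar ODE by integrating the adjoint equation lam' = -(df/du)^T lam backwards from lam(T) = dL/du(T), accumulating the gradient integral_0^T lam^T (df/dp) dt along the way.
          ```python
          # Hedged sketch of continuous adjoint sensitivity for u' = f(u, p) with
          # loss L = 0.5 * u(T)^2. Toy problem with analytic Jacobians.
          import numpy as np
          from scipy.integrate import solve_ivp

          p, u0, T = 1.5, 2.0, 1.0
          f = lambda t, u: -p * u                  # u' = -p*u, so u(T) = u0*exp(-p*T)

          fwd = solve_ivp(f, (0.0, T), [u0], dense_output=True, rtol=1e-10, atol=1e-12)
          uT = fwd.y[0, -1]

          def adjoint_rhs(t, z):
              lam, _ = z
              u = fwd.sol(t)[0]                    # forward solution via dense output
              dfdu, dfdp = -p, -u                  # analytic Jacobians for this f
              return [-dfdu * lam,                 # lam' = -(df/du)^T lam
                      -lam * dfdp]                 # negated so that integrating
                                                   # T -> 0 gives +int_0^T lam*df/dp dt

          # Backwards from t=T with lam(T) = dL/du(T) = u(T); quadrature starts at 0.
          back = solve_ivp(adjoint_rhs, (T, 0.0), [uT, 0.0], rtol=1e-10, atol=1e-12)
          print(back.y[1, -1], -T * uT**2)         # numeric gradient vs analytic dL/dp
          ```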