Registered Data

[00087] Intersection of Machine Learning, Dynamical Systems and Control

  • Session Time & Room :
    • 00087 (1/2) : 2C (Aug.22, 13:20-15:00) @A615
    • 00087 (2/2) : 2D (Aug.22, 15:30-17:10) @A615
  • Type : Proposal of Minisymposium
  • Abstract : In recent years, the intersection of machine learning, dynamical systems and control has generated new excitement across several disciplines. On the one hand, machine learning-based algorithms have opened up new opportunities for studying dynamical systems and control problems, particularly in high dimensions. On the other hand, the controlled-dynamical-system perspective on deep learning has brought new insights into machine learning. This minisymposium will bring together experts from different areas to explore these exciting new opportunities. The goal is to stimulate researchers from different communities to think rigorously across disciplines and move toward new questions.
  • Organizer(s) : Jiequn Han, Qianxiao Li, Xiang Zhou
  • Classification : 65Lxx, 65Mxx, 49Mxx, 68T07, Machine Learning, Dynamical Systems, Control Theory
  • Minisymposium Program :
    • 00087 (1/2) : 2C @A615 [Chair: Jiequn Han]
      • [02826] Solving Parametric PDEs by Deep Learning
        • Format : Talk at Waseda University
        • Author(s) :
          • Bin Dong (Peking University)
        • Abstract : Deep learning continues to dominate machine learning and has been successful in computer vision, natural language processing, and beyond. Its impact has now expanded to many research areas in science and engineering. In this talk, I will present a series of our recent works on combining wisdom from traditional numerical PDE methods with machine learning to design data-driven solvers for parametric PDEs, together with their applications in fluid simulations. This is joint work with Professor Jinchao Xu, my former Ph.D. student Yuyan Chen, and my colleagues from the Huawei MindSpore AI + Scientific Computing team and the Shanghai Aircraft Design and Research Institute of the Commercial Aircraft Corporation of China.
      • [02805] Training Deep ResNet with Batch Normalization as a First-order Mean Field Type Problem
        • Format : Talk at Waseda University
        • Author(s) :
          • Phillip Sheung Chi Yam (Department of Statistics, Chinese University of Hong Kong)
        • Abstract : In this talk, we shall discuss a numerical scheme for training deep residual networks that incorporates the popular batch normalization technique into the extended Method of Successive Approximations recently proposed by Li, Chen, Tai and E, Journal of Machine Learning Research (2017), 18: 5998–6026; its effectiveness has been demonstrated in numerical studies. The convergence of the proposed scheme rests on first-order mean field theory, namely the resolution of the corresponding generic first-order mean field type problems inherited from the augmented Hamiltonian, and we shall introduce this new fundamental theory behind the scheme.
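        • Sketch : As a point of reference for the construction above, the following is a minimal, illustrative caricature of the basic Method of Successive Approximations for a scalar residual network; it omits batch normalization, the augmented Hamiltonian and the mean-field formulation that the talk adds, and all constants and names are illustrative assumptions.

```python
# Minimal caricature of basic MSA (the Pontryagin-maximum-principle view of
# training) for a scalar residual network x_{t+1} = x_t + theta_t * tanh(x_t).
# The talk's scheme (extended MSA + batch normalization, mean-field setting)
# is NOT reproduced here; this only shows the forward/backward/Hamiltonian
# structure. All constants are illustrative assumptions.
import numpy as np

T = 10                        # number of residual layers
rng = np.random.default_rng(0)
theta = 0.1 * rng.normal(size=T)
x0, target = 0.5, 1.5         # a single training pair, for readability

for sweep in range(200):      # MSA sweeps
    # 1) forward pass: state trajectory of the residual network
    x = np.empty(T + 1); x[0] = x0
    for t in range(T):
        x[t + 1] = x[t] + theta[t] * np.tanh(x[t])
    # 2) backward pass: costate from the discrete adjoint equation,
    #    p_T = -grad Phi(x_T) with terminal cost Phi(x) = (x - target)^2 / 2
    p = np.empty(T + 1); p[T] = -(x[T] - target)
    for t in reversed(range(T)):
        p[t] = p[t + 1] * (1.0 + theta[t] / np.cosh(x[t]) ** 2)
    # 3) per-layer Hamiltonian step: ascend H = p_{t+1} * theta * tanh(x_t)
    #    (a gradient step stands in for the exact per-layer maximization)
    for t in range(T):
        theta[t] += 0.1 * p[t + 1] * np.tanh(x[t])

print("terminal loss:", 0.5 * (x[T] - target) ** 2)
```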
      • [02933] Dynamics-Quantified Implicit Biases of Large Learning Rates
        • Format : Talk at Waseda University
        • Author(s) :
          • Molei Tao (Georgia Institute of Technology)
        • Abstract : This talk will describe some nontrivial (and pleasant) effects of large learning rates, which are often used in machine learning practice but defy traditional optimization theory. I will first show how large learning rates can lead to quantitative escapes from local minima via chaos, an alternative mechanism to the commonly known noisy escapes driven by stochastic gradients. I will then report how large learning rates provably bias training toward flatter minimizers, which arguably generalize better.
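        • Sketch : A toy illustration (not the speaker's construction) of how a large step size turns gradient descent into a chaotic map; the double-well loss and the step sizes below are assumptions chosen for simplicity.

```python
# Gradient descent on the double-well f(x) = (x^2 - 1)^2 / 4 is the cubic map
#   x <- x - h * x * (x^2 - 1).
# Small h converges to a minimizer at x = +/-1.  At the minima the map's
# derivative is 1 - 2h, so for h > 1 the minima repel while (for h not too
# large) iterates stay bounded and bounce around instead of settling.
import numpy as np

def run_gd(h, x0=0.6, n=2000):
    x, traj = x0, []
    for _ in range(n):
        x = x - h * x * (x**2 - 1.0)      # one gradient step
        traj.append(x)
    return np.array(traj)

for h in (0.5, 1.5):                      # modest step vs. large step
    tail = run_gd(h)[-200:]               # discard the transient
    spread = tail.max() - tail.min()
    print(f"h = {h}: spread of last 200 iterates = {spread:.3f}")
# h = 0.5 converges (spread ~ 0); h = 1.5 never settles (spread of order 1).
```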
      • [04799] An optimal control perspective on diffusion-based generative modeling leading to robust numerical methods
        • Format : Talk at Waseda University
        • Author(s) :
          • Lorenz Richter (Zuse Institute Berlin, dida)
        • Abstract : This talk establishes a connection between generative modeling based on SDEs and three classical fields of mathematics, namely stochastic optimal control, PDEs, and path space measures. These perspectives are of both theoretical and practical value, for instance allowing methods to be transferred from one field to another or leading to novel algorithms for sampling from unnormalized densities. Further, the connection to HJB equations leads to novel loss functions that exhibit favorable statistical properties and result in improved convergence of the respective algorithms.
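        • Sketch : One standard instance of the control connection sketched above, written out in LaTeX; the conventions (noise scaled by √2, quadratic running cost) are choices made here for concreteness, not the talk's exact setting.

```latex
% Controlled diffusion and control cost (illustrative conventions):
\[
  \mathrm{d}X_s = u(X_s, s)\,\mathrm{d}s + \sqrt{2}\,\mathrm{d}W_s, \qquad
  J(u) = \mathbb{E}\!\left[\int_0^T \tfrac12 \lVert u(X_s, s)\rVert^2
         \,\mathrm{d}s + g(X_T)\right].
\]
% The value function V(x,t) = \inf_u \mathbb{E}[\,\text{cost-to-go} \mid X_t = x\,]
% solves the Hamilton--Jacobi--Bellman (HJB) equation
\[
  \partial_t V + \Delta V - \tfrac12 \lVert \nabla V \rVert^2 = 0,
  \qquad V(x, T) = g(x),
\]
% and the optimal drift is the gradient feedback u^*(x,t) = -\nabla V(x,t).
% Losses for learning u can be built either from J or from the HJB residual.
```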
    • 00087 (2/2) : 2D @A615 [Chair: Qianxiao Li]
      • [04826] Learning high-dimensional feedback laws for collective dynamics control
        • Format : Talk at Waseda University
        • Author(s) :
          • Dante Kalise (Imperial College London)
          • Giacomo Albi (University of Verona)
          • Sara Bicego (Imperial College London)
        • Abstract : We discuss the control of collective dynamics for an ensemble of high-dimensional particles. The collective behaviour of the system is modelled using a kinetic approach, reducing the problem to efficiently sampling binary interactions between controlled agents. However, as individual agents are high-dimensional themselves, the controlled binary interactions correspond to large-scale dynamic programming problems, for which we propose a supervised learning approach based on discrete-time State-dependent Riccati Equations and recurrent neural networks.
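        • Sketch : A minimal illustration of the state-dependent Riccati construction mentioned above, in a continuous-time form for brevity (the talk uses discrete-time SDREs); the example system, weights and step size are assumptions.

```python
# SDRE feedback: write the nonlinear dynamics in state-dependent coefficient
# form x' = A(x) x + B u, freeze A at the current state, solve the resulting
# Riccati equation, and apply the LQR-style feedback u = -R^{-1} B^T P(x) x.
# In the talk's setting, such solves generate data for supervised learning.
import numpy as np
from scipy.linalg import solve_continuous_are

def sdre_feedback(x, A_of_x, B, Q, R):
    P = solve_continuous_are(A_of_x(x), B, Q, R)   # Riccati solve at state x
    return -np.linalg.solve(R, B.T @ P @ x)

# Illustrative plant: controlled Duffing-type oscillator,
#   x1' = x2,  x2' = -x1 - x1^3 + u  =>  A(x) = [[0, 1], [-1 - x1^2, 0]].
A_of_x = lambda x: np.array([[0.0, 1.0], [-1.0 - x[0] ** 2, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

x, dt = np.array([1.0, 0.0]), 0.01
for _ in range(1000):                              # explicit-Euler closed loop
    u = sdre_feedback(x, A_of_x, B, Q, R)
    x = x + dt * (A_of_x(x) @ x + B @ u)
print("state after 10s (driven toward the origin):", x)
```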
      • [05378] Sparse Kernel Flows for Learning 132 Chaotic Dynamical Systems from Data
        • Format : Talk at Waseda University
        • Author(s) :
          • Boumediene Hamzi (California Institute of Technology)
          • Lu Yang (Nanjing University of Aeronautics and Astronautics)
          • Xiuwen Sun (Nanjing University of Aeronautics and Astronautics)
          • Houman Owhadi (California Institute of Technology)
          • Naiming Xie (Nanjing University of Aeronautics and Astronautics)
        • Abstract : Regressing the vector field of a dynamical system from a finite number of observed states is a natural way to learn surrogate models for such systems. As shown in previous work, a simple and interpretable way to learn a dynamical system from data is to interpolate its vector field with a data-adapted kernel, which can be learned using Kernel Flows. The method of Kernel Flows is a trainable machine learning method that learns the optimal parameters of a kernel based on the premise that a kernel is good if there is no significant loss in accuracy when half of the data is used. The objective function can be short-term prediction accuracy or another criterion. However, this method is limited by the choice of the base kernel. In this paper, we introduce the method of Sparse Kernel Flows, which learns the “best” kernel by starting from a large dictionary of kernels and sparsifying a linear combination of these elemental kernels. We apply the approach to a library of 132 chaotic systems. The presentation is based on https://arxiv.org/pdf/2301.10321.pdf
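        • Sketch : A minimal illustration of the base Kernel Flows objective described above (not the sparse, dictionary-based variant); the RBF kernel, the toy vector field and the grid search are assumptions standing in for the gradient-based updates.

```python
# Kernel Flows loss: a kernel is "good" if interpolating with a random half of
# the data loses little accuracy relative to interpolating with all of it:
#   rho = 1 - (y_c^T K_c^{-1} y_c) / (y^T K^{-1} y),   rho in [0, 1].
import numpy as np

rng = np.random.default_rng(1)

def rbf(X, Y, gamma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rho(gamma, X, y, reg=1e-8):
    n = len(X)
    full = y @ np.linalg.solve(rbf(X, X, gamma) + reg * np.eye(n), y)
    idx = rng.choice(n, size=n // 2, replace=False)  # random half of the data
    Xc, yc = X[idx], y[idx]
    half = yc @ np.linalg.solve(rbf(Xc, Xc, gamma) + reg * np.eye(n // 2), yc)
    return 1.0 - half / full

# Toy regression of a vector field x' = sin(x) from sampled states.
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0])

# Crude grid search standing in for Kernel Flows' stochastic gradient updates.
gammas = np.logspace(-2, 2, 25)
scores = [np.mean([rho(g, X, y) for _ in range(20)]) for g in gammas]
print("selected RBF bandwidth gamma =", gammas[int(np.argmin(scores))])
```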
      • [05395] Distributed Control of Partial Differential Equations Using Convolutional Reinforcement Learning
        • Format : Talk at Waseda University
        • Author(s) :
          • Sebastian Peitz (Universität Paderborn)
          • Jan Stenner (Universität Paderborn)
          • Vikas Chidananda (Universität Paderborn)
          • Steven Brunton (UW)
          • Kunihiko Taira (UCLA)
        • Abstract : We present a convolutional framework which significantly reduces the complexity, and thus the computational effort, of distributed reinforcement learning control of partial differential equations (PDEs). By exploiting translational invariances, the high-dimensional distributed control problem can be transformed into a multi-agent control problem with many identical agents. Furthermore, using the fact that in many cases information is transported with finite velocity, the dimension of each agent's environment can be drastically reduced by a convolution operation over the state space of the PDE. In this setting, the complexity can be flexibly adjusted via the kernel width or by using a stride greater than one. A central question in this framework is the definition of the reward function, which may consist of both local and global contributions. We demonstrate the performance of the proposed framework on several standard PDE examples of increasing complexity, where stabilization is achieved by training a low-dimensional DDPG agent with little training effort.
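        • Sketch : An illustrative fragment of the observation construction described above: each agent sees a local, convolution-style window of the discretized PDE state, and a stride greater than one thins the set of identical agents; the window size, stride and boundary handling are assumptions, not the paper's settings.

```python
# Build per-agent local observations from a periodic 1D PDE state: the window
# width plays the role of the kernel width, and the stride controls how many
# identical agents share the (low-dimensional) policy.
import numpy as np

def local_observations(state, width=7, stride=2):
    n, half = len(state), width // 2
    padded = np.concatenate([state[-half:], state, state[:half]])  # periodic BCs
    centers = np.arange(0, n, stride)        # one agent per actuated grid point
    return np.stack([padded[c:c + width] for c in centers])

state = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))  # toy snapshot
obs = local_observations(state)
print(obs.shape)  # (32, 7): 32 identical agents, each with a 7-point local view
```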