Registered Data

[00696] Scientific Machine Learning for Inverse Problems

  • Session Time & Room :
    • 00696 (1/3) : 4E (Aug.24, 17:40-19:20) @G808
    • 00696 (2/3) : 5B (Aug.25, 10:40-12:20) @G808
    • 00696 (3/3) : 5C (Aug.25, 13:20-15:00) @G808
  • Type : Proposal of Minisymposium
  • Abstract : Inverse problems concern learning complex systems from data. They are ubiquitous across computational science and engineering, with broad societal impact in areas such as geophysics, climate, space missions, and health. Solving an inverse problem requires many solves of the forward model and can be challenging for complex, large-scale problems, e.g., those governed by partial differential equations. Recent developments in scientific machine learning (SciML) have made tremendous progress in overcoming these challenges. This minisymposium covers progress on (i) the development of SciML-based methodology for inverse problems and (ii) the application of SciML methods to solving complex inverse problems.
  • Organizer(s) : Jinlong Wu, Peng Chen
  • Classification : 35R30, 68T07
  • Minisymposium Program :
    • 00696 (1/3) : 4E @G808 [Chair: Jinlong Wu]
      • [03211] Learning Stochastic Closures Using Sparsity-Promoting Ensemble Kalman Inversion
        • Format : Talk at Waseda University
        • Author(s) :
          • Jinlong Wu (University of Wisconsin-Madison)
          • Tapio Schneider (California Institute of Technology)
          • Andrew Stuart (California Institute of Technology)
        • Abstract : Closure models are widely used in simulating complex dynamical systems such as turbulence and climate change, for which direct numerical simulation is often too expensive. Although it is almost impossible to perfectly reproduce the true system with closure models, it is often sufficient to correctly reproduce time-averaged statistics. Here we present a sparsity-promoting, derivative-free optimization method to estimate model error from time-averaged statistics. Specifically, we show how sparsity can be imposed as a constraint in ensemble Kalman inversion (EKI), resulting in an iterative quadratic programming problem. We illustrate how this approach can be used to quantify the model error in the closures of dynamical systems. In addition, we demonstrate the merit of introducing stochastic processes to quantify model error for certain systems. We also present the potential of replacing existing closures with purely data-driven closures using the proposed methodology. The results show that the proposed methodology provides a systematic approach to estimating model error in the closures of dynamical systems.
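        • Illustrative sketch : A minimal numpy sketch of the idea on an assumed toy linear forward model; the soft-thresholding step below is a simplified stand-in for the quadratic-programming sparsity constraint described in the abstract, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model with a sparse ground-truth parameter.
d, p = 20, 10
G = rng.standard_normal((d, p))
u_true = np.zeros(p)
u_true[[1, 4]] = [2.0, -1.5]
noise = 0.1
y = G @ u_true + noise * rng.standard_normal(d)

J = 50                            # ensemble size
U = rng.standard_normal((J, p))   # initial ensemble (rows are members)
lam = 0.05                        # sparsity level of the thresholding step

for _ in range(30):
    W = U @ G.T                                    # forward-model evaluations
    du, dw = U - U.mean(0), W - W.mean(0)
    Cuw = du.T @ dw / J                            # parameter-output covariance
    Cww = dw.T @ dw / J + noise**2 * np.eye(d)     # output covariance + noise
    U = U + (y - W) @ np.linalg.solve(Cww, Cuw.T)  # standard EKI update
    # Soft-thresholding stands in for the quadratic-programming sparsity
    # constraint of the abstract (an assumption to keep the sketch short).
    U = np.sign(U) * np.maximum(np.abs(U) - lam, 0.0)

print("EKI estimate:", U.mean(0).round(2))
print("ground truth:", u_true)
```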
      • [03315] Efficient Bayesian Physics Informed Neural Networks for Inverse Problems via Ensemble Kalman Inversion
        • Format : Talk at Waseda University
        • Author(s) :
          • Xueyu Zhu (Department of Mathematics, University of Iowa)
          • Andrew Pensoneault (Department of Mathematics, University of Iowa)
        • Abstract : Bayesian Physics Informed Neural Networks (B-PINNs) have gained significant attention for PDE-based inverse problems. Existing inference approaches are either computationally expensive for high-dimensional posterior inference or provide unsatisfactory uncertainty estimates. In this paper, we present a new efficient inference algorithm for B-PINNs that uses Ensemble Kalman Inversion (EKI). We find that our proposed method can achieve inference results with informative uncertainty estimates comparable to Hamiltonian Monte Carlo (HMC)-based B-PINNs with a much reduced computational cost.
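        • Illustrative sketch : A schematic of EKI-based inference for a physics-informed network, on an assumed toy problem (recovering a diffusivity kappa in -kappa u'' = f from noisy point data). The network size, collocation setup, and noise levels are illustrative choices, not the authors' B-PINN configuration, and convergence on this toy is illustrative rather than guaranteed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Recover kappa in -kappa * u'' = f on (0,1) from noisy observations of u,
# using an EKI ensemble over the stacked vector of network weights and
# log(kappa).  Manufactured truth: u = sin(pi x).
kappa_true = 2.0
x_obs = np.linspace(0.05, 0.95, 10)
u_obs = np.sin(np.pi * x_obs) + 0.01 * rng.standard_normal(x_obs.size)
x_col = np.linspace(0.0, 1.0, 30)                 # PDE collocation points
f_col = kappa_true * np.pi**2 * np.sin(np.pi * x_col)

H = 10                                            # hidden width

def net(theta, x):
    """One-hidden-layer tanh network; weights packed in theta[:3H]."""
    W1, b1, W2 = theta[:H], theta[H:2*H], theta[2*H:3*H]
    return np.tanh(np.outer(x, W1) + b1) @ W2

def forward(theta):
    """Stack data predictions and PDE residuals; theta[-1] is log kappa."""
    kappa, h = np.exp(theta[-1]), 1e-3
    upp = (net(theta, x_col + h) - 2*net(theta, x_col)
           + net(theta, x_col - h)) / h**2        # finite-difference u''
    return np.concatenate([net(theta, x_obs), -kappa*upp - f_col])

y = np.concatenate([u_obs, np.zeros(x_col.size)])
Gamma = np.diag(np.r_[0.01**2 * np.ones(x_obs.size),
                      0.1**2 * np.ones(x_col.size)])

J, dim = 200, 3*H + 1
Theta = rng.standard_normal((J, dim))             # ensemble of weight vectors
for _ in range(50):
    W = np.stack([forward(t) for t in Theta])
    dth, dw = Theta - Theta.mean(0), W - W.mean(0)
    Cuw, Cww = dth.T @ dw / J, dw.T @ dw / J + Gamma
    Theta = Theta + (y - W) @ np.linalg.solve(Cww, Cuw.T)

print("ensemble-mean kappa:", np.exp(Theta.mean(0)[-1]).round(2),
      "(true:", kappa_true, ")")
```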
      • [02563] Neural operator acceleration of PDE-constrained Bayesian inverse problems: Error estimation and correction
        • Format : Talk at Waseda University
        • Author(s) :
          • Lianghao Cao (The University of Texas at Austin)
          • Thomas O'Leary-Roseberry (The University of Texas at Austin)
          • Prashant K. Jha (The University of Texas at Austin)
          • J. Tinsley Oden (The University of Texas at Austin)
          • Omar Ghattas (The University of Texas at Austin)
        • Abstract : In this talk, we explore using neural operators to accelerate infinite-dimensional Bayesian inverse problems (BIPs) governed by nonlinear parametric partial differential equations (PDEs). Neural operators have gained attention in recent years for their ability to approximate nonlinear mappings between function spaces, particularly the parameter-to-solution mappings of PDEs. On the one hand, the computational cost of BIPs can be drastically reduced if the many PDE solves required for posterior characterization are replaced with evaluations of trained neural operators. On the other hand, reducing the error in the resulting BIP solutions by reducing the approximation error of the neural operators during training can be challenging and unreliable. We provide an a priori error bound implying that certain BIPs can be ill-conditioned with respect to the approximation error of neural operators, leading to accuracy requirements that are unattainable in training. To reliably reduce the error of neural operator predictions used in BIPs, we consider correcting the predictions of a trained neural operator by solving a linear variational problem based on the PDE residual. We show that a trained neural operator with error correction can achieve up to a quadratic reduction of its approximation error. Finally, we provide a numerical example based on the deformation of hyperelastic materials. We demonstrate that the posterior representation produced using neural operators is greatly and consistently enhanced by the error correction, while still retaining substantial computational speedups.
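        • Illustrative sketch : A finite-difference analogue of the residual-driven correction, using a synthetic perturbation in place of a trained neural operator. The single linearized solve below plays the role of the linear variational problem and exhibits the quadratic error reduction mentioned in the abstract.

```python
import numpy as np

# Nonlinear PDE -u'' + u^3 = f on (0,1) with u(0)=u(1)=0, discretized by
# central finite differences.
n = 100
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

f = np.pi**2 * np.sin(np.pi*x) + np.sin(np.pi*x)**3  # manufactured source

# Reference discrete solution via Newton's method.
u_star = np.zeros(n)
for _ in range(20):
    r = f - (A @ u_star + u_star**3)
    u_star += np.linalg.solve(A + 3*np.diag(u_star**2), r)

# Hypothetical neural-operator prediction: the solution plus smooth error.
u_pred = u_star + 0.05 * np.sin(3*np.pi*x)

# One linear correction solve driven by the PDE residual: a Newton-type
# step whose error contracts quadratically.
r = f - (A @ u_pred + u_pred**3)
u_corr = u_pred + np.linalg.solve(A + 3*np.diag(u_pred**2), r)

print(f"error before correction: {np.abs(u_pred - u_star).max():.1e}")
print(f"error after correction:  {np.abs(u_corr - u_star).max():.1e}")
```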
      • [01533] Ensemble Kalman inversion with dropout in Scientific Machine Learning for Inverse Problems
        • Format : Talk at Waseda University
        • Author(s) :
          • Shuigen Liu (National University of Singapore)
          • Sebastian Reich (Universität Potsdam)
          • Xin Thomson Tong (National University of Singapore)
        • Abstract : Ensemble Kalman inversion (EKI) is an ensemble-based method for solving inverse problems. However, EKI can face difficulties in high-dimensional problems with a fixed-size ensemble, due to its subspace property: the ensemble always remains in the subspace spanned by the initial ensemble. To address this issue, we propose a novel approach that uses the dropout technique to mitigate the subspace problem. Compared to the conventional localization approach, dropout avoids the complex designs of the localization process. We prove that EKI with dropout converges in the small-ensemble setting, and that the complexity of the algorithm scales linearly with dimension. Numerical examples demonstrate the effectiveness of our approach.
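        • Illustrative sketch : A minimal numpy demonstration of the subspace issue and one plausible reading of the dropout remedy: randomly masking ensemble deviations before forming covariances, so that updates can leave the initial span. The masking details are an assumption for illustration; the authors' construction may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear problem whose dimension exceeds the ensemble size, so plain EKI
# is confined to the low-dimensional span of its initial ensemble.
d = p = 40
G = rng.standard_normal((d, p))
u_true = rng.standard_normal(p)
sigma = 0.1
y = G @ u_true + sigma * rng.standard_normal(d)

def eki(dropout_rate, J=10, steps=300):
    U = rng.standard_normal((J, p))
    for _ in range(steps):
        W = U @ G.T
        du, dw = U - U.mean(0), W - W.mean(0)
        if dropout_rate > 0:
            # Mask parameter deviations so covariance estimates point
            # outside the initial-ensemble subspace.
            mask = rng.binomial(1, 1 - dropout_rate, du.shape)
            du = du * mask / (1 - dropout_rate)
        Cuw = du.T @ dw / J
        Cww = dw.T @ dw / J + sigma**2 * np.eye(d)
        U = U + (y - W) @ np.linalg.solve(Cww, Cuw.T)
    return np.linalg.norm(U.mean(0) - u_true)

print("plain EKI error:  ", round(eki(0.0), 3))
print("dropout EKI error:", round(eki(0.5), 3))
```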
    • 00696 (2/3) : 5B @G808 [Chair: Peng Chen]
      • [03213] Projected variational inference for high-dimensional Bayesian inverse problems
        • Format : Online Talk on Zoom
        • Author(s) :
          • Peng Chen (Georgia Institute of Technology)
        • Abstract : In this talk, I will present a class of transport-based projected variational methods to tackle the computational challenges of the curse of dimensionality and unaffordable evaluation cost for high-dimensional Bayesian inverse problems governed by complex models. We project the high-dimensional parameters to intrinsically low-dimensional data-informed subspaces, and employ transport-based variational methods to push samples drawn from the prior to a projected posterior. Moreover, we employ fast surrogate models to approximate the parameter-to-observable map. I will present error bounds for the projected posterior distribution measured in Kullback-Leibler divergence. Numerical experiments will be presented to demonstrate the properties of our methods, including improved accuracy, fast convergence with complexity independent of the parameter dimension and the number of samples, strong parallel scalability in processor cores, and weak data scalability in data dimension.
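        • Illustrative sketch : A linear-Gaussian toy in which the data-informed subspace and the projected posterior are available in closed form; the SVD-based subspace and the exact Gaussian update below stand in for the gradient-based subspaces and transport maps used for nonlinear models.

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear-Gaussian toy: a 100-dimensional parameter with unit Gaussian
# prior, informed only through a handful of observations.
p, d, r = 100, 8, 4
G = rng.standard_normal((d, p))
sigma = 0.1
u_true = rng.standard_normal(p)
y = G @ u_true + sigma * rng.standard_normal(d)

# Data-informed subspace: leading right singular vectors of the
# noise-whitened forward map (a linear stand-in for the gradient-based
# subspaces used with nonlinear models).
_, _, Vt = np.linalg.svd(G / sigma, full_matrices=False)
V = Vt[:r].T                                  # p x r basis of the subspace

# Projected posterior.  With a unit prior and a linear model, the reduced
# posterior is Gaussian in closed form; transport maps or normalizing
# flows would replace this step for non-Gaussian posteriors.
Gr = G @ V                                    # reduced forward map (d x r)
P = np.linalg.inv(Gr.T @ Gr / sigma**2 + np.eye(r))
m = P @ Gr.T @ y / sigma**2                   # reduced posterior mean
L = np.linalg.cholesky(P)
samples = (V @ (m[:, None] + L @ rng.standard_normal((r, 500)))).T

print("subspace dim:", r, " data misfit of projected mean:",
      float(np.linalg.norm(G @ (V @ m) - y)))
```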
      • [03215] Multifidelity deep neural operators for efficient learning of partial differential equations with application to fast inverse design of nanoscale heat transport
        • Format : Online Talk on Zoom
        • Author(s) :
          • Lu Lu (University of Pennsylvania)
          • Min Zhu (University of Pennsylvania)
        • Abstract : Deep neural operators can learn operators mapping between infinite-dimensional function spaces via deep neural networks and have become an emerging paradigm of scientific machine learning. However, training neural operators usually requires a large amount of high-fidelity data, which is often difficult to obtain in real engineering problems. Here we address this challenge by using multifidelity learning, i.e., learning from multifidelity data sets. We develop a multifidelity neural operator based on a deep operator network (DeepONet). A multifidelity DeepONet consists of two standard DeepONets coupled by residual learning and input augmentation. The multifidelity DeepONet significantly reduces the required amount of high-fidelity data and achieves an error one order of magnitude smaller when using the same amount of high-fidelity data. We apply the multifidelity DeepONet to learn the phonon Boltzmann transport equation (BTE), a framework for computing nanoscale heat transport. By combining a trained multifidelity DeepONet with a genetic algorithm or topology optimization, we demonstrate a fast solver for the inverse design of BTE problems.
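        • Illustrative sketch : The residual-learning and input-augmentation ideas in one input dimension, with a random-feature model standing in for the second DeepONet; the models, functions, and data sizes are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

# Low-fidelity model: cheap and plentiful.  High-fidelity truth: scarce.
f_lo = lambda x: np.sin(2*np.pi*x)
f_hi = lambda x: np.sin(2*np.pi*x) + 0.3*x**2 - 0.1*x

# Random tanh features over the augmented input (x, f_lo(x)) stand in for
# the second DeepONet; feeding the low-fidelity prediction to the
# correction model is the "input augmentation" step.
w, v, b = (rng.standard_normal(50) for _ in range(3))
feats = lambda x: np.tanh(np.outer(x, w) + np.outer(f_lo(x), v) + b)

x_hi = rng.uniform(0.0, 1.0, 8)               # only 8 high-fidelity samples
# "Residual learning": fit only the discrepancy f_hi - f_lo.
coef, *_ = np.linalg.lstsq(feats(x_hi), f_hi(x_hi) - f_lo(x_hi), rcond=None)

x_test = np.linspace(0.0, 1.0, 200)
pred = f_lo(x_test) + feats(x_test) @ coef    # low fidelity + correction
print("max test error:", float(np.abs(pred - f_hi(x_test)).max()))
```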
      • [02629] Surrogate modeling for many-body hydrodynamic interactions via graph neural networks
        • Format : Online Talk on Zoom
        • Author(s) :
          • Wenxiao Pan (University of Wisconsin-Madison)
        • Abstract : This talk presents a new framework, the hydrodynamic interaction graph neural network (HIGNN), for the fast simulation of particulate suspensions. It generalizes state-of-the-art GNNs by (1) introducing higher-order structures into the graph and (2) reducing the scaling of its prediction cost to quasi-linear. The HIGNN, once constructed at low training cost, permits fast predictions of the particles' velocities and is transferable across suspensions with different numbers/concentrations of particles subject to any external forcing.
      • [04531] A practical use of neural density estimators for Bayesian experimental design
        • Format : Online Talk on Zoom
        • Author(s) :
          • Rafael Orozco (Georgia Institute of Technology)
          • Mathias Louboutin (Georgia Institute of Technology)
          • Felix Herrmann (Georgia Institute of Technology)
        • Abstract : Neural density estimation is a powerful approach for learning conditional distributions, including Bayesian posteriors in inverse problems. While Bayesian statisticians find these methods promising, some practitioners remain skeptical about their practicality compared to deterministic solutions. We present a practical use case that exploits the posterior entropy minimization properties of conditional neural density estimators to identify optimal experimental designs. By utilizing normalizing flows, we demonstrate our technique’s scalability for tackling realistic 2D and 3D inverse problems.
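        • Illustrative sketch : The design-selection principle in a linear-Gaussian model, where the posterior entropy being minimized is available in closed form; a conditional normalizing flow would replace the closed-form entropy for realistic nonlinear problems.

```python
import numpy as np

rng = np.random.default_rng(5)

p, sigma = 10, 0.1   # parameter dimension and observation noise

def posterior_entropy(design):
    """Gaussian posterior entropy (up to an additive constant) for a
    unit prior and the linear design matrix `design`."""
    G = np.stack(design)
    cov = np.linalg.inv(G.T @ G / sigma**2 + np.eye(p))
    return 0.5 * np.linalg.slogdet(cov)[1]

# Score 20 random candidate designs of 3 measurements each and keep the
# one with the smallest posterior entropy (the most informative design).
candidates = [[rng.standard_normal(p) for _ in range(3)] for _ in range(20)]
best = min(candidates, key=posterior_entropy)
print("best design entropy:", round(posterior_entropy(best), 3))
```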
    • 00696 (3/3) : 5C @G808 [Chair: Jinlong Wu]
      • [03214] Solving High-dimensional Inverse Problems with Weak Adversarial Networks
        • Format : Online Talk on Zoom
        • Author(s) :
          • Yaohua Zang (Zhejiang University)
          • Gang Bao (Zhejiang University)
          • Xiaojing Ye (Georgia State University)
          • Haomin Zhou (Georgia Institute of Technology)
        • Abstract : We present a weak adversarial network approach to numerically solve a class of inverse problems. The weak formulation of the PDE in the inverse problem is combined with deep neural networks, inducing a minimax problem; the inverse problem is then solved by finding saddle points in the network parameters. As the parameters are updated, the network gradually approximates the solution of the inverse problem. Numerical experiments demonstrate the promising accuracy and efficiency of this approach.
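        • Illustrative formulation : For a PDE written abstractly as $\mathcal{A}[u] = f$, the weak form requires $\langle \mathcal{A}[u], \varphi \rangle = \langle f, \varphi \rangle$ for every test function $\varphi$. Parameterizing the solution by a network $u_\theta$ and the test function by an adversarial network $\varphi_\eta$ gives the minimax problem (our paraphrase of the standard weak adversarial network setup, not necessarily the authors' exact notation)
          $$\min_{\theta}\,\max_{\eta}\; \frac{\big|\langle \mathcal{A}[u_\theta] - f,\ \varphi_\eta \rangle\big|^2}{\|\varphi_\eta\|^2},$$
          whose saddle points are sought by alternating gradient updates on $\theta$ and $\eta$.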
      • [01482] Automatic discovery of low-dimensional dynamics underpinning time-dependent PDEs for inverse problems resolution
        • Format : Online Talk on Zoom
        • Author(s) :
          • Francesco Regazzoni (MOX, Dipartimento di Matematica, Politecnico di Milano)
          • Matteo Salvador (MOX, Dipartimento di Matematica, Politecnico di Milano)
          • Stefano Pagani (MOX, Dipartimento di Matematica, Politecnico di Milano)
          • Luca Dede' (MOX, Dipartimento di Matematica, Politecnico di Milano)
          • Alfio Quarteroni (MOX, Dipartimento di Matematica, Politecnico di Milano)
        • Abstract : We present a novel machine learning technique that learns differential equations serving as surrogates for the solution of space-time-dependent problems. Our method exploits a finite number of latent variables, automatically discovered during training, that provide a compact representation of the system state. It allows building, in a fully non-intrusive manner, surrogate models that account for the dependence on parameters and time-dependent inputs. As such, our method is well suited to accelerating the solution of inverse problems.
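        • Illustrative formulation : A generic instance of such a latent model (our schematic notation, not necessarily the authors') evolves a low-dimensional latent state $z(t) \in \mathbb{R}^r$ driven by the inputs $u(t)$ and reconstructs the observable field from it:
          $$\dot{z}(t) = f_{\mathrm{NN}}\big(z(t), u(t); \theta\big), \qquad y(x, t) = g_{\mathrm{NN}}\big(z(t), x; \theta\big),$$
          where the latent dimension $r$ and the network weights $\theta$ are learned jointly from simulation data. Once trained, each forward evaluation inside an inverse-problem loop reduces to integrating the cheap $r$-dimensional ODE.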