Abstract : A number of exciting new computational approaches have recently emerged for solving practical and challenging subsurface applications, such as multiphase flow in fractured reservoirs and geothermal modeling with heat conduction. The aim of this mini-symposium is to review recent progress in data-driven and model-reduction methods, including multiscale methods, numerical upscaling techniques, and learning-based algorithms, for related applications, and to motivate new research directions for challenging problems in computational geosciences.
Organizer(s) : Siu Wun Cheung, Wing Tat Leung, Sai-Mang Pun
[04298] Deep Learning Methods for PDEs and Reduced Order Models
Format : Talk at Waseda University
Author(s) :
Min Wang (University of Houston)
Abstract : In this talk, we will discuss the use of neural networks to solve high-dimensional partial differential equations (PDEs) while mitigating the curse of dimensionality. We will explore three key questions: (1) how to formulate PDE problems as optimization problems amenable to deep learning techniques, (2) the accuracy of neural network approximations, and (3) systematic training strategies for convergence to a global minimum. Specifically, we will present various optimization formulations for the high-dimensional quadratic porous medium equation, analyze generalization and approximation errors for Ritz methods, and propose an adaptive optimization strategy for training residual neural networks. Numerical results will be provided to demonstrate the effectiveness of the proposed methods.
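To make question (1) concrete, the following is a minimal NumPy sketch (not the speaker's method) of casting a PDE as a least-squares optimization problem: a 1D Poisson problem is solved by minimizing the squared strong-form residual over a polynomial ansatz standing in for a neural network. The model problem and all parameters here are illustrative.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Model problem: -u'' = pi^2 sin(pi x) on (0, 1), u(0) = u(1) = 0,
# whose exact solution is u(x) = sin(pi x).
f = lambda x: np.pi**2 * np.sin(np.pi * x)

n_basis, n_pts = 10, 60
xs = np.linspace(0.0, 1.0, n_pts)

# Ansatz u(x) = x(1 - x) * sum_k c_k x^k enforces the boundary conditions
# exactly (the polynomial plays the role of the neural network).
bubble = Polynomial([0.0, 1.0, -1.0])                       # x - x^2
basis = [bubble * Polynomial([0.0] * k + [1.0]) for k in range(n_basis)]

# Optimization formulation: minimize the squared strong-form residual
# sum_i (-u''(x_i) - f(x_i))^2 over the coefficients c; for a linear
# ansatz this is an ordinary linear least-squares problem.
A = np.column_stack([-p.deriv(2)(xs) for p in basis])
c, *_ = np.linalg.lstsq(A, f(xs), rcond=None)

u = sum(ck * p for ck, p in zip(c, basis))
err = np.max(np.abs(u(xs) - np.sin(np.pi * xs)))
print(f"max error of the optimized ansatz: {err:.2e}")
```

With a genuinely nonlinear ansatz such as a neural network, the same residual objective would instead be minimized by gradient-based training.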
[03429] Nonlocal multicontinua with representative volume elements
Format : Talk at Waseda University
Author(s) :
Wing Tat Leung (City University of Hong Kong)
Abstract : In this talk, we present a general derivation of multicontinuum equations and discuss the associated cell problems. We present constrained cell problem formulations in a representative volume element, together with oversampling techniques that reduce boundary effects, and we discuss different choices of constraints for the cell problems. We present numerical results showing how oversampling reduces boundary effects. Finally, we discuss the relation of the proposed methods to our previously developed Nonlocal Multicontinuum Approaches.
[05249] Physics-informed neural networks for learning the homogenized coefficients of multiscale elliptic equations
Format : Online Talk on Zoom
Author(s) :
Jun Sur Richard Park (KAIST)
Xueyu Zhu (Department of Mathematics, University of Iowa)
Abstract : Multiscale elliptic equations with scale separation are often approximated by the corresponding homogenized equations with slowly varying homogenized coefficients (the G-limit). Traditional homogenization techniques typically rely on the periodicity of the multiscale coefficients; in more general settings, finding the G-limit therefore often requires sophisticated techniques even when the multiscale coefficient is known, if it is possible at all. Our approach adopts the physics-informed neural network (PINN) algorithm to estimate the G-limit from multiscale solution data by leveraging a priori knowledge of the underlying homogenized equations. Unlike existing approaches, our approach relies on neither the periodicity assumption nor knowledge of the multiscale coefficient during the learning stage. We demonstrate through several benchmark problems that the proposed approach can deliver reasonable and accurate approximations to the G-limit as well as to the homogenized solutions.
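As a toy illustration of the underlying idea, a constant G-limit can be recovered from homogenized solution data by minimizing a physics-informed residual in the least-squares sense. The sketch below uses finite differences in place of a PINN's automatic differentiation, and the data are synthetic; it is not the authors' algorithm.

```python
import numpy as np

# Synthetic homogenized data: -(a* u')' = f with constant G-limit a* = 2.0,
# u(x) = sin(pi x)  =>  f(x) = 2 pi^2 sin(pi x).
a_true = 2.0
n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.sin(np.pi * x)
f = a_true * np.pi**2 * np.sin(np.pi * x)

# Physics-informed residual r(a) = -a u'' - f at interior grid points,
# with u'' from central differences (a stand-in for autodiff in a PINN).
d2u = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2

# Closed-form least-squares minimizer of ||r(a)||^2 over the scalar a.
a_hat = -np.dot(d2u, f[1:-1]) / np.dot(d2u, d2u)
print(f"recovered G-limit: {a_hat:.4f} (true {a_true})")
```

For spatially varying G-limits, as in the talk, the scalar unknown is replaced by a network parameterization trained on the same residual.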
[02709] Optimality of statistical criterion in hyper-reduction
Format : Talk at Waseda University
Author(s) :
Siu Wun Cheung (Lawrence Livermore National Laboratory)
Abstract : While projection-based reduced order models can reduce the dimension of the solutions, there may still be nonlinear terms whose evaluation scales with the full-order dimension. Hyper-reduction techniques are sampling-based methods that further reduce the computational complexity of these nonlinear terms. In this talk, we will view the state-of-the-art Discrete Empirical Interpolation Method from the perspective of optimal design, and introduce a new hyper-reduction method based on the optimality of another statistical criterion.
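For reference, the classical DEIM index selection that the talk revisits can be sketched as a greedy algorithm: each new interpolation index is placed where the current basis vector is worst approximated by interpolation at the previously chosen indices. The snapshot family below is a toy example, not taken from the talk.

```python
import numpy as np

def deim_indices(B):
    """Greedy DEIM sampling: one interpolation index per basis vector."""
    n, m = B.shape
    idx = [int(np.argmax(np.abs(B[:, 0])))]
    for j in range(1, m):
        # Interpolate column j at the current indices; the next index is
        # where the interpolation residual is largest in magnitude.
        c = np.linalg.solve(B[np.ix_(idx, np.arange(j))], B[idx, j])
        r = B[:, j] - B[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# Toy nonlinear snapshots s(x; mu) = exp(mu x) on a grid; POD basis via SVD.
x = np.linspace(0.0, 1.0, 100)
snaps = np.column_stack([np.exp(mu * x) for mu in np.linspace(0.5, 2.0, 20)])
U, _, _ = np.linalg.svd(snaps, full_matrices=False)
P = deim_indices(U[:, :5])        # 5 sampling points instead of 100 grid values

# DEIM reconstruction of a new snapshot from only the sampled entries.
g = np.exp(1.3 * x)
coef = np.linalg.solve(U[P, :5], g[P])
err = np.linalg.norm(U[:, :5] @ coef - g) / np.linalg.norm(g)
print(f"relative DEIM error: {err:.2e}")
```

The optimal-design viewpoint in the talk replaces this greedy magnitude criterion with the optimality of a statistical criterion.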
[04174] Least-squares Method for Recovering Multiple Medium Parameters
Format : Talk at Waseda University
Author(s) :
Ying Liang (Purdue University)
Abstract : We present a two-stage least-squares method for inverse medium problems of reconstructing multiple unknown coefficients simultaneously from noisy data. A direct sampling method is applied to detect the location of the inhomogeneity in the first stage, while a total least-squares method with a mixed regularization is used to recover the medium profile in the second stage. The total least-squares method is designed to minimize the residuals of the model equation and the data fitting, along with an appropriate regularization, in an attempt to significantly improve the accuracy of the approximation obtained from the first stage. We shall also present an analysis of the well-posedness and convergence of this algorithm. Numerical experiments are carried out to verify the accuracy and robustness of this novel two-stage least-squares algorithm, which exhibits high tolerance to noise in the data.
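The role of regularization in the second stage can be illustrated in miniature with an ordinary Tikhonov-regularized least-squares problem (a simplification of the mixed-regularization total least-squares method in the talk); the smoothing kernel, medium profile, and noise level below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ill-posed linear model (a stand-in for the discretized inverse
# medium problem): a smoothing Gaussian kernel, so naive inversion amplifies noise.
n = 60
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.01) / n
x_true = np.exp(-((t - 0.4) ** 2) / 0.02)        # smooth medium profile
b = A @ x_true + 1e-3 * rng.standard_normal(n)   # noisy measurements

# Regularized least squares: min_x ||A x - b||^2 + lam * ||x||^2,
# solved here via the normal equations.
lam = 1e-5
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
x_naive, *_ = np.linalg.lstsq(A, b, rcond=None)  # no regularization: noise blow-up

err_reg = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
err_naive = np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true)
print(f"relative error: regularized {err_reg:.2e}, unregularized {err_naive:.2e}")
```

The talk's method differs in two essential ways: the residual also accounts for errors in the operator (total least squares), and the regularizer mixes several penalty terms rather than a single quadratic one.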
[03801] Adaptive partially explicit splitting scheme for multiscale flow problems
Format : Online Talk on Zoom
Author(s) :
Yating Wang (Xi'an Jiaotong University)
Wing Tat Leung (City University of Hong Kong)
Abstract : In this talk, we will introduce an adaptive framework for a partially explicit splitting scheme for flow problems in high-contrast multiscale media. To address the heavy computational burden caused by the high-contrast multiscale coefficient, we utilize a stable multirate temporal splitting scheme and construct multiscale subspaces to handle the fast-flow and slow-flow parts separately. The construction of the multiscale spaces ensures that the time-step size is independent of the contrast. We then derive both temporal and spatial error estimators to identify local regions where enrichment is needed for the two components of the solution. An adaptive algorithm is then proposed to achieve higher computational efficiency at the desired accuracy.
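The splitting idea, treating the contrast-dependent fast component implicitly and the remaining slow component explicitly so that the stable time step is contrast-independent, can be sketched on a toy linear ODE system. The matrices below are illustrative stand-ins for the discretized flow problem, not the scheme from the talk.

```python
import numpy as np

# Toy linear system u' = (A_fast + A_slow) u: A_fast carries the high
# contrast (stiff scale), A_slow is contrast-independent.
A_fast = np.diag([-1.0e4, 0.0])
A_slow = np.array([[0.0, 1.0],
                   [1.0, -1.0]])

# Partially explicit step: implicit only on the fast part, explicit on the
# slow part, so the stable dt does not depend on the contrast 1e4
# (forward Euler on the full system would need dt < 2e-4 here).
dt, n_steps = 1.0e-2, 100
M = np.linalg.inv(np.eye(2) - dt * A_fast)

u = np.array([1.0, 1.0])
for _ in range(n_steps):
    u = M @ (u + dt * (A_slow @ u))

print("state after 100 partially explicit steps:", u)
```

In the PDE setting of the talk, the analogous decomposition is carried by carefully constructed multiscale subspaces rather than a fixed matrix splitting, and the adaptive estimators decide locally where those subspaces need enrichment.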