Abstract : In many areas of applied mathematics, including inverse problems, problems are often ill-posed and/or the available data are limited. Such difficulties have been treated in different subfields of science, such as medical imaging and data assimilation in numerical weather prediction. In this minisymposium, researchers from data assimilation and inverse problems will gather and discuss tools and ideas in applied mathematics for handling complicated problems and incomplete data.
[03741] Gaussian Assimilation of non-Gaussian Image Data via Pre-Processing by Variational Auto-Encoder (VAE)
Author(s) :
Daisuke Hotta (Meteorological Research Institute, Japan Meteorological Agency)
Abstract : Assimilation of image data such as satellite images with conventional data assimilation methods is challenging due to non-Gaussian error distribution, dimensional redundancy, and strong inter-pixel correlations. While several techniques have been proposed to address each of these issues, no single method can simultaneously handle them all. Here we propose to use a Variational AutoEncoder to resolve all three difficulties. A preliminary assessment with a toy model shows promising results.
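The idea described above can be sketched as follows: encode images into a low-dimensional latent space where errors are closer to Gaussian, perform a standard Gaussian (Kalman-type) analysis there, and decode the result. Below is a minimal illustration in which a fixed linear map stands in for the trained VAE encoder/decoder; all names and values are illustrative assumptions, not the author's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a trained VAE: a fixed linear map and its pseudo-inverse.
# In the talk's setting these would be the learned encoder/decoder networks.
n_pix, n_lat = 16, 4
W = rng.standard_normal((n_lat, n_pix))  # "encoder"
W_dec = np.linalg.pinv(W)                # "decoder"

def encode(img):
    return W @ img

def decode(z):
    return W_dec @ z

def latent_analysis(z_b, z_obs, B, r):
    """Textbook Kalman analysis in latent space (identity obs operator),
    with background covariance B and scalar obs-error variance r."""
    K = B @ np.linalg.inv(B + r * np.eye(len(z_b)))  # Kalman gain
    return z_b + K @ (z_obs - z_b)

img_truth = rng.standard_normal(n_pix)
img_bg = img_truth + 0.5 * rng.standard_normal(n_pix)  # background image
img_ob = img_truth + 0.1 * rng.standard_normal(n_pix)  # observed image

z_a = latent_analysis(encode(img_bg), encode(img_ob), B=np.eye(n_lat), r=0.01)
img_a = decode(z_a)  # analysis mapped back to pixel space
```

Because the analysis is performed entirely in the latent space, the Gaussian machinery never touches the non-Gaussian, correlated pixel errors directly.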
[04216] Implementing local ensemble transform Kalman filter to reservoir computing for improving weather forecast
Author(s) :
Mao Ouyang (Chiba University)
Shunji Kotsuki (Chiba University)
Abstract : Data assimilation (DA) improves numerical weather prediction (NWP) by combining model forecasts with observational data. Forecasts are usually obtained from a physics-based model, but recent studies have reported that reservoir computing (RC) can surrogate both small- and intermediate-scale physical models. This study implemented DA, specifically the local ensemble transform Kalman filter, in both physics-based and RC-surrogate models and compared their performance in improving forecasts.
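For reference, the core (unlocalized) ensemble transform Kalman filter analysis step that the LETKF applies in each local region can be sketched as follows. This is a generic textbook ETKF update in NumPy, not the authors' code; the toy state and observations are illustrative.

```python
import numpy as np

def etkf_analysis(X, y, H, R):
    """One ETKF analysis step (core of the LETKF, without localization).

    X : (n, m) ensemble of model states, y : (p,) observations,
    H : (p, n) observation operator, R : (p, p) obs-error covariance.
    """
    n, m = X.shape
    xm = X.mean(axis=1, keepdims=True)
    Xp = X - xm                                   # state perturbations
    Y = H @ X
    ym = Y.mean(axis=1, keepdims=True)
    Yp = Y - ym                                   # obs-space perturbations
    Rinv = np.linalg.inv(R)
    Pa = np.linalg.inv((m - 1) * np.eye(m) + Yp.T @ Rinv @ Yp)
    wm = Pa @ Yp.T @ Rinv @ (y[:, None] - ym)     # mean weight vector
    # Symmetric square root of (m-1)*Pa via eigendecomposition
    vals, vecs = np.linalg.eigh((m - 1) * Pa)
    W = vecs @ np.diag(np.sqrt(np.maximum(vals, 0.0))) @ vecs.T
    return xm + Xp @ (wm + W)                     # analysis ensemble (n, m)

rng = np.random.default_rng(1)
truth = np.array([1.0, -2.0, 0.5])
X = truth[:, None] + rng.standard_normal((3, 8))  # 8-member ensemble
H = np.eye(3)
y = truth + 0.1 * rng.standard_normal(3)
Xa = etkf_analysis(X, y, H, 0.01 * np.eye(3))
```

The transform acts in the m-dimensional ensemble space, which is what makes the filter cheap when applied locally: in an RC-surrogate setting, only the forecast step changes, while this analysis step is identical.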
[04315] Sparse optimization of inverse problems regularized with infimal-convolution-type functionals
Author(s) :
Marcello Carioni (University of Twente)
Abstract : The infimal convolution of functionals is a convexity-preserving operation that has been used to construct regularizers for inverse problems by optimally combining features of two or more functionals. In this talk, we analyze infimal convolution regularization from a sparse optimization point of view. First, we discuss optimal transport-type energies. Then, we consider the infimal convolution of a parametrized family of functionals and develop optimization methods that take advantage of sparsity in the parameters.
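For readers unfamiliar with the operation, the infimal convolution of two functionals $f$ and $g$ is defined as

```latex
(f \,\square\, g)(x) \;=\; \inf_{y}\,\bigl\{\, f(y) + g(x - y) \,\bigr\},
```

a well-known example being the second-order total generalized variation, which infimally convolves first- and second-order total-variation-type terms,

```latex
\mathrm{TGV}^2_{\alpha}(u) \;=\; \min_{v}\; \alpha_1 \|\nabla u - v\|_1 \;+\; \alpha_0 \|\mathcal{E} v\|_1,
```

where $\mathcal{E}$ denotes the symmetrized gradient.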
[04562] Efficient data-driven regularization for ill-posed inverse problems in imaging
Author(s) :
Subhadip Mukherjee (University of Bath, UK)
Marcello Carioni (University of Twente)
Ozan Öktem (KTH Royal Institute of Technology)
Carola-Bibiane Schönlieb (University of Cambridge)
Abstract : In recent years, data-driven regularization has led to impressive performance for image reconstruction problems in various scientific applications, e.g., medical imaging. We propose a new adversarial learning approach for imaging inverse problems by combining an iteratively unrolled network with a deep regularizer using ideas from optimal transport. The resulting unrolled adversarial regularization approach is shown to be provably stable, efficient in terms of image reconstruction time, and competitive with supervised methods in empirical performance.
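The variational reconstruction underlying learned-regulariser approaches of this kind can be sketched generically as unrolled gradient descent on a data-fidelity term plus a learned penalty. In the sketch below, a simple quadratic gradient stands in for the trained adversarial regulariser; the operator, data, and all parameter values are hypothetical placeholders, not the proposed network.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward operator (e.g. an undersampling matrix) and noisy data.
A = rng.standard_normal((20, 30)) / np.sqrt(20)
x_true = rng.standard_normal(30)
y = A @ x_true + 0.01 * rng.standard_normal(20)

def grad_reg(x):
    # Placeholder for the gradient of a trained regulariser R_theta;
    # here a quadratic (Tikhonov) stand-in so the sketch runs end-to-end.
    return x

def unrolled_reconstruction(y, A, n_iters=50, step=0.1, lam=0.05):
    """Unrolled gradient descent on 0.5*||Ax - y||^2 + lam * R(x)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x - step * (A.T @ (A @ x - y) + lam * grad_reg(x))
    return x

x_rec = unrolled_reconstruction(y, A)
```

Unrolling fixes the number of iterations at training time, which is what makes the reconstruction fast at test time relative to solving the variational problem to convergence.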
[05386] Inverse problems for nonlocal PDEs with applications to quantum optics
Author(s) :
John Schotland (Yale University)
Abstract : I will discuss recent work with Jeremy Hoskins and Howard Levinson on reconstruction methods for inverse problems for nonlocal PDEs. Applications to quantum optics will be discussed.
[05472] Implicit Ensemble Tangent Linear Models (IETLMs) for model differentiation
Author(s) :
Craig H Bishop (University of Melbourne)
Nathan W Eizenberg (University of Melbourne)
Abstract : Ideally, Tangent Linear Models (TLMs) predict the difference between perturbed and unperturbed non-linear forecasts of interest. The adjoint of a TLM gives the gradient of the non-linear model and is used in 4DVar data assimilation and in adjoint-based Forecast Sensitivity to Observation Impact (FSOI). The Local Ensemble Tangent Linear Model’s (LETLM) accuracy has been shown to be limited by its inability to account for implicit time stepping. Here we derive Implicit Ensemble TLMs (IETLMs) that, at most, require the number of independent ensemble members to be equal to the number of variables in the implicit computational stencil. The accuracy of the IETLM in the linear regime is confirmed using an implicitly time stepped Lorenz 96 model and a 9-member ensemble. IETLMs feature two sparse matrices: matrix N that operates on an initially unknown future time perturbation, and matrix L that operates on the current time perturbation. For ensemble perturbations in the non-linear regime, we develop a Diagonally Robust (DR) IETLM that reduces the chances of N becoming ill-conditioned. The performance of the DR IETLM was compared with traditional TLM performance using IETLM ensemble perturbations whose “Gilmour et al., 2001” measure of non-linearity ranged up to the non-linearity of operational 32 hr ensemble forecast perturbations. Over a wide range of non-linearity, the DR IETLM performance was found to match that of the traditional TLM provided the initial standard deviation of the ensemble perturbations was times the standard deviation of the test perturbations. Ideal FSOI requires the adjoint of a TLM that accurately predicts the known difference between corrected and uncorrected non-linear forecasts. The DR IETLM was found to meet this FSOI accuracy requirement much more closely than the traditional TLM when the ensemble perturbations were created by subtracting the corrected forecast from ensemble members that were centred on the uncorrected forecast. 
Finally, if time permits, a method for reducing the size of the ensemble required to produce accurate TLMs will be described.
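Based on the description above, the two sparse matrices of the IETLM define an implicit linear propagator of the form

```latex
\mathbf{N}\,\delta\mathbf{x}^{\,t+\Delta t} \;=\; \mathbf{L}\,\delta\mathbf{x}^{\,t}
\qquad\Longrightarrow\qquad
\delta\mathbf{x}^{\,t+\Delta t} \;=\; \mathbf{N}^{-1}\mathbf{L}\,\delta\mathbf{x}^{\,t},
```

so that ill-conditioning of $\mathbf{N}$, which the Diagonally Robust variant guards against, directly degrades the propagator and its adjoint.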
[05482] Advances in Integrating AI and Machine Learning with Data Assimilation for Weather Prediction
Author(s) :
Stephen G Penny (Sofar Ocean)
Abstract : Capabilities of AI/ML methods necessary for online data assimilation (DA), such as accounting for accurate model response to perturbations in initial conditions, will be discussed in the context of a variety of dynamical systems of increasing complexity, ranging from Lorenz-96, to quasi-geostrophic (QG) dynamics, to surface QG turbulence. In this context, the success of recurrent neural networks in achieving this goal will be demonstrated by integrating them with conventional DA methods such as the ensemble Kalman filter (EnKF) and 4D-Var. Dynamical invariants such as the Lyapunov spectrum will also be explored, both as a useful diagnostic and as a tool to accelerate the training of ML models. Caveats regarding training on simulated and reanalysis datasets will also be discussed.
[05497] Learned weakly convex regularisers in inverse problems
Author(s) :
Zakhar Shumaylov (University of Cambridge)
Jeremy Budd (University of Bonn)
Carola-Bibiane Schönlieb (University of Cambridge)
Abstract : In this talk, we consider the problem of learned regularisation in imaging inverse problems. After showing limitations of existing methods arising in adversarial regularisation, we propose the use of weakly convex regularisers to address these problems. We provide a construction of weakly convex input neural networks and discuss convergence guarantees for the variational problem. We provide numerical evidence illustrating their use in sparse-view and limited-angle computed tomography reconstruction.
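For context, a functional $R$ is said to be $\rho$-weakly convex if adding a sufficiently strong quadratic makes it convex,

```latex
x \;\longmapsto\; R(x) + \tfrac{\rho}{2}\,\|x\|^2 \quad \text{is convex},
```

and the associated variational reconstruction takes the standard form

```latex
\min_{x}\; \tfrac{1}{2}\,\|A x - y\|^2 \;+\; \lambda\, R(x).
```

Weak convexity relaxes the convexity constraint on the learned regulariser while retaining enough structure for convergence analysis of this problem.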