Abstract : Inverse problems involve the determination of unknown parameters from observational data and mathematical models linking those parameters to the data. Bayesian inference offers a framework in which the solution is estimated in terms of a posterior probability distribution. Oftentimes, computing the posterior requires the application of Markov chain Monte Carlo (MCMC) methods, and direct implementation of these techniques becomes challenging when the target parameters are high-dimensional or carry particular structure. This mini-symposium aims to present recent developments in sampling methods and prior/regularization models in statistics and inverse problems, including novel MCMC techniques, Monte Carlo estimators, and priors encoding structural information.
Dootika Vats (Indian Institute of Technology Kanpur)
Flávio Gonçalves (Universidade Federal de Minas Gerais)
Krzysztof Łatuszyński (University of Warwick)
Gareth Roberts (University of Warwick)
Abstract : Accept-reject based Markov chain Monte Carlo (MCMC) algorithms have traditionally utilised acceptance probabilities that can be written explicitly as a function of the ratio of the target density at the two contested points. This feature is rendered almost useless in Bayesian posteriors with unknown functional forms. We introduce a new family of MCMC acceptance probabilities with the distinguishing feature of not being a function of the ratio of the target density at the two points. We present a stable Bernoulli factory that generates events within this class of acceptance probabilities. The efficiency of our methods relies on obtaining reasonable local upper or lower bounds on the target density, and we present an application of MCMC on constrained spaces where such bounds are available.
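A minimal sketch of the classical two-coin Bernoulli factory for Barker-type acceptance events, assuming only that the unnormalised target at each point can be written as c * p with a known local bound c and a simulable Bernoulli(p) coin; the acceptance decision is then made without ever forming a density ratio. The toy target, the bounds and all names below are illustrative, and the stable factory presented in the talk differs in its details.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_coin_factory(flip_px, flip_py, c_x, c_y, rng):
    """Return 1 with probability c_y*p_y / (c_x*p_x + c_y*p_y).

    flip_px() and flip_py() return Bernoulli(p_x) and Bernoulli(p_y) draws;
    the probabilities p_x and p_y are never evaluated. c_x and c_y are known
    local bounds such that the unnormalised target at x equals c_x * p_x
    with p_x in [0, 1], and likewise at y.
    """
    while True:
        if rng.random() < c_y / (c_x + c_y):
            if flip_py():          # proposal-side coin
                return 1           # accept event
        else:
            if flip_px():          # current-state coin
                return 0           # reject event
        # neither coin succeeded this round; try again

# Toy illustration (hypothetical target): pi(x) proportional to exp(-x^2/2),
# written as c(x) * p(x) with c(x) = 1 and p(x) = exp(-x^2/2) <= 1; here the
# p-coins can be simulated directly because p(x) happens to be computable.
def make_coin(x, rng):
    return lambda: rng.random() < np.exp(-0.5 * x**2)

x, y = 0.5, 1.0                    # current and proposed states
accept = two_coin_factory(make_coin(x, rng), make_coin(y, rng), 1.0, 1.0, rng)
print("accept proposal:", accept)
```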
[03753] Sampling of Student's t and stable priors for edge-preserving Bayesian inversion
Format : Talk at Waseda University
Author(s) :
Felipe Uribe (Lappeenranta-Lahti University of Technology)
Abstract : The identification of sharp features in the solution is a critical aspect of many large-scale Bayesian inverse problems. Markov random field (MRF) priors based on heavy-tailed distributions have proven effective in achieving piecewise constant behavior. This study reexamines the use of Student's t and alpha-stable MRFs in this context. To facilitate computation of the resulting posterior distribution, we propose a scale mixture formulation of the MRF priors. This formulation has the advantage of expressing the prior as a conditionally Gaussian distribution that depends on auxiliary hyperparameters. We discuss a Gibbs sampler for the resulting hierarchical formulation of the Bayesian inverse problem. The approach is illustrated using applications from imaging science.
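A minimal sketch of the scale-mixture idea for the Student's t case, assuming a toy 1D deconvolution-type linear model with Gaussian noise: the t-distributed increments are written as Gaussians with inverse-gamma auxiliary variances, so the Gibbs sampler alternates a conditionally Gaussian draw of the unknown with conjugate updates of the auxiliary variables. All dimensions, operators and hyperparameters are illustrative; the alpha-stable case and the authors' implementation are not reproduced here.

```python
import numpy as np
from scipy.stats import invgamma

rng = np.random.default_rng(1)

# Toy 1D inverse problem: y = A x + noise (all settings illustrative)
n = 60
t = np.linspace(0, 1, n)
x_true = (t > 0.3).astype(float) - 0.5 * (t > 0.7)           # piecewise constant
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.03) ** 2)    # Gaussian blur
A /= A.sum(axis=1, keepdims=True)
sigma_noise = 0.01
y = A @ x_true + sigma_noise * rng.standard_normal(n)

# First-difference operator encoding the Markov random field structure
D = np.diff(np.eye(n), axis=0)                                # (n-1) x n

# Student's t increments: d_i | w_i ~ N(0, w_i), w_i ~ InvGamma(nu/2, nu*s2/2)
nu, s2 = 1.0, 0.01 ** 2

# Gibbs sampler alternating x | w, y  and  w | x
n_samples, x = 2000, np.zeros(n)
w = np.full(n - 1, s2)
X = np.zeros((n_samples, n))
for k in range(n_samples):
    # x | w, y is Gaussian with precision A^T A / sigma^2 + D^T diag(1/w) D
    Q = A.T @ A / sigma_noise**2 + D.T @ (D / w[:, None])
    L = np.linalg.cholesky(Q)
    mu = np.linalg.solve(Q, A.T @ y / sigma_noise**2)
    x = mu + np.linalg.solve(L.T, rng.standard_normal(n))
    # w_i | x is inverse-gamma by conjugacy with the Gaussian increment
    d = D @ x
    w = invgamma.rvs(a=(nu + 1) / 2, scale=(nu * s2 + d**2) / 2, random_state=rng)
    X[k] = x

print("posterior mean error:", np.linalg.norm(X[500:].mean(axis=0) - x_true))
```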
[04286] Comparison of pseudo-marginal Markov chains via weak Poincaré inequalities
Format : Talk at Waseda University
Author(s) :
Andi Qi Wang (University of Warwick)
Christophe Andrieu (University of Bristol)
Anthony Lee (University of Bristol)
Sam Power (University of Bristol)
Abstract : I will discuss the use of a certain class of functional inequalities known as weak Poincaré inequalities to bound the convergence of Markov chains to equilibrium. This enables the straightforward and transparent derivation of subgeometric convergence bounds for methods such as pseudo-marginal MCMC for intractable likelihoods, which have been used extensively in the context of Bayesian inverse problems.
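For orientation, one classical form of a weak Poincaré inequality (going back to Röckner and Wang) for a π-reversible Markov kernel with Dirichlet form E reads as follows; the precise variant and norms used in this work may differ.

```latex
% Weak Poincaré inequality: for all r > 0 and all bounded f,
\[
  \operatorname{Var}_{\pi}(f) \;\le\; \beta(r)\,\mathcal{E}(f,f) \;+\; r\,\|f\|_{\infty}^{2},
\]
% with a non-increasing rate function \beta : (0,\infty) \to [0,\infty).
% A bounded \beta recovers the usual (strong) Poincaré inequality and geometric
% convergence, while \beta(r) \to \infty as r \to 0 yields subgeometric
% convergence bounds whose rate is governed by the growth of \beta.
```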
[04985] CUQIpy: Computational Uncertainty Quantification for Inverse Problems in Python
Format : Talk at Waseda University
Author(s) :
Nicolai André Brogaard Riis (Technical University of Denmark)
Abstract : We present CUQIpy, a versatile open-source Python package for computational uncertainty quantification (UQ) in inverse problems using a Bayesian framework. This talk highlights CUQIpy's high-level modeling framework with concise syntax, enabling intuitive problem specification, and showcases its efficient sampling strategies, automatic sampler selection, and test problem library. Designed to handle large-scale problems and support various probability distributions, CUQIpy streamlines the UQ process, serving as a powerful tool for a diverse set of inverse problems.
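As a rough indication of what such a high-level interface automates, the sketch below writes out, in plain NumPy, the linear-Gaussian computation behind a toy 1D deconvolution problem (forward model, Gaussian prior and likelihood, posterior sampling, credible band). This is not CUQIpy code, and the sizes and hyperparameters are illustrative; in CUQIpy the same workflow is specified through its distribution and problem classes, for which the package documentation gives the exact syntax.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1D deconvolution problem (illustrative sizes and parameters)
n = 50
t = np.linspace(0, 1, n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.05) ** 2)    # blurring operator
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(2 * np.pi * t)
sigma = 0.02
y = A @ x_true + sigma * rng.standard_normal(n)

# Bayesian model: prior x ~ N(0, delta^{-1} I), likelihood y | x ~ N(Ax, sigma^2 I)
delta = 1.0
Q = A.T @ A / sigma**2 + delta * np.eye(n)        # posterior precision
mu = np.linalg.solve(Q, A.T @ y / sigma**2)       # posterior mean
L = np.linalg.cholesky(Q)

# Posterior samples and pointwise 95% credible band
samples = mu + np.linalg.solve(L.T, rng.standard_normal((n, 1000))).T
band_lo, band_hi = np.percentile(samples, [2.5, 97.5], axis=0)
print("fraction of grid points where the true signal lies inside the 95% band:",
      np.mean((x_true >= band_lo) & (x_true <= band_hi)))
```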
[05558] Gaussian likelihoods for non-Gaussian data
Format : Talk at Waseda University
Author(s) :
Heikki Haario (Lappeenranta-Lahti University of Technology)
Abstract : Various modelling situations (chaotic dynamics, stochastic differential equations, random patterns generated by Turing reaction-diffusion systems, cellular automata) share the feature that a fixed model parameter corresponds to a family of solutions rather than a single deterministic one. This may be due to extreme sensitivity with respect to the initial values, randomized or unknown initial values, or explicit stochasticity of the system. Standard methods based on directly measuring the distance between model and data are no longer available. We discuss a unified construction of Gaussian likelihoods for such 'intractable' situations, where the raw data is far from Gaussian. Examples cover the modelling situations listed above.
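A toy sketch of the general recipe under simplifying assumptions: map each data set to a vector of summary statistics that is approximately Gaussian (here a correlation-integral-type feature, the fraction of pairwise distances below a set of radii), estimate the feature mean and covariance from an ensemble of simulations at a candidate parameter, and evaluate a Gaussian log-likelihood of the observed features. The noisy logistic map, the radii and all other settings are illustrative, and the construction discussed in the talk differs in its details.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)

def simulate(theta, n=200):
    """Toy stochastic model: a noisy logistic map (illustrative only)."""
    x = np.empty(n)
    x[0] = rng.random()
    for i in range(n - 1):
        x[i + 1] = np.clip(theta * x[i] * (1 - x[i])
                           + 0.001 * rng.standard_normal(), 0.0, 1.0)
    return x

def features(x, radii):
    """Correlation-integral-type summary: fraction of pairwise distances < r."""
    d = pdist(x.reshape(-1, 1))
    return np.array([np.mean(d < r) for r in radii])

radii = np.linspace(0.05, 0.8, 8)

# "Observed" data generated at an unknown true parameter (here 3.9)
x_obs = simulate(3.9)

def gaussian_loglik(theta, n_ens=100):
    """Gaussian log-likelihood of the observed features under candidate theta.

    Feature mean and covariance are estimated from an ensemble of simulations
    at theta, so no pointwise model-versus-data distance is ever needed.
    """
    F = np.array([features(simulate(theta), radii) for _ in range(n_ens)])
    mu, cov = F.mean(axis=0), np.cov(F.T) + 1e-9 * np.eye(len(radii))
    r = features(x_obs, radii) - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (r @ np.linalg.solve(cov, r) + logdet)

print("log-likelihood at theta = 3.9:", gaussian_loglik(3.9))
print("log-likelihood at theta = 3.5:", gaussian_loglik(3.5))
```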
[05515] Multi-output multilevel best linear unbiased estimators via semidefinite programming
Format : Talk at Waseda University
Author(s) :
Matteo Croci (University of Texas at Austin)
Karen E. Willcox (University of Texas at Austin)
Stephen J. Wright (University of Wisconsin - Madison)
Abstract : Multifidelity forward uncertainty quantification (UQ) problems often involve multiple quantities of interest and heterogeneous models (e.g., different grids, equations, dimensions, physics, surrogate and reduced-order models). While computational efficiency is key in this context, multi-output strategies in multilevel/multifidelity methods are either sub-optimal or non-existent. In this talk we extend multilevel best linear unbiased estimators (MLBLUE) to multi-output forward UQ problems and present new semidefinite programming formulations for their optimal setup. These formulations yield not only the optimal number of samples required, but also the optimal selection of low-fidelity models to use. While existing MLBLUE approaches are single-output only and require a non-trivial nonlinear optimization procedure, the new multi-output formulations can be solved reliably and efficiently. We demonstrate the efficacy of the new methods and formulations in practical UQ problems with model heterogeneity.
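A minimal sketch of the estimator structure that MLBLUE builds on, under strong simplifications: two correlated models, a known model covariance, and a hand-picked sample allocation over two sample groups, combined by generalized least squares into a best linear unbiased estimate of the high-fidelity mean. The multi-output formulations in the talk handle many models and outputs and choose the groups and sample sizes by solving a semidefinite program, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two correlated models (illustrative): Y0 = high-fidelity, Y1 = cheap low-fidelity
C = np.array([[1.0, 0.9],
              [0.9, 1.0]])                  # model output covariance (known here)
mean_true = np.array([2.0, 1.7])
Lc = np.linalg.cholesky(C)

def sample(n):
    return mean_true + (Lc @ rng.standard_normal((2, n))).T

# Sample groups: both models evaluated jointly (expensive), cheap model alone
n_joint, n_cheap = 50, 5000
Y_joint = sample(n_joint)                   # columns: (Y0, Y1)
Y_cheap = sample(n_cheap)[:, 1]             # only Y1 is evaluated here

# Group sample means, their covariances, and which means each group observes
z = [Y_joint.mean(axis=0), np.array([Y_cheap.mean()])]
R = [np.eye(2), np.array([[0.0, 1.0]])]
S = [C / n_joint, np.array([[C[1, 1] / n_cheap]])]

# Generalized least squares = best linear unbiased estimator of both means
Phi = sum(Rk.T @ np.linalg.solve(Sk, Rk) for Rk, Sk in zip(R, S))
rhs = sum(Rk.T @ np.linalg.solve(Sk, zk) for Rk, zk, Sk in zip(R, z, S))
m_blue = np.linalg.solve(Phi, rhs)
var_blue = np.linalg.inv(Phi)

print("BLUE of high-fidelity mean:", m_blue[0])
print("its variance:", var_blue[0, 0], "vs crude MC variance:", C[0, 0] / n_joint)
```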
[03923] Simulating rare events with Stein variational gradient descent
Format : Talk at Waseda University
Author(s) :
Max Ehre (Technical University of Munich)
Iason Papaioannou (Technical University of Munich)
Daniel Straub (Technical University of Munich)
Abstract : Stein variational gradient descent (SVGD) is an approach to sampling from Bayesian posterior distributions. We repurpose SVGD for simulating rare events with probabilities of the order of 10^{-5} to 10^{-12}. We employ a tempered version of SVGD to sample from an approximately optimal importance sampling density. Several examples are used to benchmark the efficacy of our approach against state-of-the-art methods for estimating rare event probabilities.
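A minimal sketch of plain SVGD with an RBF kernel and a median-type bandwidth heuristic, targeting a standard normal density multiplied by a smoothed indicator of a failure domain {g(x) <= 0}, i.e. a smoothed version of the optimal importance sampling density. The limit-state function, smoothing parameter and step size are illustrative; the tempering schedule and the probability estimator used in the talk are not reproduced here.

```python
import numpy as np
from scipy.stats import norm
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(5)

def g(x):
    """Illustrative limit-state function; the failure domain is {g(x) <= 0}."""
    return 3.5 - x.sum(axis=-1) / np.sqrt(x.shape[-1])

def grad_log_target(x, tau):
    """Gradient of log[ N(x; 0, I) * Phi(-g(x)/tau) ], a smoothed IS density."""
    gx = g(x)[:, None]
    dg = -np.ones_like(x) / np.sqrt(x.shape[-1])             # gradient of g(x)
    ratio = norm.pdf(-gx / tau) / np.maximum(norm.cdf(-gx / tau), 1e-300)
    return -x + ratio * (-dg / tau)

def svgd(x, tau, n_iter=1000, step=0.2):
    """Plain SVGD with RBF kernel; x holds the particles, one per row."""
    n = x.shape[0]
    for _ in range(n_iter):
        sq = squareform(pdist(x, "sqeuclidean"))
        h = np.median(sq) / np.log(n + 1) + 1e-12             # bandwidth heuristic
        K = np.exp(-sq / h)
        grad = grad_log_target(x, tau)
        repulsion = (K.sum(axis=1)[:, None] * x - K @ x) * 2.0 / h
        x = x + step * (K @ grad + repulsion) / n
    return x

# Push particles from the prior toward the smoothed importance sampling density
particles = svgd(rng.standard_normal((100, 2)), tau=0.5)
print("fraction of particles in the failure domain:", np.mean(g(particles) <= 0))
```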