Abstract : In an inverse problem, we want to come up with a good description of a phenomenon from bad measurements. Industrial-scale inverse problems include, in particular, medical and biological imaging, structural health monitoring, and process monitoring. Generally, the inverse problem takes the form of an ill-posed operator equation, linear or nonlinear. To solve such a problem, it is often given a variational formulation to which regularisation is added to promote desirable solution features. The solution of the inverse problem then depends on efficient optimisation methods. The talks in this minisymposium cover recent research in the area. They present general-purpose optimisation algorithms and numerical techniques, as well as the application of such methods to inverse problems.
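As a concrete toy instance of such a variational formulation, consider one-dimensional deblurring with classical Tikhonov regularisation. The sketch below is purely illustrative and not taken from any of the talks; the blur width, noise level, and regularisation parameter alpha are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
t = np.linspace(0, 1, n)

# Forward operator: a Gaussian blur, a classic smoothing (hence ill-posed) operator.
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03 ** 2))
K /= K.sum(axis=1, keepdims=True)

x_true = (t > 0.3).astype(float) - (t > 0.7).astype(float)   # box signal
y = K @ x_true + 1e-3 * rng.standard_normal(n)               # "bad measurements"

# Naive inversion amplifies the noise; Tikhonov regularisation damps it:
# minimise ||K x - y||^2 + alpha ||x||^2  <=>  solve (K^T K + alpha I) x = K^T y.
x_naive = np.linalg.solve(K, y)
alpha = 1e-3
x_reg = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y)

print("naive relative error   :", np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true))
print("Tikhonov relative error:", np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))
```

The quadratic penalty is only the simplest choice; the talks below replace it with learned, sparsity-promoting, hierarchical, or Bregman-distance regularisers.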
[03137] Sparsity-promoting regularization for inverse problems via statistical learning
Format : Talk at Waseda University
Author(s) :
Luca Ratti (University of Bologna)
Giovanni S Alberti (University of Genoa)
Ernesto De Vito (University of Genoa)
Tapio Helin (Lappeenranta-Lahti University of Technology)
Matti Lassas (University of Helsinki)
Matteo Santacesaria (University of Genoa)
Abstract : In this talk, I will discuss a strategy, based on statistical learning, to design variational regularization functionals for ill-posed linear inverse problems. The proposed approach first restricts the choice to a parametric class of functionals and then searches for the optimal regularizer inside it, combining model-based and data-driven information. I will first recap the main results in the case of generalized Tikhonov functionals, and then focus on a class of sparsity-promoting regularizers.
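The parametric-class idea can be caricatured in a few lines: fix the class $\{\lambda\|\cdot\|_1 : \lambda > 0\}$ of sparsity-promoting regularizers and select the parameter empirically from training pairs. Everything below (the ISTA solver, the synthetic training distribution, the grid search) is an illustrative assumption, not the estimator analysed in the talk.

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Minimise 0.5 * ||A x - y||^2 + lam * ||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-term gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft thresholding
    return x

rng = np.random.default_rng(1)
m, n, n_train = 40, 80, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Training pairs (x_i, y_i) drawn from an assumed sparse ground-truth distribution.
def sample_pair():
    x = np.zeros(n)
    x[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
    return x, A @ x + 0.01 * rng.standard_normal(m)

train = [sample_pair() for _ in range(n_train)]

# "Learning" step: pick the parameter in the class that minimises the
# empirical reconstruction error over the training set.
grid = np.logspace(-4, 0, 15)
errs = [np.mean([np.linalg.norm(ista(A, y, lam) - x) for x, y in train]) for lam in grid]
print("learned lambda:", grid[int(np.argmin(errs))])
```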
[03391] Online Optimization for Dynamic Electrical Impedance Tomography
Format : Talk at Waseda University
Author(s) :
Jyrki Jauhiainen (University of Helsinki)
Tuomo Valkonen (Escuela Politécnica Nacional)
Neil Dizon (University of Helsinki)
Abstract : Online optimization generally studies the convergence of optimization methods as more data is introduced into the problem; think of deep learning as more training samples become available. We adapt the idea to dynamic inverse problems that naturally evolve in time. We introduce an improved primal-dual online method specifically suited to these problems, and demonstrate its performance on dynamic monitoring of electrical impedance tomography.
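One way to picture the online setting: run a single primal-dual step per time frame, so the iterate tracks a drifting solution rather than solving each static problem to convergence. The sketch below applies a plain Chambolle-Pock-type update to a synthetic drifting least-squares problem; the drift model, step sizes, and quadratic regulariser are assumptions, and the improved online method of the talk is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, T = 30, 20, 400
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Hypothetical slowly drifting target: the "dynamic" part of the inverse problem.
def x_star(t):
    return np.sin(0.01 * t + np.arange(n))

tau = sigma = 0.9 / np.linalg.norm(A, 2)   # step sizes with tau * sigma * ||A||^2 < 1
mu = 0.1                                   # small quadratic regularisation
x, x_bar, y = np.zeros(n), np.zeros(n), np.zeros(m)

for t in range(T):
    b_t = A @ x_star(t) + 0.01 * rng.standard_normal(m)   # new data each frame
    # One primal-dual step per time instant: the iterate tracks the drifting
    # solution instead of solving each static problem to convergence.
    y = (y + sigma * (A @ x_bar - b_t)) / (1.0 + sigma)   # prox of F* for F = 0.5||.-b_t||^2
    x_new = (x - tau * (A.T @ y)) / (1.0 + tau * mu)      # prox of G = 0.5 * mu * ||x||^2
    x_bar = 2 * x_new - x                                 # extrapolation
    x = x_new

print("tracking error at final frame:", np.linalg.norm(x - x_star(T - 1)))
```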
[03177] Primal-Dual Methods with Adjoint Mismatch
Format : Talk at Waseda University
Author(s) :
Felix Schneppe (Technische Universität Braunschweig)
Abstract : Primal-dual algorithms are widespread methods to solve saddle-point problems of the form $\min_x \max_y G(x) + \langle Ax, y \rangle - F^*(y).$ However, in practical applications like computed tomography the adjoint operator is often replaced by a computationally more efficient approximation. This leads to an adjoint mismatch in the algorithm.
In this talk, we analyse the convergence of different primal-dual algorithms and prove conditions under which the existence of a solution can still be guaranteed.
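The mismatch is easy to emulate numerically: run the same primal-dual iteration but replace $A^T$ in the primal step with a perturbed surrogate $B \neq A^T$, as a cheap ray-driven/pixel-driven pair would in computed tomography. The following sketch (strongly convex toy problem and Gaussian perturbation, both assumptions) shows the iteration still settling, but at a biased fixed point.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 30, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)
B = A.T + 0.01 * rng.standard_normal((n, m))   # mismatched surrogate for the adjoint
b = A @ rng.standard_normal(n)

# Saddle-point problem with G(x) = mu/2 ||x||^2 and F(z) = 0.5 ||z - b||^2.
mu = 0.1
tau = sigma = 0.9 / np.linalg.norm(A, 2)
x = x_bar = np.zeros(n)
y = np.zeros(m)

for _ in range(5000):
    y = (y + sigma * (A @ x_bar - b)) / (1.0 + sigma)  # dual step still uses A
    x_new = (x - tau * (B @ y)) / (1.0 + tau * mu)     # primal step uses mismatched B
    x_bar = 2 * x_new - x                              # extrapolation
    x = x_new

# Compare the fixed point against the solution of the unperturbed problem.
x_exact = np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b)
print("bias induced by the adjoint mismatch:", np.linalg.norm(x - x_exact))
```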
[02036] Material decomposition in multi-energy X-Ray tomography with Inner Product Regularizer
Format : Online Talk on Zoom
Author(s) :
Salla Maaria Latva-Äijö (University of Helsinki)
Abstract : Dual-energy X-ray tomography is considered in a context where the imaged target consists of two or more distinct materials. The materials are assumed to be possibly intertwined in space, but at any given location only one material is present. Further, as many X-ray energies as materials are used, chosen so that there is a clear difference in the spectral dependence of the attenuation coefficients of the materials.
A novel regularizer is presented for the inverse problem of reconstructing separate tomographic images for the two materials. A combination of two ingredients, (a) a non-negativity constraint and (b) a penalty term containing the inner product between the two material images, promotes the presence of at most one material in a given pixel. A preconditioned interior point method is derived for the minimization of the regularization functional.
Numerical tests with digital phantoms suggest that the new algorithm outperforms the baseline method, Joint Total Variation regularization, in terms of the number of pixels whose material is correctly identified. While the method is tested only in a two-dimensional setting with two materials and two energies, the approach readily generalizes to three dimensions and more materials; the number of materials just needs to match the number of energies used in imaging.
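To see why the inner product penalty separates materials, note that for non-negative images $u_1, u_2 \ge 0$ the term $\langle u_1, u_2 \rangle = \sum_j u_1(j)\,u_2(j)$ vanishes exactly when no pixel carries both materials. The sketch below minimises a two-energy data fit plus this penalty with a simple projected-gradient loop, a stand-in for the preconditioned interior point method of the talk; the geometry, attenuation matrix, and weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
u1 = np.zeros(n); u1[:50] = 1.0          # material 1 occupies the left half
u2 = np.zeros(n); u2[50:] = 1.0          # material 2 the right half: no overlap

c = np.array([[1.0, 0.3],                # assumed attenuation coefficients:
              [0.4, 1.2]])               # rows = energies, columns = materials
A = rng.standard_normal((80, n)) / np.sqrt(80)   # shared measurement geometry
y1 = A @ (c[0, 0] * u1 + c[0, 1] * u2) + 0.01 * rng.standard_normal(80)
y2 = A @ (c[1, 0] * u1 + c[1, 1] * u2) + 0.01 * rng.standard_normal(80)

beta = 0.5                               # weight of the inner product penalty
step = 1.0 / (np.linalg.norm(A, 2) ** 2 * np.linalg.norm(c, 2) ** 2 + beta)
v1 = np.zeros(n); v2 = np.zeros(n)
for _ in range(3000):
    r1 = A @ (c[0, 0] * v1 + c[0, 1] * v2) - y1
    r2 = A @ (c[1, 0] * v1 + c[1, 1] * v2) - y2
    # Gradients of the data terms plus beta * <v1, v2>; projecting onto the
    # non-negative orthant enforces constraint (a).
    g1 = c[0, 0] * (A.T @ r1) + c[1, 0] * (A.T @ r2) + beta * v2
    g2 = c[0, 1] * (A.T @ r1) + c[1, 1] * (A.T @ r2) + beta * v1
    v1 = np.maximum(v1 - step * g1, 0.0)
    v2 = np.maximum(v2 - step * g2, 0.0)

print("pixels assigned to both materials:", int(np.sum((v1 > 1e-3) & (v2 > 1e-3))))
```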
[05012] Multiscale hierarchical decomposition methods for ill-posed problems
Format : Talk at Waseda University
Author(s) :
Tobias Wolf (Klagenfurt University)
Elena Resmerita (University of Klagenfurt)
Stefan Kindermann (Johannes Kepler University Linz)
Abstract : The Multiscale Hierarchical Decomposition Method (MHDM) is a popular iterative method based on total variation minimization for mathematical imaging. We consider the method in a more general framework and expand existing results to the case when some classes of convex and nonconvex penalties are employed. Moreover, we discuss conditions under which the iterates of the MHDM agree with solutions of Tikhonov regularization corresponding to suitable regularization parameters. We illustrate our results with numerical examples.
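The basic MHDM recursion is easy to state: decompose the data as $f \approx u_0 + u_1 + \dots$, where each $u_k$ is extracted from the current residual by a variational step whose fidelity weight doubles from scale to scale. The sketch below uses a quadratic smoothness penalty as a stand-in for total variation and a 1D signal for brevity; both substitutions are ours, not the talk's.

```python
import numpy as np

def variational_step(f, lam, D):
    """argmin_u  lam * ||u - f||^2 + ||D u||^2  (quadratic stand-in for TV)."""
    n = len(f)
    return np.linalg.solve(lam * np.eye(n) + D.T @ D, lam * f)

rng = np.random.default_rng(5)
n = 200
t = np.linspace(0, 1, n)
f_clean = np.sign(np.sin(4 * np.pi * t))                  # piecewise-constant signal
f = f_clean + 0.2 * rng.standard_normal(n)

D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]                  # finite-difference operator

# MHDM: repeatedly extract the next scale from the residual, doubling the
# fidelity weight each time so finer and finer details are captured.
lam, residual, parts = 0.5, f.copy(), []
for k in range(8):
    u_k = variational_step(residual, lam, D)
    parts.append(u_k)
    residual = residual - u_k
    lam *= 2.0

reconstruction = np.sum(parts, axis=0)
print("relative error:", np.linalg.norm(reconstruction - f_clean) / np.linalg.norm(f_clean))
```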
[05199] Multiscale hierarchical decomposition methods for images corrupted by multiplicative noise
Format : Talk at Waseda University
Author(s) :
Elena Resmerita (University of Klagenfurt)
Joel Barnett (UCLA)
Wen Li (Fordham University)
Luminita Vese (UCLA)
Abstract : Recovering images corrupted by multiplicative noise is a well-known challenging task. Motivated by the success of multiscale hierarchical decomposition methods (MHDM) in image processing, we adapt a variety of both classical and new multiplicative noise removal models to the MHDM form. Theoretical and numerical results show that the MHDM techniques are effective in several situations.
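One classical route to such adaptations, sketched under our own assumptions below, is the logarithmic one: for a multiplicative model $f = u\,\eta$ the decomposition $f \approx u_0 u_1 \cdots$ becomes an additive MHDM in log space, so the additive machinery applies verbatim. The gamma noise, quadratic surrogate penalty, and parameter schedule are illustrative choices only.

```python
import numpy as np

def smooth(f, lam):
    """Quadratic-penalty surrogate for one variational denoising step."""
    n = len(f)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
    return np.linalg.solve(lam * np.eye(n) + D.T @ D, lam * f)

rng = np.random.default_rng(6)
n = 200
u_true = 1.5 + np.sign(np.sin(4 * np.pi * np.linspace(0, 1, n)))   # positive signal
f = u_true * rng.gamma(shape=25.0, scale=1.0 / 25.0, size=n)       # multiplicative noise

# Take logarithms so the multiplicative decomposition f ~ u_0 * u_1 * ...
# becomes an additive MHDM on log f, then exponentiate the sum at the end.
lam, residual, log_parts = 0.5, np.log(f), []
for k in range(8):
    w_k = smooth(residual, lam)
    log_parts.append(w_k)
    residual -= w_k
    lam *= 2.0

u_rec = np.exp(np.sum(log_parts, axis=0))
print("relative error:", np.linalg.norm(u_rec - u_true) / np.linalg.norm(u_true))
```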
[03502] A Lifted Bregman Formulation for the Inversion of Deep Neural Networks
Format : Talk at Waseda University
Author(s) :
Xiaoyu Wang (University of Cambridge)
Martin Benning (Queen Mary University of London, London)
Abstract : We propose a novel framework for the regularised inversion of deep neural networks. The framework is based on the authors' recent work on the lifted Bregman formulation for training feed-forward neural networks without differentiating the activation functions. We propose a family of variational regularisations based on Bregman distances, present theoretical results and support their practical application with numerical examples. In particular, we present the first convergence result (to the best of our knowledge) for the regularised inversion of a single-layer perceptron that only assumes that the solution of the inverse problem is in the range of the regularisation operator, and we show that the regularised inverse provably converges to the true inverse as the measurement errors converge to zero.
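For orientation, here is what a plain, non-lifted variational inversion of a single-layer perceptron looks like: minimise $\frac12\|\sigma(Wx)-y\|^2 + \alpha\|x\|^2$ by subgradient descent through the ReLU. This is the baseline the lifted Bregman formulation is designed to improve on, since it differentiates the activation; the network size, noise level, and $\alpha$ are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 10, 40
W = rng.standard_normal((m, n)) / np.sqrt(n)
relu = lambda z: np.maximum(z, 0.0)

x_true = rng.standard_normal(n)
y = relu(W @ x_true) + 0.001 * rng.standard_normal(m)   # noisy network output

# Baseline inversion: min_x 0.5 * ||relu(W x) - y||^2 + alpha * ||x||^2,
# by subgradient descent (the ReLU is differentiated where it is active).
alpha, step = 1e-3, 0.1
x = np.zeros(n)
for _ in range(5000):
    z = W @ x
    grad = W.T @ ((relu(z) - y) * (z > 0)) + 2 * alpha * x
    x -= step * grad

print("relative inversion error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```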
[03398] Stable phase retrieval with mirror descent
Author(s) :
Myriam Zerrad (Aix-Marseille Univ, CNRS, Centrale Marseille, Institut Fresnel, Marseille)
Claude Amra (Aix-Marseille Univ, CNRS, Centrale Marseille, Institut Fresnel, Marseille)
Abstract : We aim to reconstruct an $n$-dimensional real vector from $m$ phaseless measurements corrupted by additive noise. We use the mirror descent (or Bregman gradient descent) algorithm to deal with noisy measurements and prove that the procedure is robust to (small enough) noise.
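A sketch of the kind of scheme in play, with our own parameter choices: for the quartic loss $f(x)=\frac{1}{4m}\sum_i(\langle a_i,x\rangle^2-y_i)^2$, mirror descent with the kernel $\psi(x)=\frac14\|x\|^4+\frac12\|x\|^2$ replaces the Euclidean gradient step by $\nabla\psi(x^{k+1})=\nabla\psi(x^k)-\gamma\nabla f(x^k)$, which suits the loss because $f$ is smooth relative to $\psi$. The initialisation, step size, and problem sizes below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n, m = 15, 300
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = (A @ x_true) ** 2 + 0.01 * rng.standard_normal(m)    # phaseless, noisy data

# Mirror map psi(x) = 0.25 ||x||^4 + 0.5 ||x||^2, so grad_psi(x) = (||x||^2 + 1) x.
def grad_psi_inv(v):
    r = np.linalg.norm(v)
    if r == 0.0:
        return v
    t = np.real(np.roots([1.0, 0.0, 1.0, -r])).max()     # solve t^3 + t = r, t >= 0
    return (t / r) * v                                   # rescale v to norm t

# Spectral initialisation: leading eigenvector of (1/m) sum_i y_i a_i a_i^T.
S = (A.T * y) @ A / m
_, V = np.linalg.eigh(S)
x = V[:, -1] * np.sqrt(max(np.mean(y), 0.0))

gamma = 0.1                                              # mirror descent step size
for _ in range(2000):
    z = A @ x
    grad_f = A.T @ ((z ** 2 - y) * z) / m                # gradient of the quartic loss
    x = grad_psi_inv((np.linalg.norm(x) ** 2 + 1) * x - gamma * grad_f)

err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
print("reconstruction error (up to global sign):", err / np.linalg.norm(x_true))
```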