Registered Data

[02349] Deep Implicit and Explicit Models for Inverse Problems: Hybrid Data-Driven Models, Neural ODEs, PDEs and Beyond

  • Session Date & Time :
    • 02349 (1/3) : 3E (Aug.23, 17:40-19:20)
    • 02349 (2/3) : 4C (Aug.24, 13:20-15:00)
    • 02349 (3/3) : 4D (Aug.24, 15:30-17:10)
  • Type : Proposal of Minisymposium
  • Abstract : In this minisymposium, we will discuss current developments in implicit and explicit deep learning models for inverse problems. Explicit deep learning models are built by stacking several discrete layers to solve a given downstream task. An interesting complementary perspective is that of implicit models, in which one specifies the conditions the solution must satisfy. Within this context, our session will cover these two paradigms through new developments in Hybrid Models, Neural ODEs, PDEs and Beyond. Moreover, we will discuss interesting real-world applications for a wide range of inverse problems.
  • Organizer(s) : Angelica Aviles-Rivero, Raymond H. Chan
  • Classification : 68T07, 65K10, 49N45
  • Speakers Info :
    • Noemie Debroux (Université Clermont Auvergne)
    • Nadja Gruber (University of Innsbruck)
    • Chun-Wun Cheng (City University of Hong Kong (CityU))
    • Rihuan Ke (University of Bristol)
    • Andrey Bryutkin (University of Cambridge)
    • Davide Bianchi (Harbin Institute of Technology)
    • Tieyong Zeng (The Chinese University of Hong Kong)
    • Ronald Lui (The Chinese University of Hong Kong)
  • Talks in Minisymposium :
    • [03249] Graph Laplacian and neural networks for inverse problems in imaging: graphLaNet
      • Author(s) :
        • Davide Bianchi (Harbin Institute of Technology (Shenzhen))
      • Abstract : In imaging problems, the graph Laplacian is a very effective regularization operator when a good approximation of the image to restore is available. This paper studies a Tikhonov method that embeds the graph Laplacian operator in an $\ell_1$-norm penalty term. The novelty is that the graph Laplacian is built upon a first approximation of the solution obtained as the output of a trained neural network. Theory and numerical examples demonstrate the efficacy of the method.
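As a minimal illustration of the construction described above, the sketch below builds a dense graph Laplacian L = D - W with Gaussian edge weights from a 1-D signal. In graphLaNet the weights come from a trained network's first approximation of the image; here the array `approx` simply stands in for that approximation, and the weight formula and `sigma` value are illustrative assumptions, not the paper's exact choices.

```python
import math

def graph_laplacian(values, sigma=0.5):
    """Dense graph Laplacian L = D - W of a 1-D signal, with Gaussian
    edge weights w_ij = exp(-(v_i - v_j)^2 / sigma^2). Similar pixel
    values produce strong edges, so the Laplacian penalizes differences
    within (assumed) homogeneous regions more than across edges."""
    n = len(values)
    W = [[math.exp(-((values[i] - values[j]) ** 2) / sigma ** 2)
          if i != j else 0.0
          for j in range(n)] for i in range(n)]
    D = [sum(row) for row in W]           # degree of each node
    return [[(D[i] if i == j else 0.0) - W[i][j] for j in range(n)]
            for i in range(n)]

# A rough restoration with two flat regions stands in for the network output.
approx = [0.1, 0.12, 0.9, 0.88]
L = graph_laplacian(approx)
# Rows of a graph Laplacian sum to zero: constants lie in its kernel.
row_sums = [abs(sum(row)) for row in L]
```

Pixels with similar values (0.1 and 0.12) get a strongly negative off-diagonal entry, i.e. a strong smoothing coupling, while dissimilar pixels (0.1 and 0.9) are only weakly coupled.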
    • [03252] Spherical Image Inpainting with Frame Transformation and Data-driven Prior Deep Networks
      • Author(s) :
        • Tieyong Zeng (The Chinese University of Hong Kong)
      • Abstract : Spherical image processing has been widely applied in many important fields. In this talk, we focus on the challenging task of spherical image inpainting with a deep learning-based regularizer. We employ a fast directional spherical Haar framelet transform and develop a novel optimization framework based on a sparsity assumption. Furthermore, by employing a progressive encoder-decoder architecture, a new and better-performing deep CNN denoiser is carefully designed and works as an implicit regularizer. Finally, we use a plug-and-play method to handle the proposed optimization model, which can be implemented efficiently given the trained CNN denoiser prior. Numerical experiments show that the proposed algorithms can effectively recover damaged spherical images and outperform approaches that purely use a deep learning denoiser or a plug-and-play model. This is a joint work with Jianfei Li, Chaoyan Huang, Raymond Chan, Han Feng, and Michael Ng.
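The plug-and-play idea above can be sketched in a few lines: alternate a data-fit step with an off-the-shelf denoiser that implicitly encodes the prior. In the talk the denoiser is a trained CNN on spherical framelet coefficients; in this toy 1-D sketch a 3-tap box filter stands in for it, and the step size and iteration count are illustrative assumptions.

```python
def box_denoise(x):
    """Stand-in for the learned CNN denoiser: 3-tap moving average
    with reflected boundaries."""
    n = len(x)
    padded = [x[1]] + list(x) + [x[-2]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3 for i in range(n)]

def pnp_denoise(y, step=0.5, iters=5):
    """Plug-and-play gradient iteration for the denoising data term
    (1/2)||x - y||^2: a gradient step on the data fit, followed by the
    plugged-in denoiser acting as the (implicit) regularizer."""
    x = list(y)
    for _ in range(iters):
        x = [xi - step * (xi - yi) for xi, yi in zip(x, y)]  # data-fit step
        x = box_denoise(x)                                    # prior step
    return x

clean = [1.0] * 12
noisy = [c + (0.4 if i % 2 == 0 else -0.4) for i, c in enumerate(clean)]
restored = pnp_denoise(noisy)

def mse(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)
```

Swapping `box_denoise` for a trained CNN denoiser, and the quadratic data term for an inpainting operator, recovers the general structure of the plug-and-play scheme the abstract describes.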
    • [03463] Learning pair-wise homeomorphic image registration in a conformal-invariant hyperelastic setting
      • Author(s) :
        • Noémie Debroux (Université Clermont Auvergne)
        • Jing Zou (The Hong Kong Polytechnic University)
        • Lihao Liu (University of Cambridge)
        • Angelica Aviles-Rivero (University of Cambridge)
        • Jing Qin (The Hong Kong Polytechnic University)
        • Carola-Bibiane Schönlieb (University of Cambridge)
      • Abstract : Deformable image registration is a fundamental task in medical image analysis and plays a crucial role in a wide range of clinical applications. Recently, deep learning-based approaches have been widely studied for deformable medical image registration and have achieved promising results. However, existing deep learning registration techniques do not theoretically guarantee physically meaningful transformations and usually require large amounts of training data. To overcome these drawbacks, we propose a novel deep-learning framework for pair-wise deformable image registration. Firstly, we introduce a novel regulariser in the loss function based on conformal-invariant properties in a nonlinear elasticity setting. It theoretically guarantees that the obtained deformations are homeomorphisms and therefore preserve topology. Secondly, we boost the performance of our regulariser through coordinate MLPs, where one can view the to-be-registered images as continuously differentiable entities. We evaluate our model through extensive numerical experiments.
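To give a concrete sense of "conformal-invariant" quantities, the sketch below evaluates the classical pointwise conformal distortion of a 2-D deformation Jacobian J, namely ||J||_F^2 / (2 det J), which equals 1 exactly when J is a similarity (rotation plus scaling, i.e. a conformal map) and exceeds 1 otherwise. This is a standard textbook quantity, not the talk's actual hyperelastic regulariser, which is more involved.

```python
def conformal_distortion(J):
    """Pointwise conformal distortion ||J||_F^2 / (2 det J) of a 2x2
    Jacobian J with det J > 0 (orientation-preserving). By the AM-GM
    inequality this is >= 1, with equality iff J is a similarity,
    i.e. the deformation is locally conformal."""
    (a, b), (c, d) = J
    det = a * d - b * c
    assert det > 0, "orientation-preserving maps only"
    frob2 = a * a + b * b + c * c + d * d
    return frob2 / (2 * det)

rotation_scale = [[2.0, -1.0], [1.0, 2.0]]   # similarity map: conformal
shear = [[1.0, 0.8], [0.0, 1.0]]             # shear: not conformal
```

Penalizing the excess of this quantity over 1 across the image is one way a regulariser can discourage non-physical local distortion while remaining invariant under rotations and uniform scalings.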
    • [03627] Continuous U-Net: Faster, Greater and Noiseless
      • Author(s) :
        • Chun-Wun Cheng (City University of Hong Kong)
        • Christina Runkel (University of Cambridge)
        • Lihao Liu (University of Cambridge)
        • Raymond Honfu Chan (City University of Hong Kong)
        • Carola-Bibiane Schönlieb (University of Cambridge)
        • Angelica Aviles-Rivero (University of Cambridge)
      • Abstract : Image segmentation is a fundamental task in image analysis and clinical practice. The current state-of-the-art techniques are based on U-shaped encoder-decoder networks with skip connections, called U-Nets. Despite the powerful performance reported for existing U-Net type networks, they suffer from several major limitations: the receptive field size is hard-coded, which compromises performance and computational cost; they do not account for inherent noise in the data; and, being built from discrete layers, they offer no theoretical underpinning. In this work we introduce continuous U-Net, a novel family of networks for image segmentation. Firstly, continuous U-Net is a continuous deep neural network that introduces new dynamic blocks modelled by second-order ordinary differential equations. Secondly, we provide theoretical guarantees for our network, demonstrating faster convergence, higher robustness and less sensitivity to noise. Thirdly, we derive qualitative measures tailored to segmentation tasks. We demonstrate, through extensive numerical and visual results, that our model outperforms existing U-Net blocks on several medical image segmentation benchmark datasets.
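The second-order dynamic blocks mentioned above rest on a standard reduction: any second-order ODE x'' = f(x, x') can be rewritten as the first-order system x' = v, v' = f(x, v) and integrated numerically. The sketch below does this with forward Euler; the function `f` would be a learned network in continuous U-Net, and here a fixed harmonic-oscillator right-hand side stands in for it purely as an assumption for illustration.

```python
import math

def second_order_ode_block(x0, v0, f, t_end=1.0, dt=1e-3):
    """Integrate x'' = f(x, v) by rewriting it as the first-order
    system x' = v, v' = f(x, v), stepped with forward Euler.
    In a continuous U-Net block, f would be a neural network and the
    integrator would typically be an adaptive ODE solver."""
    x, v = x0, v0
    steps = round(t_end / dt)
    for _ in range(steps):
        # Tuple assignment evaluates both updates from the old (x, v).
        x, v = x + dt * v, v + dt * f(x, v)
    return x, v

# Harmonic oscillator x'' = -x with x(0)=1, x'(0)=0 has solution cos(t).
x1, _ = second_order_ode_block(1.0, 0.0, lambda x, v: -x)
```

With a small step size the Euler trajectory tracks the exact solution cos(t) closely at t = 1.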
    • [04181] A learning framework for mapping problems via Quasiconformal geometry
      • Author(s) :
        • Ronald Lok Ming Lui (The Chinese University of Hong Kong)
      • Abstract : Many imaging problems can be formulated as a mapping problem. A general mapping problem aims to obtain an optimal mapping that minimizes an energy functional subject to the given constraints. Existing methods to solve mapping problems are often inefficient and can sometimes get trapped in local minima. An extra challenge arises when the optimal mapping is required to be diffeomorphic. In this talk, we address the problem by proposing a deep-learning framework based on Quasiconformal (QC) Teichmüller theories. The main strategy is to learn the Beltrami coefficient (BC) that represents a mapping as the latent feature vector in the deep neural network. The BC measures the geometric distortion under the mapping; as such, the proposed network based on QC theories is explainable. Another crucial advantage of the proposed framework is that, once the network is successfully trained, the optimized mapping corresponding to each input can be obtained in real time. In this talk, we will illustrate our framework by applying it to the diffeomorphic image registration problem. The developed network, called the quasiconformal registration network (QCRegNet), outperforms other state-of-the-art image registration models. This work is supported by HKRGC GRF (Project IDs: 14305919, 14306721, 14307622).
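For readers unfamiliar with the Beltrami coefficient, the sketch below computes mu = f_zbar / f_z for a map f: C -> C via central finite differences, using the identities f_z = (f_x - i f_y)/2 and f_zbar = (f_x + i f_y)/2. The test map f(z) = z + 0.2 conj(z) is an illustrative assumption; its Beltrami coefficient is exactly 0.2, and |mu| < 1 is the condition that certifies the map is locally orientation-preserving and quasiconformal, which underlies the diffeomorphism guarantee.

```python
def beltrami_coefficient(f, z, h=1e-6):
    """Beltrami coefficient mu = f_zbar / f_z of a map f: C -> C at
    the point z, via central finite differences. |mu| < 1 means f is
    locally orientation-preserving and quasiconformal at z."""
    fx = (f(z + h) - f(z - h)) / (2 * h)            # partial df/dx
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # partial df/dy
    f_z = 0.5 * (fx - 1j * fy)      # Wirtinger derivative d/dz
    f_zbar = 0.5 * (fx + 1j * fy)   # Wirtinger derivative d/dzbar
    return f_zbar / f_z

# f(z) = z + 0.2*conj(z) has constant Beltrami coefficient 0.2.
mu = beltrami_coefficient(lambda z: z + 0.2 * z.conjugate(), 1.0 + 1.0j)
```

Learning mu (rather than the map directly) and enforcing |mu| < 1 is what makes the latent representation in such a framework geometrically interpretable.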
    • [05328] Physics Informed Graph Transformer for PDEs
      • Author(s) :
        • Andrey Bryutkin (University of Cambridge)
        • Angelica Aviles-Rivero (University of Cambridge)
        • Jiahao Huang (Imperial College London)
      • Abstract : In recent years, robust PDE solvers have become increasingly important, and they must handle an ever greater variety of inputs. The physics-informed graph transformer (PhysGTN) uses graphs to solve problems posed on irregular grids and combines multiple parameter inputs of the PDE. It applies a transformer network to learn relationships between the data and the additional inputs provided by the PDE. The architecture is designed to be discretization-invariant and flexible enough to handle irregular meshes. PhysGTN offers several advantages over traditional numerical methods, including increased computational efficiency, reduced time to obtain solutions, and increased robustness to noise. This setup opens up a range of challenges and applications.
    • [05441] Learning to solve inverse problems with unsupervised nonlinear models
      • Author(s) :
        • Rihuan Ke (University of Bristol)
        • Carola-Bibiane Schönlieb (University of Cambridge)
      • Abstract : Deep learning methods have recently demonstrated remarkable achievements in solving inverse problems. At the core of these methods lies the learning task of finding effective inverse problem solvers in a parameterised operator space, which is typically high-dimensional. In the context of supervised learning, this task can be effectively tackled given sufficient supervised data, consisting of paired measurements and ground truth solutions. However, when the ground truth solutions are unknown, the learning task can be as challenging as solving the inverse problems themselves. In this talk, we present a hybrid method that addresses the learning task in an unsupervised setting, for denoising and for inverse problems more generally, where access to high-quality supervised data is restricted or unavailable. We highlight a class of nonlinear operators that can be learned from noisy data and offer close approximations to the optimal solutions. Based on these nonlinear operators, we introduce a learning algorithm for solving inverse problems with limited knowledge of the underlying ground truth solutions and noise distributions.