Abstract : The minisymposium aims to bridge the gap between low-rank tensors and neural networks for the learning of high-dimensional functions, in particular in the context of uncertainty quantification.
The talks will highlight different aspects ranging from approximation to optimization.
The underlying motivation is to understand the strengths and difficulties of network-based representations and to identify structures and techniques that can be combined beneficially.
[04419] Low-rank tensor approximation of high-dimensional functions
Format : Talk at Waseda University
Author(s) :
Helmut Harbrecht (University of Basel)
Michael Griebel (University of Bonn)
Reinhold Schneider (Technical University of Berlin)
Abstract : In this talk, we analyze tensor approximation schemes for high-dimensional functions in the continuous setting. To this end, we assume that the function to be approximated lies either in an isotropic or in an anisotropic Sobolev space. We successively apply the truncated singular value decomposition in order to quantify the cost of approximating the function under consideration in continuous analogues of tensor formats such as the Tucker format or the tensor train format.
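As an illustration of the discrete analogue of this construction (a minimal editorial sketch, not code from the talk), the following NumPy snippet builds a tensor train by successive truncated SVDs; the tolerance and the separable toy function are assumptions made for the example.

```python
import numpy as np

def tt_svd(A, tol=1e-8):
    """Decompose a full tensor A into tensor-train cores via successive
    truncated SVDs (discrete analogue of the continuous construction)."""
    dims = A.shape
    d = len(dims)
    cores, r_prev = [], 1
    M = A.reshape(r_prev * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > tol * s[0])))          # truncation rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

# toy example: samples of a separable (rank-one) function on a grid
x = np.linspace(0, 1, 20)
A = np.sin(x)[:, None, None] * np.cos(x)[None, :, None] * np.exp(x)[None, None, :]
cores = tt_svd(A, tol=1e-10)
print([c.shape for c in cores])   # all TT-ranks equal 1, reflecting separability
```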
[03995] Parameter-dependent multigrid method using low-rank tensor formats
Format : Talk at Waseda University
Author(s) :
Tim Andreas Werthmann (RWTH Aachen University)
Lars Grasedyck (RWTH Aachen University)
Abstract : We consider a parameter-dependent linear system motivated by a diffusion problem.
The combination of all of the finitely many parameters leads to a computational effort that scales exponentially in the number of parameters, the so-called curse of dimensionality.
To break this curse, we use low-rank tensor formats to represent this system.
We introduce the parameter-dependent multigrid method to solve such a high-dimensional system within low-rank tensor formats.
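A minimal sketch of solving such a system within a low-rank format (an editorial illustration, not the authors' method): the multigrid cycle is replaced here by a damped Richardson iteration so the example stays short, and every iterate is re-truncated to low rank; the matrices, damping, and ranks are assumptions for the example.

```python
import numpy as np

def truncate(U, V, rmax, tol=1e-12):
    """Re-compress a low-rank factorization U @ V.T via QR + SVD."""
    Qu, Ru = np.linalg.qr(U)
    Qv, Rv = np.linalg.qr(V)
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)
    r = max(1, min(rmax, int(np.sum(s > tol * s[0]))))
    return Qu @ (W[:, :r] * s[:r]), Qv @ Zt[:r].T

# parameter-dependent system (A0 + p * A1) u(p) = b on a parameter grid
n, m = 50, 40
A0 = 3*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # shifted 1D Laplacian
A1 = np.diag(np.linspace(0.1, 1.0, n))               # parametric diffusion part
p, b = np.linspace(0, 1, m), np.ones(n)

X, Y = np.zeros((n, 1)), np.zeros((m, 1))  # solution factors, u(p_j) = X @ Y[j]
omega = 0.3   # damped Richardson stands in for the multigrid cycle here
for _ in range(200):
    # low-rank residual: b*1^T - A0 X Y^T - A1 X (p*Y)^T, kept in factored form
    RX = np.hstack([b[:, None], -A0 @ X, -A1 @ X])
    RY = np.hstack([np.ones((m, 1)), Y, p[:, None] * Y])
    X, Y = truncate(np.hstack([X, omega * RX]), np.hstack([Y, RY]), rmax=10)

u_direct = np.linalg.solve(A0 + p[7]*A1, b)      # reference for one parameter
print(np.linalg.norm(X @ Y[7] - u_direct))
```

Multigrid enters precisely where the damped Richardson step is used above: the smoother converges slowly for ill-conditioned systems, which a grid hierarchy remedies.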
[04205] Alternating nonnegative factorizations for low-rank tensor formats
Format : Talk at Waseda University
Author(s) :
Maren Klever (RWTH Aachen University)
Lars Grasedyck (RWTH Aachen)
Sebastian Kraemer (RWTH Aachen University)
Abstract : Low-rank tensor formats allow for efficient handling of high-dimensional objects.
If the quantity of interest is nonnegative, we want to preserve this property by constraining all cores to be nonnegative.
Common alternating strategies reduce the high-dimensional problem to a sequence of low-dimensional subproblems, but they often suffer from slow convergence and stagnation in local minima.
To mitigate these drawbacks, we propose a new nonnegativity-preserving quasi-orthogonalization strategy as an intermediate step between the alternating minimization steps.
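For the matrix case, such an alternating scheme with nonnegativity constraints can be sketched as follows (illustrative only; the quasi-orthogonalization step proposed in the talk is not included, and this is exactly the kind of plain alternating scheme that can stagnate):

```python
import numpy as np
from scipy.optimize import nnls

def nn_als(M, r, iters=50):
    """Alternating nonnegative least squares for M ~ W @ H with W, H >= 0
    (matrix analogue of alternating updates on nonnegative tensor cores)."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r))
    H = np.zeros((r, n))
    for _ in range(iters):
        for j in range(n):                 # fix W, update H column by column
            H[:, j], _ = nnls(W, M[:, j])
        for i in range(m):                 # fix H, update W row by row
            W[i, :], _ = nnls(H.T, M[i, :])
    return W, H

# toy nonnegative data with exact nonnegative rank 3
rng = np.random.default_rng(1)
M = rng.random((30, 3)) @ rng.random((3, 20))
W, H = nn_als(M, r=3)
print(np.linalg.norm(M - W @ H) / np.linalg.norm(M))   # small on easy instances
```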
[04392] Tensor surrogates for sensitivity analysis in the presence of polymorphic uncertainties
Format : Talk at Waseda University
Author(s) :
Dieter Moser (RWTH Aachen University)
Abstract : Sensitivity analysis identifies the input parameters that have the greatest influence on the model output. However, when input parameters are polymorphic, meaning that both epistemic and aleatory uncertainty are present, traditional sensitivity analysis methods based on probabilistic modelling of the uncertainty have to be adapted.
A notion of distance between polymorphic uncertainties is needed to quantify the effect of the input parameters. In this talk, we will discuss how hierarchical tensor surrogates are beneficial for such an adapted sensitivity analysis.
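For orientation, the purely aleatory baseline that such methods adapt is the Monte Carlo estimation of first-order Sobol indices; a minimal sketch (the toy model and the Saltelli-type estimator are chosen for illustration):

```python
import numpy as np

def sobol_first_order(f, d, N=100_000, seed=0):
    """Monte Carlo (Saltelli-type) estimate of first-order Sobol indices
    for i.i.d. uniform inputs on [0,1]^d -- the purely aleatory setting."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((N, d)), rng.random((N, d))
    fA, fB = f(A), f(B)
    var = np.concatenate([fA, fB]).var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # vary only coordinate i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var  # Saltelli et al. estimator
    return S

# toy model: x1 dominates, x3 matters only through an interaction with x1
def model(X):
    return np.sin(2*np.pi*X[:, 0]) + 0.5*np.sin(2*np.pi*X[:, 1]) \
           + 0.3*X[:, 2]*np.sin(2*np.pi*X[:, 0])

print(sobol_first_order(model, d=3))   # roughly [0.84, 0.16, 0.0]
```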
[02980] Weighted sparse and low-rank least squares approximation
Format : Talk at Waseda University
Author(s) :
Philipp Trunschke (Nantes Université)
Martin Eigel (WIAS Berlin)
Anthony Nouy (Nantes Université)
Abstract : Many functions of interest exhibit weighted summability of their coefficients with respect to some dictionary of basis functions.
The resulting best $n$-term approximations can be estimated efficiently from samples.
We propose to encode the coefficients in a simultaneously sparse and low-rank tensor format to improve the efficiency of the algorithms performing this approximation.
Based on a weighted Stechkin lemma and the restricted isometry property, we provide approximation error and sample complexity bounds.
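A standard computational ingredient behind weighted best $n$-term estimates is weighted $\ell^1$ minimization; the sketch below (an editorial illustration that omits the low-rank tensor structure added in the talk; dictionary, weights, and penalty are assumptions) solves it by iterative soft thresholding with coordinate-dependent thresholds:

```python
import numpy as np

def weighted_ista(Phi, y, w, lam=1e-3, steps=2000):
    """ISTA for min_c 0.5*||Phi c - y||^2 + lam * sum_j w_j |c_j|:
    weighted soft thresholding promotes the weighted sparsity pattern."""
    L = np.linalg.norm(Phi, 2)**2        # Lipschitz constant of the gradient
    c = np.zeros(Phi.shape[1])
    for _ in range(steps):
        z = c - Phi.T @ (Phi @ c - y) / L
        thr = lam * w / L                # larger weight => harder threshold
        c = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)
    return c

# toy dictionary: random samples of a cosine basis on [0,1]
rng = np.random.default_rng(0)
n, d = 80, 40
x = rng.random(n)
Phi = np.cos(np.pi * np.outer(x, np.arange(d)))
c_true = np.zeros(d)
c_true[[1, 3, 7]] = [1.0, -0.5, 0.25]
y = Phi @ c_true
w = 1.0 + np.arange(d)**2                # weights grow with the frequency
print(np.round(weighted_ista(Phi, y, w)[:10], 3))  # close to c_true (small bias)
```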
[04007] Iteratively Reweighted Least Squares Recovery on Tensor Networks
Format : Talk at Waseda University
Author(s) :
Sebastian Kraemer (RWTH Aachen University)
Abstract : A fundamental approach to tensor recovery traces back to affine rank minimization. We emphasize that the latter problem is solved by the asymptotic minimization of well-known log-det functions, which in practice is approachable through iteratively reweighted least squares. In addition to local convergence properties, the theoretical phase transition for generic tensor recoverability becomes observable in numerical experiments. Alternating optimization on tensor tree networks in turn makes it possible to apply a relaxed method with minimal, polynomial complexity even in high dimensions.
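A minimal matrix-case sketch of such an IRLS scheme (the tensor tree networks and the relaxed alternating method of the talk are not included; measurement count and smoothing schedule are illustrative assumptions):

```python
import numpy as np

def irls_lowrank(A, b, shape, eps=1.0, iters=50):
    """IRLS for affine rank minimization: each step minimizes the quadratic
    tr(X^T W X) with W = (X X^T + eps*I)^(-1) subject to A @ vec(X) = b,
    a standard majorize-minimize scheme for the smoothed log-det objective."""
    m, n = shape
    X = np.zeros((m, n))
    for _ in range(iters):
        K = np.kron(np.eye(n), X @ X.T + eps * np.eye(m))  # (I kron W)^{-1}
        v = K @ A.T @ np.linalg.solve(A @ K @ A.T, b)      # constrained LS
        X = v.reshape((m, n), order="F")                   # column-stacked vec
        eps = max(0.8 * eps, 1e-9)      # smoothing decreases gradually
    return X

rng = np.random.default_rng(0)
m, n, r = 12, 10, 2
Xtrue = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
A = rng.standard_normal((80, m * n))        # Gaussian measurement operator
b = A @ Xtrue.reshape(-1, order="F")
Xhat = irls_lowrank(A, b, (m, n))
# relative error: small when the measurement count exceeds the phase transition
print(np.linalg.norm(Xhat - Xtrue) / np.linalg.norm(Xtrue))
```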
[04663] Empirical Tensor Train Approximation in Optimal Control
Format : Talk at Waseda University
Author(s) :
Mathias Oster (TU Berlin)
Reinhold Schneider (TU Berlin)
Abstract : We present two approaches to solve finite horizon optimal control problems. First, we solve the Bellman equation numerically by employing the policy iteration algorithm. Second, we introduce a semiglobal optimal control problem and use open loop methods on a feedback level. To overcome computational infeasibility we use tensor trains and multi-polynomials, together with high-dimensional quadrature, e.g. Monte Carlo. Numerical evidence is given by controlling a destabilized version of the viscous Burgers equation and a diffusion equation with an unstable reaction term.
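For reference, plain policy iteration on a finite Markov decision process looks as follows (a toy sketch; in the talk the value function is high-dimensional and compressed in tensor-train format, which this example does not attempt):

```python
import numpy as np

def policy_iteration(P, R, gamma=0.95):
    """Exact policy iteration for a finite MDP: alternate policy evaluation
    (solve the linear Bellman system) with greedy policy improvement."""
    nS, nA = R.shape
    pi = np.zeros(nS, dtype=int)
    while True:
        # policy evaluation: (I - gamma * P_pi) v = r_pi
        Ppi = P[np.arange(nS), pi]
        rpi = R[np.arange(nS), pi]
        v = np.linalg.solve(np.eye(nS) - gamma * Ppi, rpi)
        # policy improvement: act greedily w.r.t. the current value function
        q = R + gamma * np.einsum("sat,t->sa", P, v)
        pi_new = q.argmax(axis=1)
        if np.array_equal(pi_new, pi):
            return v, pi
        pi = pi_new

# random toy MDP with 5 states and 2 actions
rng = np.random.default_rng(0)
nS, nA = 5, 2
P = rng.random((nS, nA, nS))
P /= P.sum(axis=2, keepdims=True)       # row-stochastic transition kernels
R = rng.random((nS, nA))
v, pi = policy_iteration(P, R)
print(pi, np.round(v, 3))
```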
[05144] Dynamical low-rank approximation of Vlasov-Poisson equations on polygonal spatial domains
Format : Talk at Waseda University
Author(s) :
Andreas Zeiser (HTW Berlin)
André Uschmajew (University of Augsburg)
Abstract : We consider dynamical low-rank approximation (DLRA) for the numerical simulation of Vlasov-Poisson equations based on a separation of space and velocity variables, as proposed in several recent works. A less studied aspect is the incorporation of boundary conditions in the DLRA model. We use a variational formulation of the projector splitting which allows us to handle inflow boundary conditions on piecewise polygonal spatial domains. Numerical experiments demonstrate the feasibility of this approach in principle.
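A minimal sketch of the matrix projector-splitting (KSL) integrator that underlies such DLRA schemes, with explicit Euler substeps for brevity and without the boundary-condition treatment that is the talk's contribution; the Lyapunov-type test flow is an assumption for the example:

```python
import numpy as np

def ksl_step(U, S, V, F, h):
    """One Lie-Trotter projector-splitting (KSL) step of dynamical low-rank
    approximation for dY/dt = F(Y), with Y ~ U S V^T."""
    # K-step: integrate K = U S with V frozen, then re-orthogonalize
    K = U @ S + h * F(U @ S @ V.T) @ V
    U, Shat = np.linalg.qr(K)
    # S-step: integrate the coefficient matrix backwards in time
    S = Shat - h * U.T @ F(U @ Shat @ V.T) @ V
    # L-step: integrate L = V S^T with U frozen, then re-orthogonalize
    L = V @ S.T + h * F(U @ S @ V.T).T @ U
    V, St = np.linalg.qr(L)
    return U, St.T, V

# toy Lyapunov-type flow dY/dt = A Y + Y A^T with a stable random A
rng = np.random.default_rng(0)
n, r = 40, 5
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
F = lambda Y: A @ Y + Y @ A.T
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
S = np.diag(rng.random(r))
for _ in range(100):
    U, S, V = ksl_step(U, S, V, F, h=0.01)
print(np.linalg.norm(S))   # norm of the low-rank solution decays, as expected
```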
[04645] Using Low-rank Tensor Formats in Neural Networks
Format : Talk at Waseda University
Author(s) :
Thong Pham Hoang Le (RWTH Aachen University)
Lars Grasedyck (RWTH Aachen)
Janina Enrica Schütte (WIAS Berlin)
Martin Eigel (WIAS Berlin)
Abstract : We investigate the use of low-rank tensor decompositions to improve the performance of neural network training. Specifically, we propose an approach that utilizes low-rank tensors to discretize the loss function of the neural network, which allows us to explore a larger parameter space than local methods such as gradient descent. Our approach could also facilitate improved weight initialization, further enhancing the network's performance.
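A two-parameter illustration of the underlying idea (an editorial example, not the authors' algorithm): discretized on a grid, a smooth loss landscape is numerically low-rank, so a global grid search becomes tractable in compressed form; in many dimensions the grid becomes a tensor and low-rank tensor formats take over.

```python
import numpy as np

# Discretize the loss of a one-neuron model over a 2D grid of its two
# parameters (w, b) and inspect the numerical rank of the landscape.
rng = np.random.default_rng(0)
x = rng.standard_normal(200)
y = np.tanh(1.5 * x) + 0.1 * rng.standard_normal(200)   # data from w=1.5, b=0

w_grid = np.linspace(-3, 3, 101)
b_grid = np.linspace(-2, 2, 101)
L = np.array([[np.mean((np.tanh(w * x + b) - y) ** 2) for b in b_grid]
              for w in w_grid])                          # full loss tensor (here: matrix)

s = np.linalg.svd(L, compute_uv=False)
print("numerical rank:", np.sum(s > 1e-8 * s[0]))        # far below 101
i, j = np.unravel_index(L.argmin(), L.shape)
print("global grid minimizer:", w_grid[i], b_grid[j])    # near (1.5, 0)
```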
[03998] Adaptive Multilevel Neural Networks for parametric PDEs with Error Estimation
Format : Talk at Waseda University
Author(s) :
Janina Enrica Schütte (WIAS Berlin)
Martin Eigel (WIAS)
Abstract : We focus on efficiently solving high-dimensional, parameter-dependent partial differential equations. To approximate the parameter-to-solution map, different model classes have been considered, including low-rank tensor representations and neural network architectures.
In our work, a new multilevel neural network architecture is combined with an adaptive scheme comprising a solver based on a multilevel decomposition, classical reliable finite element error estimators, and a refinement strategy for the considered finite element grids. We show expressivity results and numerical experiments for the derived networks.
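An untrained sketch of such a multilevel additive architecture (layer sizes, the 1D meshes, and the linear prolongation are illustrative assumptions): each level's network predicts a correction to the prolongated output of the coarser levels.

```python
import numpy as np

def mlp(sizes, rng):
    """Tiny tanh MLP returned as a forward function -- one network per level."""
    Ws = [rng.standard_normal((m, n)) / np.sqrt(n)
          for n, m in zip(sizes, sizes[1:])]
    def forward(y):
        h = y
        for W in Ws[:-1]:
            h = np.tanh(W @ h)
        return Ws[-1] @ h
    return forward

def prolong(u):
    """Linear interpolation from a uniform 1D mesh to its refinement."""
    fine = np.empty(2 * len(u) - 1)
    fine[0::2] = u
    fine[1::2] = 0.5 * (u[:-1] + u[1:])
    return fine

# one network per mesh level; level l outputs coefficients on mesh level l
rng = np.random.default_rng(0)
mesh_sizes = [5, 9, 17]
nets = [mlp([3, 16, n], rng) for n in mesh_sizes]

def multilevel_forward(y):
    u = nets[0](y)
    for net in nets[1:]:
        u = prolong(u) + net(y)      # prolongate, then add the level correction
    return u

print(multilevel_forward(np.array([0.3, -0.1, 0.7])).shape)  # finest-mesh coefficients
```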
[00156] Adaptive Multilevel Neural Networks for parametric PDEs with Error Estimation
Abstract : Numerical methods for random parametric PDEs can greatly benefit from adaptive refinement schemes, in particular when functional approximations are computed as in stochastic Galerkin methods with residual-based error estimation. In this talk we derive an adaptive refinement algorithm for an elliptic parametric PDE with an unbounded lognormal diffusion coefficient, steered by a reliable error estimator for both the spatial mesh and the stochastic space. Moreover, we will prove the convergence of the derived adaptive algorithm.
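The generic solve-estimate-mark-refine loop behind such adaptive algorithms can be sketched on a 1D stand-in problem (an editorial illustration with Doerfler bulk marking; interpolation of a given function replaces the Galerkin solve and the residual estimator for brevity):

```python
import numpy as np

def adaptive_refine(f, a=0.0, b=1.0, theta=0.5, tol=1e-4):
    """Solve-estimate-mark-refine loop for adaptive 1D interpolation,
    a stand-in for the FE/stochastic refinement steered by error estimators."""
    nodes = np.linspace(a, b, 5)
    while True:
        mid = 0.5 * (nodes[:-1] + nodes[1:])
        # estimate: midpoint interpolation error per element
        eta = np.abs(f(mid) - 0.5 * (f(nodes[:-1]) + f(nodes[1:])))
        if np.sqrt(np.sum(eta**2)) < tol:
            return nodes
        # mark: smallest set of elements capturing a theta-fraction of the error
        order = np.argsort(eta)[::-1]
        cum = np.cumsum(eta[order]**2)
        marked = order[: np.searchsorted(cum, theta * cum[-1]) + 1]
        # refine: bisect the marked elements
        nodes = np.sort(np.concatenate([nodes, mid[marked]]))

f = lambda x: np.sqrt(np.abs(x - 0.3))   # kink at x = 0.3 forces local refinement
nodes = adaptive_refine(f)
print(len(nodes), "nodes; finest spacing:", np.min(np.diff(nodes)))
```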