Abstract : Many applications, such as computational finance, uncertainty quantification involving PDEs with random inputs, and training of deep neural networks, require tackling high-dimensional computational problems. This minisymposium will bring together people working on Quasi-Monte Carlo (QMC) methods, a powerful class of methods for such high-dimensional problems. In particular, QMC methods have proved efficient for integration over the multi-dimensional unit cube and other domains, as well as for function approximation and density estimation. In this minisymposium, we aim to showcase recent advances in QMC methods and foster interaction between researchers working in these areas.
[04989] The fast reduced QMC matrix-vector product
Format : Talk at Waseda University
Author(s) :
Josef Dick (UNSW)
Adrian Ebert (JKU Linz)
Lukas Herrmann (RICAM Linz)
Peter Kritzer (RICAM Linz)
Marcello Longo (ETH Zurich)
Abstract : We study the approximation of integrals of the form $\int_D f(\boldsymbol{x}^\top A) \,d \mu(\boldsymbol{x})$, where $A$ is a matrix, by quasi-Monte Carlo (QMC) rules $N^{-1} \sum_{k=0}^{N-1} f(\boldsymbol{x}_k^\top A)$. We are interested in cases where the main computational cost of the approximation arises from computing $\boldsymbol{x}_k^\top A$. We design QMC rules for which the products $\boldsymbol{x}_k^\top A$, $k = 0, 1, \ldots, N-1$, can be computed fast.
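As a plain illustration of the quantity being approximated (not the fast reduced matrix-vector product of the talk), the sketch below estimates such an integral with scrambled Sobol' points from SciPy; the matrix $A$, the integrand $f$, and the Gaussian measure $\mu$ are placeholder choices.

```python
import numpy as np
from scipy.stats import norm, qmc

rng = np.random.default_rng(42)
d, s, N = 4, 3, 2**10                        # x in R^d, A maps R^d -> R^s
A = rng.standard_normal((d, s))              # placeholder matrix

# scrambled Sobol' points in (0,1)^d, mapped to N(0, I_d) samples x_k
u = qmc.Sobol(d=d, scramble=True, seed=0).random(N).clip(1e-12, 1 - 1e-12)
x = norm.ppf(u)

f = lambda y: np.cos(y).prod(axis=1)         # placeholder integrand on R^s

# QMC rule: the dominant cost is forming the N products x_k^T A
estimate = f(x @ A).mean()
```

Here the product `x @ A` is exactly the step whose cost the talk's reduced construction is designed to lower.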
[05066] Recent Advances on discrepancy and WCE of constructible point sets on spheres
Format : Talk at Waseda University
Author(s) :
Johann S. Brauchart (Graz University of Technology)
Abstract : We discuss bounds for the $L_2$-discrepancy and worst-case error of equal-weight quasi-Monte Carlo rules (and compare them with optimal bounds) for constructible $N$-point sets that arise from mapping rational lattice points in the plane to the sphere using an area-preserving Lambert transformation. Our standard examples are spherical Fibonacci point configurations defined by a Fibonacci lattice in the unit square.
This is joint work with Josef Dick (UNSW) and Yuan Xu (University of Oregon).
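A minimal sketch of the construction (assuming the common convention $z = 1 - 2y$ for the area-preserving Lambert map and the Fibonacci lattice $(k/F_m, \{k F_{m-1}/F_m\})$; the exact conventions of the talk may differ):

```python
import numpy as np

def spherical_fibonacci(m):
    """Map the Fibonacci lattice {(k/F_m, {k F_{m-1}/F_m}) : k = 0..F_m - 1}
    in the unit square to the unit sphere via the Lambert equal-area map."""
    F = [1, 1]
    while len(F) <= m:
        F.append(F[-1] + F[-2])
    N, F_prev = F[m], F[m - 1]
    k = np.arange(N)
    x, y = k / N, (k * F_prev % N) / N
    phi = 2 * np.pi * x                    # longitude
    z = 1 - 2 * y                          # height: uniform in y is equal-area
    r = np.sqrt(np.maximum(0.0, 1 - z * z))
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

pts = spherical_fibonacci(12)              # F_12 = 233 points on the sphere
```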
[03448] Quasi-Monte Carlo approach to Bayesian optimal experimental design
Format : Talk at Waseda University
Author(s) :
Vesa Kaarnioja (Free University of Berlin)
Abstract : The goal in Bayesian optimal experimental design (OED) is to maximize the expected information gain for the reconstruction of unknown quantities given a limited budget for collecting measurement data. Quasi-Monte Carlo (QMC) methods have been demonstrated to be effective for numerical treatment of partial differential equations (PDEs) involving input uncertainties in recent years. In this talk, we derive tailored QMC cubature rules to reduce the computational burden in Bayesian OED problems governed by PDEs.
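The expected information gain is a nested expectation; a toy sketch for a linear-Gaussian experiment (the model, the plain Sobol' point sets, and all parameters are illustrative stand-ins, not the PDE-governed setting or the tailored rules of the talk):

```python
import numpy as np
from scipy.stats import norm, qmc

g, sigma = 2.0, 0.5                        # toy experiment: y = g*theta + sigma*eps

# outer QMC points generate (theta, y) pairs; inner points estimate the evidence
n_out, n_in = 2**9, 2**9
uo = qmc.Sobol(d=2, scramble=True, seed=0).random(n_out).clip(1e-12, 1 - 1e-12)
ui = qmc.Sobol(d=1, scramble=True, seed=1).random(n_in).clip(1e-12, 1 - 1e-12)

theta = norm.ppf(uo[:, 0])                           # prior draws, theta ~ N(0,1)
y = g * theta + sigma * norm.ppf(uo[:, 1])           # simulated data
theta_in = norm.ppf(ui[:, 0])                        # inner prior draws

log_lik = norm.logpdf(y, loc=g * theta, scale=sigma)
evidence = norm.pdf(y[:, None], loc=g * theta_in[None, :], scale=sigma).mean(axis=1)

eig = (log_lik - np.log(evidence)).mean()            # nested estimator of the EIG
```

For this linear-Gaussian toy the exact value is $\tfrac12\log(1 + g^2/\sigma^2)$, which the nested QMC estimator should approach.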
[04877] Density estimation by Monte Carlo and quasi-Monte Carlo
Format : Talk at Waseda University
Author(s) :
Pierre L'Ecuyer (University of Montreal)
Abstract : Estimating the density of a continuous random variable X has been studied extensively in settings where n independent observations of X are given a priori. Popular methods include histograms and kernel density estimators. These methods have bias and their mean square error converges at a slower rate than the usual O(1/n) rate. In this talk, we consider the situation where the observations are generated by Monte Carlo simulation from a model. Unbiased estimators and better convergence rates can then be obtained, sometimes much better than O(1/n). We show how this can be achieved, using techniques such as conditional Monte Carlo, likelihood ratio methods for derivative estimation, and randomized quasi-Monte Carlo. Theoretical and empirical results will be given. This is based on joint work with Amal Ben Abdellah and Florian Puchhammer.
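As a small illustration of the setting, the sketch below builds a plain kernel density estimator on randomized QMC samples of a toy simulation model (the model, bandwidth, and grid are made up; the conditional Monte Carlo and likelihood-ratio estimators of the talk are more refined than this):

```python
import numpy as np
from scipy.stats import norm, qmc

def simulate(u):
    # toy simulation model driven by uniforms: X = Z1 + 0.5 * Z2**2
    z = norm.ppf(u)
    return z[:, 0] + 0.5 * z[:, 1] ** 2

n = 2**12
u = qmc.Sobol(d=2, scramble=True, seed=3).random(n).clip(1e-12, 1 - 1e-12)
x = simulate(u)                                   # RQMC sample of X

# Gaussian kernel density estimator on a grid (bandwidth h chosen by hand)
grid = np.linspace(-4.0, 6.0, 200)
h = 0.15
density = norm.pdf((grid[:, None] - x[None, :]) / h).mean(axis=1) / h
```

Replacing the i.i.d. driving uniforms by scrambled Sobol' points is the "randomized quasi-Monte Carlo" ingredient; the KDE itself is unchanged.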
[03343] High dimensional approximation and the curse of dimensionality
Format : Talk at Waseda University
Author(s) :
Ian Hugh Sloan (UNSW Sydney)
Abstract : This talk, joint work with Kuo and Kaarnioja, extends a 2022 result with Kazashi and Nobile. It uses periodic kernels located at lattice points. The lattice points and the kernels depend on parameters called “weights”. Using new “serendipitous” weights, the cost grows linearly with both the dimension and the number of lattice points, allowing practical computations in 1,000 dimensions with no curse of dimensionality.
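A sketch of the flavour of such lattice-point kernel methods in two dimensions (the generating vector, the product weights, and the smoothness order are placeholders, not the "serendipitous" weights of the talk). The key computational point is that the kernel matrix on a rank-1 lattice is circulant, so the fitting step reduces to FFTs:

```python
import numpy as np

def lattice_points(z, N):
    # rank-1 lattice x_k = {k z / N}: generation cost is linear in N and d
    return (np.arange(N)[:, None] * np.asarray(z) % N) / N

def kernel(x, gamma):
    # weighted product kernel of smoothness alpha = 1 (Bernoulli polynomial B2)
    B2 = lambda t: t**2 - t + 1.0 / 6.0
    return np.prod(1 + gamma * 2 * np.pi**2 * B2(x % 1.0), axis=-1)

z, N = np.array([1, 433]), 512               # placeholder generating vector
gamma = np.array([1.0, 0.5])                 # placeholder product weights
X = lattice_points(z, N)

f = lambda x: np.sin(2 * np.pi * x[..., 0]) * np.cos(2 * np.pi * x[..., 1])
y = f(X)

# K(x_k, x_l) depends only on (k - l) mod N, i.e. the matrix is circulant,
# so the interpolation system solves with a single FFT pair
first_col = kernel(X - X[0], gamma)
coef = np.real(np.fft.ifft(np.fft.fft(y) / np.fft.fft(first_col)))

def interpolant(pts):
    return np.array([(coef * kernel(p - X, gamma)).sum() for p in pts])
```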
[01697] QMC and sparse grids beyond uniform distributions on cubes
Format : Talk at Waseda University
Author(s) :
Ilja Klebanov (Free University of Berlin)
Tim Sullivan (University of Warwick)
Abstract : While Monte Carlo and MCMC methods are generally applicable and have a dimension-independent convergence rate, this rate is rather slow and infeasible for many applications. Sparse grids and quasi-Monte Carlo methods provide better convergence rates under certain assumptions, but have only been constructed for uniform distributions on cubes and a few other very specific distributions such as Gaussians. In this talk, I will show how these methods can be generalized to mixtures of such specific distributions, e.g. Gaussian mixtures, by means of a properly constructed transport map. This is a crucial step towards combining QMC and sparse-grid methods with state-of-the-art importance sampling algorithms, which are often based on such mixtures. I will avoid technical details and present lots of illustrations and videos instead.
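A one-dimensional sketch of the idea (inverse-CDF transport of a low-discrepancy point set to a Gaussian mixture; the mixture parameters are placeholders, and the talk's construction for general mixtures in higher dimension is more involved than simple CDF inversion):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# target: a two-component Gaussian mixture (weights/means/stds are made up)
w, mu, s = [0.3, 0.7], [-2.0, 1.0], [0.5, 1.0]
cdf = lambda x: sum(wi * norm.cdf(x, mi, si) for wi, mi, si in zip(w, mu, s))

def transport(u):
    # numerically invert the mixture CDF: pushes Uniform(0,1) to the mixture
    return np.array([brentq(lambda x: cdf(x) - ui, -20.0, 20.0) for ui in u])

u = (np.arange(128) + 0.5) / 128          # 1D QMC points (midpoint rule)
x = transport(u)                          # low-discrepancy sample of the mixture
```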
[02052] Can hyperinterpolation part with quadrature exactness?
Format : Online Talk on Zoom
Author(s) :
Congpei An (Southwestern University of Finance and Economics)
Hao-Ning Wu (The University of Hong Kong)
Abstract : We discuss the approximation of continuous functions on the unit sphere by spherical polynomials of degree n via hyperinterpolation. Hyperinterpolation of degree n is a discrete approximation of the L2-orthogonal projection of degree n whose Fourier coefficients are evaluated by a positive-weight quadrature rule that exactly integrates all spherical polynomials of degree at most 2n. This talk aims to bypass this quadrature exactness assumption by replacing it with the Marcinkiewicz-Zygmund property. Consequently, hyperinterpolation can be constructed from a positive-weight quadrature rule that does not necessarily satisfy quadrature exactness. This scheme is called unfettered hyperinterpolation. We provide a reasonable error estimate for unfettered hyperinterpolation. The error estimate generally consists of two terms: a term representing the error estimate of the original hyperinterpolation with full quadrature exactness, and another introduced as compensation for the loss of exactness degrees. A guide to controlling the newly introduced term in practice is provided. In particular, if the quadrature points form a quasi-Monte Carlo design, then there is a refined error estimate. Numerical experiments verify the error estimates and the practical guide.
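A one-dimensional analogue of classical hyperinterpolation on [-1, 1] (Legendre polynomials with a Gauss-Legendre rule whose exactness degree covers 2n; the spherical setting and the Marcinkiewicz-Zygmund relaxation of the talk are not reproduced here):

```python
import numpy as np
from numpy.polynomial import legendre

n = 8
# n + 1 Gauss-Legendre nodes integrate polynomials of degree up to 2n + 1
# exactly, covering the degree-2n products in the Fourier coefficients
nodes, weights = legendre.leggauss(n + 1)

f = np.exp                                   # example continuous function

# Fourier-Legendre coefficients of f evaluated by the quadrature rule
coeffs = []
for ell in range(n + 1):
    P_ell = legendre.Legendre.basis(ell)(nodes)
    norm_sq = 2.0 / (2 * ell + 1)            # ||P_ell||^2 on [-1, 1]
    coeffs.append((weights * f(nodes) * P_ell).sum() / norm_sq)

hyper = legendre.Legendre(coeffs)            # degree-n hyperinterpolant of f
```

Because the quadrature is exact to degree 2n, this coincides with the discrete L2 projection; the talk asks what survives when that exactness is weakened.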
[04553] Quasi-Monte Carlo-Based Algorithms for Deep Learning with Applications
Format : Online Talk on Zoom
Author(s) :
Xiaoqun Wang (Tsinghua University)
Abstract : Deep learning methods are now used to solve (stochastic) partial differential equations in high dimensions, where the loss functions are defined as mathematical expectations. The traditional way to approximate such expectations is the Monte Carlo (MC) method. We propose novel deep learning algorithms based on the quasi-Monte Carlo (QMC) method, a deterministic counterpart of the MC method. We prove that the theoretical convergence order of QMC-based deep learning algorithms is asymptotically higher than that of MC-based algorithms. Numerical experiments demonstrate the substantial superiority of QMC-based algorithms in various applications.
[04614] Lattice rules for integration on $\mathbf{R}^d$
Format : Talk at Waseda University
Author(s) :
Dirk Nuyens (KU Leuven)
Abstract : For integration of periodic functions of mixed smoothness on the unit cube, lattice rules are a very good choice. Their group-theoretical structure allows for a convenient error analysis.
When integrating functions on $\mathbb{R}^d$ there are multiple ways of still making use of lattice rules. We could truncate the integration region, and take into account the truncation error and the projection distance to the periodic subspace; or we could do a substitution and map the integrand to the unit cube. Depending on how we proceed it is possible to retain the smoothness and obtain higher order convergence with a lattice rule.
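A sketch of the substitution approach (map a shifted rank-1 lattice through the inverse normal CDF to integrate against a Gaussian weight on $\mathbb{R}^d$; the generating vector below is an arbitrary placeholder, not an optimized one):

```python
import numpy as np
from scipy.stats import norm

N, d = 4096, 3
z = np.array([1, 1595, 1029])                # placeholder generating vector

# shifted rank-1 lattice in (0,1)^d, then inverse-CDF substitution to R^d
u = ((np.arange(N)[:, None] * z) % N + 0.5) / N
x = norm.ppf(u)

# smooth integrand against the N(0, I_d) weight; its exact mean is 1.25**(-d/2)
f = lambda x: np.exp(-0.125 * (x**2).sum(axis=1))
estimate = f(x).mean()
```

Whether the composed integrand on the cube retains enough smoothness for higher-order convergence is exactly the question the talk addresses.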
[04320] SNPE-B Revisited: Rethinking of Data Efficiency and Variance Reduction
Format : Talk at Waseda University
Author(s) :
Zhijian He (South China University of Technology)
Yifei Xiong (University of Chinese Academy of Sciences)
Xiliang Yang (South China University of Technology)
Abstract : Sequential neural posterior estimation (SNPE) techniques have recently been proposed for dealing with simulation-based models with intractable likelihoods. Unlike approximate Bayesian computation, SNPE techniques learn the posterior from sequential simulation using neural network-based conditional density estimators. This paper reclaims the efficiency of SNPE-B, proposed by Lueckmann et al. (2017), through sophisticated sampling strategies. We first introduce a concentrated loss function based on an adaptive calibration kernel that reweights the simulated data appropriately to improve data efficiency. We also provide a theoretical analysis of the variance of importance sampling-based estimators. We then propose several variance reduction techniques, including quasi-Monte Carlo sampling, to further accelerate the learning process. Numerical experiments demonstrate that our method outperforms the original SNPE-B method as well as other existing competitors on certain tasks.
[04916] Variable importance measures in high dimensional data sets
Format : Talk at Waseda University
Author(s) :
Art B Owen (Stanford University)
Abstract : This talk connects some ideas from QMC sampling to new problems in explainable AI. Variable importance measures in global sensitivity analysis are commonly derived from the ANOVA decomposition. The ANOVA is problematic with dependent variables. In that case, Shapley values and related quantities are used to measure importance, and those can be studied through the anchored decomposition, which is also common in quasi-Monte Carlo sampling.
Based on work with Ben Seiler, Christopher Hoyt, Masayoshi Mase and Naofumi Hama.
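Shapley values allocate a total importance exactly across variables even when they are dependent; a minimal exact computation for a toy two-variable game (the subset "explained variance" values below are made-up numbers, and the brute-force formula is only feasible for few variables):

```python
from itertools import combinations
from math import factorial

def shapley(players, value):
    """Exact Shapley values for a cooperative game given by value(coalition)."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (value(S | {i}) - value(S))   # marginal contribution
        phi[i] = total
    return phi

# toy variable-importance game: value = 'explained variance' of each subset
v = {frozenset(): 0.0, frozenset('a'): 0.4,
     frozenset('b'): 0.3, frozenset('ab'): 0.9}
phi = shapley(['a', 'b'], lambda S: v[frozenset(S)])
```

By construction the values sum to the full-model value ("efficiency"), which is what makes them attractive as importance measures.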
[03703] Tuning QMC point sets using WAFOM-like value
Format : Talk at Waseda University
Author(s) :
Makoto Matsumoto (Hiroshima University)
Abstract : There are many figures of merit for a QMC point set P. The star discrepancy D^*(P) is a famous one. We restrict ourselves to the case of 2-adic digital nets. Then P can be considered as an F2-vector space, where F2 is the two-element field. Let P' be the perpendicular space of P; then P' is the space of linear relations satisfied by P. Roughly speaking, P is more uniform if every non-zero element A of P' is sufficiently complicated. One way to measure the complexity of A is the Niederreiter-Rosenbloom-Tsfasman (NRT) weight; the smallest NRT weight among all non-zero elements of P' determines the t-value. As another figure of merit, we may consider the Dick weight mu_alpha(A) of A in P'. Summing 2^{-mu_alpha(A)} over non-zero A in P' gives a figure of merit studied by Dick, and by taking the limit as alpha goes to infinity, we obtain WAFOM. Harase used the fact that the NRT weight is invariant under lower-triangular transformations while WAFOM is not, so that one can obtain better performance for smooth integrands by optimizing WAFOM while keeping the t-value fixed. We study a justification of this method, and a variation in which the Dick weight is replaced with the NRT weight.
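A small sketch of the dual-space view for a toy digital net over F2 (identity and anti-identity generating matrices, i.e. a 2D Hammersley-type net). It assumes the standard relation t = m - delta + 1 between the minimum NRT weight delta of the dual space and the t-value; digit-ordering conventions may differ from the talk's.

```python
import numpy as np
from itertools import product

m, s = 3, 2
C = [np.eye(m, dtype=int), np.eye(m, dtype=int)[::-1]]  # generating matrices over F2

def nrt_weight(a):
    # NRT weight of one digit vector: 1-based position of its last nonzero digit
    nz = np.nonzero(a)[0]
    return 0 if len(nz) == 0 else int(nz[-1]) + 1

# enumerate the dual space P': all (a_1, ..., a_s) with sum_j C_j^T a_j = 0 over F2
delta = None
for bits in product([0, 1], repeat=s * m):
    a = np.array(bits).reshape(s, m)
    if not a.any():
        continue
    if (sum(C[j].T @ a[j] for j in range(s)) % 2 == 0).all():
        w = sum(nrt_weight(a[j]) for j in range(s))
        delta = w if delta is None else min(delta, w)

t_value = m - delta + 1    # minimum dual NRT weight determines the t-value
```

For this net the minimum dual NRT weight is m + 1, giving t = 0, consistent with the Hammersley net being a (0, m, 2)-net.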