Abstract : Geometry plays a paramount role in many aspects of data analysis and machine learning: graphs on high-dimensional datasets encode interactions between geometry and data; geometries on the space of probability measures give rise to new optimization and sampling algorithms; geometric deep learning translates deep learning to new domains; adversarial regularization of neural networks corresponds to geometric regularization. In this minisymposium we gather junior and senior researchers who have been driving research in the field, using geometric methods for both analysis and algorithms. We aim to spark new collaborations in this vibrant field and to offer a platform for scientific exchange.
[03148] Geometric Data Analysis via Discrete Curvature
Format : Talk at Waseda University
Author(s) :
Melanie Weber (Harvard University)
Abstract : The problem of identifying geometric structure in heterogeneous, high-dimensional data is a cornerstone of Machine Learning. In this talk, we approach this problem from the perspective of Discrete Geometry. We begin by reviewing discrete notions of curvature, focusing especially on discrete Ricci curvature. Then we consider a setting where a given point cloud was sampled from an (unknown) manifold. We give pointwise consistency results relating the discrete curvature of a geometric graph built from the point cloud to the curvature of the manifold. We further show that if the manifold has curvature bounded from below by a positive constant, the geometric graph inherits this global structural property with high probability. Finally, we discuss applications of discrete curvature and our consistency results in Geometric Data Analysis, including graph-based clustering and regression. The talk is based on joint work with Nicolas Garcia Trillos, Zachary Lubberts and Yu Tian.
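To make the objects in the abstract concrete, the following is a minimal illustrative sketch (not taken from the talk) of one standard notion of discrete Ricci curvature, Ollivier-Ricci curvature, on an epsilon-graph built from a point cloud: kappa(x, y) = 1 - W1(mu_x, mu_y) / d(x, y), with mu_x the uniform measure on the neighbors of x. The point cloud, the epsilon scale, and the Euclidean ground cost are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def w1(a, b, C):
    """1-Wasserstein distance between discrete measures a and b with cost matrix C,
    solved as a linear program over the transport plan."""
    m, n = C.shape
    A_eq = []
    for i in range(m):                     # each source mass a[i] is fully shipped
        row = np.zeros((m, n)); row[i, :] = 1.0; A_eq.append(row.ravel())
    for j in range(n):                     # each target mass b[j] is fully received
        col = np.zeros((m, n)); col[:, j] = 1.0; A_eq.append(col.ravel())
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun

def ollivier_ricci(points, eps):
    """Ollivier-Ricci curvature kappa(x, y) = 1 - W1(mu_x, mu_y) / d(x, y) for every
    edge of the epsilon-graph, with mu_x uniform on the neighbors of x."""
    D = cdist(points, points)
    adj = (D < eps) & (D > 0)
    curv = {}
    for x in range(len(points)):
        for y in range(x + 1, len(points)):
            if not adj[x, y]:
                continue
            Nx, Ny = np.where(adj[x])[0], np.where(adj[y])[0]
            mu_x = np.full(len(Nx), 1.0 / len(Nx))
            mu_y = np.full(len(Ny), 1.0 / len(Ny))
            curv[(x, y)] = 1.0 - w1(mu_x, mu_y, D[np.ix_(Nx, Ny)]) / D[x, y]
    return curv

# toy point cloud sampled near a circle (a positively curved manifold)
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 60)
pts = np.c_[np.cos(theta), np.sin(theta)] + 0.02 * rng.normal(size=(60, 2))
print(list(ollivier_ricci(pts, eps=0.5).items())[:3])
```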
[02978] Large data limit of the MBO scheme for data clustering
Format : Online Talk on Zoom
Author(s) :
Jona Lelmi (University of Bonn)
Tim Laux (University of Bonn)
Abstract : The MBO scheme is a highly performant algorithm for data clustering. Given some data, one constructs the similarity graph associated with the data points; the goal is to split the data into meaningful clusters. The algorithm produces the clusters by alternating between diffusion on the graph and pointwise thresholding. In this talk, I will present the first theoretical studies of the scheme in the large data limit. We will see how the final state of the algorithm is asymptotically related to minimal surfaces in the data manifold, and how the dynamics of the scheme are asymptotically related to the trajectory of steepest descent for surfaces, which is mean curvature flow. The tools employed are variational methods and viscosity solution techniques. Based on joint work with Tim Laux (U Bonn).
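For reference, a minimal two-cluster sketch of the graph MBO iteration described above (diffusion followed by thresholding). This is an illustrative implementation, not the code analyzed in the talk; the similarity weights, the diffusion time tau, and the fixed iteration count are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm

def graph_mbo(W, u0, tau=0.1, n_iter=20):
    """Two-cluster MBO scheme on a similarity graph: alternate heat diffusion
    by exp(-tau * L) with pointwise thresholding at 1/2.
    W: (n, n) symmetric similarity matrix; u0: (n,) initial labels in {0, 1}."""
    L = np.diag(W.sum(axis=1)) - W       # unnormalized graph Laplacian
    K = expm(-tau * L)                   # heat kernel for one diffusion step
    u = u0.astype(float)
    for _ in range(n_iter):
        u = K @ u                        # diffusion on the graph
        u = (u > 0.5).astype(float)      # pointwise thresholding
    return u

# toy data: two Gaussian blobs, Gaussian similarity weights
rng = np.random.default_rng(1)
X = np.r_[rng.normal(0, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))]
W = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1) / 0.5)
print(graph_mbo(W, rng.integers(0, 2, 60)))
```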
[03210] Topologies of convergence for the discrete-to-continuum limit on Poisson point clouds
Format : Talk at Waseda University
Author(s) :
Marco Caroccia (Politecnico di Milano)
Abstract : Energies on point clouds have attracted increasing attention in the last decades, especially due to their applications to machine learning and data analysis. What seems to emerge from the collection of results at several scales is that a change in the topology of Gamma-convergence occurs when the point cloud is connected at a very short-range interaction scale. Typically, in the literature, the interaction range considered is large enough to neglect the defects arising from the stochastic geometry of the point cloud. In the short-range interaction regime, instead, the geometry of the point cloud cannot be neglected in the analysis of the discrete-to-continuum limit. We will present the various topologies of convergence and the different techniques that are required to obtain the discrete-to-continuum limit at different scales.
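The scale issue discussed above can be seen empirically. The following sketch (illustrative only; the unit square, the intensity, and the specific scale factors are assumptions) samples a homogeneous Poisson point cloud and builds epsilon-graphs around the connectivity scale eps ~ sqrt(log n / (pi n)), where isolated points and sparse regions of the cloud start to matter.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(9)
n = rng.poisson(2000)                      # Poisson number of points
pts = rng.uniform(0, 1, (n, 2))            # homogeneous Poisson process on the unit square
D = cdist(pts, pts)

for scale in (0.5, 1.0, 2.0, 4.0):
    eps = scale * np.sqrt(np.log(n) / (np.pi * n))   # multiples of the connectivity scale
    A = csr_matrix((D < eps) & (D > 0))              # epsilon-graph adjacency
    n_comp, _ = connected_components(A, directed=False)
    print(f"eps = {eps:.3f}  connected components: {n_comp}")
```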
[03219] Spectral Methods for Data Sets of Mixed Dimensions
Format : Talk at Waseda University
Author(s) :
Leon Bungert (Technical University of Berlin)
Dejan Slepcev (Carnegie Mellon University)
Abstract : High-dimensional data often consist of parts with different intrinsic dimensions. We study how spectral methods on graphs adapt to data containing intersecting pieces of different dimensions. We show that the unnormalized Laplacian strongly prefers the highest dimension, while an appropriately normalized Laplacian converges to the Laplace-Beltrami operator in all dimensions simultaneously. For intersecting manifolds we identify when and how information is transferred between the manifolds.
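A small numerical illustration of the dichotomy described above (illustrative only: the toy data set, the epsilon-graph construction, and the random-walk normalization used here are assumptions, not necessarily the normalization analyzed in the talk). On data made of an intersecting 2D patch and 1D segment, one can compare how the second eigenvector of the unnormalized and of the normalized Laplacian distributes its mass over the two pieces.

```python
import numpy as np
from scipy.spatial.distance import cdist

# mixed-dimensional toy data: a 2D square patch and a 1D segment through it
rng = np.random.default_rng(2)
square = rng.uniform(-1, 1, (300, 2))
segment = np.c_[rng.uniform(-2, 2, 150), np.zeros(150)]
X = np.r_[square, segment]

W = (cdist(X, X) < 0.3).astype(float)       # epsilon-graph weights
np.fill_diagonal(W, 0.0)
d = np.maximum(W.sum(axis=1), 1e-12)        # degrees (guard against isolated points)
L_un = np.diag(d) - W                       # unnormalized Laplacian
L_rw = np.eye(len(X)) - W / d[:, None]      # random-walk normalized Laplacian

def second_eigvec(L):
    """Eigenvector of the second-smallest eigenvalue, used for spectral embedding."""
    vals, vecs = np.linalg.eig(L)
    return vecs[:, np.argsort(vals.real)[1]].real

# compare how much of the eigenvector's mass falls on the 2D vs the 1D piece
for L in (L_un, L_rw):
    v = second_eigvec(L)
    print(np.abs(v[:300]).mean(), np.abs(v[300:]).mean())
```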
[03228] An information geometric and optimal transport framework for Gaussian processes
Format : Talk at Waseda University
Author(s) :
Minh Ha Quang (RIKEN Center for Advanced Intelligence Project)
Abstract : Information geometry (IG) and optimal transport (OT) have been attracting much research attention in various fields, in particular machine learning and statistics. In this talk, we present results on the generalization of IG and OT distances for finite-dimensional Gaussian measures to the setting of infinite-dimensional Gaussian measures and Gaussian processes. Our focus is on the entropic regularization of the 2-Wasserstein distance and the generalization of the Fisher-Rao distance and related quantities. In both settings, regularization leads to many desirable theoretical properties, including in particular dimension-independent convergence and sample complexity. The mathematical formulation involves the interplay of IG and OT with Gaussian processes and the methodology of reproducing kernel Hilbert spaces (RKHS). All of the presented formulations admit closed-form expressions that can be computed efficiently and applied in practice. The theoretical formulations will be illustrated with numerical experiments on Gaussian processes.
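As a point of reference for the closed-form expressions mentioned above, here is the standard finite-dimensional, unregularized 2-Wasserstein distance between two Gaussian measures; the entropic-regularized and infinite-dimensional (Gaussian process) versions discussed in the talk refine this baseline and are not reproduced here. The example means and covariances are arbitrary.

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(m1, S1, m2, S2):
    """Closed-form 2-Wasserstein distance between N(m1, S1) and N(m2, S2):
    W2^2 = |m1 - m2|^2 + tr(S1 + S2 - 2 (S1^{1/2} S2 S1^{1/2})^{1/2})."""
    root = sqrtm(sqrtm(S1) @ S2 @ sqrtm(S1))
    bures = np.trace(S1 + S2 - 2 * root.real)
    return np.sqrt(np.sum((m1 - m2) ** 2) + bures)

m1, m2 = np.zeros(2), np.ones(2)
S1 = np.array([[1.0, 0.2], [0.2, 0.5]])
S2 = np.array([[0.8, -0.1], [-0.1, 1.2]])
print(w2_gaussian(m1, S1, m2, S2))
```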
[03101] Data analysis and optimal transport: some statistical tools
Format : Talk at Waseda University
Author(s) :
Elsa Cazelles (CNRS, Université de Toulouse)
Abstract : We focus on the analysis of data that can be described by probability measures supported on a Euclidean space, through optimal transport. Our main objective is to present first- and second-order statistical analyses in the space of distributions, as a first approach to understanding the general modes of variation of a set of observations. These analyses correspond to the barycenter and the decomposition into geodesic principal components in the Wasserstein space.
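As a small illustration of the first-order analysis (the barycenter), here is a sketch restricted to one-dimensional measures, where the Wasserstein barycenter has a closed form via quantile averaging; the second-order analysis (geodesic PCA) is not reproduced here, and the example data are an assumption.

```python
import numpy as np

def w2_barycenter_1d(samples_list, n_quantiles=200):
    """W2 barycenter of one-dimensional empirical measures: in 1D the barycenter's
    quantile function is the average of the input quantile functions."""
    qs = np.linspace(0, 1, n_quantiles)
    quantiles = np.array([np.quantile(s, qs) for s in samples_list])
    return qs, quantiles.mean(axis=0)

rng = np.random.default_rng(3)
samples = [rng.normal(mu, 1.0, 500) for mu in (-2.0, 0.0, 3.0)]
qs, bary_q = w2_barycenter_1d(samples)
print(bary_q[len(qs) // 2])   # median of the barycenter, close to 1/3 here
```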
[02871] Multispecies Optimal Transport and its Linearization
Format : Talk at Waseda University
Author(s) :
Katy Craig (Department of Mathematics at the University of California Santa Barbara)
Nicolás García Trillos (Department of Statistics at the University of Wisconsin Madison)
Dorde Nikolic (Department of Mathematics at the University of California Santa Barbara)
Abstract : The discovery of linear optimal transport by Wang et al. in 2013 improved the computational efficiency of optimal transport algorithms for grayscale image classification. Our main goal is to classify multicolor images arising in collider events. We will introduce the basics of linear optimal transport theory and the multispecies distance. I will discuss similarities of the multispecies case with the Hellinger-Kantorovich distance, which was linearized in 2021 by Cai et al. via its Riemannian structure.
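For orientation, a minimal one-dimensional sketch of the linear optimal transport idea referenced above: fix a reference measure sigma, represent each measure mu_i by the optimal map T_i pushing sigma to mu_i, and compare measures through the L2(sigma) distance between the maps. In 1D the optimal map is the composition of a quantile function with the reference distribution function, so on a uniform grid of quantile levels the embedding reduces to quantile functions. The multispecies and multicolor-image extensions discussed in the talk are not reproduced here; the example measures are assumptions.

```python
import numpy as np

def lot_embed(samples, qs):
    """LOT embedding of a 1D empirical measure: on a uniform grid of quantile
    levels, the optimal map from the reference reduces to the quantile function."""
    return np.quantile(samples, qs)

def lot_distance(emb_i, emb_j):
    """Approximate ||T_i - T_j||_{L2(sigma)} by averaging over quantile levels."""
    return np.sqrt(np.mean((emb_i - emb_j) ** 2))

rng = np.random.default_rng(4)
qs = np.linspace(0.005, 0.995, 200)
measures = [rng.normal(m, s, 400) for m, s in [(0, 1), (1, 1), (0, 2)]]
embs = [lot_embed(s, qs) for s in measures]
print(lot_distance(embs[0], embs[1]), lot_distance(embs[0], embs[2]))
```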
[03196] Semi-supervised learning with the p-Laplacian
Format : Talk at Waseda University
Author(s) :
Nadejda Drenska (Louisiana State University)
Jeff Calder (University of Minnesota, Twin Cities)
Abstract : Semi-supervised learning involves learning from both labeled and unlabeled data. In this talk we apply p-Laplacian regularization to problems with very low labeling rates; in such regimes this approach classifies correctly where standard Laplacian regularization does not. Using the two-player stochastic game interpretation of the p-Laplacian, we prove asymptotic consistency of p-Laplacian-regularized semi-supervised learning, thus justifying the utility of the p-Laplacian. This is joint work with Jeff Calder.
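The regularizer in question can be written as the graph p-Dirichlet energy J(u) = sum_{ij} w_ij |u_i - u_j|^p, minimized over labelings u that agree with the given labels. The sketch below is a naive projected gradient descent on this energy, an assumption made purely for illustration (the talk's analysis goes through the stochastic game interpretation, not this solver), with arbitrary toy data, kernel, and step size.

```python
import numpy as np

def p_laplacian_ssl(W, labels, p=4.0, lr=0.005, n_iter=3000):
    """Semi-supervised learning with p-Laplacian regularization: minimize
    J(u) = sum_{ij} w_ij |u_i - u_j|**p with labeled entries held fixed,
    via projected gradient descent. labels[i] in {-1, +1} if known, else 0."""
    known = labels != 0
    u = labels.astype(float)
    for _ in range(n_iter):
        diff = u[:, None] - u[None, :]
        grad = p * np.sum(W * np.abs(diff) ** (p - 2) * diff, axis=1)
        u -= lr * grad
        u[known] = labels[known]          # keep the given labels fixed
    return np.sign(u)

# toy data: two blobs, a single label in each blob (very low labeling rate)
rng = np.random.default_rng(5)
X = np.r_[rng.normal(0, 0.4, (40, 2)), rng.normal(3, 0.4, (40, 2))]
W = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1))
labels = np.zeros(80); labels[0], labels[40] = -1.0, 1.0
print(p_laplacian_ssl(W, labels))
```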
[03217] The passive symmetries of machine learning
Format : Talk at Waseda University
Author(s) :
Soledad Villar (Johns Hopkins University)
Abstract : Any representation of data involves arbitrary investigator choices. Because those choices are external to the data-generating process, each choice leads to an exact symmetry, corresponding to the group of transformations that takes one possible representation to another. These are the passive symmetries; they include coordinate freedom, gauge symmetry and units covariance, all of which have led to important results in physics. Our goal is to understand the implications of passive symmetries for machine learning: Which passive symmetries play a role (e.g., permutation symmetry in graph neural networks)? What are the dos and don'ts in machine learning practice? We assay conditions under which passive symmetries can be implemented as group equivariances. We also discuss links to causal modeling, and argue that the implementation of passive symmetries is particularly valuable when the goal of the learning problem is to generalize out of sample.
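As a concrete instance of one passive symmetry named above, permutation symmetry in graph neural networks, the following sketch (illustrative, not from the talk; the layer and its weights are arbitrary) checks that a simple message-passing layer is equivariant under a relabeling of the nodes: permuting the representation (A, X) permutes the output in the same way.

```python
import numpy as np

def gnn_layer(A, X, W1, W2):
    """One message-passing layer: H = relu(X @ W1 + A @ X @ W2)."""
    return np.maximum(X @ W1 + A @ X @ W2, 0.0)

rng = np.random.default_rng(6)
n, d, k = 10, 4, 3
A = np.triu(rng.integers(0, 2, (n, n)), 1); A = A + A.T    # random undirected graph
X = rng.normal(size=(n, d))                                # node features
W1, W2 = rng.normal(size=(d, k)), rng.normal(size=(d, k))
P = np.eye(n)[rng.permutation(n)]                          # a node relabeling

# the relabeling changes the representation but not the graph, so the layer
# output must transform covariantly: layer(P A P^T, P X) == P layer(A, X)
lhs = gnn_layer(P @ A @ P.T, P @ X, W1, W2)
rhs = P @ gnn_layer(A, X, W1, W2)
print(np.allclose(lhs, rhs))   # True
```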
[03186] Graphons in Machine Learning
Format : Talk at Waseda University
Author(s) :
Luana Ruiz (Johns Hopkins University)
Abstract : Graph neural networks are successful at learning representations from graph data but suffer from limitations on large graphs. Yet, large graphs can be identified as being similar to each other in the sense that they share structural properties. Indeed, graphs can be grouped into families converging to a common graph limit: the graphon. In this talk, I discuss how graphons can be used to lay the theoretical foundations for machine learning on large-scale graphs.
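To make the limit object concrete, the following sketch (illustrative; the particular graphon is an arbitrary choice) samples graphs of growing size from a fixed graphon W(x, y) and checks that a simple graph statistic, the edge density, converges to the graphon's integral.

```python
import numpy as np

def sample_graphon(W, n, rng):
    """Sample an n-node graph from graphon W: draw latent positions x_i ~ U[0, 1]
    and connect i ~ j independently with probability W(x_i, x_j)."""
    x = rng.uniform(0, 1, n)
    A = (rng.uniform(size=(n, n)) < W(x[:, None], x[None, :])).astype(float)
    A = np.triu(A, 1)
    return A + A.T

W = lambda x, y: 0.8 * np.exp(-3 * np.abs(x - y))   # a smooth graphon on [0, 1]^2
rng = np.random.default_rng(7)
for n in (100, 400, 1600):
    A = sample_graphon(W, n, rng)
    # the edge density converges to the integral of W, roughly 0.364 here
    print(n, A.sum() / (n * (n - 1)))
```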
[03178] Graphon Analysis of Graph Neural Networks
Format : Talk at Waseda University
Author(s) :
Ron Levie (Technion - Israel Institute of Technology)
Abstract : In recent years, graph neural networks have led to ground-breaking achievements in the applied sciences and industry. These achievements pose exciting theoretical challenges: can the success of graph neural networks (GNNs) be grounded in solid mathematical frameworks?
In this talk, I will show how to define GNN input domains using graphon analysis, and how such domains lead to a universal analysis of GNNs, with generalization bounds and approximation theorems.
[03202] Graph Neural Networks on Large Random Graphs: Convergence, Stability, Universality
Format : Talk at Waseda University
Author(s) :
Nicolas Keriven (CNRS, IRISA)
Abstract : In this talk, we will discuss some theoretical properties of GNNs on large graphs. We assume that the graphs are generated with classical models of random graphs. We characterize the convergence of GNNs as the number of nodes grows. We study their stability to small deformations of the underlying model, a crucial property in traditional CNNs. Finally, we study their approximation power, and show how some recent GNNs are more powerful than others.
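A minimal numerical illustration of the kind of convergence statement mentioned above (illustrative assumptions throughout: the graphon, the feature function, and the mean-aggregation layer are arbitrary choices): for graphs sampled from a graphon W, the normalized neighborhood aggregation at a node converges, as the number of nodes grows, to the continuum integral operator (T_W f)(x) = int W(x, y) f(y) dy that underlies graphon-based analyses of GNN layers.

```python
import numpy as np

W = lambda x, y: 0.8 * np.exp(-3 * np.abs(x - y))   # graphon generating the graphs
f = lambda x: x                                     # node feature as a function of position
rng = np.random.default_rng(8)

x0 = 0.5                                            # latent position of the query node
y_grid = np.linspace(0, 1, 100001)
target = np.mean(W(x0, y_grid) * f(y_grid))         # (T_W f)(x0) by numerical integration

for n in (100, 1000, 10000):
    x = rng.uniform(0, 1, n)                        # latent positions of the other nodes
    edges = rng.uniform(size=n) < W(x0, x)          # edges incident to the query node
    agg = np.sum(edges * f(x)) / n                  # normalized mean aggregation at x0
    print(n, agg, "->", target)
```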