Special Lectures: Schedule and Abstracts

Updated: 8:31, Aug. 22, 2023

The Olga Taussky-Todd Lecture

The Olga Taussky-Todd Lecture is given at the Okuma Auditorium (Bldg. H on the MAP), following the Opening Ceremony in the same venue.

Aug. 21 (Mon.): 11:35 – 12:20
The Olga Taussky-Todd Lecture [Online] (Chair: Siegfried M. Rump)
Ilse C.F. Ipsen, North Carolina State University, USA
An Introduction to Randomized Matrix Computations

ICIAM Prize Lectures

All ICIAM Prize Lectures are given in Room D201 with Live-streaming Rooms D101 / D102. To find Bldg. D, see the MAP.

Aug. 22 (Tue.): 10:40 – 11:25
ICIAM Maxwell Prize Lecture [On-site] (Chair: Gang Bao)
Weinan E, AI for Science Institute, Beijing / Peking University, China
AI for Science

Aug. 22 (Tue.): 11:35 – 12:20
ICIAM Collatz Prize Lecture [Online] (Chair: Kim-Chuan Toh)
Maria Colombo, École polytechnique fédérale de Lausanne, Switzerland
Anomalous Dissipation in Fluid Dynamics

Aug. 23 (Wed.): 10:40 – 11:25
ICIAM Pioneer Prize Lecture [On-site] (Chair: Jin Keun Seo)
Leslie Greengard, New York University, USA
Adaptive, Multilevel, Fourier-based Fast Transforms

Aug. 23 (Wed.): 11:35 – 12:20
ICIAM Industry Prize Lecture [On-site] (Chair: Volker Mehrmann)
Cleve B. Moler, The MathWorks, USA
Exploring Matrices

Aug. 24 (Thu.): 10:40 – 11:25
ICIAM Lagrange Prize Lecture [On-site] (Chair: Leah Edelstein-Keshet)
Alfio Quarteroni, Polytechnic University of Milan, Italy
The Pulse of Math

Aug. 24 (Thu.): 11:35 – 12:20
ICIAM Su Buchin Prize Lecture [On-site] (Chair: Tanniemola B. Liverpool)
Jose Mario Martinez Perez, Brazilian Academy of Sciences, Brazil
Sequential Model Simplifications and Applications

ICIAM Invited Lectures

Aug. 22 (Tue.): 8:30 – 9:15 (Three in Parallel)

Aug. 22 (Tue.): 9:25 – 10:10 (Three in Parallel)

Aug. 23 (Wed.): 8:30 – 9:15 (Three in Parallel)

Aug. 23 (Wed.): 9:25 – 10:10 (Three in Parallel)

Aug. 24 (Thu.): 8:30 – 9:15 (Three in Parallel)

Aug. 24 (Thu.): 9:25 – 10:10 (Three in Parallel)

Aug. 24 (Thu.): 19:45 – 20:30 (Three in Parallel)

Aug. 25 (Fri.): 8:30 – 9:15 (Three in Parallel)

Aug. 25 (Fri.): 9:25 – 10:10 (Three in Parallel)

SIAM Prize Lectures

All SIAM Prize Lectures are given in Room D201 with Live-streaming Rooms D101 / D102.

Aug. 21 (Mon.): 19:45 – 20:30
SIAM Peter Henrici Lecture [On-site]
Douglas N. Arnold, University of Minnesota, USA
What the @#$! is Cohomology Doing in Numerical Analysis?!

Aug. 22 (Tue.): 19:45 – 20:30
SIAM John von Neumann Lecture [On-site]
Yousef Saad, University of Minnesota, USA
Iterative Linear Algebra for Large Scale Computations

Aug. 23 (Wed.): 19:45 – 20:30
AWM-SIAM Sonia Kovalevsky Lecture [Online]
Annalisa Buffa, École polytechnique fédérale de Lausanne, Switzerland
Simulation of PDEs on Geometries Obtained via Boolean Operations

Abstracts of Special Lectures

The Olga Taussky-Todd Lecture

Ilse C.F. Ipsen, North Carolina State University, USA
An Introduction to Randomized Matrix Computations
Abstract:
We present a ‘user-friendly’ introduction to randomized matrix algorithms, with several case studies that focus on the ideas and intuition behind randomization.

The concerted development of randomized matrix algorithms started in the theoretical computer science community in the nineteen-nineties. At first a purely theoretical enterprise, these algorithms have become practical to the point that they are being used by domain scientists, and general-purpose software libraries, such as RandBLAS and RandLAPACK, are under development.

Many randomized matrix algorithms reduce the problem dimension by replacing the original matrix with a lower-dimensional ‘sketch’. We illustrate this on the basic problem of matrix multiplication, and on the solution of least squares/regression problems. Along the way, we discuss sampling modalities (data-aware, oblivious), developments in high-dimensional probability (matrix concentration inequalities, matrix coherence), numerical issues (problem conditioning), and the analysis of the error due to randomization.
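As a small illustration of the sketching idea (an added sketch, not taken from the lecture; the problem sizes and the Gaussian choice of sketch are assumptions), the following Python snippet compresses an overdetermined least-squares problem before solving it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative overdetermined least-squares problem min_x ||A x - b||_2
m, n = 10000, 50                      # many more rows than columns (assumed sizes)
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Oblivious sketch: a Gaussian matrix S with s << m rows compresses the problem
s = 400                               # sketch size (illustrative choice)
S = rng.standard_normal((s, m)) / np.sqrt(s)

# Solve the much smaller sketched problem min_x ||S A x - S b||_2
x_sketched, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

print("relative error of sketched solution:",
      np.linalg.norm(x_sketched - x_exact) / np.linalg.norm(x_exact))
```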

This talk draws on work with current and former students Jocelyn Chi, John Holodnak, Arnel Smith, and Thomas Wentworth.

ICIAM Maxwell Prize Lecture

Weinan E, AI for Science Institute, Beijing / Peking University, China
AI for Science
Abstract:
For many years, the lack of good algorithms has severely limited our ability to conduct scientific research. At its heart, the difficulty comes from the notorious “curse of dimensionality” problem. Deep learning is exactly the kind of tool needed to address this problem. In the last few years, we have seen a tremendous amount of scientific progress made as a result of the AI revolution, both in our ability to make use of the fundamental principles of physics, and our ability to make use of experimental data.

In this talk, I will start with the origin of the AI for Science revolution, review some of the major progress made so far, and discuss how it will impact the way we do scientific research. I will also discuss how AI for Science might impact applied mathematics.

ICIAM Collatz Prize Lecture

Maria Colombo, École polytechnique fédérale de Lausanne, Switzerland
Anomalous Dissipation in Fluid Dynamics
Abstract:
Kolmogorov’s K41 theory of turbulence advances quantitative predictions on anomalous dissipation in incompressible fluids. This phenomenon can be described as follows: although smooth solutions of the Euler equations conserve the kinetic energy, in turbulent fluids the energy can be transferred to high frequencies and anomalously dissipated. Hence turbulent solutions of the Navier-Stokes equations are expected to converge, in the vanishing viscosity limit, to irregular solutions of the Euler equations, with decreasing kinetic energy.

In rigorous analytical terms, however, this phenomenon is little understood. In this talk, I will present the recent developments on this topic and focus on a joint work with G. Crippa and M. Sorella which considers the case of passive-scalar advection, where anomalous dissipation is predicted by the Obukhov-Corrsin theory of scalar turbulence. I will discuss the construction of a velocity field and a passive scalar exhibiting anomalous dissipation in the supercritical Obukhov-Corrsin regularity regime. The techniques developed in this context also allow us to answer the question of (lack of) selection for passive-scalar advection under vanishing diffusivity. Finally, I will present a joint work with E. Bruè, G. Crippa, C. De Lellis, and M. Sorella, where we use the previous construction to give an example of anomalous dissipation for the forced Navier-Stokes equations in the supercritical Onsager regularity regime.

ICIAM Pioneer Prize Lecture

Leslie Greengard, New York University, USA
Adaptive, Multilevel, Fourier-based Fast Transforms
Abstract:
The last few decades have seen the development of a variety of fast algorithms for computing convolutional transforms – that is, evaluating the fields induced by a collection of sources at a collection of targets, with an interaction specified by some radial function (such as the 1/r kernel of gravitation or electrostatics).
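In symbols (a schematic restatement added for concreteness, not part of the abstract), such a transform evaluates, for targets x_i and sources y_j with strengths q_j,

\[
u(x_i) \;=\; \sum_{j=1}^{N} K\bigl(\lVert x_i - y_j\rVert\bigr)\, q_j, \qquad i = 1, \dots, M,
\]

where K is the radial kernel (for instance K(r) = 1/r); fast transforms aim to evaluate all M outputs in close to O(M + N) work rather than the O(MN) cost of direct summation.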

The earliest such scheme was Ewald summation, which relies on Fourier analysis for its performance and is best suited for uniform distributions of sources and targets. To overcome this limitation, approximation-theory based algorithms emerged, which organized the sources and targets on an adaptive tree data structure. By carefully separating source and target clusters at each length scale in the spatial hierarchy, linear scaling methods were developed to compute all pairwise interactions in linear time, more or less independent of the statistics of the distribution of points. (The fast multipole method is one such scheme.)

In this talk, we introduce a new class of methods for computing fast transforms that can be applied to a broad class of kernels, from the Green’s functions for constant-coefficient partial differential equations to power functions and radial basis functions such as those used in statistics and machine learning. The DMK (dual-space multilevel kernel-splitting) framework combines features from fast multipole methods, Ewald summation, multilevel summation methods and asymptotic analysis to achieve speeds comparable to the FFT in work per gridpoint, even in a fully adaptive context. We will discuss both the algorithm and some of its applications to physical modeling in complex geometry. This is joint work with Shidong Jiang.

ICIAM Industry Prize Lecture

Cleve B. Moler, The MathWorks, USA
Exploring Matrices
Abstract:
An evolving collection of short videos and interactive MATLAB software that supplements courses in linear algebra and computational science. Topics include rotation matrices, Rubik’s cubes, computer graphics, Simulink models of vehicle dynamics, and AI for facial recognition of gorillas.

ICIAM Lagrange Prize Lecture

Alfio Quarteroni, Polytechnic University of Milan, Italy
The Pulse of Math
Abstract:
Computational medicine represents a formidable generator of mathematical problems and numerical methods that enable a deeper understanding of human physiology and provide crucial support to physicians for more accurate diagnoses, optimized therapies, and patient-specific surgical interventions.

The inherent difficulties associated with the multiphysics and multiscale nature of the problems at hand, data uncertainty, inter- and intra-patient variability, and the curse of dimensionality, can be overcome thanks to the development of accurate, physics-based models empowered with data-driven artificial intelligence algorithms.

In this presentation, we will show how the iHEART simulator, an integrated model of the human heart function, enables us to achieve these objectives for the first time, and discuss its future developments.

ICIAM Su Buchin Prize Lecture

Jose Mario Martinez Perez, Brazilian Academy of Sciences, Brazil
Sequential Model Simplifications and Applications
Abstract:
We discuss the process of simplification or “complexification” of problems. We present a model algorithm that simplifies or complicates problems based on rational criteria. The algorithm is inspired by the scheme of inexact restoration introduced originally for solving constrained optimization problems. We present the mathematical characteristics of the algorithm in terms of convergence and complexity. Examples will be given concerning the prediction of river flows.

ICIAM Invited Lectures

Albert Cohen, Sorbonne Université, France
From Linear to Nonlinear Reduced Modeling: Theory and Algorithms
Abstract:
Reduced modeling techniques are of important use for tackling forward simulation and inverse problems in the context of parametrized PDEs. We shall first review concepts and algorithms that are relevant to linear reduced modeling: Kolmogorov width and reduced bases, proper orthogonal decompositions… We shall then discuss various strategies that aim at developing similar tools for nonlinear reduced modeling, which appears to be beneficial in various applications.

Yasuaki Hiraoka, Kyoto University, Japan
Persistent Homology from Viewpoints of Representation, Probability, and Application
Abstract:
Topological data analysis (TDA) is an emerging concept in applied mathematics, by which we can characterize the shapes of massive and complex data using topological methods. In particular, persistent homology and persistence diagrams are nowadays applied to a wide variety of scientific and engineering problems. In this talk, I will survey our recent research on persistent homology from three interrelated perspectives: quiver representation theory, random topology, and applications in materials science.

First, on the subject of quiver representation theory, I will talk about our recent challenges to develop a theory of multiparameter persistent homology on commutative ladders. By applying interval decompositions/approximations of multiparameter persistent homology (Asashiba et al., 2022) to our setting, I will introduce a new concept called connected persistence diagrams, which properly possess information of multiparameter persistence, and show some properties of connected persistence diagrams.

Next, about random topology, I will show our recent results on limit theorems (law of large numbers, central limit theorem, and large deviation principles) for persistent Betti numbers and persistence diagrams defined on several stochastic models such as random cubical sets and random point processes in a Euclidean space. Furthermore, I will also explain a preliminary work on how random topology can contribute to understanding the decomposition of multiparameter persistent homology discussed in the first part.

Finally, about applications, I will explain our recent activity on the materials TDA project. By applying the new mathematical tools introduced above, we can explicitly characterize significant geometric and topological hierarchical features embedded in materials (glass, granular systems, iron ore sinters, etc.), which are practically important for controlling materials functions.

Rachel Ward, The University of Texas at Austin, USA
Stochastic Gradient Descent: Understanding Adaptive Step-sizes, Momentum, and Random Initialization
Abstract:
Stochastic gradient descent (SGD) is the foundational algorithm used in machine learning optimization, but several algorithmic modifications to the basic SGD algorithm are often needed to make it “work” on high-dimensional non-convex problems. Three of the crucial modifications are: adaptive step-size updates, momentum, and careful random initialization of the parameters. This talk will discuss recent theoretical insights towards understanding why adaptivity, momentum, and careful random initialization are so powerful in practice. In particular, the theory unveils a novel but simple initialization method for gradient descent on matrix- and tensor-factorization problems; with this initialization, we prove that gradient descent discovers optimal low-rank matrix and tensor factorizations in a small number of steps.
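As an added, hedged illustration of one adaptive step-size rule studied in this line of work (an AdaGrad-norm-style scaling; the toy objective and constants are assumptions, and this is not a transcription of the results in the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares objective f(x) = (1/2m) * ||A x - b||^2, minimized with SGD
m, n = 1000, 20
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)

x = rng.standard_normal(n)             # random initialization
eta = 1.0                              # base step size (assumed)
accum = 1e-8                           # accumulator of squared gradient norms

for t in range(5000):
    i = rng.integers(m)                       # sample one data point
    g = (A[i] @ x - b[i]) * A[i]              # stochastic gradient
    accum += np.dot(g, g)                     # accumulate squared gradient norm
    x -= eta / np.sqrt(accum) * g             # adaptive (AdaGrad-norm style) step

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))
```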

Gary Froyland, The University of New South Wales, Australia
Spectral Approaches to Complex Dynamics
Abstract:
The weather and the climate, along with social processes, biological processes, and engineering processes, are all dynamical systems because they are governed by a set of micro-rules at the level of individual states that describe how processes evolve over time. Complex dynamical systems incorporate elements of unpredictability (nearby initial states quickly diverge from one another) and emergence (macroscopic system behaviour is not apparent from the set of micro-rules due to many interacting components).

Emergent behaviour in complex dynamical systems is typically connected with the appearance of macro-structures that persist for a certain amount of time. Examples of such macro-structures include eddies in the ocean, cyclonic storms or heatwaves in the atmosphere, and the coalescing of societal opinion around a particular issue. It is these macro-phenomena that impact our daily world, but it remains challenging to identify key organising emergent phenomena from micro-rules or spatiotemporal observations. Part of the difficulty arises from the usually complicated nonlinear dynamics of the micro-rules. To access these macro-phenomena, we use a linear operator induced by the micro-dynamics. This linear operator – the transfer operator – acts on ensembles of individual states, and its linearity enables one to access a huge toolbox from linear analysis.
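Schematically (an added reminder, not a quotation from the abstract), for a map T describing one step of the micro-dynamics and a density f of an ensemble of states, the transfer (Perron–Frobenius) operator \mathcal{L} is defined by duality with composition of observables,

\[
\int (\mathcal{L}f)(x)\, g(x)\, dx \;=\; \int f(x)\, g(T(x))\, dx \qquad \text{for all observables } g,
\]

so that \mathcal{L}f is the density of the ensemble after one step; persistent macro-structures show up in its leading eigenvalues and eigenfunctions.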

In this talk, we gently introduce the transfer operator and its associated spectral theory. We also make connections to a dynamic spectral geometry of evolving manifolds, centred on the dynamic Laplace operator. Several example analyses are presented, ranging across problems in climate, physical oceanography, and atmospheric science.

Satoru Iwata, University of Tokyo / Hokkaido University, Japan
Mathematical Approaches Towards Design and Discovery of Chemical Reactions
Abstract:
Recent developments in computational quantum chemistry enable us to explore the potential energy surface and extract chemical reaction networks. These methods are highly relevant to continuous optimization. To utilize this advancement for the design and discovery of chemical reactions, one naturally requires methods in discrete optimization and machine learning.

Given a configuration of atoms, one can determine the energy of the ground state via quantum chemistry calculation. This function, called the potential energy surface (PES), provides rich information on the structure of molecules. A local minimum of the PES corresponds to an equilibrium state, while a saddle point corresponds to a transition state. An elementary process of a chemical reaction can be understood as a move from one equilibrium state to another one via a transition state. Chemical reactions are often realized as a sequence of such moves, namely as a path in a network that consists of equilibrium states as vertices and transition states as edges. Such a network is called a chemical reaction network.

It is highly nontrivial, however, to extract the chemical reaction network unless the target system is very small. Ohno and Maeda (2004) developed a systematic method, called the ADDF (anharmonic downward distortion following) method, to explore the potential energy surface from one equilibrium state to a transition state. Subsequently, Maeda and Morokuma (2011) devised a different algorithm called the AFIR (artificial force induced reaction) method, based on the technique of applying an artificial force that pushes a certain part of the system in a certain direction. The AFIR method is applicable to various situations including a reaction of type A + B → X.

These two methods have enabled us to extract chemical reaction networks. To utilize these methods for design and discovery of chemical reactions, however, there are still several issues to be resolved.

One example is how to select diverse molecules from unexplored areas of chemical space. We have so far developed a new approach for selecting a subset of diverse molecules from a given molecular list by combining two existing techniques in machine learning and discrete optimization: graph neural networks (GNNs) for learning vector representation of molecules and a diverse-selection framework by submodular function maximization.

Another issue is how to predict the yield of the target product from the chemical reaction network. This can be done by analyzing the kinetics of reactions that are modeled with large-scale, highly stiff master equations, which are hard to solve numerically. A heuristic approach called the rate constant matrix contraction (RCMC) method has been developed. We will discuss theoretical properties and an efficient implementation of this method, which turns out to have a close relation to the greedy algorithm for submodular function maximization.

This talk is based on a current research project in collaboration with Satoshi Maeda of Hokkaido University supported by JST ERATO.

Gitta Kutyniok, LMU Munich, Germany
The Mathematics of Reliable AI
Abstract:
The new wave of artificial intelligence is impacting industry, public life, and the sciences in an unprecedented manner. In industrial and applied mathematics, it has by now already led to paradigm changes in several areas. However, one current major drawback is the lack of reliability.

The goal of this lecture is to first provide an introduction into this new vibrant research area. We will then survey recent advances, in particular, concerning performance guarantees and explainability methods for artificial intelligence, which are key to ensure reliability. Finally, we will discuss fundamental limitations in terms of computability, which seriously affect diverse aspects of reliability, and reveal a surprising connection to novel computing approaches such as neuromorphic computing and quantum computing.

Ichiro Hagiwara, Meiji University, Japan
A Consideration of Scientific – Technical Aspects and Artistic Aspect of Origami Engineering – Aiming to Create a New Big Industry and a New Fan Culture
Abstract:
Interests in current science and technology span both micro and macro extremes, with remarkable progress in measurement technology. Because there is no manufacturing equipment for huge or micro structures, the structures themselves must double as manufacturing devices. Therefore, foldable and deployable origami structures become more and more important. Washi is virtually the world’s first foldable paper, so it is inevitable that origami was born and nurtured in Japan. Thanks to Japan’s isolation during the Edo period and a peaceful and prosperous era, arts peculiar to Japan such as haiku, kabuki, and Noh were nurtured. Origami, which is one of these arts, has long been a precious art and was not considered to be a target for money. But British engineers developed a mass production method for honeycombs, inspired by Japanese Tanabata decorations, which developed into a trillion-yen industry. This fact inspired us to start origami engineering in 2003.

Dr. Nojima brought in traditional Japanese paper cutting, “Kirigami”, and created the concept of curved honeycombs in 2002, which cannot be made using the existing British manufacturing method. Kirigami has also become an international language like origami and is now a significant field of origami engineering, because kirigami honeycomb is more useful than origami structures for the above-mentioned macro and micro structures. And now, let me show you a more advanced honeycomb. It has been challenging to build arbitrary shapes with a single connection. Still, through our research, we have successfully built an arbitrary shape structure with a single honeycomb and with a robot. Moreover, the cubic core we developed from the origami core is superior to the honeycomb core, which has become a multi-trillion-dollar industry owing to its maximum bending stiffness per weight.

By the way, in addition to the above scientific and technical aspects, origami engineering also has an artistic aspect. I will touch on the art of the fan, which is positioned as a bellows fold with bamboo bones inserted into the paper. The fan is a fusion of art and function, which is the starting point of Japanese manufacturing, and it was born in Japan over 1200 years ago. It is not well known that the fan was elevated into a three-dimensional art unlike any other in the world through the hard work of great artists such as Hokusai Katsushika and Sotatsu Tawaraya, among others, in the Edo period, because many fans have been reshaped as flat paintings, with the bones pulled out and edges cut off to remove time-related deterioration. The fan was originally aimed at storytelling because it can show the intersection of viewpoints and the direction in which the characters move using the crease. But this was difficult because the picture on the fan face appears different from the original flat painting, owing to the effects of folding and distortion depending on the fan shape. Here, a mathematical method is developed so that we can attempt to create a new art that links the fan with haiku and waka by combining mathematics and image-processing technology.

Tamara Kolda, MathSci.ai, USA
Randomized Algorithms for Tensor Decomposition
Abstract:
Tensors are ubiquitous in modern-day computational and data sciences. Tensor decomposition is a technique for breaking down a tensor into simpler components, akin to matrix factorization. Applications of tensor decomposition are ubiquitous in machine learning, signal processing, chemometrics, neuroscience, quantum computing, financial analysis, social science, business market analysis, image processing, and much more. Tensors are growing ever larger, motivating a need for robust and efficient computational methods that can handle massive tensor datasets. We discuss several examples of randomized algorithms for tensor decomposition. Tensor decomposition problems have special structure that yields major advantages in the application of randomization. We consider random methods such as the randomized range finder for Tucker decomposition, matrix sketching using leverage scores for canonical polyadic (CP) decomposition, and biased sampling for stochastic gradient descent for generalized CP (GCP). We illustrate the effectiveness of randomized algorithms for tensor decomposition using real-world datasets.
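As an added illustration of one ingredient named above, the randomized range finder builds an orthonormal basis for the approximate range of a matrix from a few random products; in randomized Tucker-type algorithms it would be applied to each mode unfolding of the tensor. A minimal matrix-level sketch (sizes and rank are illustrative assumptions):

```python
import numpy as np

def randomized_range_finder(A, rank, oversample=10, rng=None):
    """Return Q with orthonormal columns approximately spanning range(A)."""
    rng = np.random.default_rng() if rng is None else rng
    omega = rng.standard_normal((A.shape[1], rank + oversample))  # random test matrix
    Y = A @ omega                                                 # sample the range
    Q, _ = np.linalg.qr(Y)                                        # orthonormalize
    return Q

rng = np.random.default_rng(0)
# Illustrative low-rank-plus-noise matrix (e.g., a mode unfolding of a tensor)
A = rng.standard_normal((2000, 30)) @ rng.standard_normal((30, 500))
A += 0.01 * rng.standard_normal(A.shape)
Q = randomized_range_finder(A, rank=30, rng=rng)
print("relative residual:", np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A))
```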

Youssef Marzouk, MIT, USA
On Low-dimensional Structure in Transport and Inference
Abstract:
Transportation of measure underlies many powerful tools for Bayesian inference, density estimation, and generative modeling. The central idea is to deterministically couple a probability measure of interest with a tractable “reference” measure (e.g., a standard Gaussian). Such couplings are induced by transport maps, and enable direct simulation from the desired measure simply by evaluating the transport map at samples from the reference.
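In symbols (added for the reader), the coupling is expressed through the pushforward condition

\[
T_{\sharp}\, \rho \;=\; \pi, \qquad \text{i.e.} \qquad X \sim \rho \;\Longrightarrow\; T(X) \sim \pi,
\]

where \rho is the tractable reference measure and \pi the measure of interest, so that independent samples of \pi are obtained by drawing X from \rho and evaluating T(X).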

While an enormous variety of representations and constructive algorithms for transport maps have been proposed in recent years, it is inevitably advantageous to exploit the potential for low-dimensional structure in the associated probability measures. I will discuss two such notions of low-dimensional structure, and their interplay with transport-driven methods for sampling and inference. The first seeks to approximate a high-dimensional target measure as a low-dimensional update of a dominating reference measure. The second is low-rank conditional structure, where the goal is to replace conditioning variables with low-dimensional projections or summaries. In both cases, under appropriate assumptions on the reference or target measures, one can derive gradient-based upper bounds on the associated approximation error and minimize these bounds to identify good subspaces for approximation. The associated subspaces then dictate specific structural ansatzes for transport maps that represent the target of interest.

I will showcase several algorithmic instantiations of this idea, focusing on Bayesian inverse problems, data assimilation, and simulation-based inference.

Alicia Dickenstein, University of Buenos Aires, Argentina
Algebraic Geometry and Systems Biology
Abstract:
In recent years, methods and concepts from algebraic geometry, particularly those from computational and real algebraic geometry, have been used in many applied domains. I will review applications to molecular biology, which aim to analyze standard models in systems biology to predict dynamic behavior across parameter space without the need for simulations. These applications have also given rise to new challenges in the field of real algebraic geometry.

Lior Horesh, IBM Research, USA
Should we Derive or Let the Data Drive? Symbiotizing Data-driven Learning and Knowledge-based Reasoning to Accelerate Symbolic Discovery
Abstract:
The abstraction of the behavior of a system or a phenomenon into a consistent mathematical model is instrumental for a variety of applications in science and engineering. In the context of scientific discovery, a fundamental problem is to explain natural phenomena in a manner consistent with both (noisy) experimental data, and a body of (possibly inexact and incomplete) background knowledge about the laws of the universe.

Historically, models were manually derived in a first-principles deductive fashion. The first-principles approach often offers the derivation of interpretable symbolic models of remarkable levels of universality while being substantiated by little data. Nonetheless, derivation of such models is time-consuming and relies heavily upon domain expertise.
Conversely, with the rising pervasiveness of statistical AI and data-driven approaches, automated, rapid construction and deployment of models has become a reality. Many data-driven modeling techniques demonstrate remarkable scalability due to their reliance upon predetermined, exploitable model-form (functional form) structures. Such structures entail non-interpretable models, demand Big Data for training, and provide limited predictive power for out-of-set instances.

In this talk, we will review some of the recent transformations in the field, and the ongoing attempts to bridge the divide between statistical AI and symbolic AI. We will begin by discussing algorithms that can search for free-form symbolic models, where neither the structure nor the set of operator primitives is predetermined. We will proceed in reviewing innovations in the field of automated theorem proving (ATP) machinery, and discuss how ATPs can be harnessed to certify whether a candidate hypothesis model is conforming with background theory. Lastly, we shall discuss efforts to consistently unify the two approaches.

These endeavors will promote the conceptualization of AI algorithms capable of discovering principled, universal, and meaningful symbolic models using small data and incomplete background theory. With some optimism, some of these discoveries can unveil to us the secrets of the universe.

Kavita Ramanan, Brown University, USA
Characterizing Rare Events in Interacting Particle Systems
Abstract:
Interacting particle systems consist of collections of stochastically evolving particles indexed by the vertices of a graph, where each particle’s state depends directly only on the states of neighboring vertices in the graph. Such systems model a wide range of physical phenomena including magnetism, the spread of diseases and information, neuronal spiking and opinion dynamics. Of crucial interest in these systems is the study of rare events, or large deviations from typical behavior. While classical work has focused on the case when the underlying graph is dense, where mean-field theory is applicable, most real-world networks are sparse. We survey what is rigorously known about large deviations behavior in such systems, with a focus on recent progress in the setting of (uniformly) sparse graphs.

Michele Benzi, Scuola Normale Superiore, Italy
Matrix Functions and the Analysis of Complex Networks
Abstract:
In this talk I will review the recent use of functions of matrices in the analysis of graphs and networks, with special focus on centrality and communicability measures and diffusion processes, both local and nonlocal. These techniques are being applied in a variety of applications, ranging from social network analysis to chemical physics and the neurosciences.

Methods for both undirected and directed networks will be surveyed, as well as dynamic (temporal) networks. Computational issues will also be addressed.
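For example (an added sketch, not taken from the talk), two standard matrix-function measures for an undirected graph with adjacency matrix A are the subgraph centralities diag(e^A) and the communicabilities (e^A)_{ij}; a minimal computation:

```python
import numpy as np
from scipy.linalg import expm

# Adjacency matrix of a small undirected example graph (assumed for illustration)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

E = expm(A)                       # matrix exponential, E = sum_k A^k / k!
subgraph_centrality = np.diag(E)  # weighted count of closed walks at each node
communicability = E               # E[i, j] weights all walks between nodes i and j

print("subgraph centralities:", subgraph_centrality)
print("communicability between nodes 0 and 3:", communicability[0, 3])
```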

Cynthia Dwork, Harvard University, USA
Measuring Our Chances: Risk Prediction in This World and its Betters
Abstract:
Prediction algorithms score individuals, assigning a number between zero and one that is often interpreted as an individual probability: a 0.7 “chance” that this child is in danger in the home; an 80% “probability” that this woman will succeed if hired; a 1/3 “likelihood” that they will graduate within 4 years of admission. But what do words like “chance,” “probability,” and “likelihood” actually mean for a non-repeatable activity like going to college? This is a deep and unresolved problem in the philosophy of probability. Without a compelling mathematical definition we cannot specify what an (imagined) perfect risk prediction algorithm should produce, nor even how an existing algorithm should be evaluated. Undaunted, AI and machine learned algorithms churn these numbers out in droves, sometimes with life-altering consequences.

An explosion of recent research deploys insights from the theory of pseudo-randomness – objects that “look random” but in fact have structure – to yield a tantalizing answer to the evaluation problem, together with a supporting algorithmic framework with roots in the theory of algorithmic fairness.

We can aim even higher. Both (1) our qualifications, health, and skills, which form the inputs to a prediction algorithm, and (2) our chances of future success, which are the desired outputs from the ideal risk prediction algorithm, are products of our interactions with the real world. But the real world is systematically inequitable. How, and when, can we hope to approximate probabilities not in this world, but in a better world, one for which, unfortunately, we have no data at all? Surprisingly, this novel question is inextricably bound with the very existence of nondeterminism.

Andrew M. Stuart, Caltech, USA
The Ensemble Kalman Filter
Abstract:
Ensemble Kalman filters constitute a methodology for incorporating noisy data into complex dynamical models to enhance predictive capability. They are widely adopted in the geophysical sciences, underpinning weather forecasting for example, and are starting to be used throughout the sciences and engineering; furthermore, they have been adapted to function as a general-purpose tool for parametric inference. The strength of these methods stems from their ability to operate using complex models as a black box, together with their natural adaptation to high performance computers. This talk describes recent theoretical advances which elucidate, for the first time, conditions under which this widely adopted methodology provides accurate model predictions and uncertainties.

The analysis is developed for the mean field limit of the ensemble Kalman filter. The filter is rewritten in terms of maps on probability measures. These maps are shown to be locally Lipschitz in an appropriate weighted total variation metric. Using these stability estimates it may be shown that, if the true filtering distribution is close to Gaussian after appropriate lifting to the joint space of state and data, then it is well approximated by the ensemble Kalman filter.
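To fix ideas (an added sketch of one analysis step of the stochastic, perturbed-observation variant; the linear observation operator, noise level, and ensemble size are assumptions, and this is not the mean-field formulation analyzed in the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, N = 5, 3, 100                       # state dim, obs dim, ensemble size (assumed)
H = rng.standard_normal((k, d))           # linear observation operator (assumed)
R = 0.1 * np.eye(k)                       # observation noise covariance
X = rng.standard_normal((d, N))           # forecast ensemble (columns are members)
y = rng.standard_normal(k)                # observed data

Xm = X.mean(axis=1, keepdims=True)
Xp = X - Xm                               # ensemble anomalies
C = Xp @ Xp.T / (N - 1)                   # empirical forecast covariance
K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)   # Kalman gain built from the ensemble

# Perturbed observations: each member assimilates the data with its own noise draw
Y = y[:, None] + rng.multivariate_normal(np.zeros(k), R, size=N).T
X_analysis = X + K @ (Y - H @ X)

print("analysis ensemble mean:", X_analysis.mean(axis=1))
```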

Martin Burger, DESY / University of Hamburg, Germany
The Mathematics of Image Reconstruction: the Dialectic of Modelling and Learning
Abstract:
In this talk we will discuss some current and future challenges in high-dimensional image reconstruction, which is based on the solution of large-scale inverse problems involving various uncertainties. While classical methods were purely based on physical models for forward operators and regularizations, modern machine learning techniques create the antithesis of data-driven approaches. We will discuss some pitfalls that machine learning can encounter in inverse problems and discuss opportunities for the synthesis of model- and data-driven approaches.

Ingrid Daubechies, Duke University, USA
Old-fashioned Machine Learning: Using Diffusion Methods to Learn Underlying Structure
Abstract:
Many datasets consist of complex items that can be reasonably surmised to lie on a manifold of much lower dimension than the number of parameters or coordinates with which the individual items are acquired. Manifold diffusion is an established method, used successfully to parametrize such datasets much more succinctly. The talk describes an enhancement of this method: when each individual item is itself a complex object, as is the case in many applications, one can model the collection as a fiber bundle, and build a fiber bundle diffusion operator from which one can gradually learn properties of the underlying base manifold. This will be illustrated with applications to morphological evolutionary studies in biology.

Xiaoyun Wang, Tsinghua University, China
Lattice-based Cryptography: From Theory to Practice
Abstract:
Although most current public-key cryptosystems are vulnerable to attacks by future quantum computers, post-quantum cryptography (PQC), which resists quantum computing, has made surprising progress in the past 30 years. Among the post-quantum cryptographic families, lattice-based cryptography is popularly regarded as an economical and secure solution; its security relies on the hardness of computational mathematical problems in high-dimensional lattice theory. In this talk, I will recap the mathematical background of lattice-based cryptography. Then I will introduce recent progress on practical designs of lattice-based cryptosystems, and also take a quick look at an amazing area called fully homomorphic encryption (FHE), which has interesting applications in privacy-preserving computing, federated learning, etc.

Francis Bach, Inria – Ecole Normale Supérieure, France
Sums of Squares: from Algebra to Analysis
Abstract:
The representation of non-negative functions as sums of squares has become an important tool in many modeling and optimization tasks. Traditionally applied to polynomial functions, it requires rich tools from algebraic geometry that led to many developments in the last twenty years. In this lecture, I will look at this problem from a functional analysis point of view, leading to new applications and new results on the performance of sum-of-squares optimization.
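As a small worked illustration of this representation (added here; the example is generic and not taken from the lecture), a polynomial p is a sum of squares exactly when it admits a Gram-matrix factorization

\[
p(x) \;=\; z(x)^{\top} Q\, z(x), \qquad Q \succeq 0,
\]

where z(x) is a vector of monomials; for instance, x^4 - 2x^2 + 1 = (x^2 - 1)^2 is certified nonnegative by the single square on the right-hand side. Finding such a positive semidefinite Q is a semidefinite feasibility problem, which is what makes sum-of-squares representations computationally usable.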

Monique Laurent, Centrum Wiskunde & Informatica (CWI) Amsterdam / Tilburg University, The Netherlands
Performance Analysis of Sums of Squares Approximations in Polynomial Optimization
Abstract:
Polynomial optimization is a recent field, which emerged in the last two decades, starting with pioneering works, in particular, by J.-B. Lasserre and P. Parrilo. It deals with optimization problems involving polynomial objective and constraints. These are computationally hard problems, in general nonlinear and nonconvex, ubiquitous in diverse fields and application areas such as discrete optimization, operations research, discrete geometry, theoretical computer science, and control theory.
The key idea is to exploit algebraic and geometric properties of polynomials and develop dedicated solution methods that are based, on the one hand, on real algebraic results about positive polynomials, and, on the other hand, on functional analytic results about moments of positive measures. These techniques make it possible to design hierarchies of convex relaxations – known as sums of squares hierarchies – that give bounds on the global optimum of the original problem. The underlying computational paradigm is semidefinite optimization, which makes it possible to model sums of squares of polynomials (used as a proxy to certify polynomial positivity).

A crucial feature is that, under some mild compactness assumption, these hierarchies converge asymptotically to the global optimum. A natural question is to understand how the quality of the bounds depends on the level of the relaxation, which is governed by the maximum degree of the sums of squares it involves.
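Schematically (an added reminder, stated here in the standard Putinar-type form), for minimizing a polynomial f over the set defined by g_1 \ge 0, \dots, g_m \ge 0, the level-r bound of the hierarchy is

\[
f_{(r)} \;=\; \sup\Bigl\{\lambda \;:\; f - \lambda \;=\; \sigma_0 + \sum_{i=1}^{m} \sigma_i\, g_i, \ \ \sigma_j \text{ sums of squares}, \ \deg(\sigma_0),\, \deg(\sigma_i g_i) \le 2r \Bigr\},
\]

which is computable by semidefinite optimization, is nondecreasing in r, and lower-bounds the global minimum; the quantitative analysis referred to above bounds the gap between f_{(r)} and the global minimum as a function of r.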

In this lecture we will discuss these hierarchies, with a special focus on the above question regarding their quantitative analysis. We will present recent state-of-the-art results for polynomial optimization over various classes of semi-algebraic sets and the main techniques used to obtain these results, which include Fourier analysis, reproducing kernels, and extremal roots of orthogonal polynomials.

Endre Süli, University of Oxford, United Kingdom
Kinetic Models of Dilute Polymeric Fluids: Analysis and Approximation
Abstract:
Since the pioneering contributions of Werner Kuhn, Hans Kramers and other scientists working at the interface of polymer chemistry and statistical physics during the first half of the twentieth century, kinetic models have been widely and successfully used to describe the motion of polymeric fluids.

The aim of this talk is to review recent results concerning the mathematical analysis of these models. We focus in particular on questions of existence of large-data global weak solutions to kinetic models of dilute polymeric fluids — a system of nonlinear partial differential equations involving the compressible or incompressible Navier–Stokes equations, modelling the evolution of the velocity field and the pressure, coupled to a Fokker–Planck equation satisfied by the probability density function for the random configuration vectors associated with the directions of the backbones of noninteracting polymer molecules suspended in a Newtonian fluid. We shall also discuss the convergence analysis of finite element methods for the numerical solution of this coupled system of partial differential equations and will highlight some nontrivial open problems.

Mourad Bellassoued, University of Tunis El Manar, Tunisia
Recovery of a Metric Tensor from the Partial Hyperbolic Dirichlet to Neumann Map
Abstract:
In this talk we consider the inverse problem of determining the metric tensor, on a compact Riemannian manifold, in the wave equation with Dirichlet data from measured Neumann sub-boundary observations. This information is encoded in the dynamical partial Dirichlet-to-Neumann map associated to the wave equation. We prove in dimension n ≥ 2 that the knowledge of the partial Dirichlet-to-Neumann map for the wave equation uniquely determines the metric tensor, and we establish logarithmic-type stability.
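In symbols (a schematic definition added for the reader), for a compact Riemannian manifold (M, g) with boundary, the hyperbolic Dirichlet-to-Neumann map sends Dirichlet boundary data f to the normal derivative of the corresponding wave,

\[
\Lambda_g f \;=\; \partial_\nu u \big|_{\partial M \times (0,T)}, \qquad \text{where } \partial_t^2 u - \Delta_g u = 0 \text{ in } M \times (0,T), \quad u|_{t=0} = \partial_t u|_{t=0} = 0, \quad u|_{\partial M \times (0,T)} = f,
\]

and the partial map discussed in the talk records \partial_\nu u only on a sub-boundary.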

Antonin Chambolle, CEREMADE, CNRS / Université Paris-Dauphine, PSL University, France
Recent Progress in the Variational Approach to Fracture
Abstract:
The talk will introduce the variational approach to fracture, proposed by Francfort and Marigo in the late nineties as a natural extension of Griffith’s classical theory, with the ability to predict crack paths in brittle materials (with some numerical success). This theory is based on the global minimization of a linearized elasticity energy and a dissipation term proportional to the surface of a new crack. A continuous limit of successive minimizations is expected to result in an evolving crack which generalizes in a natural way the classical theory from the 1920s. However, this approach raises huge difficulties, such as, simply, how to define an energy space for such a variational problem, or the existence of minimizers. The talk will focus on some of the main mathematical tools developed to tackle this problem in the past ten years, and will describe a few recent results (and some open questions). Based on joint works with Filippo Cagnetti, Sergio Conti, Vito Crismale, Gilles Francfort, Flaviana Iurlano, Lucia Scardia, and on developments by many others.

Pascal Van Hentenryck, Georgia Institute of Technology, USA
The Fusion of Machine Learning and Optimization
Abstract:
The fusion of machine learning and optimization has the potential to achieve breakthroughs in decision making that the two technologies cannot accomplish independently. This talk reviews a number of research avenues in this direction, including the concept of optimization proxies and end-to-end learning. Principled combinations of machine learning and optimization are illustrated on case studies in energy systems, mobility, and supply chains. Preliminary results show how this fusion makes it possible to perform real-time risk assessment in energy systems, find near-optimal solutions quickly in supply chains, and implement model-predictive control for large-scale mobility systems.

José A. Carrillo, University of Oxford, United Kingdom
Aggregation-Diffusion Equations for Collective Behaviour in the Sciences
Abstract:
Many phenomena in the life sciences, ranging from the microscopic to macroscopic level, exhibit surprisingly similar structures. Behaviour at the microscopic level, including ion channel transport, chemotaxis, and angiogenesis, and behaviour at the macroscopic level, including herding of animal populations, motion of human crowds, and bacteria orientation, are both largely driven by long-range attractive forces, due to electrical, chemical or social interactions, and short-range repulsion, due to dissipation or finite size effects.

Various modelling approaches at the agent-based level, from cellular automata to Brownian particles, have been used to describe these phenomena. An alternative way to pass from microscopic models to continuum descriptions requires the analysis of the mean-field limit, as the number of agents becomes large. All these approaches lead to a continuum kinematic equation for the evolution of the density of individuals known as the aggregation-diffusion equation. This equation models the evolution of the density of individuals of a population, that move driven by the balances of forces: on one hand, the diffusive term models diffusion of the population, where individuals escape high concentration of individuals, and on the other hand, the aggregation forces due to the drifts modelling attraction/repulsion at a distance.

The aggregation-diffusion equation can also be understood as the steepest-descent curve (gradient flow) of free energies coming from statistical physics. Significant effort has been devoted to the subtle mechanism of balance between aggregation and diffusion. In some extreme cases, the minimisation of the free energy leads to partial concentration of the mass.
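In its commonly written form (added here for concreteness; the exponent and the interaction kernel are generic placeholders), the equation for the density \rho(x,t) reads

\[
\partial_t \rho \;=\; \Delta \rho^{m} \;+\; \nabla \cdot \bigl(\rho\, \nabla (W * \rho)\bigr), \qquad m \ge 1,
\]

with interaction potential W, and it is formally the gradient flow of the free energy

\[
\mathcal{F}[\rho] \;=\; \frac{1}{m-1} \int \rho^{m}\, dx \;+\; \frac{1}{2} \iint W(x-y)\, \rho(x)\, \rho(y)\, dx\, dy,
\]

where the first term is replaced by the entropy \int \rho \log \rho\, dx in the linear-diffusion case m = 1.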

Aggregation-diffusion equations are present in a wealth of applications across science and engineering. Of particular relevance is mathematical biology, with an emphasis on cell population models. The aggregation terms, either in scalar or in system form, are often used to model the motion of cells as they concentrate or separate from a target or interact through chemical cues. The diffusion effects described above are consistent with population pressure effects, whereby groups of cells naturally spread away from areas of high concentration.

This talk will give an overview of the state of the art in the understanding of aggregation-diffusion equations, and their applications in mathematical biology.

Mouhamed Moustapha Fall, African Institute for Mathematical Sciences in Senegal, Senegal
On Some Overdetermined Boundary Value Problems
Abstract:
Second order elliptic equations on a domain in which both Dirichlet and Neumann conditions are prescribed at the boundary constitute a class of overdetermined problems. To deal with these problems, we are led to find two unknowns: the solution and the domain. They appear in many physical questions, such as fluid and solid mechanics. In addition, they appear when minimizing domain-dependent energy functionals such as Sobolev norms and eigenvalues. While a lot of progress is being made, there still remain challenging open problems, e.g., the Schiffer conjecture, which states that if a nontrivial eigenfunction of the Neumann eigenvalue problem on a bounded domain has a constant Dirichlet boundary condition, then the domain must be a ball.
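In formulas (added for clarity), the Schiffer conjecture concerns the overdetermined Neumann eigenvalue problem: if a bounded domain \Omega admits a nonconstant solution of

\[
\Delta u + \mu\, u = 0 \ \text{in } \Omega, \qquad \partial_\nu u = 0 \ \text{on } \partial\Omega, \qquad u = c \ \text{on } \partial\Omega,
\]

for some constants \mu > 0 and c, then \Omega is conjectured to be a ball; it is the simultaneous Dirichlet and Neumann conditions that make the problem overdetermined.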

In this talk, we provide an overview of recent results on overdetermined problems and discuss new results on the Schiffer problem on some manifolds.

Lei Guo, Chinese Academy of Sciences, China
Learning and Feedback in the Control of Uncertain Dynamical Systems
Abstract:
Learning and feedback are complementary mechanisms in dealing with uncertain dynamical systems. Learning plays a basic role in the design of control systems, and feedback makes it possible for a control system to perform well in an open environment with various uncertainties. In this lecture, some basic results will be presented when online learning is combined with feedback in the control of uncertain dynamical systems. We will first consider the celebrated self-tuning regulators (STR) in adaptive control of uncertain linear stochastic systems, where the STR is designed by combining the recursive least-squares estimator with the minimum variance controller. The global convergence of this natural and seemingly simple adaptive system had actually been a longstanding open problem in control theory. Next, we will discuss the rationale and foundation behind the widespread successful industrial applications of the well-known proportional-integral-derivative (PID) control for nonlinear uncertain systems and provide a new and powerful online learning-based design method. Finally, we will present some basic results on more fundamental problems concerning the maximum capability and limitations of the feedback mechanism in dealing with uncertain nonlinear systems, where the feedback mechanism is defined as the class of all possible feedback laws. The results presented in this lecture may offer useful implications for the design and analysis of more complicated control systems where AI is combined with online feedback control.
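For reference (added here, not part of the abstract), the PID law mentioned above feeds the tracking error e(t) back through proportional, integral, and derivative terms,

\[
u(t) \;=\; k_p\, e(t) \;+\; k_i \int_0^{t} e(s)\, ds \;+\; k_d\, \dot e(t),
\]

and a central design question is how to choose the gains (k_p, k_i, k_d) so that the closed loop behaves well for a whole class of uncertain nonlinear systems.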

SIAM Peter Henrici Lecture

Douglas N. Arnold, University of Minnesota, USA
What the @#$! is Cohomology Doing in Numerical Analysis?!
Abstract:
As the name suggests, numerical analysis — the study of computational algorithms to solve mathematical problems, such as differential equations — has traditionally been viewed mostly as a branch of analysis. Geometry, topology, and algebra played little role. Indeed, often departments created special degree requirements so that computational and applied math students could avoid studying these subjects altogether. However, in the last decade or so, things have changed. The recent numerical analysis literature is replete with papers using concepts that are new to the subject, say, symplectic differential forms or de Rham cohomology or Hodge theory. In this talk we will discuss some examples of this phenomenon, especially the Finite Element Exterior Calculus. We shall see why these new ideas arise naturally in numerical analysis and how they contribute.
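For orientation (an added reminder of the standard object, not a quotation from the lecture), the de Rham complex on a three-dimensional domain \Omega is the sequence

\[
H^1(\Omega) \xrightarrow{\ \operatorname{grad}\ } H(\operatorname{curl};\Omega) \xrightarrow{\ \operatorname{curl}\ } H(\operatorname{div};\Omega) \xrightarrow{\ \operatorname{div}\ } L^2(\Omega),
\]

whose cohomology reflects the topology of \Omega; finite element exterior calculus constructs discrete subspaces that reproduce this structure, which is the mechanism behind the stability of the resulting discretizations.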

SIAM John von Neumann Lecture

Yousef Saad, University of Minnesota, USA
Iterative Linear Algebra for Large Scale Computations
Abstract:
The field of what may be termed “iterative linear algebra” blends a fascinating combination of mathematical analysis tools, clever algorithm development, approximation techniques, as well as effective implementations. For example, approximation theory plays a major role in developing and analyzing iterative algorithms, as do advanced linear algebra, error analysis, and high-performance computing. In addition, research in this area is deemed to be most successful if it winds up helping to solve challenging problems in real-life applications, whether in fluid mechanics (large sparse linear systems), or in electronic structure calculations (large eigenvalue problems).

This talk will provide a perspective on iterative linear algebra with emphasis on Krylov subspace methods. Part of the presentation will also examine more recent trends and emerging demands in an effort to anticipate promising new directions for iterative linear algebra.

AWM-SIAM Sonia Kovalevsky Lecture

Annalisa Buffa, École polytechnique fédérale de Lausanne, Switzerland
Simulation of PDEs on Geometries Obtained via Boolean Operations
Abstract:
Geometric design uses spline representations of surfaces as the main building block, but it involves many other ingredients. Geometries are described as a combination of primitives and spline/NURBS boundary representations which are combined via boolean operations, such as intersections and unions. We aim to develop numerical methods that are robustly able to tackle the simulation of PDEs over such “unstructured” geometric representations. For example, dealing with trimming, which corresponds to an intersection operation in geometric modelling, falls into the category of immersed/unfitted discretizations (e.g., the Finite Cell Method, cutFEM, immersogeometric analysis, the shifted boundary method, aggregated unfitted FEM), where computational meshes do not align with geometric boundaries/interfaces.

While geometric modelling is extremely flexible, various issues have to be addressed on the analysis side, such as stability, quadrature, imposition of boundary conditions, conditioning of the underlying linear system, etc.

In this talk, we discuss various aspects of this interesting challenge: starting from defeaturing, which means a systematic removal of features from a geometric model, and its implications for accuracy, to stability issues in the presence of slim or small cut elements, to efficient quadrature, to the imposition of boundary conditions, and to the extension of the framework to the assembly of several geometries via union.

Finally, we discuss also how to make these approaches viable in a shape optimization loop, by discussing their use within a reduced-order modelling framework.

I will conclude the talk by showing results and discussing challenges.