Abstract : Despite significant recent technological advances in Artificial Intelligence (AI), AI systems still make errors from time to time and will continue to do so. These errors are usually unexpected; sometimes they are also malicious, with the potential to result in dramatic and tragic consequences.
Handling and understanding the abstract properties of these errors, and developing methods of defence against the various attacks and instabilities affecting modern large-scale, high-dimensional AI operating in a high-dimensional, non-stationary world, requires appropriate mathematical methods and techniques. This mini-symposium focuses on the mathematical machinery relevant to the analysis and verification of AI robustness and stability.
Organizer(s) : Ivan Tyukin, Alexander N. Gorban, Desmond Higham
[03569] Adversarial Ink: Componentwise Backward Error Attacks on Deep Learning
Format : Online Talk on Zoom
Author(s) :
Des Higham (University of Edinburgh)
Lucas Beerens (University of Edinburgh)
Abstract : Deep neural networks are capable of state-of-the-art performance in many classification tasks. However, they are known to be vulnerable to adversarial attacks: small perturbations to the input that lead to a change in classification. We address this issue from the perspective of backward error and condition number, concepts that have proved useful in numerical analysis. To do this, we build on the work of Beuzeville et al. (2021). In particular, we develop a new class of attack algorithms that use componentwise relative perturbations. Such attacks are highly relevant in the case of handwritten documents or printed texts where, for example, the classification of signatures, postcodes, dates or numerical quantities may be altered by changing only the ink consistency and not the background. This makes the perturbed images look natural to the naked eye. Such "adversarial ink" attacks therefore reveal a weakness that can have a serious impact on safety and security. We illustrate the new attacks on real data and contrast them with existing algorithms. We also study the use of a componentwise condition number to quantify vulnerability.
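To make the notion of a componentwise relative perturbation concrete, the sketch below (an illustrative assumption of this write-up, not the backward-error algorithm of the talk; the network net, input x, label and step size eps are placeholders) scales a sign-gradient step by each pixel's intensity, so blank background pixels are never touched and only the "ink" is perturbed.

import torch

def componentwise_attack(net, x, label, eps=0.05):
    """One sign-gradient step whose size is relative to each pixel.

    delta_i = eps * |x_i| * sign(dL/dx_i), so zero-intensity
    (background) pixels are never modified; only the ink changes.
    Illustrative sketch only, not the method developed in the talk.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(net(x), label)
    loss.backward()
    delta = eps * x.abs() * x.grad.sign()
    return (x + delta).detach().clamp(0.0, 1.0)  # assumes pixel range [0, 1]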
[05048] On the extended Smale’s 9th problem, phase transitions in optimisation and the limits of AI
Format : Talk at Waseda University
Author(s) :
Alexander James Bastounis (Leicester University)
Abstract : Instability is the Achilles’ heel of AI, and a paradox: typical training algorithms are unable to recover stable neural networks (NNs). Hence the fundamental question: can one find algorithms that compute stable and accurate NNs? If not, what are the foundational barriers encountered across machine learning? These questions are linked to recent results on the extended Smale’s 9th problem, which uncover new phase transitions in optimisation and yield barriers on the computation of NNs.
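For context (a framing assumed by this write-up from the published work on the extended Smale’s 9th problem, not taken from the abstract), the barriers concern computing approximate minimisers of standard convex problems such as basis pursuit,

\[
  \min_{x \in \mathbb{R}^N} \|x\|_1 \quad \text{subject to} \quad \|Ax - y\|_2 \le \delta ,
\]

where one asks whether an algorithm can return a point within a prescribed accuracy \(\epsilon\) of a true minimiser for every admissible input \((A, y)\); the phase transitions occur in the accuracy \(\epsilon\) below which no such algorithm can exist.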
[04061] Intrinsic dimensionality of real-life datasets in biomedicine and drug discovery
Format : Online Talk on Zoom
Author(s) :
Andrey Zinovyev (Evotec)
Abstract : Intrinsic dimensionality (ID) is the most essential characteristic of a multidimensional data point cloud, as it determines the reliability and stability of all machine learning methods applied to it. We provide a toolbox for estimating ID and benchmark it on several hundred real-life datasets. We demonstrate how data ID affects the results of applying deep classifiers and generative data models in the biomedicine and drug discovery domains, allowing the user to judge the robustness of the resulting predictions.
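As an illustration of one such estimator (a hedged sketch, not necessarily one of the methods in the toolbox benchmarked in the talk), the two-nearest-neighbour (TwoNN) approach infers ID from the ratio of each point's second- to first-neighbour distance.

import numpy as np

def twonn_id(X):
    """Two-NN intrinsic dimension estimate (Facco et al., 2017).

    Uses mu = r2 / r1, the ratio of the distances to each point's
    second and first nearest neighbours; under the TwoNN model these
    ratios follow a Pareto law whose exponent equals the ID.
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)        # exclude self-distances
    D.sort(axis=1)
    mu = D[:, 1] / D[:, 0]             # r2 / r1 for every point
    return len(mu) / np.sum(np.log(mu))  # maximum-likelihood estimate

# Example: 500 points on a random 2-D subspace embedded in 10-D space
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))
print(twonn_id(X))                     # prints a value close to 2

Real biomedical datasets typically call for several complementary estimators of this kind; the sketch above shows only one.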
[05037] Generalised hardness of approximation and hallucinations -- On barriers and paradoxes in AI for image reconstruction
Format : Online Talk on Zoom
Author(s) :
Anders Hansen (University of Cambridge)
Abstract : AI techniques are transforming medical imaging with striking performance. However, these new methods are susceptible to AI-generated hallucinations: the phenomenon whereby realistic-looking artefacts are incorrectly added to the reconstructed image, causing serious concern in the sciences. The basic question is therefore: can hallucinations be prevented? This question turns out to be linked to a newly discovered phenomenon in the foundations of computational mathematics, generalised hardness of approximation, demonstrating methodological barriers in AI.
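As a rough formalisation (an assumption of this write-up paraphrasing the published notion, not text from the abstract), generalised hardness of approximation describes problems with an accuracy threshold:

\[
  \exists\, \epsilon_0 > 0 \ \text{such that computing an } \epsilon\text{-accurate solution is possible for } \epsilon > \epsilon_0 \ \text{but impossible, for any algorithm, when } \epsilon < \epsilon_0 .
\]

In the imaging context this suggests that hallucination-free reconstructions may be computable only down to some intrinsic accuracy level.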
00951 (2/2) : 4C @E818 [Chair: Alexander Bastounis]
[03891] Stochastic Separation Theorems for making AI Safe, Adaptive, and Robust
Format : Talk at Waseda University
Author(s) :
Ivan Y Tyukin (King's College London)
Alexander N Gorban (University of Leicester)
Abstract : In this talk we discuss issues around the stability and robustness of modern AI systems with respect to data and structure perturbations. We show that determining robust generalisation may incur computational costs which are exponential in the dimension of the AI feature space. As a potential way to mitigate this issue, we discuss a set of results, termed stochastic separation theorems, which can be used to efficiently “patch” instances of instability as soon as they are identified.
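A minimal sketch of the kind of one-shot "patch" such results justify (illustrative only: a standard Fisher-discriminant corrector under assumed Gaussian-like data, not the authors' specific construction): in high dimension, a single erroneous sample can typically be separated from the rest of the data by a simple linear functional.

import numpy as np

def fisher_corrector(X, x_err, alpha=0.5):
    """Linear corrector separating one error x_err from the data cloud X.

    After centring and whitening, the separating direction is simply the
    whitened x_err; stochastic separation theorems say this succeeds with
    high probability when the dimension is large.
    """
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    W = np.linalg.cholesky(np.linalg.inv(cov))   # whitening: z = (x - mean) @ W
    z_err = (x_err - mean) @ W
    w = z_err / np.linalg.norm(z_err)            # separating direction
    b = alpha * (z_err @ w)                      # decision threshold

    def flags_error(x):
        # True only for points on x_err's side of the separating hyperplane
        return ((x - mean) @ W) @ w > b
    return flags_error

# Example: 1000 background points in dimension 200, one "error" sample
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 200))
x_err = rng.normal(size=200)
corrector = fisher_corrector(X, x_err)
print(corrector(x_err))                          # True: the error is flagged
print(sum(corrector(x) for x in X))              # typically 0 false alarms

The parameter alpha plays the role of the separability constant appearing in Fisher-separability statements in this literature; the value 0.5 above is simply a convenient choice for the demonstration.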
[05438] Advancements in Autodiff
Format : Talk at Waseda University
Author(s) :
Elizabeth Cristina Ramirez (Columbia University)
Abstract : The reliance of backpropagation and other gradient-based algorithms on derivative calculations is well-known. Automatic differentiation, a powerful computational tool that often remains overlooked, allows for the efficient computation of gradients, surpassing the limitations of numerical differentiation. In this presentation, we aim to provide a concise overview of the inner workings of autodiff, as implemented in frameworks like TensorFlow, PyTorch, and JAX. Moreover, we will shed light on recent developments that enhance stability and expedite convergence.
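As a toy illustration of those inner workings (a hedged, self-contained sketch in plain Python, not the actual implementation in TensorFlow, PyTorch or JAX), reverse-mode autodiff only requires each intermediate value to remember its parents and the local derivatives of the operation that produced it; the chain rule is then applied backwards from the output.

import math

class Var:
    """Minimal reverse-mode automatic differentiation node."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents       # pairs (parent_node, local_gradient)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def sin(self):
        return Var(math.sin(self.value), [(self, math.cos(self.value))])

    def backward(self, seed=1.0):
        # Chain rule: accumulate the adjoint, then push it to the parents
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# d/dx [x*y + sin(x)] at x=2, y=3 equals y + cos(x)
x, y = Var(2.0), Var(3.0)
z = x * y + x.sin()
z.backward()
print(x.grad, y.grad)   # approximately 2.584 (= 3 + cos 2) and 2.0

Real frameworks record the same parent/local-derivative information on a tape and propagate adjoints in a single reverse topological pass rather than by recursion, which avoids revisiting shared subexpressions.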