Registered Data

[00421] When random comes to the rescue of numerical computation

  • Session Time & Room : 4D (Aug.24, 15:30-17:10) @E506
  • Type : Proposal of Minisymposium
  • Abstract : The need for efficient AI and deep learning applications has prompted a new way of performing floating-point computations based on low-precision representation formats and their corresponding hardware support. Among the peculiarities raised is the need for operators, analyses, methodologies, and tools to estimate accuracy requirements, overcome unwanted behaviors such as stagnation (numerical loss during sequences of tiny updates; a short sketch after this session's metadata illustrates the phenomenon), and optimize performance. In this minisymposium, we will focus on a few aspects of randomization in numerical computation, covering its advantages for AI applications, probabilistic error analysis, variants of stochastic rounding modes, and the detection of numerical abnormalities and precision analysis.
  • Organizer(s) : David DEFOUR
  • Classification : 65Cxx, 65Yxx
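
For readers unfamiliar with the stagnation phenomenon mentioned in the session abstract, here is a minimal, self-contained sketch (plain NumPy; the stochastic rounding emulation below is illustrative and not any particular tool's implementation). A float32 accumulator receiving updates smaller than half an ulp never moves under round-to-nearest, while a stochastically rounded accumulator tracks the true sum in expectation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sr_round32(x: float) -> np.float32:
    """Stochastically round the double x to float32.

    Picks one of the two neighbouring float32 values, taking the upper
    one with probability proportional to x's distance from the lower
    one, so the expected result equals x (unbiased rounding).
    """
    lo = np.float32(x)                  # nearest float32 (round-to-nearest)
    if float(lo) == x:
        return lo
    # the float32 neighbour on the other side of x
    other = np.nextafter(lo, np.float32(np.inf if float(lo) < x else -np.inf))
    down, up = sorted((float(lo), float(other)))   # down < x < up
    p = (x - down) / (up - down)                   # probability of rounding up
    return np.float32(up) if rng.random() < p else np.float32(down)

n, tiny = 1_000_000, 5.0e-8     # tiny < ulp(1.0)/2 in float32
acc_rn = np.float32(1.0)        # round-to-nearest accumulator
acc_sr = np.float32(1.0)        # stochastically rounded accumulator
for _ in range(n):
    acc_rn = np.float32(float(acc_rn) + tiny)  # update rounds away: stagnation
    acc_sr = sr_round32(float(acc_sr) + tiny)  # survives in expectation

print("exact sum:", 1.0 + n * tiny)  # 1.05
print("RN       :", acc_rn)          # stuck at 1.0
print("SR       :", acc_sr)          # close to 1.05 on average
```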
  • Minisymposium Program :
    • 00421 (1/1) : 4D @E506 [Chair: David DEFOUR]
      • [02748] The computer arithmetic new deal: AI is pushing the frontier
        • Format : Talk at Waseda University
        • Author(s) :
          • Eric Petit (Intel)
        • Abstract : Recent years have seen a tremendous number of new research contributions to computer arithmetic, exploring lower-precision arithmetic and new rounding modes and leaving the IEEE754 standard far behind. This all comes from the rise of AI workloads as one of the main drivers of data-center software and architecture design. Keeping pace with the incredibly fast-changing usage of computer arithmetic has pushed our algorithms, tools, and capacity to their limits. This is an opportunity to rethink and redesign our approach to floating-point hardware and software, promoting new tools and methodologies and allowing groundbreaking solutions to be promoted to mainstream software and hardware implementations. In this talk I will provide some more context about this change and discuss some of the exciting work I am sharing with collaborators inside and outside Intel.
      • [01590] New stochastic rounding modes for numerical verification
        • Format : Talk at Waseda University
        • Author(s) :
          • Bruno LATHUILIERE (EDF R&D)
          • Nestor Demeure (Data and analytics services group, National Energy Research Scientific Computing Center, Berkeley)
        • Abstract : In the context of industrial code verification, stochastic rounding modes make it possible to estimate the numerical quality of results through multiple independent executions of the software. In some rare cases, however, introducing stochastic rounding leads to a runtime error because the software implicitly relies on the determinism of IEEE floating-point operations. To overcome this problem, we propose new deterministic stochastic rounding modes: these maintain the stochastic properties between different executions of the software while guaranteeing that floating-point operations with the same parameters are deterministic within one execution. Results based on an implementation of the method in the Verrou tool will be presented.
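
The abstract does not spell out how this determinism is obtained; one plausible realization (an illustrative assumption, not necessarily the scheme implemented in Verrou) is to replace the random draw by a hash of a per-execution seed and the operand bit patterns, so that identical operations agree within a run while different runs still sample different roundings:

```python
import hashlib
import struct
import numpy as np

RUN_SEED = b"per-execution seed, changed between runs"

def det_sr_add32(a: float, b: float) -> np.float32:
    """Deterministic stochastic rounding of a + b to float32.

    The rounding draw is a hash of (RUN_SEED, a, b): identical operands
    give identical results within one execution, while changing RUN_SEED
    between executions restores the stochastic behaviour across runs.
    """
    x = a + b                       # computed in double precision
    lo = np.float32(x)
    if float(lo) == x:
        return lo
    other = np.nextafter(lo, np.float32(np.inf if float(lo) < x else -np.inf))
    down, up = sorted((float(lo), float(other)))
    # hash of seed and operand bit patterns -> reproducible uniform draw
    h = hashlib.sha256(RUN_SEED + struct.pack("<dd", a, b)).digest()
    u = int.from_bytes(h[:8], "little") / 2**64
    p = (x - down) / (up - down)    # probability of rounding up
    return np.float32(up) if u < p else np.float32(down)

# same operands within one run -> same result, which is the determinism
# the verified software implicitly relies on
assert det_sr_add32(1.0, 1e-8) == det_sr_add32(1.0, 1e-8)
```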
      • [04711] Stochastic rounding as a model of round-to-nearest
        • Format : Talk at Waseda University
        • Author(s) :
          • Devan Sohier (LI-PaRAD, UVSQ)
        • Abstract : Round-to-nearest (RN), the default rounding mode in the omnipresent IEEE754 standard, is difficult to analyze. Deterministic bounds, as well as their refinements such as interval arithmetic, generally prove overly pessimistic compared to the day-to-day observations of numerical scientists. Stochastic rounding (SR) may be used as a model of RN whose results are easier to analyze and closer to these observations. In this talk, I will present a methodology to analyze the results of an SR simulation of RN. The widely used formula for the number of significant bits, $-\log_2 \frac{\sigma}{\mu}$, can be refined and given a precise statistical grounding when the error has a normal distribution; when no normality assumption is substantiated, other tools based on Bernoulli estimations must be used. Using SR as a model for RN also has limits: SR does not exhibit the stagnation phenomena typical of RN, and SR does not give the same result when the program computes the same operation twice. Finally, I will discuss flaws of varying severity in some software implementations of SR, and present some remarks on possible future implementations of SR, both in software and in hardware.
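
A minimal sketch of the methodology's starting point, under illustrative assumptions (the SR emulation mirrors the one sketched earlier; the data and run count are arbitrary): execute the same computation several times under SR and estimate the number of significant bits of the result from the sample mean and standard deviation via $-\log_2 \frac{\sigma}{\mu}$:

```python
import numpy as np

rng = np.random.default_rng(42)

def sr_round32(x: float) -> np.float32:
    """Stochastic rounding of a double to float32 (unbiased)."""
    lo = np.float32(x)
    if float(lo) == x:
        return lo
    other = np.nextafter(lo, np.float32(np.inf if float(lo) < x else -np.inf))
    down, up = sorted((float(lo), float(other)))
    p = (x - down) / (up - down)
    return np.float32(up) if rng.random() < p else np.float32(down)

def sr_sum(values):
    """Sequential float32 summation with stochastic rounding."""
    acc = np.float32(0.0)
    for v in values:
        acc = sr_round32(float(acc) + v)
    return float(acc)

data = rng.uniform(0.0, 1.0, size=10_000)
samples = np.array([sr_sum(data) for _ in range(30)])  # 30 independent SR runs
mu, sigma = samples.mean(), samples.std(ddof=1)
sig_bits = -np.log2(sigma / abs(mu))                   # significant-bit estimate
print(f"mean = {mu:.6f}, std = {sigma:.2e}, ~{sig_bits:.1f} significant bits")
```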
      • [03452] VPREC to analyze the precision appetites and numerical abnormalities of several proxy applications
        • Format : Talk at Waseda University
        • Author(s) :
          • Roman Iakymchuk (Umeå University)
          • Pablo de Oliveira Castro (Université Paris-Saclay)
        • Abstract : The energy-consumption constraints of large-scale computing encourage scientists to revise not only the architectural design of hardware but also applications, algorithms, and the underlying working/storage precision. We introduce an approach that addresses the issue of sustainable computation from the perspective of computer-arithmetic tools. We employ the variable precision backend (VPREC) to identify the parts of a code that can benefit from smaller floating-point formats. Finally, we show preliminary results on several proxy applications.
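
A rough sketch of the kind of experiment such a backend enables (the emulation below, rounding to a reduced significand width inside double precision, is an illustration in the spirit of VPREC, not its actual implementation): round every intermediate of a kernel to t significand bits and observe how the error grows as bits are removed.

```python
import numpy as np

def vprec_round(x: float, t: int) -> float:
    """Round x to a floating-point value with a t-bit significand.

    Emulates a smaller floating-point format inside double precision by
    rounding the significand to t bits (round-half-to-even via np.rint).
    """
    if x == 0.0 or not np.isfinite(x):
        return x
    m, e = np.frexp(x)                 # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** t
    return float(np.ldexp(np.rint(m * scale) / scale, e))

def dot_reduced(a, b, t):
    """Dot product with every product and partial sum rounded to t bits."""
    acc = 0.0
    for x, y in zip(a, b):
        acc = vprec_round(acc + vprec_round(float(x * y), t), t)
    return acc

rng = np.random.default_rng(1)
a = rng.standard_normal(1000)
b = rng.standard_normal(1000)
exact = float(a @ b)

for t in (53, 24, 11):                 # double, float32, float16 significands
    err = abs(dot_reduced(a, b, t) - exact) / abs(exact)
    print(f"t = {t:2d} significand bits -> relative error {err:.1e}")
```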