Modern hardware supports an increasingly wide range of arithmetics, notably low-precision floating-point formats. Lower precision offers significant storage, communication, speed, and energy benefits, which make it very attractive for high-performance computing. However, it also delivers correspondingly lower accuracy. This motivates the development of mixed-precision algorithms, which combine multiple precisions to achieve both high performance and high accuracy. In this seminar, I aim to provide a broad overview of both old and new ideas for computing in mixed precision. I will do so by reviewing the field through three recurrent themes: refinement, modularity, and adaptivity. Refinement is the idea that one can run the computation in low precision and then (inexpensively) refine the result to high accuracy. Modularity allows new mixed-precision variants of existing algorithms to be derived easily, and their stability to be determined. Adaptivity allows application-specific opportunities to be exploited dynamically, by switching the least sensitive parts of the data to low precision.
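As a concrete illustration of the refinement theme, the classical example is mixed-precision iterative refinement for a linear system Ax = b: the solve is performed in low precision, while residuals and updates are accumulated in high precision. The sketch below (in NumPy, with float32 standing in for the low precision and float64 for the high one; the variable names and test matrix are illustrative, not taken from the talk) is a minimal version of this idea:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
# A well-conditioned test matrix, so that refinement converges quickly.
A = rng.standard_normal((n, n)) + n * np.eye(n)
x_true = rng.standard_normal(n)
b = A @ x_true

# Low-precision copy of the matrix used for all the (cheap) solves.
A32 = A.astype(np.float32)

# Initial solve entirely in low precision.
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

for _ in range(5):
    r = b - A @ x                                    # residual in high precision
    d = np.linalg.solve(A32, r.astype(np.float32))   # correction in low precision
    x += d.astype(np.float64)                        # update in high precision

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

After a few iterations the relative error drops to roughly float64 accuracy, even though every solve was done in float32. In a real implementation the low-precision LU factorization would be computed once and its factors reused at each step; the repeated calls to `np.linalg.solve` here are only for brevity.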
Short Bio:
Théo Mary is a CNRS researcher in the computer science laboratory LIP6 at Sorbonne University. His research concerns the design, development, and analysis of high-performance parallel numerical algorithms. His recent work focuses on accelerating linear algebra computations by harnessing numerical approximations, such as low-rank compression, mixed-precision algorithms, and randomization.