Efficient Distributed GPU Programming for Exascale
Conference Presentation (After Call) | FZJ-2025-05596
2025
Please use a persistent id in citations: doi:10.5281/ZENODO.17804012
Abstract: Over the past decade, GPUs have become ubiquitous in HPC installations around the world, delivering the majority of the performance of some of the largest supercomputers and steadily increasing the available compute capacity. Finally, four exascale systems have been deployed (Frontier, Aurora, El Capitan, JUPITER), using GPUs as the core computing devices for this era of HPC. To take advantage of these GPU-accelerated systems with tens of thousands of devices, application developers need the proper skills and tools to understand, manage, and optimize distributed GPU applications. In this tutorial, participants learn techniques to efficiently program large-scale multi-GPU systems. Programming multiple GPUs with MPI is explained in detail, and advanced tuning techniques as well as complementary programming models such as NCCL and NVSHMEM are also presented. Analysis tools are shown and used to motivate and implement performance optimizations. The tutorial teaches fundamental concepts that apply to GPU-accelerated systems from any vendor, taking the NVIDIA platform as an example. It combines lectures and hands-on exercises, using the JUPITER system for interactive learning and discovery.
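The record itself contains no code; as a minimal, illustrative sketch of one technique in the scope described above (CUDA-aware MPI between GPUs, where device pointers are passed directly to MPI calls), the example below has each rank select a GPU and exchange a device buffer with its neighbor in a ring. The buffer size, ring pattern, and variable names are assumptions for illustration and require an MPI build with CUDA support.

```c
// Minimal sketch: CUDA-aware MPI ring exchange of GPU-resident buffers.
// Assumes one MPI rank per GPU and an MPI library built with CUDA support.
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Map each rank to a GPU on the node.
    int num_devices = 0;
    cudaGetDeviceCount(&num_devices);
    cudaSetDevice(rank % num_devices);

    // Allocate send/receive buffers directly in GPU memory.
    const int N = 1 << 20;  // illustrative buffer size
    double *d_send, *d_recv;
    cudaMalloc(&d_send, N * sizeof(double));
    cudaMalloc(&d_recv, N * sizeof(double));
    cudaMemset(d_send, 0, N * sizeof(double));

    // Exchange with neighbors in a ring; a CUDA-aware MPI accepts the
    // device pointers directly, avoiding explicit host staging copies.
    int next = (rank + 1) % size;
    int prev = (rank + size - 1) % size;
    MPI_Sendrecv(d_send, N, MPI_DOUBLE, next, 0,
                 d_recv, N, MPI_DOUBLE, prev, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    if (rank == 0) printf("ring exchange of %d doubles per rank completed\n", N);

    cudaFree(d_send);
    cudaFree(d_recv);
    MPI_Finalize();
    return 0;
}
```

Without CUDA-aware MPI, the same exchange would need explicit cudaMemcpy staging through host buffers; removing that staging is one of the optimizations the tutorial targets.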