Contribution to a conference proceedings / Contribution to a book (FZJ-2020-04080)

Exascale potholes for HPC: Execution performance and variability analysis of the flagship application code HemeLB



2020
IEEE
ISBN: 978-0-7381-1070-7

Proceedings of 2020 IEEE/ACM International Workshop on HPC User Support Tools (HUST) and the Workshop on Programming and Performance Visualization Tools (ProTools)
Workshop on Programming and Performance Visualization Tools, ProTools '20, online, 12 Nov 2020
IEEE, pp. 59-70 (2020) [10.1109/HUSTProtools51951.2020.00014]

Please use a persistent id in citations: doi:10.1109/HUSTProtools51951.2020.00014

Abstract: Performance measurement and analysis of parallel applications is often challenging, despite many excellent commercial and open-source tools being available. Currently envisaged exascale computer systems exacerbate matters by requiring extremely high scalability to effectively exploit millions of processor cores. Unfortunately, significant application execution performance variability arising from increasingly complex interactions between hardware and system software makes this situation much more difficult for application developers and performance analysts alike. This work considers the performance assessment of the HemeLB exascale flagship application code from the EU HPC Centre of Excellence (CoE) for Computational Biomedicine (CompBioMed) running on the SuperMUC-NG Tier-0 leadership system, using the methodology of the Performance Optimisation and Productivity (POP) CoE. Although 80% scaling efficiency is maintained to over 100,000 MPI processes, disappointing initial performance with more processes, and correspondingly poor strong scaling, was found in multiple runs to originate from the same few compute nodes, which later system diagnostic checks revealed had faulty DIMMs and lacklustre performance. Excluding these compute nodes from subsequent runs improved the performance of executions with over 300,000 MPI processes by a factor of five, resulting in a 190x speed-up compared to 864 MPI processes. While communication efficiency remains very good up to the largest scale, parallel efficiency is primarily limited by load balance, found to be largely due to core-to-core and run-to-run variability from excessive stalls for memory accesses, which affect many HPC systems with Intel Xeon Scalable processors. The POP methodology for this performance diagnosis is demonstrated via a detailed exposition with widely deployed 'standard' measurement and analysis tools.
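
The abstract invokes the POP CoE efficiency metrics without defining them. As a point of reference only, the following is a minimal sketch using the standard published POP definitions (these formulas are not reproduced from the paper itself), with a numeric illustration restricted to figures quoted in the abstract:

  % Standard POP hierarchy: Parallel Efficiency (PE) factors into Load Balance (LB)
  % and Communication Efficiency (CommE), computed from each process i's useful
  % computation time and total time.
  \[
    \mathrm{PE} = \mathrm{LB} \times \mathrm{CommE}, \qquad
    \mathrm{LB} = \frac{\overline{t_{\mathrm{useful}}}}{\max_i t_{\mathrm{useful},i}}, \qquad
    \mathrm{CommE} = \frac{\max_i t_{\mathrm{useful},i}}{\max_i t_{\mathrm{total},i}}
  \]
  % Strong-scaling speed-up and efficiency relative to a baseline of p_0 processes:
  \[
    S(p) = \frac{T(p_0)}{T(p)}, \qquad E(p) = S(p)\,\frac{p_0}{p}
  \]
  % Example using only figures quoted above: the reported 190x speed-up over the
  % p_0 = 864 baseline corresponds to E(p) = 190 * 864 / p for a run on p processes.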

Keyword(s): E-Government


Contributing Institute(s):
  1. Jülich Supercomputing Centre (JSC)
Research Program(s):
  1. 511 - Computational Science and Mathematical Methods (POF3-511)
  2. POP2 - Performance Optimisation and Productivity 2 (824080)
  3. CompBioMed - A Centre of Excellence in Computational Biomedicine (675451)
  4. ATMLPP - ATML Parallel Performance (ATMLPP)

Appears in the scientific report 2020
Database coverage:
OpenAccess

The record appears in these collections:
Document types > Events > Contributions to a conference proceedings
Document types > Books > Contribution to a book
Workflow collections > Public records
Institute Collections > JSC
Publications database
Open Access

 Record created 2020-10-19, last modified 2025-03-14