Book/Report/Internal Report FZJ-2017-02087

JUQUEEN Extreme Scaling Workshop 2017


2017

JSC Internal Report, 47 pp.

Please use a persistent id in citations:

Report No.: FZJ-JSC-IB-2017-01

Abstract: From 23 to 25 January 2017, JSC organised its eighth IBM Blue Gene Extreme Scaling Workshop. The entire 28-rack JUQUEEN Blue Gene/Q was reserved for over 50 hours to allow six selected code teams to investigate and improve the scalability of their applications. Ultimately, all six codes managed to run using the full complement of 458,752 cores (most with over 1.8 million threads). MPAS-A (KIT/NCAR) and the pe rigid body physics engine (FAU) were both able to demonstrate strong scalability to 28 racks and thereby become candidates for High-Q Club membership. MPAS-A returned after participating in the 2015 workshop with a higher-resolution dataset and substantially improved file I/O using SIONlib to successfully manage its largest ever global atmospheric simulation. While the hydrology simulator ParFlow (UBonn/FZJ-IGB) had recently demonstrated execution scaling to the full JUQUEEN without file writing enabled, during the workshop the focus was on investigating file I/O performance, which remains a bottleneck. High-Q Club member KKRnano (FZJ-IAS) investigated the scalability of a new solver algorithm developed to handle a million atoms, while the latest version of CPMD was tested with a large 1500-atom system. Both of these quantum materials codes uncovered performance limitations at larger scales. The final code was a prototype multi-compartmental neuronal network simulator, NestMC (JSC SimLab Neuroscience), which compared the scalability of different threading implementations. Detailed reports are provided by each code team, along with additional comparative analysis against the 28 High-Q Club member codes.


Contributing Institute(s):
  1. Jülich Supercomputing Centre (JSC)
Research Program(s):
  1. 511 - Computational Science and Mathematical Methods (POF3-511) (POF3-511)
  2. 513 - Supercomputer Facility (POF3-513) (POF3-513)
  3. ATMLPP - ATML Parallel Performance (ATMLPP) (ATMLPP)
  4. ATMLAO - ATML Application Optimization and User Service Tools (ATMLAO) (ATMLAO)

Appears in the scientific report 2017
Database coverage:
OpenAccess

The record appears in these collections:
Document Types > Reports > Internal Reports
Document Types > Reports > Reports
Document Types > Books > Books
Workflow Collections > Public Entries
Institute Collections > JSC
Publication Database
Open Access

Record created 2017-03-09, last modified 2025-03-17


OpenAccess:
Download fulltext PDF
External link:
Fulltext by OpenAccess repository