001     818244
005     20250317091730.0
024 7 _ |a 10.1145/2938615.2938616
|2 doi
024 7 _ |a 2128/12345
|2 Handle
037 _ _ |a FZJ-2016-04722
100 1 _ |a Brömmel, Dirk
|0 P:(DE-Juel1)143606
|b 0
|u fzj
111 2 _ |a Exascale Applications and Software Conference 2016
|g EASC'16
|c Stockholm
|d 04/26/2016 - 04/29/2016
|w Sweden
245 _ _ |a Extreme-scaling applications en route to exascale
260 _ _ |a New York
|c 2016
|b ACM Press
295 1 0 |a Proceedings of the Exascale Applications and Software Conference 2016
300 _ _ |a 10
336 7 _ |a CONFERENCE_PAPER
|2 ORCID
336 7 _ |a Conference Paper
|0 33
|2 EndNote
336 7 _ |a INPROCEEDINGS
|2 BibTeX
336 7 _ |a conferenceObject
|2 DRIVER
336 7 _ |a Output Types/Conference Paper
|2 DataCite
336 7 _ |a Contribution to a conference proceedings
|b contrib
|m contrib
|0 PUB:(DE-HGF)8
|s 1474526747_22148
|2 PUB:(DE-HGF)
336 7 _ |a Contribution to a book
|0 PUB:(DE-HGF)7
|2 PUB:(DE-HGF)
|m contb
520 _ _ |a Feedback from the previous year's very successful workshop motivated the organisation of a three-day workshop from 1 to 3 February 2016, during which the 28-rack JUQUEEN Blue Gene/Q system with 458 752 cores was reserved for over 50 hours. Eight international code teams were selected to use this opportunity to investigate and improve their application scalability, assisted by staff from JSC Simulation Laboratories and Cross-Sectional Teams. Ultimately seven teams had codes run successfully on the full JUQUEEN system. The strong scalability demonstrated by Code_Saturne and Seven-League Hydro, both using 4 OpenMP threads per MPI process with 16 MPI processes on each compute node for a total of 1 835 008 threads, qualified them for High-Q Club membership. Existing members CIAO and iFETI showed that they had additional solvers which also scaled acceptably. Furthermore, large-scale in-situ interactive visualisation was demonstrated with a CIAO simulation using 458 752 MPI processes on 28 racks, coupled via JUSITU to VisIt. The two adaptive mesh refinement utilities, ICI and p4est, showed that they could scale to 458 752 and 917 504 MPI ranks respectively, but both encountered problems loading large meshes. Parallel file I/O issues also hindered large-scale executions of PFLOTRAN. Poor performance of a NEST import module, which loaded and connected 1.9 TiB of neuron and synapse data, was tracked down to a mismatch between its internal data structures and the HDF5 file objects that prevented use of MPI collective file reading; once rectified, this is expected to enable large-scale neuronal network simulations. Comparative analysis is provided against the 25 codes in the High-Q Club at the start of 2016, which include five codes that qualified from the previous workshop. Despite more mixed results, we learnt more about application file I/O limitations and inefficiencies, which continue to be the primary inhibitor of large-scale simulations.
536 _ _ |a 511 - Computational Science and Mathematical Methods (POF3-511)
|0 G:(DE-HGF)POF3-511
|c POF3-511
|f POF III
|x 0
536 _ _ |0 G:(DE-Juel-1)ATMLPP
|a ATMLPP - ATML Parallel Performance (ATMLPP)
|c ATMLPP
|x 1
536 _ _ |0 G:(DE-Juel-1)ATMLAO
|a ATMLAO - ATML Application Optimization and User Service Tools (ATMLAO)
|c ATMLAO
|x 2
588 _ _ |a Dataset connected to CrossRef Conference
700 1 _ |a Frings, Wolfgang
|0 P:(DE-Juel1)132108
|b 1
|u fzj
700 1 _ |a Wylie, Brian J. N.
|0 P:(DE-Juel1)132302
|b 2
|e Corresponding author
|u fzj
770 _ _ |z 978-1-4503-4122-6
773 _ _ |a 10.1145/2938615.2938616
856 4 _ |y OpenAccess
|u https://juser.fz-juelich.de/record/818244/files/Extreme-scaling%20applications%20en%20route%20to%20exascale.pdf
856 4 _ |y OpenAccess
|x icon
|u https://juser.fz-juelich.de/record/818244/files/Extreme-scaling%20applications%20en%20route%20to%20exascale.gif?subformat=icon
856 4 _ |y OpenAccess
|x icon-1440
|u https://juser.fz-juelich.de/record/818244/files/Extreme-scaling%20applications%20en%20route%20to%20exascale.jpg?subformat=icon-1440
856 4 _ |y OpenAccess
|x icon-180
|u https://juser.fz-juelich.de/record/818244/files/Extreme-scaling%20applications%20en%20route%20to%20exascale.jpg?subformat=icon-180
856 4 _ |y OpenAccess
|x icon-640
|u https://juser.fz-juelich.de/record/818244/files/Extreme-scaling%20applications%20en%20route%20to%20exascale.jpg?subformat=icon-640
856 4 _ |y OpenAccess
|x pdfa
|u https://juser.fz-juelich.de/record/818244/files/Extreme-scaling%20applications%20en%20route%20to%20exascale.pdf?subformat=pdfa
909 C O |o oai:juser.fz-juelich.de:818244
|p openaire
|p open_access
|p driver
|p VDB
|p dnbdelivery
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 0
|6 P:(DE-Juel1)143606
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 1
|6 P:(DE-Juel1)132108
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 2
|6 P:(DE-Juel1)132302
913 1 _ |a DE-HGF
|b Key Technologies
|1 G:(DE-HGF)POF3-510
|0 G:(DE-HGF)POF3-511
|2 G:(DE-HGF)POF3-500
|v Computational Science and Mathematical Methods
|x 0
|4 G:(DE-HGF)POF
|3 G:(DE-HGF)POF3
|l Supercomputing & Big Data
914 1 _ |y 2016
915 _ _ |a OpenAccess
|0 StatID:(DE-HGF)0510
|2 StatID
920 _ _ |l yes
920 1 _ |0 I:(DE-Juel1)JSC-20090406
|k JSC
|l Jülich Supercomputing Centre
|x 0
980 _ _ |a contrib
980 _ _ |a VDB
980 _ _ |a UNRESTRICTED
980 _ _ |a contb
980 _ _ |a I:(DE-Juel1)JSC-20090406
980 1 _ |a FullTexts

