000818244 001__ 818244
000818244 005__ 20250317091730.0
000818244 0247_ $$2doi$$a10.1145/2938615.2938616
000818244 0247_ $$2Handle$$a2128/12345
000818244 037__ $$aFZJ-2016-04722
000818244 1001_ $$0P:(DE-Juel1)143606$$aBrömmel, Dirk$$b0$$ufzj
000818244 1112_ $$aExascale Applications and Software Conference 2016$$cStockholm$$d04/26/2016 - 04/29/2016$$gEASC'16$$wSweden
000818244 245__ $$aExtreme-scaling applications en route to exascale
000818244 260__ $$aNew York$$bACM Press$$c2016
000818244 29510 $$aProceedings of the Exascale Applications and Software Conference 2016
000818244 300__ $$a10
000818244 3367_ $$2ORCID$$aCONFERENCE_PAPER
000818244 3367_ $$033$$2EndNote$$aConference Paper
000818244 3367_ $$2BibTeX$$aINPROCEEDINGS
000818244 3367_ $$2DRIVER$$aconferenceObject
000818244 3367_ $$2DataCite$$aOutput Types/Conference Paper
000818244 3367_ $$0PUB:(DE-HGF)8$$2PUB:(DE-HGF)$$aContribution to a conference proceedings$$bcontrib$$mcontrib$$s1474526747_22148
000818244 3367_ $$0PUB:(DE-HGF)7$$2PUB:(DE-HGF)$$aContribution to a book$$mcontb
000818244 520__ $$aFeedback from the previous year's very successful workshop motivated the organisation of a three-day workshop from 1 to 3 February 2016, during which the 28-rack JUQUEEN BlueGene/Q system with 458 752 cores was reserved for over 50 hours. Eight international code teams were selected to use this opportunity to investigate and improve their application scalability, assisted by staff from JSC Simulation Laboratories and Cross-Sectional Teams. Ultimately, seven teams had codes run successfully on the full JUQUEEN system. Strong scalability demonstrated by Code Saturne and Seven-League Hydro, both using 16 MPI processes per compute node with 4 OpenMP threads each for a total of 1 835 008 threads, qualifies them for High-Q Club membership. Existing members CIAO and iFETI were able to show that they had additional solvers which also scaled acceptably. Furthermore, large-scale in-situ interactive visualisation was demonstrated with a CIAO simulation using 458 752 MPI processes running on 28 racks coupled via JUSITU to VisIt. The two adaptive mesh refinement utilities, ICI and p4est, showed that they could scale to run with 458 752 and 917 504 MPI ranks respectively, but both encountered problems loading large meshes. Parallel file I/O issues also hindered large-scale executions of PFLOTRAN. Poor performance of a NEST import module, which loaded and connected 1.9 TiB of neuron and synapse data, was tracked down to an internal data-structure mismatch with the HDF5 file objects that prevented the use of MPI collective file reading; once rectified, this is expected to enable large-scale neuronal network simulations. A comparative analysis is provided with the 25 codes in the High-Q Club at the start of 2016, which include five codes that qualified from the previous workshop. Despite more mixed results, we learnt more about application file I/O limitations and inefficiencies, which continue to be the primary inhibitor of large-scale simulations.
000818244 536__ $$0G:(DE-HGF)POF3-511$$a511 - Computational Science and Mathematical Methods (POF3-511)$$cPOF3-511$$fPOF III$$x0
000818244 536__ $$0G:(DE-Juel-1)ATMLPP$$aATMLPP - ATML Parallel Performance (ATMLPP)$$cATMLPP$$x1
000818244 536__ $$0G:(DE-Juel-1)ATMLAO$$aATMLAO - ATML Application Optimization and User Service Tools (ATMLAO)$$cATMLAO$$x2
000818244 588__ $$aDataset connected to CrossRef Conference
000818244 7001_ $$0P:(DE-Juel1)132108$$aFrings, Wolfgang$$b1$$ufzj
000818244 7001_ $$0P:(DE-Juel1)132302$$aWylie, Brian J. N.$$b2$$eCorresponding author$$ufzj
000818244 770__ $$z978-1-4503-4122-6
000818244 773__ $$a10.1145/2938615.2938616
000818244 8564_ $$uhttps://juser.fz-juelich.de/record/818244/files/Extreme-scaling%20applications%20en%20route%20to%20exascale.pdf$$yOpenAccess
000818244 8564_ $$uhttps://juser.fz-juelich.de/record/818244/files/Extreme-scaling%20applications%20en%20route%20to%20exascale.gif?subformat=icon$$xicon$$yOpenAccess
000818244 8564_ $$uhttps://juser.fz-juelich.de/record/818244/files/Extreme-scaling%20applications%20en%20route%20to%20exascale.jpg?subformat=icon-1440$$xicon-1440$$yOpenAccess
000818244 8564_ $$uhttps://juser.fz-juelich.de/record/818244/files/Extreme-scaling%20applications%20en%20route%20to%20exascale.jpg?subformat=icon-180$$xicon-180$$yOpenAccess
000818244 8564_ $$uhttps://juser.fz-juelich.de/record/818244/files/Extreme-scaling%20applications%20en%20route%20to%20exascale.jpg?subformat=icon-640$$xicon-640$$yOpenAccess
000818244 8564_ $$uhttps://juser.fz-juelich.de/record/818244/files/Extreme-scaling%20applications%20en%20route%20to%20exascale.pdf?subformat=pdfa$$xpdfa$$yOpenAccess
000818244 909CO $$ooai:juser.fz-juelich.de:818244$$pdnbdelivery$$pVDB$$pdriver$$popen_access$$popenaire
000818244 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)143606$$aForschungszentrum Jülich$$b0$$kFZJ
000818244 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)132108$$aForschungszentrum Jülich$$b1$$kFZJ
000818244 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)132302$$aForschungszentrum Jülich$$b2$$kFZJ
000818244 9131_ $$0G:(DE-HGF)POF3-511$$1G:(DE-HGF)POF3-510$$2G:(DE-HGF)POF3-500$$3G:(DE-HGF)POF3$$4G:(DE-HGF)POF$$aDE-HGF$$bKey Technologies$$lSupercomputing & Big Data$$vComputational Science and Mathematical Methods$$x0
000818244 9141_ $$y2016
000818244 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
000818244 920__ $$lyes
000818244 9201_ $$0I:(DE-Juel1)JSC-20090406$$kJSC$$lJülich Supercomputing Center$$x0
000818244 980__ $$acontrib
000818244 980__ $$aVDB
000818244 980__ $$aUNRESTRICTED
000818244 980__ $$acontb
000818244 980__ $$aI:(DE-Juel1)JSC-20090406
000818244 9801_ $$aFullTexts