001     888549
005     20210130011017.0
024 7 _ |a 2128/26551
|2 Handle
037 _ _ |a FZJ-2020-05013
041 _ _ |a English
100 1 _ |a Vogelsang, Jan
|0 P:(DE-Juel1)173676
|b 0
|e Corresponding author
245 _ _ |a A concept study of a flexible asynchronous scheduler for dynamic Earth System Model components on heterogeneous HPC systems
|f - 2020-11-20
260 _ _ |c 2020
300 _ _ |a 68
336 7 _ |a bachelorThesis
|2 DRIVER
336 7 _ |a Thesis
|0 2
|2 EndNote
336 7 _ |a Output Types/Supervised Student Publication
|2 DataCite
336 7 _ |a Bachelor Thesis
|b bachelor
|m bachelor
|0 PUB:(DE-HGF)2
|s 1608044068_28323
|2 PUB:(DE-HGF)
336 7 _ |a MASTERSTHESIS
|2 BibTeX
336 7 _ |a SUPERVISED_STUDENT_PUBLICATION
|2 ORCID
502 _ _ |a Bachelor thesis, FH Aachen, 2020
|c FH Aachen
|b Bachelor thesis
|d 2020
|o 2020-11-20
520 _ _ |a Climate change presents one of the biggest challenges for mankind, as recent developments have shown. Since 1990, the global temperature has increased by almost 1°C, and earth system model projections indicate that temperatures will rise by another two to four degrees by 2100 unless drastic measures are taken quickly to reduce greenhouse gas emissions. For several decades, scientists have constructed numerical models to simulate weather and climate. Such models describe physical and biogeochemical processes in the atmosphere, the ocean, the land surface and the cryosphere and are thus called earth system models (ESMs). In that same period of time, the computing power of the world's largest supercomputers has rocketed upwards: today's fastest supercomputer offers 170,000 times more computing power than the fastest one 20 years ago. By utilizing this enormous computing power, it has become possible to run high-resolution weather and climate models that are capable of predicting extreme events with reasonable accuracy. As the computing power of modern supercomputers increases rapidly, so does the complexity of the underlying architecture. Specialized nodes equipped with new technology such as graphics processing units allow a massive reduction of computing time for many problem classes, but they also introduce the challenge of working with a heterogeneous architecture. ESMs have hitherto been designed for homogeneous architectures based on central processing units. Along with the increased computational demands of ESMs, the amount of generated data grows as well, leading to the phenomenon that the cost of data movement starts to dominate the overall cost of computation. Any new programming paradigm for ESMs must therefore try to minimize massive data transfers, e.g. by utilizing data locality, as will be demonstrated in this work. Facing the challenges of modern supercomputer architecture and the need for more flexible and modular models, completely new programming concepts are needed, as demonstrated by the Helmholtz project Pilot Lab Exascale Earth System Modelling (PL-ExaESM), in whose context this work has been conducted. As a potential solution to these challenges, a new asynchronous scheduling method for modular ESM components has been tested in a sandbox environment and evaluated with respect to performance, scalability and flexibility. Asynchronous scheduling allows for a better exploitation of the heterogeneous resources of a modern HPC system. Through careful consideration of data flow paths across the coupled pseudo-ESM components, data movement could be reduced by more than 50% compared to a traditional sequential ESM workflow. Furthermore, running different example workflows showed a high efficiency gain for complex workflows when increasing the number of nodes used for computation. The results obtained here are promising, but not yet sufficient to propose asynchronous scheduling as the one new ESM paradigm for upcoming exascale earth system modelling. Further development and investigation following the approach proposed in this work are required to evaluate its usability on different architectures and to compare it with other approaches that meet the challenges of modern ESM development.
536 _ _ |a 512 - Data-Intensive Science and Federated Computing (POF3-512)
|0 G:(DE-HGF)POF3-512
|c POF3-512
|f POF III
|x 0
856 4 _ |u https://juser.fz-juelich.de/record/888549/files/Bachelor%27s%20thesis%20Vogelsang.pdf
|y OpenAccess
909 C O |o oai:juser.fz-juelich.de:888549
|p openaire
|p open_access
|p VDB
|p driver
|p dnbdelivery
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 0
|6 P:(DE-Juel1)173676
913 1 _ |a DE-HGF
|b Key Technologies
|1 G:(DE-HGF)POF3-510
|0 G:(DE-HGF)POF3-512
|2 G:(DE-HGF)POF3-500
|v Data-Intensive Science and Federated Computing
|x 0
|4 G:(DE-HGF)POF
|3 G:(DE-HGF)POF3
|l Supercomputing & Big Data
914 1 _ |y 2020
915 _ _ |a OpenAccess
|0 StatID:(DE-HGF)0510
|2 StatID
920 _ _ |l yes
920 1 _ |0 I:(DE-Juel1)JSC-20090406
|k JSC
|l Jülich Supercomputing Centre
|x 0
980 _ _ |a bachelor
980 _ _ |a VDB
980 _ _ |a UNRESTRICTED
980 _ _ |a I:(DE-Juel1)JSC-20090406
980 1 _ |a FullTexts

