000888549 001__ 888549
000888549 005__ 20210130011017.0
000888549 0247_ $$2Handle$$a2128/26551
000888549 037__ $$aFZJ-2020-05013
000888549 041__ $$aEnglish
000888549 1001_ $$0P:(DE-Juel1)173676$$aVogelsang, Jan$$b0$$eCorresponding author
000888549 245__ $$aA concept study of a flexible asynchronous scheduler for dynamic Earth System Model components on heterogeneous HPC systems$$f - 2020-11-20
000888549 260__ $$c2020
000888549 300__ $$a68
000888549 3367_ $$2DRIVER$$abachelorThesis
000888549 3367_ $$02$$2EndNote$$aThesis
000888549 3367_ $$2DataCite$$aOutput Types/Supervised Student Publication
000888549 3367_ $$0PUB:(DE-HGF)2$$2PUB:(DE-HGF)$$aBachelor Thesis$$bbachelor$$mbachelor$$s1608044068_28323
000888549 3367_ $$2BibTeX$$aMASTERSTHESIS
000888549 3367_ $$2ORCID$$aSUPERVISED_STUDENT_PUBLICATION
000888549 502__ $$aBachelorarbeit, FH Aachen, 2020$$bBachelorarbeit$$cFH Aachen$$d2020$$o2020-11-20
000888549 520__ $$aClimate change presents one of the biggest challenges for mankind, as recent developments have shown. Since 1990, the global temperature has increased by almost 1°C, and earth system model projections indicate that temperatures will rise another two to four degrees by 2100 unless drastic measures are taken quickly to avoid greenhouse gas emissions. For several decades, scientists have constructed numerical models to simulate weather and climate. Such models describe physical and biogeochemical processes in the atmosphere, the ocean, the land surface and the cryosphere and are thus called earth system models (ESMs). In the same period of time, the computing power of the world's largest supercomputers has rocketed upwards: today's fastest supercomputer offers 170,000 times more computing power than the fastest one 20 years ago. By utilizing this enormous computing power, it has become possible to simulate high-resolution weather and climate models that are capable of predicting extreme events with reasonable accuracy. As the computing power of modern supercomputers increases rapidly, so does the complexity of the underlying architecture. Specialized nodes equipped with new technology like graphical processing units allow a massive reduction of computing time for many problem classes, but also introduce the challenge of working with a heterogeneous architecture. ESMs were hitherto designed for homogeneous architectures based on central processing units. Along with the increased computational demands of ESMs, the amount of generated data grows as well, leading to the phenomenon that the cost of data movement starts to dominate the overall cost of computation. Any new programming paradigm for ESMs must therefore try to minimize massive data transfers, e.g. by utilizing data locality, as will be demonstrated in this work. Facing the challenges of modern supercomputer architecture and the need for more flexible and modular models, completely new programming concepts are needed, as demonstrated by the Helmholtz project Pilot Lab Exascale Earth System Modelling (PL-ExaESM), in whose context this work has been conducted. As a potential solution to these challenges, a new, asynchronous scheduling method for modular ESM components has been tested in a sandbox environment and evaluated with respect to performance, scalability and flexibility. Asynchronous scheduling allows for a better exploitation of the heterogeneous resources of a modern HPC system. Through careful consideration of data flow paths across the coupled pseudo-ESM components, data movement could be reduced by more than 50% compared to a traditional sequential ESM workflow. Furthermore, running different example workflows showed a high efficiency gain for complex workflows when increasing the number of nodes used for computation. The results obtained here are promising, but not yet sufficient to propose asynchronous scheduling as the one new ESM paradigm to be used for upcoming exascale earth system modelling. Further development and investigation following the approach proposed in this work are required to evaluate its usability on different architectures and to compare it to other approaches that address the challenges of modern ESM development.
000888549 536__ $$0G:(DE-HGF)POF3-512$$a512 - Data-Intensive Science and Federated Computing (POF3-512)$$cPOF3-512$$fPOF III$$x0
000888549 8564_ $$uhttps://juser.fz-juelich.de/record/888549/files/Bachelor%27s%20thesis%20Vogelsang.pdf$$yOpenAccess
000888549 909CO $$ooai:juser.fz-juelich.de:888549$$pdnbdelivery$$pdriver$$pVDB$$popen_access$$popenaire
000888549 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)173676$$aForschungszentrum Jülich$$b0$$kFZJ
000888549 9131_ $$0G:(DE-HGF)POF3-512$$1G:(DE-HGF)POF3-510$$2G:(DE-HGF)POF3-500$$3G:(DE-HGF)POF3$$4G:(DE-HGF)POF$$aDE-HGF$$bKey Technologies$$lSupercomputing & Big Data$$vData-Intensive Science and Federated Computing$$x0
000888549 9141_ $$y2020
000888549 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
000888549 920__ $$lyes
000888549 9201_ $$0I:(DE-Juel1)JSC-20090406$$kJSC$$lJülich Supercomputing Center$$x0
000888549 980__ $$abachelor
000888549 980__ $$aVDB
000888549 980__ $$aUNRESTRICTED
000888549 980__ $$aI:(DE-Juel1)JSC-20090406
000888549 9801_ $$aFullTexts