000152039 001__ 152039
000152039 005__ 20210129213636.0
000152039 0247_ $$2doi$$a10.3233/978-1-61499-381-0-357
000152039 0247_ $$2WOS$$aWOS:000452120400035
000152039 020__ $$a978-1-61499-380-3
000152039 037__ $$aFZJ-2014-01859
000152039 1001_ $$0P:(DE-Juel1)143606$$aBrömmel, Dirk$$b0
000152039 1112_ $$aInternational Conference on Parallel Computing$$cMunich$$d2013-09-10 - 2013-09-13$$gParCo 2013$$wGermany
000152039 245__ $$aExperience with the MPI/STARSS programming model on a large production code
000152039 260__ $$bIOS Press$$c2014
000152039 29510 $$aParallel Computing: Accelerating Computational Science and Engineering (CSE)
000152039 300__ $$a357 - 366
000152039 3367_ $$0PUB:(DE-HGF)8$$2PUB:(DE-HGF)$$aContribution to a conference proceedings$$bcontrib$$mcontrib$$s1396417935_15674
000152039 3367_ $$0PUB:(DE-HGF)7$$2PUB:(DE-HGF)$$aContribution to a book$$mcontb
000152039 3367_ $$033$$2EndNote$$aConference Paper
000152039 3367_ $$2ORCID$$aCONFERENCE_PAPER
000152039 3367_ $$2DataCite$$aOutput Types/Conference Paper
000152039 3367_ $$2DRIVER$$aconferenceObject
000152039 3367_ $$2BibTeX$$aINPROCEEDINGS
000152039 4900_ $$aAdvances in Parallel Computing$$v25
000152039 520__ $$aThis paper describes the experience of porting a scientific application to a hybrid MPI/STARSS parallelisation. It shows that overlapping computation, I/O and communication is possible and yields a performance improvement over a pure MPI approach, demonstrating the added benefit of combining shared and distributed memory programming models. We also highlight one major advantage of the STARSS runtime: its ability to dynamically adjust the number of threads, which helps alleviate load imbalances without algorithmic restructuring.
000152039 536__ $$0G:(DE-HGF)POF2-411$$a411 - Computational Science and Mathematical Methods (POF2-411)$$cPOF2-411$$fPOF II$$x0
000152039 7001_ $$0P:(DE-Juel1)132115$$aGibbon, Paul$$b1
000152039 7001_ $$0P:(DE-HGF)0$$aGarcia, Marta$$b2
000152039 7001_ $$0P:(DE-HGF)0$$aLopez, Victor$$b3
000152039 7001_ $$0P:(DE-HGF)0$$aMarjanovic, Vladimir$$b4
000152039 7001_ $$0P:(DE-HGF)0$$aLabarta, Jesus$$b5
000152039 773__ $$a10.3233/978-1-61499-381-0-357
000152039 909CO $$ooai:juser.fz-juelich.de:152039$$pVDB
000152039 9141_ $$y2014
000152039 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)143606$$aForschungszentrum Jülich GmbH$$b0$$kFZJ
000152039 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)132115$$aForschungszentrum Jülich GmbH$$b1$$kFZJ
000152039 9132_ $$0G:(DE-HGF)POF3-511$$1G:(DE-HGF)POF3-510$$2G:(DE-HGF)POF3-500$$aDE-HGF$$bKey Technologies$$lSupercomputing & Big Data $$vComputational Science and Mathematical Methods$$x0
000152039 9131_ $$0G:(DE-HGF)POF2-411$$1G:(DE-HGF)POF2-410$$2G:(DE-HGF)POF2-400$$3G:(DE-HGF)POF2$$4G:(DE-HGF)POF$$aDE-HGF$$bSchlüsseltechnologien$$lSupercomputing$$vComputational Science and Mathematical Methods$$x0
000152039 9201_ $$0I:(DE-Juel1)JSC-20090406$$kJSC$$lJülich Supercomputing Center$$x0
000152039 980__ $$acontrib
000152039 980__ $$aVDB
000152039 980__ $$acontb
000152039 980__ $$aI:(DE-Juel1)JSC-20090406
000152039 980__ $$aUNRESTRICTED