000845928 001__ 845928
000845928 005__ 20210129233740.0
000845928 0247_ $$2doi$$a10.1142/S0129183118500109
000845928 0247_ $$2ISSN$$a0129-1831
000845928 0247_ $$2ISSN$$a1793-6586
000845928 0247_ $$2Handle$$a2128/18722
000845928 0247_ $$2WOS$$aWOS:000426590000010
000845928 0247_ $$2altmetric$$aaltmetric:31385084
000845928 037__ $$aFZJ-2018-03126
000845928 082__ $$a530
000845928 1001_ $$0P:(DE-HGF)0$$aBonati, Claudio$$b0
000845928 245__ $$aPortable multi-node LQCD Monte Carlo simulations using OpenACC
000845928 260__ $$aSingapore [et al.]$$bWorld Scientific$$c2018
000845928 3367_ $$2DRIVER$$aarticle
000845928 3367_ $$2DataCite$$aOutput Types/Journal article
000845928 3367_ $$0PUB:(DE-HGF)16$$2PUB:(DE-HGF)$$aJournal Article$$bjournal$$mjournal$$s1544596151_28838
000845928 3367_ $$2BibTeX$$aARTICLE
000845928 3367_ $$2ORCID$$aJOURNAL_ARTICLE
000845928 3367_ $$00$$2EndNote$$aJournal Article
000845928 520__ $$aThis paper describes a state-of-the-art parallel Lattice QCD Monte Carlo code for staggered fermions, purposely designed to be portable across different computer architectures, including GPUs and commodity CPUs. Portability is achieved using the OpenACC parallel programming model, used to develop a code that can be compiled for several processor architectures. The paper focuses on parallelization across multiple computing nodes, using OpenACC to manage parallelism within each node and OpenMPI to manage parallelism among nodes. We first discuss the strategies adopted to maximize performance, then describe selected relevant details of the code, and finally measure the performance and scaling behavior that we are able to achieve. The work focuses mainly on GPUs, which offer a high level of performance for this application, but also compares with results measured on other processors.
000845928 536__ $$0G:(DE-HGF)POF3-511$$a511 - Computational Science and Mathematical Methods (POF3-511)$$cPOF3-511$$fPOF III$$x0
000845928 536__ $$0G:(DE-Juel1)PHD-NO-GRANT-20170405$$aPhD no Grant - Doktorand ohne besondere Förderung (PHD-NO-GRANT-20170405)$$cPHD-NO-GRANT-20170405$$x1
000845928 588__ $$aDataset connected to CrossRef
000845928 7001_ $$0P:(DE-HGF)0$$aCalore, Enrico$$b1
000845928 7001_ $$0P:(DE-HGF)0$$aD’Elia, Massimo$$b2
000845928 7001_ $$0P:(DE-HGF)0$$aMesiti, Michele$$b3
000845928 7001_ $$0P:(DE-HGF)0$$aNegro, Francesco$$b4
000845928 7001_ $$0P:(DE-HGF)0$$aSanfilippo, Francesco$$b5
000845928 7001_ $$0P:(DE-HGF)0$$aSchifano, Sebastiano Fabio$$b6
000845928 7001_ $$0P:(DE-Juel1)171116$$aSilvi, Giorgio$$b7$$eCorresponding author$$ufzj
000845928 7001_ $$0P:(DE-HGF)0$$aTripiccione, Raffaele$$b8
000845928 773__ $$0PERI:(DE-600)2006526-7$$a10.1142/S0129183118500109$$gVol. 29, no. 01, p. 1850010 -$$n01$$p1850010 -$$tInternational journal of modern physics / C$$v29$$x1793-6586$$y2018
000845928 8564_ $$uhttps://juser.fz-juelich.de/record/845928/files/1801.01473.pdf$$yOpenAccess
000845928 8564_ $$uhttps://juser.fz-juelich.de/record/845928/files/1801.01473.gif?subformat=icon$$xicon$$yOpenAccess
000845928 8564_ $$uhttps://juser.fz-juelich.de/record/845928/files/1801.01473.jpg?subformat=icon-1440$$xicon-1440$$yOpenAccess
000845928 8564_ $$uhttps://juser.fz-juelich.de/record/845928/files/1801.01473.jpg?subformat=icon-180$$xicon-180$$yOpenAccess
000845928 8564_ $$uhttps://juser.fz-juelich.de/record/845928/files/1801.01473.jpg?subformat=icon-640$$xicon-640$$yOpenAccess
000845928 8564_ $$uhttps://juser.fz-juelich.de/record/845928/files/1801.01473.pdf?subformat=pdfa$$xpdfa$$yOpenAccess
000845928 909CO $$ooai:juser.fz-juelich.de:845928$$pdnbdelivery$$pdriver$$pVDB$$popen_access$$popenaire
000845928 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)171116$$aForschungszentrum Jülich$$b7$$kFZJ
000845928 9131_ $$0G:(DE-HGF)POF3-511$$1G:(DE-HGF)POF3-510$$2G:(DE-HGF)POF3-500$$3G:(DE-HGF)POF3$$4G:(DE-HGF)POF$$aDE-HGF$$bKey Technologies$$lSupercomputing & Big Data$$vComputational Science and Mathematical Methods$$x0
000845928 9141_ $$y2018
000845928 915__ $$0StatID:(DE-HGF)0200$$2StatID$$aDBCoverage$$bSCOPUS
000845928 915__ $$0StatID:(DE-HGF)0600$$2StatID$$aDBCoverage$$bEbsco Academic Search
000845928 915__ $$0StatID:(DE-HGF)0100$$2StatID$$aJCR$$bINT J MOD PHYS C : 2015
000845928 915__ $$0StatID:(DE-HGF)0150$$2StatID$$aDBCoverage$$bWeb of Science Core Collection
000845928 915__ $$0StatID:(DE-HGF)0110$$2StatID$$aWoS$$bScience Citation Index
000845928 915__ $$0StatID:(DE-HGF)0111$$2StatID$$aWoS$$bScience Citation Index Expanded
000845928 915__ $$0StatID:(DE-HGF)9900$$2StatID$$aIF < 5
000845928 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
000845928 915__ $$0StatID:(DE-HGF)0030$$2StatID$$aPeer Review$$bASC
000845928 915__ $$0StatID:(DE-HGF)1150$$2StatID$$aDBCoverage$$bCurrent Contents - Physical, Chemical and Earth Sciences
000845928 915__ $$0StatID:(DE-HGF)0300$$2StatID$$aDBCoverage$$bMedline
000845928 915__ $$0StatID:(DE-HGF)0199$$2StatID$$aDBCoverage$$bThomson Reuters Master Journal List
000845928 920__ $$lyes
000845928 9201_ $$0I:(DE-Juel1)JSC-20090406$$kJSC$$lJülich Supercomputing Centre$$x0
000845928 980__ $$ajournal
000845928 980__ $$aVDB
000845928 980__ $$aI:(DE-Juel1)JSC-20090406
000845928 980__ $$aUNRESTRICTED
000845928 9801_ $$aFullTexts