001028864 001__ 1028864
001028864 005__ 20241218210659.0
001028864 0247_ $$2datacite_doi$$a10.34734/FZJ-2024-04850
001028864 037__ $$aFZJ-2024-04850
001028864 1001_ $$0P:(DE-Juel1)190575$$aBaumann, Thomas$$b0$$eCorresponding author$$ufzj
001028864 1112_ $$a16th JLESC Workshop$$cKobe$$d2024-04-16 - 2024-04-18$$gJLESC16$$wJapan
001028864 245__ $$aPorting mpi4py-fft to GPU
001028864 260__ $$c2024
001028864 3367_ $$033$$2EndNote$$aConference Paper
001028864 3367_ $$2DataCite$$aOther
001028864 3367_ $$2BibTeX$$aINPROCEEDINGS
001028864 3367_ $$2DRIVER$$aconferenceObject
001028864 3367_ $$2ORCID$$aLECTURE_SPEECH
001028864 3367_ $$0PUB:(DE-HGF)6$$2PUB:(DE-HGF)$$aConference Presentation$$bconf$$mconf$$s1734522108_24846$$xAfter Call
001028864 520__ $$aThe mpi4py-fft library enables distributed fast Fourier transforms on CPUs with an easy-to-use interface and scales very well. We attempt to port this to GPUs, which significantly outperform the CPU counterpart at a given node count. While the porting is straightforward for the most part, the best communication strategy is still an open question for us. The algorithm relies on MPI Alltoallw. Even with CUDA-aware MPI, this exhibits very poor performance on the Jülich machines. By replacing it with a custom communication strategy, throughput can be increased at a slight loss of generality. We would like to discuss how to optimise this strategy, or whether the performance of Alltoallw can be improved by other means.
001028864 536__ $$0G:(DE-HGF)POF4-5112$$a5112 - Cross-Domain Algorithms, Tools, Methods Labs (ATMLs) and Research Groups (POF4-511)$$cPOF4-511$$fPOF IV$$x0
001028864 536__ $$0G:(DE-Juel1)JLESC-20150708$$aJLESC - Joint Laboratory for Extreme Scale Computing (JLESC-20150708)$$cJLESC-20150708$$fJLESC$$x1
001028864 536__ $$0G:(DE-Juel-1)RG-RSE$$aRGRSE - RG Research Software Engineering for HPC (RG RSE) (RG-RSE)$$cRG-RSE$$x2
001028864 7001_ $$0P:(DE-Juel1)132268$$aSpeck, Robert$$b1$$ufzj
001028864 8564_ $$uhttps://juser.fz-juelich.de/record/1028864/files/mpi4pyFFT_GPU.pdf$$yOpenAccess
001028864 8564_ $$uhttps://juser.fz-juelich.de/record/1028864/files/mpi4pyFFT_GPU.gif?subformat=icon$$xicon$$yOpenAccess
001028864 8564_ $$uhttps://juser.fz-juelich.de/record/1028864/files/mpi4pyFFT_GPU.jpg?subformat=icon-1440$$xicon-1440$$yOpenAccess
001028864 8564_ $$uhttps://juser.fz-juelich.de/record/1028864/files/mpi4pyFFT_GPU.jpg?subformat=icon-180$$xicon-180$$yOpenAccess
001028864 8564_ $$uhttps://juser.fz-juelich.de/record/1028864/files/mpi4pyFFT_GPU.jpg?subformat=icon-640$$xicon-640$$yOpenAccess
001028864 909CO $$ooai:juser.fz-juelich.de:1028864$$pdriver$$pVDB$$popen_access$$popenaire
001028864 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)190575$$aForschungszentrum Jülich$$b0$$kFZJ
001028864 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)132268$$aForschungszentrum Jülich$$b1$$kFZJ
001028864 9131_ $$0G:(DE-HGF)POF4-511$$1G:(DE-HGF)POF4-510$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5112$$aDE-HGF$$bKey Technologies$$lEngineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action$$vEnabling Computational- & Data-Intensive Science and Engineering$$x0
001028864 9141_ $$y2024
001028864 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
001028864 920__ $$lyes
001028864 9201_ $$0I:(DE-Juel1)JSC-20090406$$kJSC$$lJülich Supercomputing Center$$x0
001028864 980__ $$aconf
001028864 980__ $$aVDB
001028864 980__ $$aI:(DE-Juel1)JSC-20090406
001028864 980__ $$aUNRESTRICTED
001028864 9801_ $$aFullTexts