001 | 1028864 |
005 | 20241218210659.0 |
024 | 7 | _ | |a 10.34734/FZJ-2024-04850 |2 datacite_doi |
037 | _ | _ | |a FZJ-2024-04850 |
100 | 1 | _ | |a Baumann, Thomas |0 P:(DE-Juel1)190575 |b 0 |e Corresponding author |u fzj |
111 | 2 | _ | |a 16th JLESC Workshop |g JLESC16 |c Kobe |d 2024-04-16 - 2024-04-18 |w Japan |
245 | _ | _ | |a Porting mpi4py-fft to GPU |
260 | _ | _ | |c 2024 |
336 | 7 | _ | |a Conference Paper |0 33 |2 EndNote |
336 | 7 | _ | |a Other |2 DataCite |
336 | 7 | _ | |a INPROCEEDINGS |2 BibTeX |
336 | 7 | _ | |a conferenceObject |2 DRIVER |
336 | 7 | _ | |a LECTURE_SPEECH |2 ORCID |
336 | 7 | _ | |a Conference Presentation |b conf |m conf |0 PUB:(DE-HGF)6 |s 1734522108_24846 |2 PUB:(DE-HGF) |x After Call |
520 | _ | _ | |a The mpi4py-fft library enables distributed fast Fourier transforms on CPUs with an easy-to-use interface and scales very well. We attempt to port it to GPUs, which significantly outperform their CPU counterparts at a given node count. While the porting is straightforward for the most part, the best communication strategy remains an open question for us. The algorithm relies on MPI alltoallw, which exhibits very poor performance on the Jülich machines even with CUDA-aware MPI. By replacing it with a custom communication strategy, throughput can be increased at a slight loss of generality. We would like to discuss how to optimise this strategy, and whether the performance of alltoallw itself can be improved. |
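For context, a minimal sketch of the public mpi4py-fft interface the abstract refers to, on the CPU (the GPU port described above is not part of the released library; grid size and dtype are illustrative):

import numpy as np
from mpi4py import MPI
from mpi4py_fft import PFFT, newDistArray

comm = MPI.COMM_WORLD
N = (128, 128, 128)  # illustrative global grid

# Plan a distributed 3D FFT; mpi4py-fft decomposes the global array
# over the ranks of `comm` and performs the global transposes internally.
fft = PFFT(comm, N, dtype=np.float64)

u = newDistArray(fft, forward_output=False)  # this rank's local block
u[:] = np.random.random(u.shape)

u_hat = fft.forward(u)        # distributed forward transform
u_back = fft.backward(u_hat)  # inverse transform

assert np.allclose(u, u_back)  # the round trip recovers the input

Run under MPI, e.g. `mpiexec -n 4 python demo.py`.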
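The custom communication strategy mentioned in the abstract is not spelled out here, so the following is only a hypothetical sketch of the kind of replacement meant: a pairwise non-blocking exchange instead of Alltoallw, assuming CuPy buffers, CUDA-aware MPI, and equally sized contiguous blocks per rank (giving up the per-rank datatypes that make Alltoallw general). The helper name pairwise_alltoall is invented for illustration:

import cupy as cp
from mpi4py import MPI

def pairwise_alltoall(comm, sendbuf, recvbuf):
    # Exchange block i of `sendbuf` with rank i, into block i of
    # `recvbuf`. Buffers are CuPy arrays of shape (size, block_len);
    # with CUDA-aware MPI, mpi4py hands their device pointers to MPI.
    rank, size = comm.Get_rank(), comm.Get_size()
    recvbuf[rank] = sendbuf[rank]  # own block: device-to-device copy
    requests = []
    for step in range(1, size):
        peer = (rank + step) % size  # stagger peers to spread traffic
        requests.append(comm.Irecv(recvbuf[peer], source=peer))
        requests.append(comm.Isend(sendbuf[peer], dest=peer))
    MPI.Request.Waitall(requests)

comm = MPI.COMM_WORLD
blocks = cp.random.random((comm.Get_size(), 1 << 20))  # demo payload
result = cp.empty_like(blocks)
cp.cuda.get_current_stream().synchronize()  # MPI is not stream-aware
pairwise_alltoall(comm, blocks, result)

Whether such a schedule actually beats the library Alltoallw depends on the interconnect and message sizes, which is precisely the open question raised in the abstract.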
536 | _ | _ | |a 5112 - Cross-Domain Algorithms, Tools, Methods Labs (ATMLs) and Research Groups (POF4-511) |0 G:(DE-HGF)POF4-5112 |c POF4-511 |f POF IV |x 0 |
536 | _ | _ | |a JLESC - Joint Laboratory for Extreme Scale Computing (JLESC-20150708) |0 G:(DE-Juel1)JLESC-20150708 |c JLESC-20150708 |f JLESC |x 1 |
536 | _ | _ | |a RGRSE - RG Research Software Engineering for HPC (RG RSE) (RG-RSE) |0 G:(DE-Juel-1)RG-RSE |c RG-RSE |x 2 |
700 | 1 | _ | |a Speck, Robert |0 P:(DE-Juel1)132268 |b 1 |u fzj |
856 | 4 | _ | |u https://juser.fz-juelich.de/record/1028864/files/mpi4pyFFT_GPU.pdf |y OpenAccess |
856 | 4 | _ | |u https://juser.fz-juelich.de/record/1028864/files/mpi4pyFFT_GPU.gif?subformat=icon |x icon |y OpenAccess |
856 | 4 | _ | |u https://juser.fz-juelich.de/record/1028864/files/mpi4pyFFT_GPU.jpg?subformat=icon-1440 |x icon-1440 |y OpenAccess |
856 | 4 | _ | |u https://juser.fz-juelich.de/record/1028864/files/mpi4pyFFT_GPU.jpg?subformat=icon-180 |x icon-180 |y OpenAccess |
856 | 4 | _ | |u https://juser.fz-juelich.de/record/1028864/files/mpi4pyFFT_GPU.jpg?subformat=icon-640 |x icon-640 |y OpenAccess |
909 | C | O | |o oai:juser.fz-juelich.de:1028864 |p openaire |p open_access |p VDB |p driver |
910 | 1 | _ | |a Forschungszentrum Jülich |0 I:(DE-588b)5008462-8 |k FZJ |b 0 |6 P:(DE-Juel1)190575 |
910 | 1 | _ | |a Forschungszentrum Jülich |0 I:(DE-588b)5008462-8 |k FZJ |b 1 |6 P:(DE-Juel1)132268 |
913 | 1 | _ | |a DE-HGF |b Key Technologies |l Engineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action |1 G:(DE-HGF)POF4-510 |0 G:(DE-HGF)POF4-511 |3 G:(DE-HGF)POF4 |2 G:(DE-HGF)POF4-500 |4 G:(DE-HGF)POF |v Enabling Computational- & Data-Intensive Science and Engineering |9 G:(DE-HGF)POF4-5112 |x 0 |
914 | 1 | _ | |y 2024 |
915 | _ | _ | |a OpenAccess |0 StatID:(DE-HGF)0510 |2 StatID |
920 | _ | _ | |l yes |
920 | 1 | _ | |0 I:(DE-Juel1)JSC-20090406 |k JSC |l Jülich Supercomputing Centre |x 0 |
980 | _ | _ | |a conf |
980 | _ | _ | |a VDB |
980 | _ | _ | |a I:(DE-Juel1)JSC-20090406 |
980 | _ | _ | |a UNRESTRICTED |
980 | 1 | _ | |a FullTexts |