| 001 | 17919 | ||
| 005 | 20210129210704.0 | ||
| 024 | 7 | _ | |2 DOI |a 10.1016/j.cpc.2011.12.013 |
| 024 | 7 | _ | |2 WOS |a WOS:000301028700004 |
| 037 | _ | _ | |a PreJuSER-17919 |
| 041 | _ | _ | |a eng |
| 082 | _ | _ | |a 004 |
| 084 | _ | _ | |2 WoS |a Computer Science, Interdisciplinary Applications |
| 084 | _ | _ | |2 WoS |a Physics, Mathematical |
| 100 | 1 | _ | |0 P:(DE-Juel1)140128 |a Winkel, M. |b 0 |u FZJ |
| 245 | _ | _ | |a A massively parallel, multi-disciplinary Barnes-Hut tree code for extreme-scale N-body simulations |
| 260 | _ | _ | |a Amsterdam |b North Holland Publ. Co. |c 2012 |
| 300 | _ | _ | |a 880 - 889 |
| 336 | 7 | _ | |a Journal Article |0 PUB:(DE-HGF)16 |2 PUB:(DE-HGF) |
| 336 | 7 | _ | |a Output Types/Journal article |2 DataCite |
| 336 | 7 | _ | |a Journal Article |0 0 |2 EndNote |
| 336 | 7 | _ | |a ARTICLE |2 BibTeX |
| 336 | 7 | _ | |a JOURNAL_ARTICLE |2 ORCID |
| 336 | 7 | _ | |a article |2 DRIVER |
| 440 | _ | 0 | |0 1439 |a Computer Physics Communications |v 183 |x 0010-4655 |y 4 |
| 500 | _ | _ | |a The authors gratefully acknowledge the helpful support by Jülich Supercomputing Centre and the JSC staff, especially M. Stephan and J. Docter. This work was supported in part by the Alliance Program of the Helmholtz Association (HA216/EMMI), the BMBF project ScaFaCoS and the EU TEXT project, as well as additional computing time via the VSR project JZAM04. R.S. and R.K. would like to thank the Swiss Platform for High-Performance and High-Productivity Computing (HP2C) for funding and support. |
| 520 | _ | _ | |a The efficient parallelization of fast multipole-based algorithms for the N-body problem is one of the most challenging topics in high performance scientific computing. The emergence of non-local, irregular communication patterns generated by these algorithms can easily create an insurmountable bottleneck on supercomputers with hundreds of thousands of cores. To overcome this obstacle we have developed an innovative parallelization strategy for Barnes-Hut tree codes on present and upcoming HPC multicore architectures. This scheme, based on a combined MPI-Pthreads approach, permits an efficient overlap of computation and data exchange. We highlight the capabilities of this method on the full IBM Blue Gene/P system JUGENE at Jülich Supercomputing Centre and demonstrate scaling across 294,912 cores with up to 2,048,000,000 particles. Applying our implementation PEPC to laser-plasma interaction and vortex particle methods close to the continuum limit, we demonstrate its potential for ground-breaking advances in large-scale particle simulations. (C) 2011 Elsevier B.V. All rights reserved. |
| 536 | _ | _ | |0 G:(DE-Juel1)FUEK411 |2 G:(DE-HGF) |x 0 |c FUEK411 |a Scientific Computing (FUEK411) |
| 536 | _ | _ | |0 G:(DE-HGF)POF2-411 |a 411 - Computational Science and Mathematical Methods (POF2-411) |c POF2-411 |f POF II |x 1 |
| 588 | _ | _ | |a Dataset connected to Web of Science |
| 650 | _ | 7 | |2 WoSType |a J |
| 653 | 2 | 0 | |2 Author |a Parallel Barnes-Hut tree code |
| 653 | 2 | 0 | |2 Author |a Blue Gene/P |
| 653 | 2 | 0 | |2 Author |a Hybrid |
| 653 | 2 | 0 | |2 Author |a Load balancing |
| 653 | 2 | 0 | |2 Author |a Vortex methods |
| 653 | 2 | 0 | |2 Author |a Pthreads |
| 700 | 1 | _ | |0 P:(DE-HGF)0 |a Speck, R. |b 1 |
| 700 | 1 | _ | |0 P:(DE-Juel1)VDB99128 |a Hübner, H. |b 2 |u FZJ |
| 700 | 1 | _ | |0 P:(DE-Juel1)132044 |a Arnold, L. |b 3 |u FZJ |
| 700 | 1 | _ | |0 P:(DE-HGF)0 |a Krause, R. |b 4 |
| 700 | 1 | _ | |0 P:(DE-Juel1)132115 |a Gibbon, P. |b 5 |u FZJ |
| 773 | _ | _ | |0 PERI:(DE-600)1466511-6 |a 10.1016/j.cpc.2011.12.013 |g Vol. 183, p. 880 - 889 |p 880 - 889 |q 183<880 - 889 |t Computer physics communications |v 183 |x 0010-4655 |y 2012 |
| 856 | 7 | _ | |u http://dx.doi.org/10.1016/j.cpc.2011.12.013 |
| 909 | C | O | |o oai:juser.fz-juelich.de:17919 |p VDB |
| 913 | 2 | _ | |0 G:(DE-HGF)POF3-511 |1 G:(DE-HGF)POF3-510 |2 G:(DE-HGF)POF3-500 |a DE-HGF |b Key Technologies |l Supercomputing & Big Data |v Computational Science and Mathematical Methods |x 0 |
| 913 | 1 | _ | |0 G:(DE-HGF)POF2-411 |1 G:(DE-HGF)POF2-410 |2 G:(DE-HGF)POF2-400 |a DE-HGF |b Schlüsseltechnologien |l Supercomputing |v Computational Science and Mathematical Methods |x 1 |4 G:(DE-HGF)POF |3 G:(DE-HGF)POF2 |
| 914 | 1 | _ | |y 2012 |
| 915 | _ | _ | |0 StatID:(DE-HGF)0010 |2 StatID |a JCR/ISI refereed |
| 915 | _ | _ | |0 StatID:(DE-HGF)0100 |2 StatID |a JCR |
| 915 | _ | _ | |0 StatID:(DE-HGF)0110 |2 StatID |a WoS |b Science Citation Index |
| 915 | _ | _ | |0 StatID:(DE-HGF)0111 |2 StatID |a WoS |b Science Citation Index Expanded |
| 915 | _ | _ | |0 StatID:(DE-HGF)0150 |2 StatID |a DBCoverage |b Web of Science Core Collection |
| 915 | _ | _ | |0 StatID:(DE-HGF)0199 |2 StatID |a DBCoverage |b Thomson Reuters Master Journal List |
| 915 | _ | _ | |0 StatID:(DE-HGF)0200 |2 StatID |a DBCoverage |b SCOPUS |
| 915 | _ | _ | |0 StatID:(DE-HGF)0300 |2 StatID |a DBCoverage |b Medline |
| 915 | _ | _ | |0 StatID:(DE-HGF)0310 |2 StatID |a DBCoverage |b NCBI Molecular Biology Database |
| 915 | _ | _ | |0 StatID:(DE-HGF)0420 |2 StatID |a Nationallizenz |
| 915 | _ | _ | |0 StatID:(DE-HGF)1020 |2 StatID |a DBCoverage |b Current Contents - Social and Behavioral Sciences |
| 920 | 1 | _ | |0 I:(DE-Juel1)JSC-20090406 |g JSC |k JSC |l Jülich Supercomputing Centre |x 0 |
| 970 | _ | _ | |a VDB:(DE-Juel1)132495 |
| 980 | _ | _ | |a VDB |
| 980 | _ | _ | |a ConvertedRecord |
| 980 | _ | _ | |a journal |
| 980 | _ | _ | |a I:(DE-Juel1)JSC-20090406 |
| 980 | _ | _ | |a UNRESTRICTED |
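The abstract above describes PEPC, a hybrid MPI-Pthreads Barnes-Hut tree code. Purely as an illustration of the underlying algorithmic idea, the sketch below shows a minimal serial Barnes-Hut octree in Python: particles are inserted into an octree, each cell accumulates its monopole (total mass and centre of mass), and a distant cell is accepted as a single pseudo-particle when its size-to-distance ratio falls below an opening angle. This is not PEPC and not the paper's parallelization scheme; all names and parameters (Node, insert, accel, THETA, EPS) are illustrative assumptions.

```python
import numpy as np

THETA = 0.5    # opening angle: accept a cell when size / distance < THETA
EPS = 1.0e-3   # Plummer softening to avoid singular forces
G = 1.0        # gravitational constant in code units


class Node:
    """One cubic cell of the octree, carrying its monopole moment."""
    def __init__(self, center, size):
        self.center = np.asarray(center, float)   # geometric centre of the cube
        self.size = size                           # edge length of the cube
        self.mass = 0.0                            # total mass inside the cell
        self.com = np.zeros(3)                     # centre of mass of the cell
        self.children = None                       # list of 8 children, or None
        self.particle = None                       # (position, mass) if a leaf


def insert(node, pos, mass):
    """Insert a particle, splitting occupied leaves and updating monopoles."""
    if node.children is None and node.particle is None:
        node.particle = (pos, mass)                # empty leaf: store directly
    else:
        if node.children is None:                  # occupied leaf: split it
            node.children = [None] * 8
            old_pos, old_mass = node.particle
            node.particle = None
            _insert_into_child(node, old_pos, old_mass)
        _insert_into_child(node, pos, mass)
    # accumulate the monopole (total mass and centre of mass) on the way down
    node.com = (node.com * node.mass + pos * mass) / (node.mass + mass)
    node.mass += mass


def _insert_into_child(node, pos, mass):
    """Route a particle into the correct octant, creating the child if needed."""
    octant = sum(1 << i for i in range(3) if pos[i] > node.center[i])
    if node.children[octant] is None:
        shift = np.array([1.0 if pos[i] > node.center[i] else -1.0 for i in range(3)])
        node.children[octant] = Node(node.center + shift * node.size / 4.0,
                                     node.size / 2.0)
    insert(node.children[octant], pos, mass)


def accel(node, pos):
    """Gravitational acceleration at `pos` from the subtree rooted at `node`."""
    if node is None or node.mass == 0.0:
        return np.zeros(3)
    d = node.com - pos
    r = np.sqrt(d @ d + EPS ** 2)
    # multipole acceptance criterion: treat a small, far-away cell (or a leaf)
    # as a single pseudo-particle located at its centre of mass
    if node.children is None or node.size / r < THETA:
        return G * node.mass * d / r ** 3
    return sum((accel(child, pos) for child in node.children), np.zeros(3))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(-1.0, 1.0, size=(200, 3))    # 200 random particles
    root = Node(center=[0.0, 0.0, 0.0], size=4.0)  # cube enclosing all particles
    for p in pts:
        insert(root, p, 1.0 / len(pts))
    print("acceleration on particle 0:", accel(root, pts[0]))
```

The paper's contribution lies in distributing such a tree across hundreds of thousands of cores and overlapping the remote-node exchange with the force computation via a combined MPI-Pthreads scheme; the serial traversal above only fixes the ideas of tree construction and the acceptance criterion that the parallel scheme builds on.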