000141420 001__ 141420
000141420 005__ 20210129212933.0
000141420 020__ $$a978-3-89336-849-5
000141420 037__ $$aFZJ-2013-06600
000141420 041__ $$aeng
000141420 082__ $$a500
000141420 1001_ $$0P:(DE-HGF)0$$aTeijeiro, Carlos$$b0$$eCorresponding author
000141420 1112_ $$aHybrid Particle-Continuum Methods in Computational Materials Physics$$cJülich$$d2013-03-04 - 2013-03-07$$wGermany
000141420 245__ $$aParallel Brownian Dynamics Simulation with MPI, OpenMP and UPC
000141420 260__ $$aJülich$$bJohn von Neumann Institute for Computing (NIC)$$c2013
000141420 29510 $$aHybrid Particle-Continuum Methods in Computational Materials Physics
000141420 300__ $$a25 - 40
000141420 3367_ $$0PUB:(DE-HGF)8$$2PUB:(DE-HGF)$$aContribution to a conference proceedings$$bcontrib$$mcontrib$$s1389087308_22881
000141420 3367_ $$0PUB:(DE-HGF)7$$2PUB:(DE-HGF)$$aContribution to a book$$mcontb
000141420 3367_ $$033$$2EndNote$$aConference Paper
000141420 3367_ $$2ORCID$$aCONFERENCE_PAPER
000141420 3367_ $$2DataCite$$aOutput Types/Conference Paper
000141420 3367_ $$2DRIVER$$aconferenceObject
000141420 3367_ $$2BibTeX$$aINPROCEEDINGS
000141420 4900_ $$aNIC Series$$v46
000141420 520__ $$aThis work presents the design and implementation of a parallel simulation code for the Brownian motion of particles in a fluid. Three different parallelization approaches have been followed: (1) traditional distributed-memory message-passing programming with MPI, (2) a directive-based approach on shared memory with OpenMP, and (3) the Partitioned Global Address Space (PGAS) programming model, oriented towards hybrid shared/distributed memory systems, with the Unified Parallel C (UPC) language. According to the selected environment, different domain decompositions and work distributions are studied in terms of efficiency and programmability in order to select the most suitable strategy. Performance results on different testbeds and using a large number of threads are presented in order to assess the performance and scalability of the parallel solutions.
000141420 536__ $$0G:(DE-HGF)POF2-411$$a411 - Computational Science and Mathematical Methods (POF2-411)$$cPOF2-411$$fPOF II$$x0
000141420 588__ $$aDataset connected to GVK,
000141420 7001_ $$0P:(DE-Juel1)132274$$aSutmann, Godehard$$b1
000141420 7001_ $$0P:(DE-HGF)0$$aTaboada, Guillermo L.$$b2
000141420 7001_ $$0P:(DE-HGF)0$$aTourino, Juan$$b3
000141420 909CO $$ooai:juser.fz-juelich.de:141420$$pVDB
000141420 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)132274$$aForschungszentrum Jülich GmbH$$b1$$kFZJ
000141420 9132_ $$0G:(DE-HGF)POF3-511$$1G:(DE-HGF)POF3-510$$2G:(DE-HGF)POF3-500$$aDE-HGF$$bKey Technologies$$lSupercomputing & Big Data $$vComputational Science and Mathematical Methods$$x0
000141420 9131_ $$0G:(DE-HGF)POF2-411$$1G:(DE-HGF)POF2-410$$2G:(DE-HGF)POF2-400$$3G:(DE-HGF)POF2$$4G:(DE-HGF)POF$$aDE-HGF$$bSchlüsseltechnologien$$lSupercomputing$$vComputational Science and Mathematical Methods$$x0
000141420 9141_ $$y2013
000141420 9201_ $$0I:(DE-Juel1)JSC-20090406$$kJSC$$lJülich Supercomputing Center$$x0
000141420 980__ $$acontrib
000141420 980__ $$aVDB
000141420 980__ $$aUNRESTRICTED
000141420 980__ $$acontb
000141420 980__ $$aI:(DE-Juel1)JSC-20090406