001     280437
005     20210129221323.0
024 7 _ |a 10.4203/ccp.107.4
|2 doi
037 _ _ |a FZJ-2016-00214
100 1 _ |a Teijeiro, C.
|0 P:(DE-HGF)0
|b 0
111 2 _ |a The Fourth International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering
|c Dubrovnik
|d 2015-03-24 - 2015-03-27
|w Croatia
245 _ _ |a Parallel Bond Order Potentials for Materials Science Simulations
260 _ _ |a Stirlingshire, UK
|c 2015
|b Civil-Comp Press
300 _ _ |a Paper 4
336 7 _ |a Contribution to a conference proceedings
|b contrib
|m contrib
|0 PUB:(DE-HGF)8
|s 1452517029_899
|2 PUB:(DE-HGF)
336 7 _ |a Conference Paper
|0 33
|2 EndNote
336 7 _ |a CONFERENCE_PAPER
|2 ORCID
336 7 _ |a Output Types/Conference Paper
|2 DataCite
336 7 _ |a conferenceObject
|2 DRIVER
336 7 _ |a INPROCEEDINGS
|2 BibTeX
520 _ _ |a The computation of interatomic interactions in materials science is a challenging problem because of the need for an accurate description of different bonding situations. Density functional theory (DFT) and tight binding (TB) provide good approximations to the problem but have high computational complexity, which limits the size of the systems that can be studied. Analytic bond-order potentials (BOPs) provide a coarse-grained computation of interatomic interactions derived from DFT and TB in order to obtain satisfactory approximations, with an order-N increase in the simulation time as the system size grows. Even though analytic BOPs are significantly less expensive than first-principles methods, they require an efficient implementation in order to obtain good scalability for large systems. This paper presents a performance evaluation of a parallel implementation of a BOP code, with a description of the most time-consuming tasks and the basic concepts for a parallelisation of the simulation. The main contributions of this paper are (1) the analysis of an optimised simulation code in terms of its different routines, (2) the implementation of parallel algorithms that take advantage of the nature of the simulation to obtain high scalability, (3) a performance evaluation of the parallel code on average-sized systems and the proposal of best practices for future developments, and (4) an example of the integration of the routine for the precise computation of energies and forces into a molecular dynamics (MD) code.
536 _ _ |a 511 - Computational Science and Mathematical Methods (POF3-511)
|0 G:(DE-HGF)POF3-511
|c POF3-511
|f POF III
|x 0
588 _ _ |a Dataset connected to CrossRef Conference
700 1 _ |a Hammerschmidt, T.
|0 P:(DE-HGF)0
|b 1
700 1 _ |a Drautz, R.
|0 P:(DE-HGF)0
|b 2
700 1 _ |a Sutmann, G.
|0 P:(DE-Juel1)132274
|b 3
|u fzj
773 _ _ |a 10.4203/ccp.107.4
909 C O |o oai:juser.fz-juelich.de:280437
|p VDB
910 1 _ |a Forschungszentrum Jülich GmbH
|0 I:(DE-588b)5008462-8
|k FZJ
|b 3
|6 P:(DE-Juel1)132274
913 1 _ |a DE-HGF
|b Key Technologies
|1 G:(DE-HGF)POF3-510
|0 G:(DE-HGF)POF3-511
|2 G:(DE-HGF)POF3-500
|v Computational Science and Mathematical Methods
|x 0
|4 G:(DE-HGF)POF
|3 G:(DE-HGF)POF3
|l Supercomputing & Big Data
914 1 _ |y 2015
915 _ _ |a No Authors Fulltext
|0 StatID:(DE-HGF)0550
|2 StatID
920 1 _ |0 I:(DE-Juel1)JSC-20090406
|k JSC
|l Jülich Supercomputing Centre
|x 0
980 _ _ |a contrib
980 _ _ |a VDB
980 _ _ |a UNRESTRICTED
980 _ _ |a I:(DE-Juel1)JSC-20090406

