000156014 001__ 156014
000156014 005__ 20210129214203.0
000156014 037__ $$aFZJ-2014-04926
000156014 041__ $$aEnglish
000156014 1001_ $$0P:(DE-Juel1)161351$$aHaarhoff, Daniel$$b0
000156014 1112_ $$aFire and Evacuation Modelling Technical Conference 2014$$cGaithersburg$$d2014-09-08 - 2014-09-10$$wUSA
000156014 245__ $$aPerformance Analysis and Shared Memory Parallelisation of FDS
000156014 260__ $$c2014
000156014 300__ $$a13
000156014 3367_ $$0PUB:(DE-HGF)8$$2PUB:(DE-HGF)$$aContribution to a conference proceedings$$bcontrib$$mcontrib$$s1412863281_26712
000156014 3367_ $$033$$2EndNote$$aConference Paper
000156014 3367_ $$2ORCID$$aCONFERENCE_PAPER
000156014 3367_ $$2DataCite$$aOutput Types/Conference Paper
000156014 3367_ $$2DRIVER$$aconferenceObject
000156014 3367_ $$2BibTeX$$aINPROCEEDINGS
000156014 500__ $$aOnline publication; open access will be granted on 10 March 2015
000156014 520__ $$aFire simulation is a complex task due to the large number of physical and chemical processes involved. The code of the Fire Dynamics Simulator (FDS) covers many of these using various models and is extensively verified and validated, but it lacks support for modern multicore hardware. This article documents the effort of providing an Open Multi-Processing (OpenMP) parallelised version of FDS, version 6, that also permits hybrid use with the Message Passing Interface (MPI). As FDS does not allow arbitrary domain decompositions to be used with MPI, the amount of computational resources that can be exploited is limited. An OpenMP parallelisation does not have these restrictions, but it cannot use the resources as efficiently as MPI does. Prior to parallelising the code, FDS was profiled using various measurement tools. To allow parallelisation, the radiation solver as well as the top-hat filter for the LES equations were altered. The achieved parallelisation and speedup for various architectures and problem sizes were measured. A speedup of two is now attainable for common simulation cases on modern four-core processors and requires no additional setup by the user. Timings for various combinations of simultaneous usage of OpenMP and MPI are presented. Finally, recommendations for further optimisation efforts are given.
000156014 536__ $$0G:(DE-HGF)POF2-411$$a411 - Computational Science and Mathematical Methods (POF2-411)$$cPOF2-411$$fPOF II$$x0
000156014 7001_ $$0P:(DE-Juel1)132044$$aArnold, Lukas$$b1$$eCorresponding Author$$ufzj
000156014 8564_ $$uhttp://www.thunderheadeng.com/2014/10/femtc2014_d2-a-2_arnold/
000156014 8564_ $$uhttps://juser.fz-juelich.de/record/156014/files/FZJ-2014-04926.pdf$$yRestricted
000156014 909CO $$ooai:juser.fz-juelich.de:156014$$pVDB
000156014 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)132044$$aForschungszentrum Jülich GmbH$$b1$$kFZJ
000156014 9132_ $$0G:(DE-HGF)POF3-511$$1G:(DE-HGF)POF3-510$$2G:(DE-HGF)POF3-500$$aDE-HGF$$bPOF III$$lKey Technologies$$vSupercomputing & Big Data$$x0
000156014 9131_ $$0G:(DE-HGF)POF2-411$$1G:(DE-HGF)POF2-410$$2G:(DE-HGF)POF2-400$$3G:(DE-HGF)POF2$$4G:(DE-HGF)POF$$aDE-HGF$$bKey Technologies$$lSupercomputing$$vComputational Science and Mathematical Methods$$x0
000156014 9141_ $$y2014
000156014 920__ $$lyes
000156014 9201_ $$0I:(DE-Juel1)JSC-20090406$$kJSC$$lJülich Supercomputing Centre$$x0
000156014 980__ $$acontrib
000156014 980__ $$aVDB
000156014 980__ $$aI:(DE-Juel1)JSC-20090406
000156014 980__ $$aUNRESTRICTED