000279895 001__ 279895
000279895 005__ 20250314084113.0
000279895 037__ $$aFZJ-2015-07771
000279895 1001_ $$0P:(DE-HGF)0$$aKitayama, Itaru$$b0
000279895 1112_ $$aInternational Conference on Parallel Computing$$cEdinburgh$$d2015-09-01 - 2015-09-04$$gParCo$$wScotland
000279895 245__ $$aExecution Performance Analysis of the ABySS Genome Sequence Assembler using Scalasca on the K computer
000279895 260__ $$c2015
000279895 3367_ $$0PUB:(DE-HGF)6$$2PUB:(DE-HGF)$$aConference Presentation$$bconf$$mconf$$s1450443653_24783$$xAfter Call
000279895 3367_ $$033$$2EndNote$$aConference Paper
000279895 3367_ $$2DataCite$$aOther
000279895 3367_ $$2ORCID$$aLECTURE_SPEECH
000279895 3367_ $$2DRIVER$$aconferenceObject
000279895 3367_ $$2BibTeX$$aINPROCEEDINGS
000279895 500__ $$aPDF must NOT be open access
000279895 520__ $$aPerformance analysis of the ABySS genome sequence assembler (ABYSS-P) executing on the K computer with up to 8192 compute nodes is described, which identified issues that limited scalability to fewer than 1024 compute nodes and required prohibitive message buffer memory with 16384 or more compute nodes. The open-source Scalasca toolset was employed to analyse executions, revealing the impact of massive amounts of MPI point-to-point communication used particularly for master/worker process coordination, and inefficient parallel file operations that manifest as waiting time at later MPI collective synchronisations and communications. Initial remediation via use of collective communication operations and alternate strategies for parallel file handling shows large performance and scalability improvements, with partial executions validated on the full 82,944 compute nodes of the K computer.
000279895 536__ $$0G:(DE-HGF)POF3-511$$a511 - Computational Science and Mathematical Methods (POF3-511)$$cPOF3-511$$fPOF III$$x0
000279895 536__ $$0G:(DE-Juel-1)ATMLPP$$aATMLPP - ATML Parallel Performance (ATMLPP)$$cATMLPP$$x1
000279895 7001_ $$0P:(DE-Juel1)132302$$aWylie, Brian J. N.$$b1$$eCorresponding author$$ufzj
000279895 7001_ $$0P:(DE-HGF)0$$aMaeda, Toshiyuki$$b2
000279895 8564_ $$uhttps://juser.fz-juelich.de/record/279895/files/ParCo2015_wylie.pdf$$yRestricted
000279895 8564_ $$uhttps://juser.fz-juelich.de/record/279895/files/ParCo2015_wylie.gif?subformat=icon$$xicon$$yRestricted
000279895 8564_ $$uhttps://juser.fz-juelich.de/record/279895/files/ParCo2015_wylie.jpg?subformat=icon-1440$$xicon-1440$$yRestricted
000279895 8564_ $$uhttps://juser.fz-juelich.de/record/279895/files/ParCo2015_wylie.jpg?subformat=icon-180$$xicon-180$$yRestricted
000279895 8564_ $$uhttps://juser.fz-juelich.de/record/279895/files/ParCo2015_wylie.jpg?subformat=icon-640$$xicon-640$$yRestricted
000279895 8564_ $$uhttps://juser.fz-juelich.de/record/279895/files/ParCo2015_wylie.pdf?subformat=pdfa$$xpdfa$$yRestricted
000279895 909CO $$ooai:juser.fz-juelich.de:279895$$pVDB
000279895 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)132302$$aForschungszentrum Jülich GmbH$$b1$$kFZJ
000279895 9131_ $$0G:(DE-HGF)POF3-511$$1G:(DE-HGF)POF3-510$$2G:(DE-HGF)POF3-500$$3G:(DE-HGF)POF3$$4G:(DE-HGF)POF$$aDE-HGF$$bKey Technologies$$lSupercomputing & Big Data$$vComputational Science and Mathematical Methods$$x0
000279895 9141_ $$y2015
000279895 920__ $$lyes
000279895 9201_ $$0I:(DE-Juel1)JSC-20090406$$kJSC$$lJülich Supercomputing Centre$$x0
000279895 980__ $$aconf
000279895 980__ $$aVDB
000279895 980__ $$aI:(DE-Juel1)JSC-20090406
000279895 980__ $$aUNRESTRICTED