000188179 001__ 188179
000188179 005__ 20210129215142.0
000188179 0247_ $$2doi$$a10.1016/S0167-8191(97)00005-7
000188179 0247_ $$2ISSN$$a0167-8191
000188179 0247_ $$2ISSN$$a1872-7336
000188179 0247_ $$2WOS$$aWOS:A1997XB80600007
000188179 037__ $$aFZJ-2015-01639
000188179 082__ $$a004
000188179 1001_ $$0P:(DE-HGF)0$$aBasermann, A.$$b0$$eCorresponding Author
000188179 245__ $$aPreconditioned CG methods for sparse matrices on massively parallel machines
000188179 260__ $$aAmsterdam [et al.]$$bNorth-Holland, Elsevier Science$$c1997
000188179 3367_ $$0PUB:(DE-HGF)16$$2PUB:(DE-HGF)$$aJournal Article$$bjournal$$mjournal$$s1425018174_13886
000188179 3367_ $$2DataCite$$aOutput Types/Journal article
000188179 3367_ $$00$$2EndNote$$aJournal Article
000188179 3367_ $$2BibTeX$$aARTICLE
000188179 3367_ $$2ORCID$$aJOURNAL_ARTICLE
000188179 3367_ $$2DRIVER$$aarticle
000188179 520__ $$aConjugate gradient (CG) methods for solving sparse systems of linear equations play an important role in numerical methods for discretized partial differential equations. The large size and the conditioning of the linear systems arising from many technical or physical applications in this area call for efficient parallelization and preconditioning techniques for the CG method, in particular on massively parallel machines. Here, the data distribution and the communication scheme for the sparse matrix operations of the preconditioned CG are based on an analysis of the indices of the non-zero elements. Polynomial preconditioning is shown to reduce global synchronizations considerably, and a fully local incomplete Cholesky preconditioner is presented. On a PARAGON XP/S 10 with 138 processors, the developed parallel methods markedly outperform diagonally scaled CG with respect to both scaling behavior and execution time for many matrices from real finite element applications.
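The abstract describes a polynomially preconditioned CG solver. As a point of reference, the following is a minimal serial sketch of such a method, assuming a truncated Neumann-series preconditioner built from the diagonal of A. It is not the authors' parallel implementation (their data distribution, communication scheme, and fully local incomplete Cholesky preconditioner are not reproduced here); the matrix, polynomial degree, and tolerances below are illustrative assumptions only.

```python
# Minimal serial sketch of preconditioned CG with a truncated
# Neumann-series polynomial preconditioner. Illustrative only; this is
# not the parallel method of Basermann, Reichel, and Schelthoff (1997).
import numpy as np
import scipy.sparse as sp

def neumann_preconditioner(A, degree=3):
    """Return a function applying M^{-1} r via a truncated Neumann series.

    With D = diag(A) and N = I - D^{-1} A, the approximation
    M^{-1} r ~= (I + N + ... + N^degree) D^{-1} r
    needs only sparse matrix-vector products, i.e. no triangular solves.
    """
    d_inv = 1.0 / A.diagonal()
    def apply(r):
        z = d_inv * r                       # z_0 = D^{-1} r
        for _ in range(degree):
            # z_{k+1} = (I - D^{-1} A) z_k + D^{-1} r
            z = z - d_inv * (A @ z) + d_inv * r
        return z
    return apply

def pcg(A, b, precond, tol=1e-8, maxit=1000):
    """Standard preconditioned conjugate gradient iteration."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

if __name__ == "__main__":
    # Small SPD test matrix (1D Laplacian) as a stand-in for the
    # finite element matrices considered in the paper.
    n = 200
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)
    x = pcg(A, b, neumann_preconditioner(A, degree=3))
    print("residual norm:", np.linalg.norm(b - A @ x))
```

Because applying such a polynomial preconditioner reduces to repeated sparse matrix-vector products, it avoids the forward/backward substitutions of factorization-based preconditioners, which illustrates why the abstract reports that polynomial preconditioning can reduce global synchronizations on massively parallel machines.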
000188179 536__ $$0G:(DE-HGF)POF2-899$$a899 - ohne Topic (POF2-899)$$cPOF2-899$$fPOF I$$x0
000188179 588__ $$aDataset connected to CrossRef, juser.fz-juelich.de
000188179 7001_ $$0P:(DE-HGF)0$$aReichel, B.$$b1
000188179 7001_ $$0P:(DE-HGF)0$$aSchelthoff, C.$$b2
000188179 773__ $$0PERI:(DE-600)1466340-5$$a10.1016/S0167-8191(97)00005-7$$gVol. 23, no. 3, p. 381 - 398$$n3$$p381 - 398$$tParallel computing$$v23$$x0167-8191$$y1997
000188179 8564_ $$uhttps://juser.fz-juelich.de/record/188179/files/FZJ-2015-01639.pdf$$yRestricted
000188179 909CO $$ooai:juser.fz-juelich.de:188179$$pVDB
000188179 9132_ $$0G:(DE-HGF)POF3-899$$1G:(DE-HGF)POF3-890$$2G:(DE-HGF)POF3-800$$aDE-HGF$$bForschungsbereich Materie$$lForschungsbereich Materie$$vohne Topic$$x0
000188179 9131_ $$0G:(DE-HGF)POF2-899$$1G:(DE-HGF)POF2-890$$2G:(DE-HGF)POF2-800$$3G:(DE-HGF)POF2$$4G:(DE-HGF)POF$$aDE-HGF$$bProgrammungebundene Forschung$$lohne Programm$$vohne Topic$$x0
000188179 915__ $$0StatID:(DE-HGF)0100$$2StatID$$aJCR
000188179 915__ $$0StatID:(DE-HGF)0111$$2StatID$$aWoS$$bScience Citation Index Expanded
000188179 915__ $$0StatID:(DE-HGF)0150$$2StatID$$aDBCoverage$$bWeb of Science Core Collection
000188179 915__ $$0StatID:(DE-HGF)0199$$2StatID$$aDBCoverage$$bThomson Reuters Master Journal List
000188179 915__ $$0StatID:(DE-HGF)0200$$2StatID$$aDBCoverage$$bSCOPUS
000188179 915__ $$0StatID:(DE-HGF)0300$$2StatID$$aDBCoverage$$bMedline
000188179 915__ $$0StatID:(DE-HGF)1160$$2StatID$$aDBCoverage$$bCurrent Contents - Engineering, Computing and Technology
000188179 915__ $$0StatID:(DE-HGF)9900$$2StatID$$aIF < 5
000188179 920__ $$lyes
000188179 9201_ $$0I:(DE-Juel1)JSC-20090406$$kJSC$$lJülich Supercomputing Center$$x0
000188179 980__ $$ajournal
000188179 980__ $$aVDB
000188179 980__ $$aI:(DE-Juel1)JSC-20090406
000188179 980__ $$aUNRESTRICTED