000860302 001__ 860302
000860302 005__ 20200914094354.0
000860302 0247_ $$2doi$$a10.1016/0010-4655(96)00089-6
000860302 0247_ $$2ISSN$$a0010-4655
000860302 0247_ $$2ISSN$$a1386-9485
000860302 0247_ $$2ISSN$$a1879-2944
000860302 037__ $$aFZJ-2019-01077
000860302 082__ $$a530
000860302 1001_ $$0P:(DE-HGF)0$$aFischer, S.$$b0
000860302 245__ $$aA parallel SSOR preconditioner for lattice QCD
000860302 260__ $$aAmsterdam$$bNorth-Holland Publ. Co.$$c1996
000860302 3367_ $$2DRIVER$$aarticle
000860302 3367_ $$2DataCite$$aOutput Types/Journal article
000860302 3367_ $$0PUB:(DE-HGF)16$$2PUB:(DE-HGF)$$aJournal Article$$bjournal$$mjournal$$s1600069395_27503
000860302 3367_ $$2BibTeX$$aARTICLE
000860302 3367_ $$2ORCID$$aJOURNAL_ARTICLE
000860302 3367_ $$00$$2EndNote$$aJournal Article
000860302 520__ $$aWe present a parallelizable SSOR preconditioning scheme for Krylov subspace iterative solvers that proves efficient in lattice QCD applications involving Wilson fermions. Our preconditioner is based on a locally lexicographic ordering of the lattice points. In actual Hybrid Monte Carlo applications with the biconjugate gradient stabilized method (BiCGStab), we achieve a gain of about a factor of 2 in the number of iterations compared to conventional odd-even preconditioning. Whether this translates into a similar reduction in run time depends on the parallel computer in use. We discuss implementation issues concerning the ‘Eisenstat trick’ and machine-specific advantages of the method on the APE100/Quadrics parallel computer. In a full QCD simulation on a 512-processor Quadrics QH4 we find a gain in CPU time of a factor of 1.7 over odd-even preconditioning for a 24^3 × 40 lattice.
000860302 588__ $$aDataset connected to CrossRef
000860302 7001_ $$0P:(DE-HGF)0$$aFrommer, A.$$b1
000860302 7001_ $$0P:(DE-HGF)0$$aGlässner, U.$$b2
000860302 7001_ $$0P:(DE-Juel1)132179$$aLippert, Th.$$b3$$ufzj
000860302 7001_ $$0P:(DE-HGF)0$$aRitzenhöfer, G.$$b4
000860302 7001_ $$0P:(DE-HGF)0$$aSchilling, K.$$b5
000860302 773__ $$0PERI:(DE-600)1466511-6$$a10.1016/0010-4655(96)00089-6$$gVol. 98, no. 1-2, p. 20-34$$n1-2$$p20-34$$tComputer Physics Communications$$v98$$x0010-4655$$y1996
000860302 909CO $$ooai:juser.fz-juelich.de:860302$$pextern4vita
000860302 9101_ $$0I:(DE-HGF)0$$6P:(DE-HGF)0$$aExternal Institute$$b0$$kExtern
000860302 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)132179$$aForschungszentrum Jülich$$b3$$kFZJ
000860302 9101_ $$0I:(DE-HGF)0$$6P:(DE-Juel1)176696$$aExternal Institute$$b5$$kExtern
000860302 915__ $$0StatID:(DE-HGF)0420$$2StatID$$aNationallizenz
000860302 915__ $$0StatID:(DE-HGF)0100$$2StatID$$aJCR$$bCOMPUT PHYS COMMUN : 2017
000860302 915__ $$0StatID:(DE-HGF)0200$$2StatID$$aDBCoverage$$bSCOPUS
000860302 915__ $$0StatID:(DE-HGF)0300$$2StatID$$aDBCoverage$$bMedline
000860302 915__ $$0StatID:(DE-HGF)0310$$2StatID$$aDBCoverage$$bNCBI Molecular Biology Database
000860302 915__ $$0StatID:(DE-HGF)0600$$2StatID$$aDBCoverage$$bEbsco Academic Search
000860302 915__ $$0StatID:(DE-HGF)0030$$2StatID$$aPeer Review$$bASC
000860302 915__ $$0StatID:(DE-HGF)0199$$2StatID$$aDBCoverage$$bClarivate Analytics Master Journal List
000860302 915__ $$0StatID:(DE-HGF)0110$$2StatID$$aWoS$$bScience Citation Index
000860302 915__ $$0StatID:(DE-HGF)0150$$2StatID$$aDBCoverage$$bWeb of Science Core Collection
000860302 915__ $$0StatID:(DE-HGF)0111$$2StatID$$aWoS$$bScience Citation Index Expanded
000860302 915__ $$0StatID:(DE-HGF)1150$$2StatID$$aDBCoverage$$bCurrent Contents - Physical, Chemical and Earth Sciences
000860302 915__ $$0StatID:(DE-HGF)9900$$2StatID$$aIF < 5
000860302 9801_ $$aEXTERN4VITA
000860302 980__ $$ajournal
000860302 980__ $$aEDITORS
000860302 980__ $$aI:(DE-Juel1)JSC-20090406
000860302 980__ $$aI:(DE-Juel1)NIC-20090406