Conference Presentation (After Call) FZJ-2016-04275

The ChASE library on distributed and heterogeneous platforms



2016

Parallel Matrix Algorithms and Applications (PMAA 16), Bordeaux, France, 6 Jul 2016 - 8 Jul 2016

Abstract: We propose to step away from the black-box approach and allow the eigensolver to accept as much information as is available from the application. Such a strategy implies that the resulting library is tailored to a specific application, or class of applications, and loses generality of usage. On the other hand, the resulting eigensolver maximally exploits knowledge from the application and becomes very efficient. With this general strategy in mind, we present a version of the Chebyshev Accelerated Subspace iteration Eigensolver (ChASE) which targets extremal eigenpairs of dense eigenproblems. In particular, ChASE focuses on a class of applications that require solving sequences of eigenvalue problems in which adjacent problems possess a certain degree of correlation. A typical example is Density Functional Theory, where the solution of a non-linear partial differential equation is worked out by generating and solving dozens of algebraic eigenvalue problems in a self-consistent fashion over dozens of iterations. Similarly, any non-linear eigenvalue problem that can be solved by the method of successive linearization gives rise to sequences of correlated algebraic eigenproblems, which are the target of ChASE. We re-design the eigensolver so as to minimize its complexity and gain better control of its numerical features. Following the algorithmic optimizations, we adopt a strategy leading to an implementation that lends itself to high-performance parallel computing while avoiding issues related to portability to heterogeneous architectures. We achieve this goal by implementing parallel kernels for the modular tasks of the eigensolver using programming models from MPI, OpenMP, and CUDA.
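
The core idea behind the abstract is Chebyshev-filtered subspace iteration: a degree-m Chebyshev polynomial of the matrix stays bounded on an "unwanted" spectral interval while growing rapidly outside it, so repeatedly filtering a block of vectors and applying a Rayleigh-Ritz projection converges to the extremal eigenpairs. The sketch below is a minimal, single-node NumPy illustration of that idea, not ChASE's actual MPI/OpenMP/CUDA implementation; the function names, the fixed filter degree, and the externally supplied spectral bounds a and b (which a production solver would estimate, e.g. with a few Lanczos steps) are assumptions made for the example.

```python
import numpy as np


def chebyshev_filter(H, V, degree, a, b):
    """Apply a Chebyshev polynomial of H to the block V.

    The polynomial is bounded on the unwanted interval [a, b] and grows
    rapidly below it, so components of V along the lowest eigenvectors
    of the symmetric matrix H are amplified relative to the rest.
    """
    e = (b - a) / 2.0              # half-width of the unwanted interval
    c = (b + a) / 2.0              # centre of the unwanted interval
    Y_prev = V
    Y = (H @ V - c * V) / e        # degree-1 term: T_1((H - cI)/e) V
    for _ in range(2, degree + 1):
        # Three-term Chebyshev recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)
        Y_next = (2.0 / e) * (H @ Y - c * Y) - Y_prev
        Y_prev, Y = Y, Y_next
    return Y


def chebyshev_subspace_iteration(H, nev, a, b, degree=20, tol=1e-10,
                                 max_iter=50, V0=None):
    """Lowest `nev` eigenpairs of a real symmetric matrix H by
    Chebyshev-filtered subspace iteration (filter + QR + Rayleigh-Ritz)."""
    n = H.shape[0]
    rng = np.random.default_rng(0)
    V = V0 if V0 is not None else rng.standard_normal((n, nev))
    lam = np.zeros(nev)
    for _ in range(max_iter):
        V = chebyshev_filter(H, V, degree, a, b)
        Q, _ = np.linalg.qr(V)                    # re-orthonormalise the block
        lam, W = np.linalg.eigh(Q.T @ H @ Q)      # Rayleigh-Ritz projection
        V = Q @ W
        residuals = np.linalg.norm(H @ V - V * lam, axis=0)
        if np.all(residuals < tol):
            break
    return lam, V


if __name__ == "__main__":
    # Self-test on a matrix with a known spectrum and a clear gap after the
    # five lowest eigenvalues, rotated by a random orthogonal matrix.
    rng = np.random.default_rng(1)
    vals = np.concatenate([np.arange(5.0), np.linspace(10.0, 50.0, 195)])
    Q, _ = np.linalg.qr(rng.standard_normal((200, 200)))
    H = Q @ np.diag(vals) @ Q.T
    # The demo cheats by taking exact spectral bounds from `vals`; in practice
    # they would be estimated cheaply.
    lam, V = chebyshev_subspace_iteration(H, nev=5, a=vals[5], b=vals[-1])
    print(np.allclose(lam, vals[:5]))
```

The optional V0 argument mirrors the correlation argument in the abstract: when solving a sequence of related eigenproblems, as in a DFT self-consistency cycle, the converged subspace of one problem can seed the next solve, typically reducing the number of filter iterations needed.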


Contributing Institute(s):
  1. Jülich Supercomputing Center (JSC)

Appears in the scientific report 2016

The record appears in these collections:
Document types > Presentations > Conference Presentations
Workflow collections > Public records
Institute Collections > JSC
Publications database

 Record created 2016-08-10, last modified 2022-11-09


