001 | 1019553 | ||
005 | 20240102203541.0 | ||
037 | _ | _ | |a FZJ-2023-05493 |
100 | 1 | _ | |a Krajsek, Kai |0 P:(DE-Juel1)129347 |b 0 |e Corresponding author |u fzj |
111 | 2 | _ | |a ISC High Performance 2023 |c Hamburg |d 2023-05-21 - 2023-05-25 |w Germany |
245 | _ | _ | |a Adjoint MPI for Non-Additive Separable Loss Functions |
260 | _ | _ | |c 2023 |
336 | 7 | _ | |a Conference Paper |0 33 |2 EndNote |
336 | 7 | _ | |a INPROCEEDINGS |2 BibTeX |
336 | 7 | _ | |a conferenceObject |2 DRIVER |
336 | 7 | _ | |a CONFERENCE_POSTER |2 ORCID |
336 | 7 | _ | |a Output Types/Conference Poster |2 DataCite |
336 | 7 | _ | |a Poster |b poster |m poster |0 PUB:(DE-HGF)24 |s 1704204930_26654 |2 PUB:(DE-HGF) |x After Call |
520 | _ | _ | |a Most contemporary frameworks for parallelizing deep learning models follow a fixed design pattern that favours a specific parallelization paradigm. More flexible libraries, such as PyTorch.distributed, provide communication primitives but lack the full capabilities of the MPI standard. PyTorch.distributed currently supports automatic differentiation for point-to-point communication, but not for collective communication. However, flexible and efficient communication patterns are needed for the non-additive separable loss functions encountered in self-supervised contrastive learning. This poster explores the implementation of the adjoint MPI concept in distributed deep learning models. It begins with an introduction to the fundamental principles of adjoint modelling, followed by an adjoint MPI concept for distributed deep learning that enables flexible parallelization in conjunction with existing libraries. The poster also covers implementation details and the usage of the approach in contrastive self-supervised learning. |
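The adjoint MPI concept mentioned in the abstract pairs each forward collective with its transpose in the backward pass; for example, the adjoint of an allgather is a summed reduce-scatter. The following is a minimal, self-contained sketch (simulated ranks in NumPy, not the poster's actual implementation) that checks the defining adjoint identity <Ax, y> = <x, A^T y> for this pair:

```python
import numpy as np

rng = np.random.default_rng(0)
P, n = 4, 3  # simulated number of ranks and local vector length

def allgather(x_local):
    # forward collective: every rank receives the concatenation of all local blocks
    return [np.concatenate(x_local) for _ in range(P)]

def reduce_scatter_sum(g):
    # adjoint collective: sum the incoming gradients, return each rank's own block
    s = np.sum(g, axis=0)
    return [s[r * n:(r + 1) * n] for r in range(P)]

x = [rng.standard_normal(n) for _ in range(P)]      # one local tensor per rank
y = [rng.standard_normal(P * n) for _ in range(P)]  # one cotangent per rank

# adjoint identity: <allgather(x), y> == <x, reduce_scatter_sum(y)>
lhs = sum(np.dot(a, b) for a, b in zip(allgather(x), y))
rhs = sum(np.dot(a, b) for a, b in zip(x, reduce_scatter_sum(y)))
assert np.isclose(lhs, rhs)
```

In a real implementation the simulated functions would be replaced by the corresponding MPI collectives (e.g. `MPI_Allgather` forward, `MPI_Reduce_scatter` backward) wrapped in an autograd primitive such as a `torch.autograd.Function`.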
536 | _ | _ | |a 5111 - Domain-Specific Simulation & Data Life Cycle Labs (SDLs) and Research Groups (POF4-511) |0 G:(DE-HGF)POF4-5111 |c POF4-511 |f POF IV |x 0 |
536 | _ | _ | |a SLNS - SimLab Neuroscience (Helmholtz-SLNS) |0 G:(DE-Juel1)Helmholtz-SLNS |c Helmholtz-SLNS |x 1 |
909 | C | O | |o oai:juser.fz-juelich.de:1019553 |p VDB |
910 | 1 | _ | |a Forschungszentrum Jülich |0 I:(DE-588b)5008462-8 |k FZJ |b 0 |6 P:(DE-Juel1)129347 |
913 | 1 | _ | |a DE-HGF |b Key Technologies |l Engineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action |1 G:(DE-HGF)POF4-510 |0 G:(DE-HGF)POF4-511 |3 G:(DE-HGF)POF4 |2 G:(DE-HGF)POF4-500 |4 G:(DE-HGF)POF |v Enabling Computational- & Data-Intensive Science and Engineering |9 G:(DE-HGF)POF4-5111 |x 0 |
914 | 1 | _ | |y 2023 |
920 | _ | _ | |l yes |
920 | 1 | _ | |0 I:(DE-Juel1)JSC-20090406 |k JSC |l Jülich Supercomputing Center |x 0 |
980 | _ | _ | |a poster |
980 | _ | _ | |a VDB |
980 | _ | _ | |a I:(DE-Juel1)JSC-20090406 |
980 | _ | _ | |a UNRESTRICTED |