Preprint FZJ-2025-05592

Training LLMs on HPC Systems: Best Practices from the OpenGPT-X Project


2025
arXiv

arXiv: 2504.10013 [doi:10.48550/ARXIV.2504.10013]


Please use a persistent id in citations: doi:10.48550/ARXIV.2504.10013

Report No.: 2504.10013

Abstract: The training of large language models (LLMs) requires substantial computational resources, complex software stacks, and carefully designed workflows to achieve scalability and efficiency. This report presents best practices and insights gained from the OpenGPT-X project, a German initiative focused on developing open, multilingual LLMs optimized for European languages. We detail the use of high-performance computing (HPC) systems, primarily JUWELS Booster at JSC, for training Teuken-7B, a 7-billion-parameter transformer model. The report covers system architecture, training infrastructure, software choices, profiling and benchmarking tools, as well as engineering and operational challenges.
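As a rough illustration of the kind of distributed-training setup the abstract refers to, the sketch below shows how a PyTorch process group is commonly initialized from Slurm environment variables on a multi-node, multi-GPU system such as JUWELS Booster. This is a generic Slurm/PyTorch pattern, not code from the report or the OpenGPT-X project; the launcher configuration, the NCCL backend choice, and the `init_distributed` helper are assumptions for illustration only.

```python
# Minimal sketch (not the OpenGPT-X training code): initializing PyTorch
# distributed training from Slurm-provided environment variables, as is
# common on GPU-based HPC systems.
import os

import torch
import torch.distributed as dist


def init_distributed():
    # Slurm exposes the global rank, world size, and per-node local rank of
    # each task through these standard environment variables.
    rank = int(os.environ["SLURM_PROCID"])
    world_size = int(os.environ["SLURM_NTASKS"])
    local_rank = int(os.environ["SLURM_LOCALID"])

    # MASTER_ADDR/MASTER_PORT must point at one node of the allocation and
    # are usually exported in the Slurm job script (placeholder values here).
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")

    # NCCL is the usual backend for multi-GPU, multi-node training.
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(local_rank)
    return rank, world_size, local_rank
```

In practice, a job script started with `srun` would launch one such process per GPU; frameworks like Megatron-LM wrap this bookkeeping, but the underlying rank/world-size initialization follows the same pattern.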

Keyword(s): Distributed, Parallel, and Cluster Computing (cs.DC) ; FOS: Computer and information sciences ; C.4; I.2.11; I.2.7; K.6


Research Program(s):
  1. 5122 - Future Computing & Big Data Systems (POF4-512)
  2. 5112 - Cross-Domain Algorithms, Tools, Methods Labs (ATMLs) and Research Groups (POF4-511)
  3. ATML-X-DEV - ATML Accelerating Devices (ATML-X-DEV)
  4. OpenGPT-X - Establishing a Gaia-X node for large AI language models and innovative language application services; subproject: optimization and scaling on large HPC systems (68GX21007F)


The record appears in these collections:
External Publications > Vita Publications
Institute Collections > JSC

 Record created 2025-12-17, last modified 2026-01-04

