TY - EJOUR
AU - Penke, Carolin
AU - John, Chelsea Maria
AU - Ebert, Jan
AU - Kesselheim, Stefan
AU - Herten, Andreas
TI - Training LLMs on HPC Systems: Best Practices from the OpenGPT-X Project
IS - 2504.10013
PB - arXiv
M1 - FZJ-2025-05592
M1 - 2504.10013
PY - 2025
AB - The training of large language models (LLMs) requires substantial computational resources, complex software stacks, and carefully designed workflows to achieve scalability and efficiency. This report presents best practices and insights gained from the OpenGPT-X project, a German initiative focused on developing open, multilingual LLMs optimized for European languages. We detail the use of high-performance computing (HPC) systems, primarily JUWELS Booster at JSC, for training Teuken-7B, a 7-billion-parameter transformer model. The report covers system architecture, training infrastructure, software choices, profiling and benchmarking tools, as well as engineering and operational challenges.
KW - Distributed, Parallel, and Cluster Computing (cs.DC)
KW - FOS: Computer and information sciences
KW - C.4; I.2.11; I.2.7; K.6
LB - PUB:(DE-HGF)25
DO - 10.48550/arXiv.2504.10013
UR - https://juser.fz-juelich.de/record/1049808
ER -