TY  - CONF
AU  - John, Chelsea Maria
AU  - Nassyr, Stepan
AU  - Herten, Andreas
AU  - Penke, Carolin
TI  - CARAML: Systematic Evaluation of AI Workloads on Accelerators
M1  - FZJ-2024-06308
PY  - 2024
AB  - The rapid advancement of machine learning (ML) technologies has driven the development of specialized hardware accelerators designed to facilitate more efficient model training. This paper introduces the CARAML benchmark suite, which is employed to assess performance and energy consumption during the training of transformer-based large language models and computer vision models on a range of hardware accelerators, including systems from NVIDIA, AMD, and Graphcore. CARAML provides a compact, automated, extensible, and reproducible framework for assessing the performance and energy of ML workloads across various novel hardware architectures. The design and implementation of CARAML, along with a custom power measurement tool called jpwr, are discussed in detail.
T2  - OpenGPT-X Forum
CY  - Berlin (Germany)
Y2  - 5 Nov 2024 - 5 Nov 2024
M2  - Berlin, Germany
LB  - PUB:(DE-HGF)24
DO  - 10.34734/FZJ-2024-06308
UR  - https://juser.fz-juelich.de/record/1032519
ER  -