CARAML: Systematic Evaluation of AI Workloads on Accelerators
Poster (Other) | FZJ-2024-06308 | 2024
Please use a persistent id in citations: doi:10.34734/FZJ-2024-06308
Abstract: The rapid advancement of machine learning (ML) technologies has driven the development of specialized hardware accelerators designed to facilitate more efficient model training. This paper introduces the CARAML benchmark suite, which is employed to assess performance and energy consumption during the training of transformer-based large language models and computer vision models on a range of hardware accelerators, including systems from NVIDIA, AMD, and Graphcore. CARAML provides a compact, automated, extensible, and reproducible framework for assessing the performance and energy of ML workloads across various novel hardware architectures. The design and implementation of CARAML, along with a custom power measurement tool called jpwr, are discussed in detail.
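The record does not include jpwr's interface, so the following is a minimal sketch, assuming an NVIDIA device and the pynvml NVML bindings, of the kind of measurement such a tool performs: sampling device power periodically and integrating it into an energy estimate over a bounded training region. The class name `GpuEnergyMeter`, the context-manager design, and the sampling interval are illustrative assumptions, not jpwr's actual API.

```python
import time
import threading
import pynvml


class GpuEnergyMeter:
    """Estimate GPU energy by sampling NVML power readings.

    Illustrative sketch only: jpwr's real interface may differ. Power is
    sampled on a background thread and integrated with the trapezoid rule.
    """

    def __init__(self, device_index=0, interval_s=0.1):
        self.device_index = device_index
        self.interval_s = interval_s
        self.energy_j = 0.0  # accumulated energy in joules
        self._stop = threading.Event()
        self._thread = None

    def _sample_loop(self, handle):
        last_t = time.monotonic()
        # nvmlDeviceGetPowerUsage reports milliwatts; convert to watts.
        last_p = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0
        while not self._stop.wait(self.interval_s):
            now = time.monotonic()
            p = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0
            # Trapezoidal integration of power over the elapsed interval.
            self.energy_j += 0.5 * (p + last_p) * (now - last_t)
            last_t, last_p = now, p

    def __enter__(self):
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(self.device_index)
        self._thread = threading.Thread(
            target=self._sample_loop, args=(handle,), daemon=True
        )
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()
        pynvml.nvmlShutdown()


# Usage: scope the measurement around a training region.
with GpuEnergyMeter(device_index=0, interval_s=0.1) as meter:
    time.sleep(2.0)  # stand-in for a training workload
print(f"Estimated GPU energy: {meter.energy_j:.1f} J")
```

A context manager keeps the measured region explicit, which matches how energy is typically attributed to a specific training phase; covering the AMD and Graphcore systems mentioned in the abstract would require vendor-specific power counters in place of NVML.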