TY  - CONF
AU  - Aach, Marcel
AU  - Blanc, Cyril
AU  - Lintermann, Andreas
AU  - De Grave, Kurt
TI  - Optimizing Edge AI Models on HPC Systems with the Edge in the Loop
VL  - 16091
CY  - Cham
PB  - Springer Nature Switzerland
M1  - FZJ-2025-05015
SN  - 978-3-032-07611-3 (print)
T3  - Lecture Notes in Computer Science
SP  - 148
EP  - 161
PY  - 2026
AB  - Artificial Intelligence (AI) and Machine Learning (ML) models deployed on edge devices, e.g., for quality control in Additive Manufacturing (AM), are frequently small in size. Such models usually have to deliver highly accurate results within a short time frame. Methods that are commonly employed in the literature start from larger trained models and reduce their memory and latency footprint via structural pruning, knowledge distillation, or quantization. It is, however, also possible to leverage hardware-aware Neural Architecture Search (NAS), an approach that systematically explores the architecture space to find optimized configurations. In this study, a hardware-aware NAS workflow is introduced that couples an edge device located in Belgium with a powerful High-Performance Computing (HPC) system in Germany, training candidate architectures as fast as possible while performing real-time latency measurements on the target hardware. The approach is verified on a use case in the AM domain, based on the open RAISE-LPBF dataset, achieving ≈ 8.8 times faster inference while simultaneously enhancing model quality by a factor of ≈ 1.35, compared to a human-designed baseline.
T2  - ISC High Performance 2025
Y2  - 10 Jun 2025 - 13 Jun 2025
M2  - Hamburg, Germany
LB  - PUB:(DE-HGF)8 ; PUB:(DE-HGF)7
DO  - 10.1007/978-3-032-07612-0_12
UR  - https://juser.fz-juelich.de/record/1048916
ER  -