%0 Conference Paper
%A Aach, Marcel
%A Blanc, Cyril
%A Lintermann, Andreas
%A De Grave, Kurt
%T Optimizing Edge AI Models on HPC Systems with the Edge in the Loop
%V 16091
%C Cham
%I Springer Nature Switzerland
%M FZJ-2025-05015
%@ 978-3-032-07611-3 (print)
%B Lecture Notes in Computer Science
%P 148 - 161
%D 2026
%< High Performance Computing
%X Artificial Intelligence (AI) and Machine Learning (ML) models deployed on edge devices, e.g., for quality control in Additive Manufacturing (AM), are frequently small in size. Such models usually have to deliver highly accurate results within a short time frame. Methods that are commonly employed in the literature start out with larger trained models and try to reduce their memory and latency footprint by structural pruning, knowledge distillation, or quantization. It is, however, also possible to leverage hardware-aware Neural Architecture Search (NAS), an approach that seeks to systematically explore the architecture space to find optimized configurations. In this study, a hardware-aware NAS workflow is introduced that couples an edge device located in Belgium with a powerful High-Performance Computing (HPC) system in Germany, to train possible architecture candidates as fast as possible while performing real-time latency measurements on the target hardware. The approach is verified on a use case in the AM domain, based on the open RAISE-LPBF dataset, achieving ≈ 8.8 times faster inference speed while simultaneously enhancing model quality by a factor of ≈ 1.35, compared to a human-designed baseline.
%B ISC High Performance 2025
%C 10 Jun 2025 - 13 Jun 2025, Hamburg (Germany)
%F PUB:(DE-HGF)8 ; PUB:(DE-HGF)7
%9 Contribution to a conference proceedings ; Contribution to a book
%R 10.1007/978-3-032-07612-0_12
%U https://juser.fz-juelich.de/record/1048916