Optimizing Edge AI Models on HPC Systems with the Edge in the Loop
Contribution to a conference proceedings / book | FZJ-2025-05015
Springer Nature Switzerland, Cham, 2026
ISBN: 978-3-032-07611-3 (print), 978-3-032-07612-0 (electronic)
Please use a persistent identifier in citations: doi:10.1007/978-3-032-07612-0_12, doi:10.34734/FZJ-2025-05015
Abstract: Artificial Intelligence (AI) and Machine Learning (ML) models deployed on edge devices, e.g., for quality control in Additive Manufacturing (AM), are typically small and must deliver highly accurate results within a short time frame. Methods commonly employed in the literature start from larger trained models and reduce their memory and latency footprint through structural pruning, knowledge distillation, or quantization. Alternatively, hardware-aware Neural Architecture Search (NAS) systematically explores the architecture space to find optimized configurations. In this study, a hardware-aware NAS workflow is introduced that couples an edge device located in Belgium with a powerful High-Performance Computing (HPC) system in Germany, training candidate architectures as fast as possible while performing real-time latency measurements on the target hardware. The approach is verified on a use case in the AM domain, based on the open RAISE-LPBF dataset, achieving ≈8.8 times faster inference while simultaneously improving model quality by a factor of ≈1.35 compared to a human-designed baseline.
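The abstract describes the workflow only at a high level; as a rough illustration of the general idea, the minimal Python sketch below shows the shape of an edge-in-the-loop, hardware-aware NAS objective: candidates are sampled from a toy search space, one placeholder stands in for HPC-side training, another stands in for the real-time latency measurement on the edge device, and both signals are combined into a single score. All function names, the search space, and the trade-off weight alpha are illustrative assumptions, not taken from the paper.

    # Hypothetical sketch only: a random-search stand-in for hardware-aware NAS
    # with an "edge in the loop". Names and numbers are assumptions, not taken
    # from the paper.
    import random

    def sample_architecture(rng: random.Random) -> dict:
        """Draw one candidate from a toy architecture search space."""
        return {"depth": rng.choice([2, 4, 6]),
                "width": rng.choice([32, 64, 128]),
                "kernel": rng.choice([3, 5])}

    def train_and_validate(arch: dict, rng: random.Random) -> float:
        """Placeholder for HPC-side training; returns a fake validation accuracy.

        In the described workflow, this step would run on the HPC system to
        train each candidate as fast as possible.
        """
        return 0.6 + 0.05 * arch["depth"] / 6 + 0.1 * rng.random()

    def measure_edge_latency(arch: dict) -> float:
        """Placeholder for the real-time measurement on the target edge device.

        In the described workflow, the candidate would be benchmarked on the
        actual edge hardware; here a crude size-based proxy (in milliseconds)
        is substituted so the sketch runs standalone.
        """
        return 0.01 * arch["depth"] * arch["width"] * arch["kernel"]

    def score(accuracy: float, latency_ms: float, alpha: float = 0.02) -> float:
        """Scalarized objective: reward accuracy, penalize measured latency."""
        return accuracy - alpha * latency_ms

    def random_search(n: int = 50, seed: int = 0) -> dict:
        """Evaluate n sampled candidates and return the best-scoring one."""
        rng = random.Random(seed)
        best, best_score = None, float("-inf")
        for _ in range(n):
            arch = sample_architecture(rng)
            s = score(train_and_validate(arch, rng), measure_edge_latency(arch))
            if s > best_score:
                best, best_score = arch, s
        return best

    if __name__ == "__main__":
        print("best candidate:", random_search())

Random search is used here only because it is the simplest possible search strategy; the scalarized accuracy-minus-latency score is one common way to make a NAS objective hardware-aware, with alpha controlling the quality/speed trade-off.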