Poster (After Call) FZJ-2026-01868

Embedded Artificial Neural Networks for Energy-Restricted Edge-Computing Applications


2026

deRSE26 - 6th Conference for Research Software Engineering & 1st Stuttgart Research Software Day (deRSE26 & SRSD1), Stuttgart, Germany, 3 Mar 2026 - 5 Mar 2026 [10.5281/ZENODO.18770020]


Please use a persistent id in citations: doi:10.5281/ZENODO.18770020

Abstract: The development of energy-efficient and fast machine learning methods plays an increasingly important role in experimental physics, where data analysis and control tasks often need to operate under strict resource constraints. In these contexts, machine learning models can automate complex calibration and analysis tasks while enabling on-device data processing close to the experimental sensors.

One representative application presented on this poster concerns the automated calibration of semiconductor spin qubits, while the outlook highlights extensions toward edge-computing approaches in detector systems.

The automated calibration of quantum dots is a key prerequisite for realizing scalable quantum computers. In particular, the analysis of charge stability diagrams, used to detect charge transitions in quantum dots, represents a complex and time-consuming task. Neural networks, especially U-Net architectures, offer the potential to automate this process by reliably recognizing relevant patterns in simulated and experimental measurement data. State-of-the-art networks have already been successfully trained for this purpose. However, there remains significant potential for optimization to enable space- and energy-efficient integration close to the quantum bits within the cryostat.

We have investigated the use of quantized neural networks for energy-efficient quantum dot calibration. The goal is to analyze the impact of post-training quantization and quantization-aware training on detection quality, as well as the general effects of quantization on memory requirements and inference speed. Three U-Nets with different architectures, parameter counts, and input dimensions serve as model bases, applied to simulated charge stability diagrams.
The results show that appropriate quantization strategies can reduce memory usage without significantly affecting detection quality. The findings of this work contribute to the integration of energy-efficient machine learning methods into experimental quantum computing environments, thereby supporting overall scalability.

Building on these results, the approach is extended toward the use of binarized neural networks (BNNs) to push energy efficiency and inference speed even further. Within edge-computing applications, current efforts focus on implementing and demonstrating such networks on FPGA hardware, aiming to exploit binary-weight computation and hardware-level parallelism for minimal latency and power consumption. Beyond quantum dot calibration, these methods are also being investigated for other scientific applications, such as the autonomous self-triggering radio detection of extensive air showers, highlighting the broader potential of hardware-embedded AI for resource-constrained experimental environments.
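The memory argument behind post-training quantization can be illustrated with a minimal sketch: storing weights as int8 with a single float scale factor cuts storage to roughly a quarter of float32, while the round-trip error stays bounded by one quantization step. The function names, tensor shape, and symmetric per-tensor scheme below are illustrative assumptions, not the specific method used on the poster.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization of float32 weights to int8."""
    scale = np.abs(w).max() / 127.0          # map the largest magnitude onto 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float32 weights from int8 values and the scale."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)  # stand-in for one conv kernel
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

ratio = w.nbytes // q.nbytes                 # 4x smaller weight storage
max_err = float(np.abs(w - w_hat).max())     # at most half a quantization step
```

Quantization-aware training additionally simulates this rounding during the forward pass so the network can adapt to it; the sketch above covers only the post-training case.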

Keyword(s): Quantum Dot Calibration ; Energy-Efficient Machine Learning ; Quantized Neural Networks ; U-Net ; Edge Computing ; FPGA ; Binary Neural Networks ; Model Compression ; Real-Time Inference


Contributing Institute(s):
  1. Integrated Computing Architectures (PGI-4)
Research Program(s):
  1. 5234 - Emerging NC Architectures (POF4-523)

Appears in the scientific report 2026
Database coverage:
OpenAccess

The record appears in these collections:
Document types > Presentations > Poster
Institute Collections > PGI > PGI-4
Workflow collections > Public records
Publications database
Open Access

 Record created 2026-02-26, last modified 2026-02-27

