001     1055111
005     20260227202313.0
024 7 _ |a 10.5281/ZENODO.18770020
|2 doi
024 7 _ |a 10.34734/FZJ-2026-01868
|2 datacite_doi
037 _ _ |a FZJ-2026-01868
041 _ _ |a English
100 1 _ |a Aksoy, Alperen
|0 P:(DE-Juel1)194719
|b 0
|e Corresponding author
|u fzj
111 2 _ |a deRSE26 - 6th conference for Research Software Engineering & 1st Stuttgart Research Software Day
|g deRSE26 & SRSD1
|c Stuttgart
|d 2026-03-03 - 2026-03-05
|w Germany
245 _ _ |a Embedded Artificial Neural Networks for Energy-Restricted Edge-Computing Applications
260 _ _ |c 2026
336 7 _ |a Conference Paper
|0 33
|2 EndNote
336 7 _ |a INPROCEEDINGS
|2 BibTeX
336 7 _ |a conferenceObject
|2 DRIVER
336 7 _ |a CONFERENCE_POSTER
|2 ORCID
336 7 _ |a Output Types/Conference Poster
|2 DataCite
336 7 _ |a Poster
|b poster
|m poster
|0 PUB:(DE-HGF)24
|s 1772112443_17387
|2 PUB:(DE-HGF)
|x After Call
520 _ _ |a The development of energy-efficient and fast machine learning methods plays an increasingly important role in experimental physics, where data analysis and control tasks often need to operate under strict resource constraints. In these contexts, machine learning models can automate complex calibration and analysis tasks while enabling on-device data processing close to the experimental sensors. One representative application presented on this poster concerns the automated calibration of semiconductor spin qubits, while the outlook highlights extensions toward edge-computing approaches in detector systems. The automated calibration of quantum dots is a key prerequisite for realizing scalable quantum computers. In particular, the analysis of charge stability diagrams, used to detect charge transitions in quantum dots, is a complex and time-consuming task. Neural networks, especially U-Net architectures, offer the potential to automate this process by reliably recognizing relevant patterns in simulated and experimental measurement data. State-of-the-art networks have already been successfully trained for this purpose. However, there remains significant potential for optimization to enable space- and energy-efficient integration close to the quantum bits within the cryostat. We have investigated the use of quantized neural networks for energy-efficient quantum dot calibration. The goal is to analyze the impact of post-training quantization and quantization-aware training on detection quality, as well as the general effects of quantization on memory requirements and inference speed. Three U-Nets with different architectures, parameter counts, and input dimensions serve as model bases, applied to simulated charge stability diagrams. The results show that appropriate quantization strategies can reduce memory usage without significantly affecting detection quality. The findings of this work contribute to the integration of energy-efficient machine learning methods into experimental quantum computing environments, thereby supporting overall scalability. Building on these results, the approach is extended toward binarized neural networks (BNNs) to push energy efficiency and inference speed even further. Within edge-computing applications, current efforts focus on implementing and demonstrating such networks on FPGA hardware, aiming to exploit binary-weight computation and hardware-level parallelism for minimal latency and power consumption. Beyond quantum dot calibration, these methods are also being investigated for other scientific applications, such as the autonomous self-triggering radio detection of extensive air showers, highlighting the broader potential of hardware-embedded AI for resource-constrained experimental environments.
536 _ _ |a 5234 - Emerging NC Architectures (POF4-523)
|0 G:(DE-HGF)POF4-5234
|c POF4-523
|f POF IV
|x 0
588 _ _ |a Dataset connected to DataCite
650 _ 7 |a Quantum Dot Calibration
|2 Other
650 _ 7 |a Energy-Efficient Machine Learning
|2 Other
650 _ 7 |a Quantized Neural Networks
|2 Other
650 _ 7 |a U-Net
|2 Other
650 _ 7 |a Edge Computing
|2 Other
650 _ 7 |a FPGA
|2 Other
650 _ 7 |a Binary Neural Networks
|2 Other
650 _ 7 |a Model Compression
|2 Other
650 _ 7 |a Real-Time Inference
|2 Other
700 1 _ |a Fleitmann, Sarah
|0 P:(DE-Juel1)173094
|b 1
|u fzj
700 1 _ |a Bekman, Ilja
|0 P:(DE-Juel1)171927
|b 2
|u fzj
700 1 _ |a Dorosti, Qader
|0 P:(DE-HGF)0
|b 3
700 1 _ |a Vogelbruch, Jan-Friedrich
|0 P:(DE-Juel1)133952
|b 4
|u fzj
700 1 _ |a Dimitrov, Vesselin
|0 P:(DE-HGF)0
|b 5
700 1 _ |a Hader, Fabian
|0 P:(DE-Juel1)170099
|b 6
|u fzj
700 1 _ |a van Waasen, Stefan
|0 P:(DE-Juel1)142562
|b 7
|u fzj
773 _ _ |a 10.5281/ZENODO.18770020
856 4 _ |u https://juser.fz-juelich.de/record/1055111/files/deRSE26-Poster.pdf
|y OpenAccess
909 C O |o oai:juser.fz-juelich.de:1055111
|p openaire
|p open_access
|p VDB
|p driver
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 0
|6 P:(DE-Juel1)194719
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 1
|6 P:(DE-Juel1)173094
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 2
|6 P:(DE-Juel1)171927
910 1 _ |a University of Siegen
|0 I:(DE-HGF)0
|b 3
|6 P:(DE-HGF)0
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 4
|6 P:(DE-Juel1)133952
910 1 _ |a University of Siegen
|0 I:(DE-HGF)0
|b 5
|6 P:(DE-HGF)0
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 6
|6 P:(DE-Juel1)170099
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 7
|6 P:(DE-Juel1)142562
913 1 _ |a DE-HGF
|b Key Technologies
|l Natural, Artificial and Cognitive Information Processing
|1 G:(DE-HGF)POF4-520
|0 G:(DE-HGF)POF4-523
|3 G:(DE-HGF)POF4
|2 G:(DE-HGF)POF4-500
|4 G:(DE-HGF)POF
|v Neuromorphic Computing and Network Dynamics
|9 G:(DE-HGF)POF4-5234
|x 0
914 1 _ |y 2026
915 _ _ |a OpenAccess
|0 StatID:(DE-HGF)0510
|2 StatID
920 1 _ |0 I:(DE-Juel1)PGI-4-20110106
|k PGI-4
|l Integrated Computing Architectures
|x 0
980 _ _ |a poster
980 _ _ |a VDB
980 _ _ |a UNRESTRICTED
980 _ _ |a I:(DE-Juel1)PGI-4-20110106
980 1 _ |a FullTexts

