QS4D: Quantization-aware training for efficient hardware deployment of structured state-space sequential models
Preprint FZJ-2026-00222
2025
arXiv
Please use a persistent id in citations: doi:10.48550/ARXIV.2507.06079 or doi:10.34734/FZJ-2026-00222
Abstract: Structured state-space models (SSMs) have recently emerged as a new class of deep learning models, particularly well-suited for processing long sequences. Their constant memory footprint, in contrast to the linearly scaling memory demands of Transformers, makes them attractive candidates for deployment on resource-constrained edge-computing devices. While recent works have explored the effect of quantization-aware training (QAT) on SSMs, they typically do not address its implications for specialized edge hardware, for example, analog in-memory computing (AIMC) chips. In this work, we demonstrate that QAT can significantly reduce the complexity of SSMs by up to two orders of magnitude across various performance metrics. We analyze the relationship between model size and numerical precision, and show that QAT enhances robustness to analog noise and enables structural pruning. Finally, we integrate these techniques to deploy SSMs on a memristive analog in-memory computing substrate and highlight the resulting benefits in terms of computational efficiency.
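For readers unfamiliar with the mechanism behind QAT: it is commonly realized by applying "fake quantization" to weights in the forward pass while letting gradients bypass the non-differentiable rounding via a straight-through estimator (STE). The following is a minimal, generic sketch of that idea in PyTorch; it is not the authors' implementation, and the FakeQuantSTE and QuantLinear names, the chosen bit widths, and the toy training step are illustrative assumptions only.

import torch
import torch.nn as nn


class FakeQuantSTE(torch.autograd.Function):
    """Round weights to a symmetric uniform grid; pass gradients through (STE)."""

    @staticmethod
    def forward(ctx, w, num_bits=8):
        qmax = 2 ** (num_bits - 1) - 1           # e.g. 127 for 8 bits
        scale = w.abs().max().clamp(min=1e-8) / qmax
        return torch.round(w / scale).clamp(-qmax, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: gradient w.r.t. w is passed unchanged;
        # num_bits is not a tensor, so its gradient slot is None.
        return grad_output, None


class QuantLinear(nn.Linear):
    """Linear layer whose weights are fake-quantized during the forward pass."""

    def __init__(self, in_features, out_features, num_bits=8):
        super().__init__(in_features, out_features)
        self.num_bits = num_bits

    def forward(self, x):
        w_q = FakeQuantSTE.apply(self.weight, self.num_bits)
        return nn.functional.linear(x, w_q, self.bias)


# Toy usage: one training step of a 4-bit quantized projection on random data.
model = QuantLinear(16, 4, num_bits=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 16), torch.randn(32, 4)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()                                   # gradients flow via the STE
opt.step()

Because the model sees quantization error throughout training, the learned weights remain accurate at low precision at inference time, which is the property the abstract leverages for noise robustness and pruning on analog in-memory hardware.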
Keyword(s): Machine Learning (cs.LG) ; Artificial Intelligence (cs.AI) ; FOS: Computer and information sciences