%0 Electronic Article
%A Leroux, Nathan
%A Manea, Paul-Philipp
%A Sudarshan, Chirag
%A Finkbeiner, Jan
%A Siegel, Sebastian
%A Strachan, John Paul
%A Neftci, Emre
%T Analog In-Memory Computing Attention Mechanism for Fast and Energy-Efficient Large Language Models
%I arXiv
%M FZJ-2025-01113
%D 2024
%X Transformer networks, driven by self-attention, are central to Large Language Models. In generative Transformers, self-attention uses cache memory to store token projections, avoiding recomputation at each time step. However, GPU-stored projections must be loaded into SRAM for each new generation step, causing latency and energy bottlenecks. We present a custom self-attention in-memory computing architecture based on emerging charge-based memories called gain cells, which can be efficiently written to store new tokens during sequence generation and enable the parallel analog dot-product computation required for self-attention. The analog gain cell circuits, however, introduce non-idealities and constraints that prevent the direct mapping of pre-trained models. To circumvent this problem, we design an initialization algorithm that achieves text processing performance comparable to GPT-2 without training from scratch. Our architecture reduces attention latency and energy consumption by up to two and five orders of magnitude, respectively, compared to GPUs, marking a significant step toward ultra-fast, low-power generative Transformers.
%K Neural and Evolutionary Computing (cs.NE) (Other)
%K Artificial Intelligence (cs.AI) (Other)
%K Hardware Architecture (cs.AR) (Other)
%K Emerging Technologies (cs.ET) (Other)
%K FOS: Computer and information sciences (Other)
%F PUB:(DE-HGF)25
%9 Preprint
%R 10.48550/arXiv.2409.19315
%U https://juser.fz-juelich.de/record/1038064