001     1038046
005     20250203103306.0
024 7 _ |a 10.34734/FZJ-2025-01095
|2 datacite_doi
037 _ _ |a FZJ-2025-01095
100 1 _ |a Leroux, Nathan
|0 P:(DE-Juel1)194421
|b 0
|u fzj
245 _ _ |a Analog In-Memory Computing Attention Mechanism for Fast and Energy-Efficient Large Language Models
260 _ _ |c 2024
336 7 _ |a Preprint
|b preprint
|m preprint
|0 PUB:(DE-HGF)25
|s 1738246081_31383
|2 PUB:(DE-HGF)
336 7 _ |a WORKING_PAPER
|2 ORCID
336 7 _ |a Electronic Article
|0 28
|2 EndNote
336 7 _ |a preprint
|2 DRIVER
336 7 _ |a ARTICLE
|2 BibTeX
336 7 _ |a Output Types/Working Paper
|2 DataCite
520 _ _ |a Transformer neural networks, driven by self-attention mechanisms, are core components of foundation models and Large Language Models. In generative transformers, self-attention uses cache memory to store token projections, avoiding recomputation at each time step. However, GPU-stored projections must be loaded into SRAM for each new generation step, causing latency and energy bottlenecks for long sequences. In this work, we propose a fast and energy-efficient hardware implementation of self-attention using analog in-memory computing based on gain cell memories. Volatile gain cell memories can be efficiently written to store new tokens during sequence generation, while performing analog signed-weight multiplications to compute the dot products required for self-attention. We implement Sliding Window Attention, which retains only a fixed number of the most recent tokens. A charge-to-pulse converter for array readout eliminates the need for analog-to-digital conversion between self-attention stages. Using a co-designed initialization algorithm that adapts pre-trained weights to gain cell non-idealities, we achieve NLP performance comparable to GPT-2 with minimal training iterations, despite the hardware constraints. Our end-to-end hardware design includes all digital controls, and we estimate its area, latency, and energy. The system reduces attention latency by up to two orders of magnitude and energy consumption by up to five orders of magnitude compared to GPUs, marking a significant step toward ultra-fast, low-power sequence generation in Large Language Models.
536 _ _ |a 5234 - Emerging NC Architectures (POF4-523)
|0 G:(DE-HGF)POF4-5234
|c POF4-523
|f POF IV
|x 0
536 _ _ |a BMBF 16ME0404 - Joint project: Neuro-inspired artificial intelligence technologies for the electronics of the future - NEUROTEC II - (BMBF-16ME0404)
|0 G:(DE-82)BMBF-16ME0404
|c BMBF-16ME0404
|x 1
536 _ _ |a BMBF 16ME0400 - Joint project: Neuro-inspired artificial intelligence technologies for the electronics of the future - NEUROTEC II - (16ME0400)
|0 G:(BMBF)16ME0400
|c 16ME0400
|x 2
536 _ _ |a BMBF 03ZU1106CA - NeuroSys: Algorithm-Hardware Co-Design (Project C) - A (03ZU1106CA)
|0 G:(BMBF)03ZU1106CA
|c 03ZU1106CA
|x 3
536 _ _ |a BMBF 03ZU1106CB - NeuroSys: Algorithm-Hardware Co-Design (Project C) - B (BMBF-03ZU1106CB)
|0 G:(DE-Juel1)BMBF-03ZU1106CB
|c BMBF-03ZU1106CB
|x 4
588 _ _ |a Dataset connected to DataCite
700 1 _ |a Manea, Paul
|0 P:(DE-Juel1)192242
|b 1
|u fzj
700 1 _ |a Sudarshan, Chirag
|0 P:(DE-Juel1)198888
|b 2
|u fzj
700 1 _ |a Finkbeiner, Jan Robert
|0 P:(DE-Juel1)190112
|b 3
|u fzj
700 1 _ |a Siegel, Sebastian
|0 P:(DE-Juel1)174486
|b 4
|u fzj
700 1 _ |a Strachan, John Paul
|0 P:(DE-Juel1)188145
|b 5
|u fzj
700 1 _ |a Neftci, Emre
|0 P:(DE-Juel1)188273
|b 6
|u fzj
856 4 _ |u https://arxiv.org/pdf/2409.19315
856 4 _ |u https://juser.fz-juelich.de/record/1038046/files/2409.19315v2.pdf
|y OpenAccess
909 C O |o oai:juser.fz-juelich.de:1038046
|p openaire
|p open_access
|p VDB
|p driver
|p dnbdelivery
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 0
|6 P:(DE-Juel1)194421
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 1
|6 P:(DE-Juel1)192242
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 2
|6 P:(DE-Juel1)198888
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 3
|6 P:(DE-Juel1)190112
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 4
|6 P:(DE-Juel1)174486
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 5
|6 P:(DE-Juel1)188145
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 6
|6 P:(DE-Juel1)188273
913 1 _ |a DE-HGF
|b Key Technologies
|l Natural, Artificial and Cognitive Information Processing
|1 G:(DE-HGF)POF4-520
|0 G:(DE-HGF)POF4-523
|3 G:(DE-HGF)POF4
|2 G:(DE-HGF)POF4-500
|4 G:(DE-HGF)POF
|v Neuromorphic Computing and Network Dynamics
|9 G:(DE-HGF)POF4-5234
|x 0
914 1 _ |y 2024
915 _ _ |a OpenAccess
|0 StatID:(DE-HGF)0510
|2 StatID
920 1 _ |0 I:(DE-Juel1)PGI-15-20210701
|k PGI-15
|l Neuromorphic Software Ecosystems
|x 0
920 1 _ |0 I:(DE-Juel1)PGI-14-20210412
|k PGI-14
|l Neuromorphic Compute Nodes
|x 1
980 1 _ |a FullTexts
980 _ _ |a preprint
980 _ _ |a VDB
980 _ _ |a UNRESTRICTED
980 _ _ |a I:(DE-Juel1)PGI-15-20210701
980 _ _ |a I:(DE-Juel1)PGI-14-20210412

