001038046 001__ 1038046
001038046 005__ 20250203103306.0
001038046 0247_ $$2datacite_doi$$a10.34734/FZJ-2025-01095
001038046 037__ $$aFZJ-2025-01095
001038046 1001_ $$0P:(DE-Juel1)194421$$aLeroux, Nathan$$b0$$ufzj
001038046 245__ $$aAnalog In-Memory Computing Attention Mechanism for Fast and Energy-Efficient Large Language Models
001038046 260__ $$c2024
001038046 3367_ $$0PUB:(DE-HGF)25$$2PUB:(DE-HGF)$$aPreprint$$bpreprint$$mpreprint$$s1738246081_31383
001038046 3367_ $$2ORCID$$aWORKING_PAPER
001038046 3367_ $$028$$2EndNote$$aElectronic Article
001038046 3367_ $$2DRIVER$$apreprint
001038046 3367_ $$2BibTeX$$aARTICLE
001038046 3367_ $$2DataCite$$aOutput Types/Working Paper
001038046 520__ $$aTransformer neural networks, driven by self-attention mechanisms, are core components of foundational and Large Language Models. In generative transformers, self-attention uses cache memory to store token projections, avoiding recomputation at each time step. However, GPU-stored projections must be loaded into SRAM for each new generation step, causing latency and energy bottlenecks for long sequences. In this work, we propose a fast and energy-efficient hardware implementation of self-attention using analog in-memory computing based on gain cell memories. Volatile gain cell memories can be efficiently written to store new tokens during sequence generation, while performing analog signed weight multiplications to compute the dot-products required for self-attention. We implement Sliding Window Attention, which keeps memory of a finite set of past steps. A charge-to-pulse converter for array readout eliminates the need for analog-to-digital conversion between self-attention stages. Using a co-designed initialization algorithm to adapt pre-trained weights to gain cell non-idealities, we achieve NLP performance comparable to GPT-2 with minimal training iterations, despite hardware constraints. Our end-to-end hardware design, including all digital controls, allows us to estimate area, latency, and energy. The system reduces attention latency by up to two orders of magnitude and energy consumption by up to five orders of magnitude compared to GPUs, marking a significant step toward ultra-fast, low-power sequence generation in Large Language Models.
001038046 536__ $$0G:(DE-HGF)POF4-5234$$a5234 - Emerging NC Architectures (POF4-523)$$cPOF4-523$$fPOF IV$$x0
001038046 536__ $$0G:(DE-82)BMBF-16ME0404$$aBMBF 16ME0404 - Verbundprojekt: Neuro-inspirierte Technologien der künstlichen Intelligenz für die Elektronik der Zukunft - NEUROTEC II - (BMBF-16ME0404)$$cBMBF-16ME0404$$x1
001038046 536__ $$0G:(BMBF)16ME0400$$aBMBF 16ME0400 - Verbundprojekt: Neuro-inspirierte Technologien der künstlichen Intelligenz für die Elektronik der Zukunft - NEUROTEC II - (16ME0400)$$c16ME0400$$x2
001038046 536__ $$0G:(BMBF)03ZU1106CA$$aBMBF 03ZU1106CA - NeuroSys: Algorithm-Hardware Co-Design (Projekt C) - A (03ZU1106CA)$$c03ZU1106CA$$x3
001038046 536__ $$0G:(DE-Juel1)BMBF-03ZU1106CB$$aBMBF 03ZU1106CB - NeuroSys: Algorithm-Hardware Co-Design (Projekt C) - B (BMBF-03ZU1106CB)$$cBMBF-03ZU1106CB$$x4
001038046 588__ $$aDataset connected to DataCite
001038046 7001_ $$0P:(DE-Juel1)192242$$aManea, Paul$$b1$$ufzj
001038046 7001_ $$0P:(DE-Juel1)198888$$aSudarshan, Chirag$$b2$$ufzj
001038046 7001_ $$0P:(DE-Juel1)190112$$aFinkbeiner, Jan Robert$$b3$$ufzj
001038046 7001_ $$0P:(DE-Juel1)174486$$aSiegel, Sebastian$$b4$$ufzj
001038046 7001_ $$0P:(DE-Juel1)188145$$aStrachan, John Paul$$b5$$ufzj
001038046 7001_ $$0P:(DE-Juel1)188273$$aNeftci, Emre$$b6$$ufzj
001038046 8564_ $$uhttps://arxiv.org/pdf/2409.19315
001038046 8564_ $$uhttps://juser.fz-juelich.de/record/1038046/files/2409.19315v2.pdf$$yOpenAccess
001038046 909CO $$ooai:juser.fz-juelich.de:1038046$$pdnbdelivery$$pdriver$$pVDB$$popen_access$$popenaire
001038046 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)194421$$aForschungszentrum Jülich$$b0$$kFZJ
001038046 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)192242$$aForschungszentrum Jülich$$b1$$kFZJ
001038046 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)198888$$aForschungszentrum Jülich$$b2$$kFZJ
001038046 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)190112$$aForschungszentrum Jülich$$b3$$kFZJ
001038046 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)174486$$aForschungszentrum Jülich$$b4$$kFZJ
001038046 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)188145$$aForschungszentrum Jülich$$b5$$kFZJ
001038046 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)188273$$aForschungszentrum Jülich$$b6$$kFZJ
001038046 9131_ $$0G:(DE-HGF)POF4-523$$1G:(DE-HGF)POF4-520$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5234$$aDE-HGF$$bKey Technologies$$lNatural, Artificial and Cognitive Information Processing$$vNeuromorphic Computing and Network Dynamics$$x0
001038046 9141_ $$y2024
001038046 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
001038046 9201_ $$0I:(DE-Juel1)PGI-15-20210701$$kPGI-15$$lNeuromorphic Software Eco System$$x0
001038046 9201_ $$0I:(DE-Juel1)PGI-14-20210412$$kPGI-14$$lNeuromorphic Compute Nodes$$x1
001038046 9801_ $$aFullTexts
001038046 980__ $$apreprint
001038046 980__ $$aVDB
001038046 980__ $$aUNRESTRICTED
001038046 980__ $$aI:(DE-Juel1)PGI-15-20210701
001038046 980__ $$aI:(DE-Juel1)PGI-14-20210412