001037903 001__ 1037903
001037903 005__ 20250203103256.0
001037903 0247_ $$2datacite_doi$$a10.34734/FZJ-2025-01041
001037903 037__ $$aFZJ-2025-01041
001037903 1001_ $$0P:(DE-Juel1)203192$$aBencheikh, Wadjih$$b0
001037903 245__ $$aOptimal Gradient Checkpointing for Sparse and Recurrent Architectures using Off-Chip Memory
001037903 260__ $$c2024
001037903 3367_ $$0PUB:(DE-HGF)25$$2PUB:(DE-HGF)$$aPreprint$$bpreprint$$mpreprint$$s1738234604_29870
001037903 3367_ $$2ORCID$$aWORKING_PAPER
001037903 3367_ $$028$$2EndNote$$aElectronic Article
001037903 3367_ $$2DRIVER$$apreprint
001037903 3367_ $$2BibTeX$$aARTICLE
001037903 3367_ $$2DataCite$$aOutput Types/Working Paper
001037903 520__ $$aRecurrent neural networks (RNNs) are valued for their computational efficiency and reduced memory requirements on tasks involving long sequence lengths, but they require high memory-processor bandwidth to train. Checkpointing techniques can reduce the memory requirements by storing only a subset of intermediate states, the checkpoints, but are still rarely used due to the computational overhead of the additional recomputation phase. This work addresses these challenges by introducing memory-efficient gradient checkpointing strategies tailored for the general class of sparse RNNs and Spiking Neural Networks (SNNs). SNNs are energy-efficient alternatives to RNNs thanks to their local, event-driven operation and potential neuromorphic implementation. We use the Intelligence Processing Unit (IPU) as an exemplary platform for architectures with distributed local memory. We exploit its suitability for sparse and irregular workloads to scale SNN training on long sequence lengths. We find that Double Checkpointing emerges as the most effective method, optimizing the use of local memory resources while minimizing recomputation overhead. This approach reduces dependency on slower large-scale memory access, enabling training on sequences over 10 times longer, or networks 4 times larger, than previously feasible, with only marginal time overhead. The presented techniques demonstrate significant potential to enhance scalability and efficiency in training sparse and recurrent networks across diverse hardware platforms, and highlight the benefits of sparse activations for scalable recurrent neural network training.
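The abstract describes gradient checkpointing only at a high level. Below is a minimal, illustrative JAX sketch of the general idea (store a reduced set of states in the forward pass and recompute per-step activations during backpropagation). It is not the authors' implementation and does not reproduce the paper's Double Checkpointing scheme or the IPU-specific off-chip memory handling; the function names and sizes (rnn_step, run_rnn, T, d_in, d_h) are hypothetical.

import jax
import jax.numpy as jnp

def rnn_step(h, x, W_h, W_x):
    # One recurrent update: new hidden state from previous state and input.
    return jnp.tanh(h @ W_h + x @ W_x)

def run_rnn(h0, xs, W_h, W_x):
    # jax.checkpoint (a.k.a. remat) drops the step's intermediate activations
    # in the forward pass and recomputes them during backprop, trading extra
    # compute for a smaller activation-memory footprint over long sequences.
    step = jax.checkpoint(lambda h, x: (rnn_step(h, x, W_h, W_x),) * 2)
    h_T, _ = jax.lax.scan(step, h0, xs)
    return h_T

def loss(params, h0, xs):
    W_h, W_x = params
    return jnp.sum(run_rnn(h0, xs, W_h, W_x) ** 2)

key = jax.random.PRNGKey(0)
T, d_in, d_h = 1024, 16, 32  # illustrative sizes only, not from the paper
W_h = 0.1 * jax.random.normal(key, (d_h, d_h))
W_x = 0.1 * jax.random.normal(key, (d_in, d_h))
h0 = jnp.zeros(d_h)
xs = jax.random.normal(key, (T, d_in))
grads = jax.grad(loss)((W_h, W_x), h0, xs)  # gradients w.r.t. (W_h, W_x)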
001037903 536__ $$0G:(DE-HGF)POF4-5234$$a5234 - Emerging NC Architectures (POF4-523)$$cPOF4-523$$fPOF IV$$x0
001037903 7001_ $$0P:(DE-Juel1)190112$$aFinkbeiner, Jan$$b1$$ufzj
001037903 7001_ $$0P:(DE-Juel1)188273$$aNeftci, Emre$$b2$$ufzj
001037903 8564_ $$uhttps://doi.org/10.48550/arXiv.2412.11810
001037903 8564_ $$uhttps://juser.fz-juelich.de/record/1037903/files/arxiv_Optimal%20Gradient%20Checkpointing%20for%20Sparse%20and%20Recurrent%20Architectures%20using%20Off-Chip%20Memory.pdf$$yOpenAccess
001037903 909CO $$ooai:juser.fz-juelich.de:1037903$$pdnbdelivery$$pdriver$$pVDB$$popen_access$$popenaire
001037903 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)190112$$aForschungszentrum Jülich$$b1$$kFZJ
001037903 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)188273$$aForschungszentrum Jülich$$b2$$kFZJ
001037903 9131_ $$0G:(DE-HGF)POF4-523$$1G:(DE-HGF)POF4-520$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5234$$aDE-HGF$$bKey Technologies$$lNatural, Artificial and Cognitive Information Processing$$vNeuromorphic Computing and Network Dynamics$$x0
001037903 9141_ $$y2024
001037903 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
001037903 920__ $$lyes
001037903 9201_ $$0I:(DE-Juel1)PGI-15-20210701$$kPGI-15$$lNeuromorphic Software Eco System$$x0
001037903 9801_ $$aFullTexts
001037903 980__ $$apreprint
001037903 980__ $$aVDB
001037903 980__ $$aUNRESTRICTED
001037903 980__ $$aI:(DE-Juel1)PGI-15-20210701