%0 Conference Paper
%A Lohoff, Jamie
%A Kaya, Anil
%A Assmuth, Florian
%A Neftci, Emre
%T A Truly Sparse and General Implementation of Gradient-Based Synaptic Plasticity
%M FZJ-2025-01175
%P n/a
%D 2025
%Z Accepted as an oral presentation.
%X Online synaptic plasticity rules derived from gradient descent achieve high accuracy on a wide range of practical tasks. However, their software implementation often requires tediously hand-derived gradients or the use of gradient backpropagation, which sacrifices the online capability of the rules. In this work, we present a custom automatic differentiation (AD) pipeline for sparse and online implementation of gradient-based synaptic plasticity rules that generalizes to arbitrary neuron models. Our work brings the programming ease of backpropagation-type methods to forward AD while being memory-efficient. To achieve this, we exploit the advantageous compute and memory scaling of online synaptic plasticity by providing an inherently sparse implementation of AD, where expensive tensor contractions are replaced with simple element-wise multiplications if the tensors are diagonal. Gradient-based synaptic plasticity rules such as eligibility propagation (e-prop) have exactly this property and thus profit immensely from this feature. We demonstrate the alignment of our gradients with those of gradient backpropagation on a synthetic task where e-prop gradients are exact, as well as on audio speech classification benchmarks. We demonstrate that memory utilization scales with network size without dependence on the sequence length, as expected from forward AD methods.
%B Neuro-Inspired Computational Elements
%C 25 Mar 2025 - 28 Mar 2025, Heidelberg (Germany)
%F PUB:(DE-HGF)8
%9 Contribution to a conference proceedings
%R 10.34734/FZJ-2025-01175
%U https://juser.fz-juelich.de/record/1038128
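
The sparsity trick described in the abstract, replacing an expensive tensor contraction with an element-wise multiplication when the state Jacobian is diagonal, can be illustrated with a minimal sketch. The following is not the authors' pipeline; it is a hypothetical JAX example in which lif_step, the leak factor alpha, and all dimensions are illustrative assumptions. For a leaky integrator whose state Jacobian is alpha * I, the online eligibility-trace update reduces to an element-wise decay, and the resulting trace matches exact forward-mode AD through the unrolled computation.

# Hypothetical sketch (not the authors' code): for a neuron model whose
# state Jacobian dv_{t+1}/dv_t = alpha * I is diagonal, the forward-mode
# trace update J @ e collapses to an element-wise product, as in e-prop.
import jax
import jax.numpy as jnp

def lif_step(v, w, x, alpha=0.9):
    """Leaky-integrator step: v_{t+1} = alpha * v_t + w @ x_t."""
    return alpha * v + w @ x

n_in, n_hid, T, alpha = 4, 8, 20, 0.9
kw, kx = jax.random.split(jax.random.PRNGKey(0))
w = 0.1 * jax.random.normal(kw, (n_hid, n_in))
xs = jax.random.normal(kx, (T, n_in))

# Online eligibility trace e[i, j] = dv_i/dw_{ij}, kept across time steps.
# Because the Jacobian is diagonal, the update is a scalar decay plus the
# input contribution -- no (n_hid x n_hid) contraction, no stored history.
v = jnp.zeros(n_hid)
e = jnp.zeros((n_hid, n_in))
for x in xs:
    v = lif_step(v, w, x, alpha)
    e = alpha * e + x[None, :]   # element-wise decay replaces J @ e

# Cross-check one entry against exact forward-mode AD of the unrolled loop.
def unroll(w):
    v = jnp.zeros(n_hid)
    for x in xs:
        v = lif_step(v, w, x, alpha)
    return v

tangent = jnp.zeros_like(w).at[0, 0].set(1.0)
_, jvp_out = jax.jvp(unroll, (w,), (tangent,))
print(jnp.allclose(e[0, 0], jvp_out[0], atol=1e-5))  # True: trace matches exact AD

Note that the trace e has shape (n_hid, n_in) and is updated in place at every step, so memory grows with network size but is independent of the sequence length T, consistent with the scaling behavior the abstract attributes to forward AD methods.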