TY  - CONF
AU  - Lohoff, Jamie
AU  - Kaya, Anil
AU  - Assmuth, Florian
AU  - Neftci, Emre
TI  - A Truly Sparse and General Implementation of Gradient-Based Synaptic Plasticity
M1  - FZJ-2025-01175
SP  - n/a
PY  - 2025
N1  - Accepted as an oral presentation.
AB  - Online synaptic plasticity rules derived from gradient descent achieve high accuracy on a wide range of practical tasks. However, their software implementation often requires tediously hand-derived gradients or using gradient backpropagation, which sacrifices the online capability of the rules. In this work, we present a custom automatic differentiation (AD) pipeline for sparse and online implementation of gradient-based synaptic plasticity rules that generalizes to arbitrary neuron models. Our work combines the programming ease of backpropagation-type methods for forward AD while being memory-efficient. To achieve this, we exploit the advantageous compute and memory scaling of online synaptic plasticity by providing an inherently sparse implementation of AD where expensive tensor contractions are replaced with simple element-wise multiplications if the tensors are diagonal. Gradient-based synaptic plasticity rules such as eligibility propagation (e-prop) have exactly this property and thus profit immensely from this feature. We demonstrate the alignment of our gradients with respect to gradient backpropagation on a synthetic task where e-prop gradients are exact, as well as on audio speech classification benchmarks. We demonstrate how memory utilization scales with network size without dependence on the sequence length, as expected from forward AD methods.
T2  - Neuro-Inspired Computational Elements
CY  - 25 Mar 2025 - 28 Mar 2025, Heidelberg (Germany)
Y2  - 25 Mar 2025 - 28 Mar 2025
M2  - Heidelberg, Germany
LB  - PUB:(DE-HGF)8
DO  - 10.34734/FZJ-2025-01175
UR  - https://juser.fz-juelich.de/record/1038128
ER  -