A Truly Sparse and General Implementation of Gradient-Based Synaptic Plasticity
Contribution to a conference proceedings | FZJ-2025-01175
2025
Please use a persistent id in citations: doi:10.34734/FZJ-2025-01175
Abstract: Online synaptic plasticity rules derived from gradient descent achieve high accuracy on a wide range of practical tasks. However, their software implementation often requires tediously hand-derived gradients or the use of gradient backpropagation, which sacrifices the online capability of the rules. In this work, we present a custom automatic differentiation (AD) pipeline for a sparse and online implementation of gradient-based synaptic plasticity rules that generalizes to arbitrary neuron models. Our work provides the programming ease of backpropagation-type methods for forward AD while being memory-efficient. To achieve this, we exploit the advantageous compute and memory scaling of online synaptic plasticity by providing an inherently sparse implementation of AD in which expensive tensor contractions are replaced with simple element-wise multiplications when the tensors are diagonal. Gradient-based synaptic plasticity rules such as eligibility propagation (e-prop) have exactly this property and thus profit immensely from this feature. We demonstrate the alignment of our gradients with respect to gradient backpropagation on a synthetic task where e-prop gradients are exact, as well as on audio speech classification benchmarks. We demonstrate how memory utilization scales with network size without dependence on the sequence length, as expected from forward AD methods.
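The following is a minimal NumPy sketch, not the authors' pipeline, illustrating the idea in the abstract: when the state Jacobian of the neuron model is diagonal, the forward-AD recursion for the eligibility trace collapses from a dense tensor contraction to an element-wise update. The leaky-integrator model, the leak factor `alpha`, and the squared-error readout below are illustrative assumptions.

```python
import numpy as np

# Hypothetical example (assumed neuron model, not from the paper):
#   v_t = alpha * v_{t-1} + W @ x_t
# The state Jacobian dv_t/dv_{t-1} = diag(alpha) is diagonal, so the
# forward-AD eligibility trace e_t = dv_t/dW can be kept in a tensor of
# the same shape as W and updated element-wise, independent of the
# sequence length.

rng = np.random.default_rng(0)
n_in, n_out, T = 3, 4, 10
alpha = 0.9                       # leak factor (assumed scalar)
W = rng.normal(size=(n_out, n_in))

v = np.zeros(n_out)               # neuron state
e = np.zeros((n_out, n_in))       # eligibility trace dv_t/dW, same shape as W
grad = np.zeros_like(W)           # online-accumulated loss gradient

for t in range(T):
    x = rng.normal(size=n_in)
    target = rng.normal(size=n_out)

    # Forward step of the neuron model.
    v = alpha * v + W @ x

    # Sparse forward-AD update: because dv_t/dv_{t-1} is diagonal,
    # e_t[i, j] = alpha * e_{t-1}[i, j] + x[j]
    # replaces a full Jacobian contraction over the flattened state.
    e = alpha * e + x[None, :]

    # Online gradient of a squared-error readout, dL/dv_t = (v - target).
    grad += (v - target)[:, None] * e

print("accumulated gradient shape:", grad.shape)
```

Note that the trace `e` occupies only as much memory as `W` itself and is updated online at every time step, which is the scaling behaviour with network size (and independence from sequence length) that the abstract attributes to forward AD.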