Finding new bio-plausible Learning Rules using Deep Reinforcement Learning
Poster (After Call) | FZJ-2024-01020 | 2023
Please use a persistent id in citations: doi:10.34734/FZJ-2024-01020
Abstract: Gradient-based learning remains the best-performing approach for training spiking neural networks on supervised tasks. Although backpropagation, the state of the art in modern AI, is not biologically plausible, a wide range of bio-plausible approximations achieves competitive performance, e.g. e-prop [1,2]. We propose a new framework, AlphaGrad, that could discover more such learning rules by systematically exploring the search space using Deep Reinforcement Learning and methods from Automatic Differentiation (AD).
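The abstract cites e-prop as an example of a bio-plausible approximation to gradient learning. E-prop replaces backpropagation through time with per-synapse eligibility traces combined with a top-down learning signal. The following is a minimal NumPy sketch of such a three-factor update, not code from the poster; all sizes, constants, and the random learning signal are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_rec = 4, 3   # illustrative layer sizes (assumed)
alpha = 0.9          # trace decay factor (assumed value)
lr = 1e-2            # learning rate (assumed value)

W = rng.normal(scale=0.1, size=(n_rec, n_in))
trace = np.zeros_like(W)  # one eligibility trace per synapse

for t in range(10):
    x = rng.random(n_in)  # presynaptic activity at step t (stand-in data)
    # Local eligibility trace: a low-pass filter of presynaptic input,
    # broadcast to every postsynaptic neuron.
    trace = alpha * trace + x[None, :]
    # Top-down learning signal (e.g. a broadcast error); random here.
    L = rng.normal(size=n_rec)
    # Three-factor update: learning signal times eligibility trace.
    W -= lr * L[:, None] * trace
```

The key property making such rules bio-plausible is locality: each weight update uses only a quantity stored at the synapse (the trace) and a single broadcast signal, rather than an exact backpropagated gradient.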