%0 Conference Paper
%A Wybo, Willem
%A Tran, Viet Anh Khoa
%A Tsai, Matthias
%A Illing, Bernd
%A Jordan, Jakob
%A Senn, Walter
%A Morrison, Abigail
%T Dendritic modulation for multitask representation learning in deep feedforward networks
%M FZJ-2023-02506
%D 2023
%X Feedforward sensory processing in the brain is generally construed as proceeding through a hierarchy of layers, each constructing increasingly abstract and invariant representations of sensory inputs. This interpretation is at odds with the observation that activity in sensory processing layers is heavily modulated by contextual signals, such as cross-modal information or internal mental states [1]. While it is tempting to assume that such modulations bias the feedforward processing pathway towards detection of relevant input features given a context, this induces a dependence on the contextual state in hidden representations at any given layer. The next processing layer in the hierarchy thus has to be able to extract the relevant information for each possible context. For this reason, most machine learning approaches to multitask learning apply task-specific output networks to context-independent representations of the inputs, generated by a shared trunk network. Here, we show that a network motif, in which a layer of modulated hidden neurons targets an output neuron through task-independent feedforward weights, solves multitask learning problems, and that this motif can be implemented with biophysically realistic neurons that receive context-modulating synaptic inputs on dendritic branches. The dendritic synapses in this motif evolve according to a Hebbian plasticity rule modulated by a global error signal. We then embed such a motif in each layer of a deep feedforward network, where it generates task-modulated representations of sensory inputs. To learn the feedforward weights to the next layer in the network, we apply a contrastive learning objective that predicts whether representations originate from different inputs or from different task-modulations of the same input. This self-supervised approach results in deep representation learning of feedforward weights that accommodate a multitude of contexts, without relying on error backpropagation between layers.
%B Cosyne 2023
%C 8 Mar 2023 - 16 Mar 2023, Montreal (Canada)
%F PUB:(DE-HGF)24
%9 Poster
%R 10.34734/FZJ-2023-02506
%U https://juser.fz-juelich.de/record/1008841