Dendritic modulation for multitask representation learning in deep feedforward networks
Poster (After Call) | FZJ-2023-02506
2023
Please use a persistent id in citations: doi:10.34734/FZJ-2023-02506
Abstract: Feedforward sensory processing in the brain is generally construed as proceeding through a hierarchy of layers, each constructing increasingly abstract and invariant representations of sensory inputs. This interpretation is at odds with the observation that activity in sensory processing layers is heavily modulated by contextual signals, such as cross-modal information or internal mental states [1]. While it is tempting to assume that such modulations bias the feedforward processing pathway towards detection of relevant input features given a context, this induces a dependence on the contextual state in hidden representations at any given layer. The next processing layer in the hierarchy thus has to be able to extract relevant information for each possible context. For this reason, most machine learning approaches to multitask learning apply task-specific output networks to context-independent representations of the inputs, generated by a shared trunk network.

Here, we show that a network motif, in which a layer of modulated hidden neurons targets an output neuron through task-independent feedforward weights, solves multitask learning problems, and that this motif can be implemented with biophysically realistic neurons that receive context-modulating synaptic inputs on dendritic branches. The dendritic synapses in this motif evolve according to a Hebbian plasticity rule modulated by a global error signal. We then embed such a motif in each layer of a deep feedforward network, where it generates task-modulated representations of sensory inputs. To learn the feedforward weights to the next layer in the network, we apply a contrastive learning objective that predicts whether two representations originate from different inputs or from different task-modulations of the same input. This self-supervised approach results in deep representation learning of feedforward weights that accommodate a multitude of contexts, without relying on error backpropagation between layers.
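The abstract does not specify the exact form of the dendritic modulation or plasticity rule, so the following is only a minimal sketch of the described motif under stated assumptions: a multiplicative dendritic gating of rectified feedforward drive, hypothetical layer sizes, and a three-factor update (global error × postsynaptic activity × context input) for the dendritic synapses. All names and dimensions are illustrative, not taken from the poster.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration
n_in, n_hidden, n_ctx = 20, 50, 4

# Task-independent feedforward and readout weights (kept fixed in this sketch)
W = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_hidden, n_in))
w_out = rng.normal(scale=1.0 / np.sqrt(n_hidden), size=n_hidden)

# Dendritic context synapses, learned with an error-modulated Hebbian rule
D = np.zeros((n_hidden, n_ctx))

def forward(x, c):
    """Hidden activity = feedforward drive gated by dendritic context input."""
    somatic = W @ x                                   # feedforward (somatic) drive
    dendritic = D @ c                                 # context signal on dendritic branches
    h = np.maximum(0.0, somatic) * (1.0 + np.tanh(dendritic))  # assumed multiplicative gating
    y = w_out @ h                                     # task-independent readout
    return h, y

def dendritic_update(x, c, target, lr=1e-2):
    """Hebbian plasticity on dendritic synapses, modulated by a global error signal."""
    h, y = forward(x, c)
    error = target - y                                # scalar error, broadcast globally
    D += lr * error * np.outer(h, c)                  # three-factor rule: error x post x pre
    return error

# Example: two contexts requiring different input-output mappings
contexts = np.eye(n_ctx)[:2]
targets = [lambda x: x[:5].sum(), lambda x: x[5:10].sum()]

for step in range(5000):
    k = rng.integers(2)
    x = rng.normal(size=n_in)
    dendritic_update(x, contexts[k], targets[k](x))
```

With only the context-dependent dendritic weights plastic, the same hidden layer and the same readout can serve both mappings, which is the point of the motif: context is absorbed on the dendrites rather than in separate task-specific output networks.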
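For the layer-local learning of feedforward weights, the abstract specifies only the pairing scheme: positives are two task-modulations of the same input, negatives are representations of different inputs. The sketch below assumes an InfoNCE-style loss as one possible instantiation of that contrastive objective; the loss form and temperature are assumptions, and in the described setup its gradient would be applied per layer, without backpropagation between layers.

```python
import numpy as np

def contrastive_pairs_loss(h_a, h_b, temperature=0.1):
    """
    InfoNCE-style loss (assumed form). h_a[i] and h_b[i] are representations of
    the SAME input under two different task modulations (positive pair);
    representations of other inputs in the batch act as negatives.
    """
    a = h_a / np.linalg.norm(h_a, axis=1, keepdims=True)   # L2-normalise
    b = h_b / np.linalg.norm(h_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                          # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)             # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()                       # positives on the diagonal
```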