Poster (After Call) FZJ-2023-02506

Dendritic modulation for multitask representation learning in deep feedforward networks


2023

Cosyne 2023, Montreal, Canada, 8 Mar 2023 - 16 Mar 2023 [10.34734/FZJ-2023-02506]


Please use a persistent id in citations: doi:10.34734/FZJ-2023-02506

Abstract: Feedforward sensory processing in the brain is generally construed as proceeding through a hierarchy of layers, each constructing increasingly abstract and invariant representations of sensory inputs. This interpretation is at odds with the observation that activity in sensory processing layers is heavily modulated by contextual signals, such as cross-modal information or internal mental states [1]. While it is tempting to assume that such modulations bias the feedforward processing pathway towards detection of relevant input features given a context, this induces a dependence on the contextual state in hidden representations at any given layer. The next processing layer in the hierarchy thus has to be able to extract relevant information for each possible context. For this reason, most machine learning approaches to multitask learning apply task-specific output networks to context-independent representations of the inputs, generated by a shared trunk network. Here, we show that a network motif, where a layer of modulated hidden neurons targets an output neuron through task-independent feedforward weights, solves multitask learning problems, and that this network motif can be implemented with biophysically realistic neurons that receive context-modulating synaptic inputs on dendritic branches. The dendritic synapses in this motif evolve according to a Hebbian plasticity rule modulated by a global error signal. We then embed such a motif in each layer of a deep feedforward network, where it generates task-modulated representations of sensory inputs. To learn feedforward weights to the next layer in the network, we apply a contrastive learning objective that predicts whether representations originate either from different inputs or from different task-modulations of the same input. This self-supervised approach results in deep representation learning of feedforward weights that accommodate a multitude of contexts, without relying on error backpropagation between layers.
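For illustration only, the following is a minimal NumPy sketch of the single-layer motif described in the abstract: hidden units whose feedforward drive is multiplicatively gated by context-dependent ("dendritic") weights, feeding one output neuron through task-independent readout weights, with the dendritic weights updated by a Hebbian (pre x post) rule scaled by a global error signal. All shapes, the sigmoidal gain, the toy task, and the learning rate are assumptions made for this sketch; it is not the authors' implementation and omits the deep, contrastively trained stack also described above.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_ctx = 10, 20, 3          # assumed input, hidden, and context sizes

    W = rng.normal(0.0, 0.1, (n_hidden, n_in))     # feedforward weights (shared across tasks)
    D = rng.normal(0.0, 0.1, (n_hidden, n_ctx))    # dendritic/context weights (plastic)
    w_out = rng.normal(0.0, 0.1, n_hidden)         # task-independent readout weights

    def forward(x, ctx):
        """Hidden activity gated by a context-dependent dendritic gain."""
        drive = W @ x                              # somatic feedforward drive
        gain = 1.0 / (1.0 + np.exp(-(D @ ctx)))    # dendritic modulation in (0, 1)
        h = np.maximum(drive, 0.0) * gain          # task-modulated hidden representation
        return h, w_out @ h                        # single output neuron

    eta = 0.05
    for step in range(5000):
        task = rng.integers(n_ctx)
        ctx = np.eye(n_ctx)[task]                  # one-hot context/task cue
        x = rng.normal(size=n_in)
        target = np.sign(x[task])                  # toy multitask problem: each context
                                                   # asks for the sign of a different input
        h, y = forward(x, ctx)
        err = target - y                           # global (scalar) error signal
        # Error-modulated Hebbian update of the dendritic synapses:
        # presynaptic context activity x postsynaptic hidden activity, scaled by the error.
        D += eta * err * np.outer(h, ctx)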


Contributing Institute(s):
  1. Computational and Systems Neuroscience (INM-6)
  2. Theoretical Neuroscience (IAS-6)
  3. JARA-Institut Brain structure-function relationships (INM-10)
Research Program(s):
  1. 5232 - Computational Principles (POF4-523)
  2. HBP SGA1 - Human Brain Project Specific Grant Agreement 1 (720270)
  3. HBP SGA2 - Human Brain Project Specific Grant Agreement 2 (785907)
  4. HBP SGA3 - Human Brain Project Specific Grant Agreement 3 (945539)
  5. SDS005 - Towards an integrated data science of complex natural systems (PF-JARA-SDS005)
  6. neuroIC002 - Recurrence and stochasticity for neuro-inspired computation (EXS-SF-neuroIC002)

Appears in the scientific report 2023
Database coverage:
OpenAccess

The record appears in these collections:
Institute Collections > INM > INM-10
Document types > Presentations > Poster
Institute Collections > IAS > IAS-6
Institute Collections > INM > INM-6
Workflow collections > Public records
Publications database
Open Access

 Record created 2023-06-29, last modified 2024-03-13


OpenAccess:
Download fulltext PDF