001025142 001__ 1025142
001025142 005__ 20241128210253.0
001025142 0247_ $$2datacite_doi$$a10.34734/FZJ-2024-02719
001025142 037__ $$aFZJ-2024-02719
001025142 041__ $$aEnglish
001025142 1001_ $$0P:(DE-Juel1)192408$$aTran, Viet Anh Khoa$$b0$$eCorresponding author
001025142 1112_ $$aComputational and Systems Neuroscience 2024$$cLisbon$$d2024-02-29 - 2024-03-03$$gCOSYNE 2024$$wPortugal
001025142 245__ $$aContinual learning using dendritic modulations on view-invariant feedforward weights
001025142 260__ $$c2024
001025142 3367_ $$033$$2EndNote$$aConference Paper
001025142 3367_ $$2BibTeX$$aINPROCEEDINGS
001025142 3367_ $$2DRIVER$$aconferenceObject
001025142 3367_ $$2ORCID$$aCONFERENCE_POSTER
001025142 3367_ $$2DataCite$$aOutput Types/Conference Poster
001025142 3367_ $$0PUB:(DE-HGF)24$$2PUB:(DE-HGF)$$aPoster$$bposter$$mposter$$s1732775529_9614$$xAfter Call
001025142 520__ $$aThe brain is remarkably adept at learning from a continuous stream of data without significantly forgetting previously learnt skills. Conventional machine learning models struggle at continual learning, as weight updates that optimize the current task interfere with previously learnt tasks. A simple remedy to catastrophic forgetting is freezing a network pretrained on a set of base tasks, and training task-specific readouts on this shared trunk. However, this assumes that representations in the frozen network are separable under new tasks, therefore leading to sub-par performance. To continually learn on novel task data, previous methods suggest weight consolidation – preserving weights that are most impactful for the performance of previous tasks – and memory-based approaches – where the network is allowed to see a subset of images from previous tasks. For biological networks, prior work showed that dendritic top-down modulations provide a powerful mechanism to learn novel tasks while initial feedforward weights solely extract generic view-invariant features. Therefore, we propose a continual learner that optimizes the feedforward weights towards view-invariant representations while training task-specific modulations towards separable class clusters. In a task-incremental setting, we train feedforward weights using a self-supervised algorithm, while training the task-specific modulations and readouts in a supervised fashion, both exclusively through current-task data. We show that this simple approach avoids catastrophic forgetting of class clusters, as opposed to training the whole network in a supervised manner, while also outperforming (a) task-specific readout without modulations and (b) frozen feedforward weights. This suggests that (a) top-down modulations are necessary and sufficient to shift the representations towards separable clusters and that (b) the SSL objective learns novel features based on the newly presented objects while maintaining features relevant to previous tasks, without requiring specific synaptic consolidation mechanisms.
001025142 536__ $$0G:(DE-HGF)POF4-5234$$a5234 - Emerging NC Architectures (POF4-523)$$cPOF4-523$$fPOF IV$$x0
001025142 536__ $$0G:(DE-Juel1)jinm60_20190501$$aFunctional Neural Architectures (jinm60_20190501)$$cjinm60_20190501$$fFunctional Neural Architectures$$x1
001025142 536__ $$0G:(BMBF)01IS22094B$$aWestAI - AI Service Center West (01IS22094B)$$c01IS22094B$$x2
001025142 7001_ $$0P:(DE-Juel1)188273$$aNeftci, Emre$$b1
001025142 7001_ $$0P:(DE-Juel1)186881$$aWybo, Willem$$b2
001025142 8564_ $$uhttps://juser.fz-juelich.de/record/1025142/files/Cosyne2024Abstract.pdf$$yOpenAccess
001025142 8564_ $$uhttps://juser.fz-juelich.de/record/1025142/files/Cosyne2024Abstract.gif?subformat=icon$$xicon$$yOpenAccess
001025142 8564_ $$uhttps://juser.fz-juelich.de/record/1025142/files/Cosyne2024Abstract.jpg?subformat=icon-1440$$xicon-1440$$yOpenAccess
001025142 8564_ $$uhttps://juser.fz-juelich.de/record/1025142/files/Cosyne2024Abstract.jpg?subformat=icon-180$$xicon-180$$yOpenAccess
001025142 8564_ $$uhttps://juser.fz-juelich.de/record/1025142/files/Cosyne2024Abstract.jpg?subformat=icon-640$$xicon-640$$yOpenAccess
001025142 909CO $$ooai:juser.fz-juelich.de:1025142$$popenaire$$popen_access$$pVDB$$pdriver
001025142 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)192408$$aForschungszentrum Jülich$$b0$$kFZJ
001025142 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)188273$$aForschungszentrum Jülich$$b1$$kFZJ
001025142 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)186881$$aForschungszentrum Jülich$$b2$$kFZJ
001025142 9131_ $$0G:(DE-HGF)POF4-523$$1G:(DE-HGF)POF4-520$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5234$$aDE-HGF$$bKey Technologies$$lNatural, Artificial and Cognitive Information Processing$$vNeuromorphic Computing and Network Dynamics$$x0
001025142 9141_ $$y2024
001025142 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
001025142 920__ $$lyes
001025142 9201_ $$0I:(DE-Juel1)PGI-15-20210701$$kPGI-15$$lNeuromorphic Software Eco System$$x0
001025142 980__ $$aposter
001025142 980__ $$aVDB
001025142 980__ $$aUNRESTRICTED
001025142 980__ $$aI:(DE-Juel1)PGI-15-20210701
001025142 9801_ $$aFullTexts