000891301 001__ 891301
000891301 005__ 20240313095002.0
000891301 037__ $$aFZJ-2021-01408
000891301 041__ $$aEnglish
000891301 1001_ $$0P:(DE-Juel1)176778$$aBouhadjar, Younes$$b0$$eCorresponding author
000891301 1112_ $$aNeuro-inspired Computational Elements Workshop$$cHeidelberg$$d2021-03-16 - 2021-03-19$$gNICE$$wGermany
000891301 245__ $$aSequence learning, prediction, and generation in networks of spiking neurons
000891301 260__ $$c2021
000891301 3367_ $$033$$2EndNote$$aConference Paper
000891301 3367_ $$2BibTeX$$aINPROCEEDINGS
000891301 3367_ $$2DRIVER$$aconferenceObject
000891301 3367_ $$2ORCID$$aCONFERENCE_POSTER
000891301 3367_ $$2DataCite$$aOutput Types/Conference Poster
000891301 3367_ $$0PUB:(DE-HGF)24$$2PUB:(DE-HGF)$$aPoster$$bposter$$mposter$$s1620804267_6937$$xAfter Call
000891301 502__ $$cRWTH Aachen
000891301 520__ $$aSequence learning, prediction, and generation have been proposed to be the universal computation performed by the neocortex. The Hierarchical Temporal Memory (HTM) algorithm realizes this form of computation. It learns sequences in an unsupervised and continuous manner using local learning rules, permits a context-specific prediction of future sequence elements, and generates mismatch signals in case the predictions are not met. While the HTM algorithm accounts for a number of biological features such as topographic receptive fields, nonlinear dendritic processing, and sparse connectivity, it is based on abstract discrete-time neuron and synapse dynamics, as well as on plasticity mechanisms that can only partly be related to known biological mechanisms. Here, we devise a continuous-time implementation of the temporal-memory (TM) component of the HTM algorithm, which is based on a recurrent network of spiking neurons with biophysically interpretable variables and parameters. The model learns non-Markovian sequences by means of a structural Hebbian synaptic plasticity mechanism supplemented with a rate-based homeostatic control. In combination with nonlinear dendritic input integration and local inhibitory feedback, this type of plasticity leads to the dynamic self-organization of narrow sequence-specific feedforward subnetworks. These subnetworks provide the substrate for a faithful propagation of sparse, synchronous activity, and, thereby, for a robust, context-specific prediction of future sequence elements as well as for the autonomous replay of previously learned sequences. By strengthening the link to biology, our implementation facilitates the evaluation of the TM hypothesis based on experimentally accessible quantities. The continuous-time implementation of the TM algorithm permits, in particular, an investigation of the role of sequence timing for sequence learning, prediction, and replay. We demonstrate this aspect by studying the effect of sequence speed on the sequence learning performance and on the speed of autonomous sequence replay.
000891301 536__ $$0G:(DE-HGF)POF4-523$$a523 - Neuromorphic Computing and Network Dynamics (POF4-523)$$cPOF4-523$$fPOF IV$$x0
000891301 536__ $$0G:(DE-Juel1)aca_20190115$$aAdvanced Computing Architectures (aca_20190115)$$caca_20190115$$fAdvanced Computing Architectures$$x1
000891301 536__ $$0G:(DE-Juel1)PHD-NO-GRANT-20170405$$aPhD no Grant - Doktorand ohne besondere Förderung (PHD-NO-GRANT-20170405)$$cPHD-NO-GRANT-20170405$$x2
000891301 536__ $$0G:(EU-Grant)945539$$aHBP SGA3 - Human Brain Project Specific Grant Agreement 3 (945539)$$c945539$$x3
000891301 7001_ $$0P:(DE-Juel1)144174$$aDiesmann, Markus$$b1
000891301 7001_ $$0P:(DE-HGF)0$$aWouters, Dirk J.$$b2
000891301 7001_ $$0P:(DE-Juel1)145211$$aTetzlaff, Tom$$b3
000891301 909CO $$ooai:juser.fz-juelich.de:891301$$pec_fundedresources$$pVDB$$popenaire
000891301 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)176778$$aForschungszentrum Jülich$$b0$$kFZJ
000891301 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)144174$$aForschungszentrum Jülich$$b1$$kFZJ
000891301 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)145211$$aForschungszentrum Jülich$$b3$$kFZJ
000891301 9130_ $$0G:(DE-HGF)POF3-574$$1G:(DE-HGF)POF3-570$$2G:(DE-HGF)POF3-500$$3G:(DE-HGF)POF3$$4G:(DE-HGF)POF$$aDE-HGF$$bKey Technologies$$lDecoding the Human Brain$$vTheory, modelling and simulation$$x0
000891301 9131_ $$0G:(DE-HGF)POF4-523$$1G:(DE-HGF)POF4-520$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$aDE-HGF$$bKey Technologies$$lNatural, Artificial and Cognitive Information Processing$$vNeuromorphic Computing and Network Dynamics$$x0
000891301 9141_ $$y2021
000891301 920__ $$lyes
000891301 9201_ $$0I:(DE-Juel1)INM-6-20090406$$kINM-6$$lComputational and Systems Neuroscience$$x0
000891301 9201_ $$0I:(DE-Juel1)IAS-6-20130828$$kIAS-6$$lTheoretical Neuroscience$$x1
000891301 9201_ $$0I:(DE-Juel1)INM-10-20170113$$kINM-10$$lJARA-Institut Brain structure-function relationships$$x2
000891301 9201_ $$0I:(DE-Juel1)PGI-7-20110106$$kPGI-7$$lElektronische Materialien$$x3
000891301 9201_ $$0I:(DE-Juel1)PGI-10-20170113$$kPGI-10$$lJARA Institut Green IT$$x4
000891301 980__ $$aposter
000891301 980__ $$aVDB
000891301 980__ $$aI:(DE-Juel1)INM-6-20090406
000891301 980__ $$aI:(DE-Juel1)IAS-6-20130828
000891301 980__ $$aI:(DE-Juel1)INM-10-20170113
000891301 980__ $$aI:(DE-Juel1)PGI-7-20110106
000891301 980__ $$aI:(DE-Juel1)PGI-10-20170113
000891301 980__ $$aUNRESTRICTED
000891301 981__ $$aI:(DE-Juel1)IAS-6-20130828