000909968 001__ 909968
000909968 005__ 20240313094841.0
000909968 037__ $$aFZJ-2022-03558
000909968 041__ $$aEnglish
000909968 1001_ $$0P:(DE-Juel1)180150$$aFischer, Kirsten$$b0$$eCorresponding author
000909968 1112_ $$aBernstein Conference$$cBerlin$$d2022-09-14 - 2022-09-16$$wGermany
000909968 245__ $$aStatistical decomposition of neural networks: Information transfer between correlation functions
000909968 260__ $$c2022
000909968 3367_ $$033$$2EndNote$$aConference Paper
000909968 3367_ $$2BibTeX$$aINPROCEEDINGS
000909968 3367_ $$2DRIVER$$aconferenceObject
000909968 3367_ $$2ORCID$$aCONFERENCE_POSTER
000909968 3367_ $$2DataCite$$aOutput Types/Conference Poster
000909968 3367_ $$0PUB:(DE-HGF)24$$2PUB:(DE-HGF)$$aPoster$$bposter$$mposter$$s1669810630_6279$$xAfter Call
000909968 520__ $$aUncovering principles of information processing in neural systems continues to be an active field of research. It is well known that the visual system processes signals in a hierarchical manner [1,2]. Commonly used models in machine learning that perform hierarchical computations are feed-forward networks. Here we study deep feed-forward networks with the aim of deducing general functional aspects of such systems. These networks implement a mapping between probability distributions, where the input distribution is transformed iteratively from layer to layer. We develop a formalism that expresses the signal transformation in each layer as a transfer of information between different orders of correlation functions (see Fig. (a)). We show that the processing within internal network layers is captured by correlations up to second order. In addition, we demonstrate how the input layer also extracts higher-order correlations from the data. Thus, by presenting different correlation orders in the input, we identify key statistics in the data (see Fig. (b)-(d)). As a next step, we consider recurrent time-continuous networks reminiscent of biological neuronal networks (neural ODEs [3]). We derive a Fokker-Planck equation describing the evolution of the probability distribution. This formulation allows us to study the time-dependent information flow between different interaction terms. In summary, this work provides insights into functional principles of information processing in neural networks. References: [1] Hubel, D. H., & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160(1), 106. [2] Zhuang, C., Yan, S., Nayebi, A., Schrimpf, M., Frank, M. C., DiCarlo, J. J., & Yamins, D. L. (2021). Unsupervised neural network models of the ventral visual stream. Proceedings of the National Academy of Sciences, 118(3), e2014196118. [3] Chen, R. T., Rubanova, Y., Bettencourt, J., & Duvenaud, D. K. (2018). Neural ordinary differential equations. Advances in Neural Information Processing Systems, 31.
000909968 536__ $$0G:(DE-HGF)POF4-5232$$a5232 - Computational Principles (POF4-523)$$cPOF4-523$$fPOF IV$$x0
000909968 536__ $$0G:(DE-HGF)POF4-5234$$a5234 - Emerging NC Architectures (POF4-523)$$cPOF4-523$$fPOF IV$$x1
000909968 536__ $$0G:(DE-Juel-1)BMBF-01IS19077A$$aRenormalizedFlows - Transparent Deep Learning with Renormalized Flows (BMBF-01IS19077A)$$cBMBF-01IS19077A$$x2
000909968 536__ $$0G:(DE-Juel1)HGF-SMHB-2014-2018$$aMSNN - Theory of multi-scale neuronal networks (HGF-SMHB-2014-2018)$$cHGF-SMHB-2014-2018$$fMSNN$$x3
000909968 536__ $$0G:(DE-HGF)SO-092$$aACA - Advanced Computing Architectures (SO-092)$$cSO-092$$x4
000909968 536__ $$0G:(DE-82)EXS-SF-neuroIC002$$aneuroIC002 - Recurrence and stochasticity for neuro-inspired computation (EXS-SF-neuroIC002)$$cEXS-SF-neuroIC002$$x5
000909968 536__ $$0G:(GEPRIS)368482240$$aGRK 2416: MultiSenses-MultiScales: Novel approaches to decipher neural processing in multisensory integration (368482240)$$c368482240$$x6
000909968 7001_ $$0P:(DE-Juel1)178936$$aRené, Alexandre$$b1
000909968 7001_ $$0P:(DE-Juel1)171384$$aKeup, Christian$$b2
000909968 7001_ $$0P:(DE-Juel1)174497$$aLayer, Moritz$$b3
000909968 7001_ $$0P:(DE-Juel1)156459$$aDahmen, David$$b4
000909968 7001_ $$0P:(DE-Juel1)144806$$aHelias, Moritz$$b5
000909968 909CO $$ooai:juser.fz-juelich.de:909968$$pVDB
000909968 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)180150$$aForschungszentrum Jülich$$b0$$kFZJ
000909968 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)178936$$aForschungszentrum Jülich$$b1$$kFZJ
000909968 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)171384$$aForschungszentrum Jülich$$b2$$kFZJ
000909968 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)174497$$aForschungszentrum Jülich$$b3$$kFZJ
000909968 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)156459$$aForschungszentrum Jülich$$b4$$kFZJ
000909968 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)144806$$aForschungszentrum Jülich$$b5$$kFZJ
000909968 9131_ $$0G:(DE-HGF)POF4-523$$1G:(DE-HGF)POF4-520$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5232$$aDE-HGF$$bKey Technologies$$lNatural, Artificial and Cognitive Information Processing$$vNeuromorphic Computing and Network Dynamics$$x0
000909968 9131_ $$0G:(DE-HGF)POF4-523$$1G:(DE-HGF)POF4-520$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5234$$aDE-HGF$$bKey Technologies$$lNatural, Artificial and Cognitive Information Processing$$vNeuromorphic Computing and Network Dynamics$$x1
000909968 9141_ $$y2022
000909968 920__ $$lyes
000909968 9201_ $$0I:(DE-Juel1)INM-6-20090406$$kINM-6$$lComputational and Systems Neuroscience$$x0
000909968 9201_ $$0I:(DE-Juel1)IAS-6-20130828$$kIAS-6$$lTheoretical Neuroscience$$x1
000909968 9201_ $$0I:(DE-Juel1)INM-10-20170113$$kINM-10$$lJARA-Institut Brain structure-function relationships$$x2
000909968 980__ $$aposter
000909968 980__ $$aVDB
000909968 980__ $$aI:(DE-Juel1)INM-6-20090406
000909968 980__ $$aI:(DE-Juel1)IAS-6-20130828
000909968 980__ $$aI:(DE-Juel1)INM-10-20170113
000909968 980__ $$aUNRESTRICTED
000909968 981__ $$aI:(DE-Juel1)IAS-6-20130828