001052373 001__ 1052373
001052373 005__ 20260126203622.0
001052373 037__ $$aFZJ-2026-00967
001052373 1001_ $$0P:(DE-Juel1)185990$$aLindner, Javed$$b0$$eCorresponding author$$ufzj
001052373 1112_ $$aDPG Spring Meeting of the Condensed Matter Section$$cRegensburg$$d2025-03-16 - 2025-03-21$$wGermany
001052373 245__ $$aFeature learning in deep neural networks close to criticality
001052373 260__ $$c2025
001052373 3367_ $$033$$2EndNote$$aConference Paper
001052373 3367_ $$2DataCite$$aOther
001052373 3367_ $$2BibTeX$$aINPROCEEDINGS
001052373 3367_ $$2DRIVER$$aconferenceObject
001052373 3367_ $$2ORCID$$aLECTURE_SPEECH
001052373 3367_ $$0PUB:(DE-HGF)6$$2PUB:(DE-HGF)$$aConference Presentation$$bconf$$mconf$$s1769428101_13215$$xAfter Call
001052373 520__ $$aNeural networks excel due to their ability to learn features, yet a theoretical understanding of feature learning remains an active field of research. We develop a finite-width theory for deep non-linear networks, showing that their Bayesian prior is a superposition of Gaussian processes with kernel variances inversely proportional to the network width. In the proportional limit, where both the network width N and the number of training samples P scale as N,P→∞ with P/N fixed, we derive forward-backward equations for the maximum a posteriori kernels, demonstrating how layer representations align with targets across network layers. A field-theoretic approach links finite-width corrections of the network kernels to fluctuations of the prior, bridging classical edge-of-chaos theory with feature learning and revealing key interactions between criticality, response, and network scales.
001052373 536__ $$0G:(DE-HGF)POF4-5232$$a5232 - Computational Principles (POF4-523)$$cPOF4-523$$fPOF IV$$x0
001052373 536__ $$0G:(DE-HGF)POF4-5234$$a5234 - Emerging NC Architectures (POF4-523)$$cPOF4-523$$fPOF IV$$x1
001052373 536__ $$0G:(DE-Juel1)HGF-SMHB-2014-2018$$aMSNN - Theory of multi-scale neuronal networks (HGF-SMHB-2014-2018)$$cHGF-SMHB-2014-2018$$fMSNN$$x2
001052373 536__ $$0G:(DE-HGF)SO-092$$aACA - Advanced Computing Architectures (SO-092)$$cSO-092$$x3
001052373 536__ $$0G:(GEPRIS)368482240$$aGRK 2416 - GRK 2416: MultiSenses-MultiScales: Neue Ansätze zur Aufklärung neuronaler multisensorischer Integration (368482240)$$c368482240$$x4
001052373 7001_ $$0P:(DE-Juel1)180150$$aFischer, Kirsten$$b1$$ufzj
001052373 7001_ $$0P:(DE-Juel1)156459$$aDahmen, David$$b2$$ufzj
001052373 7001_ $$0P:(DE-HGF)0$$aRingel, Zohar$$b3
001052373 7001_ $$0P:(DE-HGF)0$$aKrämer, Michael$$b4
001052373 7001_ $$0P:(DE-Juel1)144806$$aHelias, Moritz$$b5$$ufzj
001052373 8564_ $$uhttps://www.dpg-verhandlungen.de/year/2025/conference/regensburg/part/soe/session/7/contribution/5
001052373 909CO $$ooai:juser.fz-juelich.de:1052373$$pVDB
001052373 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)185990$$aForschungszentrum Jülich$$b0$$kFZJ
001052373 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)180150$$aForschungszentrum Jülich$$b1$$kFZJ
001052373 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)156459$$aForschungszentrum Jülich$$b2$$kFZJ
001052373 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)144806$$aForschungszentrum Jülich$$b5$$kFZJ
001052373 9131_ $$0G:(DE-HGF)POF4-523$$1G:(DE-HGF)POF4-520$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5232$$aDE-HGF$$bKey Technologies$$lNatural, Artificial and Cognitive Information Processing$$vNeuromorphic Computing and Network Dynamics$$x0
001052373 9131_ $$0G:(DE-HGF)POF4-523$$1G:(DE-HGF)POF4-520$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5234$$aDE-HGF$$bKey Technologies$$lNatural, Artificial and Cognitive Information Processing$$vNeuromorphic Computing and Network Dynamics$$x1
001052373 9201_ $$0I:(DE-Juel1)IAS-6-20130828$$kIAS-6$$lComputational and Systems Neuroscience$$x0
001052373 980__ $$aconf
001052373 980__ $$aVDB
001052373 980__ $$aI:(DE-Juel1)IAS-6-20130828
001052373 980__ $$aUNRESTRICTED