%0 Conference Paper
%A Lindner, Javed
%A Fischer, Kirsten
%A Dahmen, David
%A Ringel, Zohar
%A Krämer, Michael
%A Helias, Moritz
%T Feature learning in deep neural networks close to criticality
%M FZJ-2026-00967
%D 2025
%X Neural networks excel due to their ability to learn features, yet a theoretical understanding of feature learning remains an area of ongoing research. We develop a finite-width theory for deep non-linear networks, showing that their Bayesian prior is a superposition of Gaussian processes with kernel variances inversely proportional to the network width. In the proportional limit, where both the network width N and the number of training samples P tend to infinity with the ratio P/N fixed, we derive forward-backward equations for the maximum a posteriori kernels, demonstrating how layer representations align with targets across network layers. A field-theoretic approach links finite-width corrections of the network kernels to fluctuations of the prior, bridging classical edge-of-chaos theory with feature learning and revealing key interactions between criticality, response, and network scales.
%B DPG Spring Meeting of the Condensed Matter Section
%C 16 Mar 2025 - 21 Mar 2025, Regensburg (Germany)
%F PUB:(DE-HGF)6
%9 Conference Presentation
%U https://juser.fz-juelich.de/record/1052373