TY  - CONF
AU  - Lindner, Javed
AU  - Fischer, Kirsten
AU  - Dahmen, David
AU  - Ringel, Zohar
AU  - Krämer, Michael
AU  - Helias, Moritz
TI  - Feature learning in deep neural networks close to criticality
M1  - FZJ-2026-00967
PY  - 2025
AB  - Neural networks excel due to their ability to learn features, yet a theoretical understanding of feature learning remains an area of active research. We develop a finite-width theory for deep non-linear networks, showing that their Bayesian prior is a superposition of Gaussian processes with kernel variances inversely proportional to the network width. In the proportional limit, where both the network width N and the number of training samples P scale as N, P → ∞ with P/N fixed, we derive forward-backward equations for the maximum a posteriori kernels, demonstrating how representations align with the target across network layers. A field-theoretic approach links finite-width corrections of the network kernels to fluctuations of the prior, bridging classical edge-of-chaos theory with feature learning and revealing key interactions between criticality, response, and network scales.
T2  - DPG Spring Meeting of the Condensed Matter Section
CY  - Regensburg (Germany)
Y2  - 16 Mar 2025 - 21 Mar 2025
M2  - Regensburg, Germany
LB  - PUB:(DE-HGF)6
UR  - https://juser.fz-juelich.de/record/1052373
ER  -