Recurrent network dynamics underlying transient sensory stimulus representations in mice
Poster (After Call) | FZJ-2025-03252
2025
Abstract: Different stimuli elicit different transient neural responses in the brain. How is the information represented in the parallel neuronal activity, and how is it reshaped by the dynamics of local recurrent circuits? We investigate these questions in Neuropixels recordings of awake behaving mice and in recurrent neural network models by inferring the stimulus class from the network activity.

We employ methods from the statistical physics of disordered systems to derive a two-replica mean-field theory that reduces the complex network dynamics to three dynamical quantities that fully determine the separability of stimulus representations. These dynamical quantities are the mean population activity $R$ and the overlaps $Q^{=}$ and $Q^{\neq}$, representing response variability within or across stimulus classes, respectively. Mean-field theory predicts the time evolution of $R$, $Q^{=}$, and $Q^{\neq}$ and enables us to quantitatively explain experimental observables. The analytical theory predicts the temporal dynamics of stimulus separability as an interplay of firing-rate dynamics, controlled by inhibitory balancing, and overlaps, governed by chaotic dynamics.

The analysis of the mutual information of an optimally trained readout on the population signal reveals a trade-off between more information conveyed with an increasing number of stimuli and stimuli becoming less separable due to their increased overlap in the finite-dimensional neuronal space. We find that the experimentally observed small population activity $R$ lies in a regime where information grows with the number of stimuli, which is sharply separated from a second regime in which information converges to zero, revealing a crucial advantage of sparse coding.
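To make the three summary quantities concrete, the following minimal Python sketch illustrates one plausible way such quantities could be estimated from trial-by-trial population responses at a single time point; the array shapes, the normalization of the overlaps, and the random stand-in data are assumptions for illustration only and are not the authors' definitions (in the abstract, $R$, $Q^{=}$, and $Q^{\neq}$ are tracked over time by the mean-field theory).

```python
import numpy as np

# Illustrative sketch (not the authors' code): estimating the mean population
# activity R and the within-/across-class overlaps Q^= and Q^!= from trials of
# population activity at one time slice. Shapes and names are assumptions.
#
#   x[c, t, :]  -- response of N neurons on trial t of stimulus class c

rng = np.random.default_rng(0)
n_classes, n_trials, n_neurons = 4, 20, 100
x = rng.normal(size=(n_classes, n_trials, n_neurons))  # stand-in data

# Mean population activity R: average over neurons, trials, and classes.
R = x.mean()

def overlap(a, b):
    """Normalized overlap (1/N) a.b between two population vectors."""
    return a @ b / n_neurons

# Q^=: average overlap between distinct trials of the same stimulus class.
same = [
    overlap(x[c, i], x[c, j])
    for c in range(n_classes)
    for i in range(n_trials)
    for j in range(n_trials)
    if i != j
]
Q_same = np.mean(same)

# Q^!=: average overlap between trials of different stimulus classes.
diff = [
    overlap(x[c1, i], x[c2, j])
    for c1 in range(n_classes)
    for c2 in range(n_classes)
    if c1 != c2
    for i in range(n_trials)
    for j in range(n_trials)
]
Q_diff = np.mean(diff)

# Stimulus classes are separable to the extent that within-class overlap
# exceeds across-class overlap (Q^= > Q^!=); with random stand-in data the
# two are statistically indistinguishable.
print(f"R = {R:.3f}, Q^= = {Q_same:.3f}, Q^!= = {Q_diff:.3f}")
```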