Assessing Neural Manifold Properties With Adapted Normalizing Flows
Poster (After Call) | FZJ-2025-02377
2024
Please use a persistent id in citations: doi:10.12751/NNCN.BC2024.179
Abstract: Despite the large number of active neurons in the cortex, the activity of neuronal populations is expected to lie on a low-dimensional manifold across different brain regions [1]. Variants of principal component analysis (PCA) are commonly used to assess this manifold. However, these methods are limited by the assumption that the data follow a Gaussian distribution, and they neglect additional features such as the curvature of the manifold. Hence, their performance as generative models tends to be subpar.

To construct a generative model that learns the statistics of neural activity entirely, with no assumptions about its distribution, we use Normalizing Flows (NFs) [2, 3]. These neural networks learn an estimator of the probability distribution of the data, based on a latent distribution of the same dimension. Their simplicity and their ability to compute the exact likelihood distinguish them from other generative networks.

Our adaptation of NFs focuses on distinguishing between relevant (in-manifold) and noise (out-of-manifold) dimensions. We achieve this by identifying principal axes in the latent space. As in PCA, we order those axes by their explanatory power, but we use reconstruction performance instead of explained variance to identify and rank them. A similar idea was explored in [4] with a different loss function. Our adaptation allows us to investigate the behavior of the non-linear principal axes, and thus the geometry on which the data lie. This is done by approximating the network, for better interpretability, as a quadratic mapping around the maximum-likelihood modes.

We validate our adaptation on artificial data sets of varying complexity where the underlying dimensionality is known. This shows that our approach is able to reconstruct data with only a few latent variables. In this regard it is more efficient than PCA, in addition to achieving a higher likelihood.

We apply the method to electrophysiological recordings of V1 and V4 in macaques [5], which have previously been analyzed with a Gaussian Mixture Model [6]. We show that the data lie on a manifold that features two distinct regions, each corresponding to one of the two states, eyes-open and eyes-closed. The shape of the manifold deviates significantly from a Gaussian distribution and would therefore not be recoverable with PCA. We further analyze how the non-linear interaction between groups of neurons contributes to the shape of the manifolds.

Figure 1: We use Normalizing Flows to learn the distribution of the data by mapping it to a Gaussian distribution in latent space. In doing so, we enforce an alignment of the latent dimensions with the most informative non-linear axes.
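To make the axis-ranking idea concrete, the following is a minimal sketch, not the authors' implementation: a coupling-based normalizing flow trained by exact maximum likelihood, followed by a post-hoc ranking of latent axes by reconstruction performance. All class names, the toy data set, and the hyperparameters are illustrative assumptions; note that the published adaptation enforces the alignment during training through its loss function, rather than ranking axes after the fact as done here.

```python
import math
import torch
import torch.nn as nn


class AffineCoupling(nn.Module):
    """RealNVP-style layer: half the dimensions set scale/shift for the other half."""

    def __init__(self, dim, hidden=64, flip=False):
        super().__init__()
        assert dim % 2 == 0, "this sketch assumes an even data dimension"
        self.flip = flip
        half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * half),
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        if self.flip:
            x1, x2 = x2, x1
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)  # bounded log-scale for numerical stability
        y2 = x2 * torch.exp(s) + t
        y = torch.cat((y2, x1) if self.flip else (x1, y2), dim=-1)
        return y, s.sum(-1)  # log|det J| of this layer

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        if self.flip:
            y1, y2 = y2, y1
        s, t = self.net(y1).chunk(2, dim=-1)
        s = torch.tanh(s)
        x2 = (y2 - t) * torch.exp(-s)
        return torch.cat((x2, y1) if self.flip else (y1, x2), dim=-1)


class Flow(nn.Module):
    """Stack of couplings; exact log-likelihood via the change-of-variables formula."""

    def __init__(self, dim, n_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            [AffineCoupling(dim, flip=bool(i % 2)) for i in range(n_layers)]
        )

    def forward(self, x):  # data -> latent z, plus log p(x)
        logdet = x.new_zeros(x.shape[0])
        for layer in self.layers:
            x, ld = layer(x)
            logdet = logdet + ld
        log_base = -0.5 * (x ** 2).sum(-1) - 0.5 * x.shape[-1] * math.log(2 * math.pi)
        return x, log_base + logdet

    def inverse(self, z):  # latent -> data
        for layer in reversed(self.layers):
            z = layer.inverse(z)
        return z


# Toy data: a noisy ring (intrinsically 1-D) embedded in 4 dimensions.
torch.manual_seed(0)
angle = torch.rand(2000, 1) * 2 * math.pi
data = torch.cat([torch.cos(angle), torch.sin(angle),
                  0.05 * torch.randn(2000, 2)], dim=-1)

flow = Flow(dim=4)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
for _ in range(2000):
    _, logp = flow(data)
    loss = -logp.mean()  # exact negative log-likelihood
    opt.zero_grad()
    loss.backward()
    opt.step()

# Post-hoc ranking: keep a single latent axis, zero the rest, invert the flow,
# and score the axis by reconstruction error -- a non-linear analogue of
# ranking PCA axes by explained variance.
with torch.no_grad():
    z, _ = flow(data)
    err = []
    for k in range(z.shape[1]):
        z_k = torch.zeros_like(z)
        z_k[:, k] = z[:, k]
        err.append(((data - flow.inverse(z_k)) ** 2).mean().item())
ranking = sorted(range(len(err)), key=err.__getitem__)
print("latent axes, most informative first:", ranking)
```

On data like the toy ring above, a single well-chosen non-linear latent axis can capture what PCA would spread over two linear components, which is the efficiency argument the abstract makes.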
Keyword(s): Computational Neuroscience; Data analysis, machine learning and neuroinformatics