Poster (After Call) FZJ-2025-02377

Assessing Neural Manifold Properties With Adapted Normalizing Flows


2024

Bernstein Conference 2024, Frankfurt, Germany, 29 Sep 2024 - 2 Oct 2024 [10.12751/NNCN.BC2024.179]


Abstract: Despite the large number of active neurons in the cortex, the activity of neuronal populations is expected to lie on a low-dimensional manifold across different brain regions [1]. Variants of principal component analysis (PCA) are commonly used to assess this manifold. However, these methods are limited by the assumption that the data follow a Gaussian distribution, and they neglect additional features such as the curvature of the manifold. Hence, their performance as generative models tends to be subpar.

To construct a generative model that learns the statistics of neural activity entirely, with no assumptions about its distribution, we use Normalizing Flows (NFs) [2, 3]. These neural networks learn an estimator of the probability distribution of the data, based on a latent distribution of the same dimension. Their simplicity and their ability to compute the exact likelihood distinguish them from other generative networks.

Our adaptation of NFs focuses on distinguishing between relevant (in-manifold) and noise (out-of-manifold) dimensions. We achieve this by identifying principal axes in the latent space. Similar to PCA, we order those axes by their explanatory power, using reconstruction performance instead of explained variance to identify and rank them. This idea was also explored in [4] with a different loss function. Our adaptation allows us to investigate the behavior of the non-linear principal axes and thus the geometry on which the data lie. To this end, we approximate the network as a quadratic mapping around the maximum-likelihood modes for better interpretability.

We validate our adaptation on artificial data sets of varying complexity where the underlying dimensionality is known. This shows that our approach is able to reconstruct the data with only a few latent variables; in this regard it is more efficient than PCA, in addition to achieving a higher likelihood.

We apply the method to electrophysiological recordings of V1 and V4 in macaques [5], which have previously been analyzed with a Gaussian mixture model [6]. We show that the data lie on a manifold featuring two distinct regions, each corresponding to one of the two states, eyes-open and eyes-closed. The shape of the manifold deviates significantly from a Gaussian distribution and would thus not be recoverable with PCA. We further analyze how non-linear interactions between groups of neurons contribute to the shape of the manifold.

Figure 1: We use Normalizing Flows to learn the distribution of the data by mapping it to a Gaussian distribution in latent space. Thereby, we enforce an alignment of the latent dimensions to the most informative non-linear axes.
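The poster itself contains no code; the following is a minimal, self-contained PyTorch sketch of the general idea described in the abstract, not the authors' implementation. It shows a small coupling-based normalizing flow trained by exact maximum likelihood, plus a ranking of latent axes by reconstruction performance. All names (Coupling, Flow, rank_latent_axes) and the toy data are hypothetical, and where the abstract describes enforcing the axis alignment during training through the loss, this sketch only ranks the axes post hoc by zeroing latent coordinates.

    import torch
    import torch.nn as nn

    class Coupling(nn.Module):
        # RealNVP-style affine coupling: y_a = x_a * exp(s(x_b)) + t(x_b), y_b = x_b.
        def __init__(self, dim, hidden=64):
            super().__init__()
            self.ha = dim // 2  # number of dimensions that get transformed
            self.net = nn.Sequential(
                nn.Linear(dim - self.ha, hidden), nn.Tanh(),
                nn.Linear(hidden, 2 * self.ha),
            )

        def forward(self, x):
            xa, xb = x[:, :self.ha], x[:, self.ha:]
            s, t = self.net(xb).chunk(2, dim=1)
            s = torch.tanh(s)  # bound the log-scales for numerical stability
            return torch.cat([xa * torch.exp(s) + t, xb], dim=1), s.sum(dim=1)

        def inverse(self, y):
            ya, yb = y[:, :self.ha], y[:, self.ha:]
            s, t = self.net(yb).chunk(2, dim=1)
            s = torch.tanh(s)
            return torch.cat([(ya - t) * torch.exp(-s), yb], dim=1)

    class Flow(nn.Module):
        # A small stack of couplings with fixed random permutations in between.
        def __init__(self, dim, n_layers=6):
            super().__init__()
            self.layers = nn.ModuleList(Coupling(dim) for _ in range(n_layers))
            self.perms = [torch.randperm(dim) for _ in range(n_layers)]
            self.inv_perms = [torch.argsort(p) for p in self.perms]

        def forward(self, x):  # data -> latent, with log|det J| for the exact likelihood
            logdet = torch.zeros(x.shape[0])
            for p, layer in zip(self.perms, self.layers):
                x, ld = layer(x[:, p])
                logdet = logdet + ld
            return x, logdet

        def inverse(self, z):  # latent -> data
            for ip, layer in zip(reversed(self.inv_perms), reversed(self.layers)):
                z = layer.inverse(z)[:, ip]
            return z

    def rank_latent_axes(flow, x):
        # Greedily order latent axes by how well keeping them (and zeroing the
        # rest) reconstructs the data after inverting the flow -- reconstruction
        # performance instead of explained variance, analogous to PCA ranking.
        with torch.no_grad():
            z, _ = flow(x)
            remaining, order = list(range(z.shape[1])), []
            while remaining:
                errs = {}
                for d in remaining:
                    z_masked = torch.zeros_like(z)
                    z_masked[:, order + [d]] = z[:, order + [d]]
                    errs[d] = ((x - flow.inverse(z_masked)) ** 2).mean().item()
                best = min(errs, key=errs.get)
                order.append(best)
                remaining.remove(best)
        return order

    # Toy data: a noisy 1-D curve embedded in 3-D, so the intrinsic dimension is known.
    torch.manual_seed(0)
    u = torch.rand(2048, 1) * 4 - 2
    x = torch.cat([u, torch.sin(u), u ** 2], dim=1) + 0.05 * torch.randn(2048, 3)

    flow = Flow(dim=3)
    opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
    base = torch.distributions.Normal(0.0, 1.0)  # Gaussian latent distribution
    for step in range(2000):
        z, logdet = flow(x)
        nll = -(base.log_prob(z).sum(dim=1) + logdet).mean()  # exact NLL via change of variables
        opt.zero_grad(); nll.backward(); opt.step()

    print(rank_latent_axes(flow, x[:256]))  # latent axes, most informative first

The quadratic approximation around a maximum-likelihood mode mentioned in the abstract can be sketched along the same lines, as a second-order Taylor expansion of the learned generator (the inverse flow). Again this is only an illustration: the expansion point z_star below (the mode of the Gaussian latent) is a stand-in for the maximum-likelihood modes identified in the actual analysis.

    from torch.autograd.functional import jacobian, hessian

    def quadratic_generator(flow, z_star):
        # Second-order Taylor expansion of g = flow.inverse around z_star:
        # g(z) ~ g(z*) + J (z - z*) + 1/2 (z - z*)^T H_k (z - z*), per output k.
        g = lambda z: flow.inverse(z.unsqueeze(0)).squeeze(0)
        x0 = g(z_star)
        J = jacobian(g, z_star)  # (dim_x, dim_z): local tangent directions
        H = torch.stack([hessian(lambda z, k=k: g(z)[k], z_star)
                         for k in range(x0.shape[0])])  # (dim_x, dim_z, dim_z): curvature
        def g_quad(z):
            dz = z - z_star
            return x0 + J @ dz + 0.5 * torch.einsum('i,kij,j->k', dz, H, dz)
        return g_quad

    g_quad = quadratic_generator(flow, z_star=torch.zeros(3))

The Jacobian spans the local tangent space of the manifold and the per-output Hessians capture its curvature, which is the kind of geometric information the non-linear principal axes are meant to expose.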

Keyword(s): Computational Neuroscience ; Data analysis, machine learning and neuroinformatics


Contributing Institute(s):
  1. Computational and Systems Neuroscience (IAS-6)
Research Program(s):
  1. 5232 - Computational Principles (POF4-523)
  2. 5234 - Emerging NC Architectures (POF4-523)
  3. GRK 2416 - MultiSenses-MultiScales: New approaches to elucidating neuronal multisensory integration (368482240)
  4. RenormalizedFlows - Transparent Deep Learning with Renormalized Flows (BMBF-01IS19077A)



 Record created 2025-04-29, last modified 2025-05-05

