Poster (After Call) FZJ-2025-02382

Normalizing flows for nonlinear dimensionality reduction of electrophysiological recordings


2023

Computational Neuroscience Academy, CNA 2023, Krakow, Poland, 17 Jul 2025 - 23 Jul 2025

Abstract: Even though the cortex contains many active neurons, neuronal populations in different brain areas are thought to dwell on a low-dimensional manifold [1]. Variants of principal component analysis (PCA) are used to estimate this manifold and its dimension. Although successful, these methods assume that the data are well described by a Gaussian distribution and ignore features like skewness and bimodality. Therefore, they perform poorly as generative models.

Normalizing Flows (NFs) allow us to learn the statistics of neural activity and to generate artificial samples [2, 3]. These neural networks learn a dimension-preserving estimator of the data's probability distribution. They are simpler than generative adversarial networks (GANs) and variational autoencoders (VAEs), since they learn only a single bijective mapping and can compute the likelihood exactly thanks to tractable Jacobians at each building block.

NFs are trained to distinguish relevant (in-manifold) from noisy (out-of-manifold) dimensions. To do this, we break the original symmetry of the latent space by pushing the maximal variance of the data into as few dimensions as possible, the same idea that underpins PCA, a linear model, adopted here for nonlinear mappings. These unique characteristics of NFs allow us to estimate the neural manifold's dimension and to describe the underlying manifold without discarding any information.

Our adaptation is validated on simulated datasets of varying complexity, created using a hidden-manifold model with specified dimensions. Reconstructing the data from a few latent NF dimensions demonstrates the capability of our approach; in this setting, our nonlinear approaches outperform linear ones. Using the same technique, we identify manifolds in high-gamma EEG recordings. In the experiment of [4], 128 electrodes were recorded during four movement tasks. These data show a heavy-tailed distribution along some of the first principal components. NFs can learn such higher-order correlations, whereas linear models like PCA are limited to Gaussian statistics. Flattening the latent space also lets us better match features to latent dimensions, so that fewer latent dimensions explain most of the data variance.

References:
[1] J. A. Gallego, M. G. Perich, L. E. Miller, and S. A. Solla, Neuron 94(5), 978-984 (2017).
[2] L. Dinh, D. Krueger, and Y. Bengio, International Conference on Learning Representations (ICLR) (2015).
[3] L. Dinh, J. Sohl-Dickstein, and S. Bengio, International Conference on Learning Representations (ICLR) (2017).
[4] R. T. Schirrmeister, J. T. Springenberg, L. D. J. Fiederer, M. Glasstetter, K. Eggensperger, M. Tangermann, ... and T. Ball, Human Brain Mapping 38(11), 5391-5420 (2017).
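For readers who want the mechanics behind the abstract, below is a minimal PyTorch sketch, not the authors' implementation, of a RealNVP-style affine coupling flow [2, 3] with an exactly computable log-det Jacobian. The variance-ordering penalty (the hypothetical n_relevant and beta parameters in nf_loss) and the reconstruction-based dimension check (reconstruction_error) are illustrative assumptions modeled on the procedure the abstract describes; all names and hyperparameters are hypothetical.

# Minimal sketch (assumed, not the authors' code) of a RealNVP-style flow
# with a PCA-like variance-ordering penalty and a reconstruction-based
# check of the manifold dimension, following the ideas in the abstract.
import torch
import torch.nn as nn


class AffineCoupling(nn.Module):
    """Bijection (x1, x2) -> (x1, x2 * exp(s(x1)) + t(x1)); assumes even dim."""

    def __init__(self, dim, hidden=64, flip=False):
        super().__init__()
        self.flip = flip  # alternate which half conditions the other
        self.net = nn.Sequential(
            nn.Linear(dim // 2, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),  # predicts log-scale s and shift t
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        cond, tr = (x2, x1) if self.flip else (x1, x2)
        s, t = self.net(cond).chunk(2, dim=1)
        s = torch.tanh(s)  # bounded log-scales for numerical stability
        tr = tr * torch.exp(s) + t
        out = [tr, cond] if self.flip else [cond, tr]
        return torch.cat(out, dim=1), s.sum(dim=1)  # log|det J| = sum of log-scales

    def inverse(self, z):
        z1, z2 = z.chunk(2, dim=1)
        cond, tr = (z2, z1) if self.flip else (z1, z2)
        s, t = self.net(cond).chunk(2, dim=1)
        tr = (tr - t) * torch.exp(-torch.tanh(s))
        out = [tr, cond] if self.flip else [cond, tr]
        return torch.cat(out, dim=1)


class Flow(nn.Module):
    """Stack of coupling blocks; dimension-preserving and exactly invertible."""

    def __init__(self, dim, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            [AffineCoupling(dim, flip=(i % 2 == 1)) for i in range(n_blocks)])

    def forward(self, x):
        log_det = x.new_zeros(x.shape[0])
        for block in self.blocks:
            x, ld = block(x)
            log_det = log_det + ld
        return x, log_det

    def inverse(self, z):
        for block in reversed(self.blocks):
            z = block.inverse(z)
        return z


def nf_loss(z, log_det, n_relevant=3, beta=1.0):
    """Exact negative log-likelihood under a standard-normal base density,
    plus an (assumed) penalty that pushes variance out of all but the
    first n_relevant latent dimensions: the PCA-like ordering idea."""
    nll = 0.5 * (z ** 2).sum(dim=1) - log_det  # up to an additive constant
    off_manifold = (z[:, n_relevant:] ** 2).sum(dim=1)
    return (nll + beta * off_manifold).mean()


def reconstruction_error(flow, x, k):
    """Keep the first k latent dimensions, zero the rest, invert the flow,
    and measure the error: a small error at small k suggests a k-dim manifold."""
    z, _ = flow(x)
    z_trunc = torch.cat([z[:, :k], torch.zeros_like(z[:, k:])], dim=1)
    return ((flow.inverse(z_trunc) - x) ** 2).mean()

Alternating the flip flag across blocks ensures that every dimension is eventually transformed, the standard masking scheme of RealNVP [3].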


Contributing Institute(s):
  1. Computational and Systems Neuroscience (INM-6)
  2. Computational and Systems Neuroscience (IAS-6)
Research Program(s):
  1. 5232 - Computational Principles (POF4-523)
  2. 5234 - Emerging NC Architectures (POF4-523)
  3. GRK 2416 - GRK 2416: MultiSenses-MultiScales: New approaches to elucidating neuronal multisensory integration (368482240)
  4. RenormalizedFlows - Transparent Deep Learning with Renormalized Flows (BMBF-01IS19077A)


The record appears in these collections:
Document types > Presentations > Posters
Institute collections > IAS > IAS-6
Institute collections > INM > INM-6
Workflow collections > Public entries
Publications database

Record created on 2025-04-29, last modified on 2025-05-05


External link:
Download full text