001009719 001__ 1009719
001009719 005__ 20240313103110.0
001009719 0247_ $$2datacite_doi$$a10.34734/FZJ-2023-02951
001009719 037__ $$aFZJ-2023-02951
001009719 1001_ $$0P:(DE-Juel1)178725$$aBouss, Peter$$b0$$eCorresponding author$$ufzj
001009719 1112_ $$a32nd Annual Computational Neuroscience Meeting$$cLeipzig$$d2023-07-15 - 2023-07-19$$gCNS 2023$$wGermany
001009719 245__ $$aNonlinear dimensionality reduction with normalizing flows for analysis of electrophysiological recordings
001009719 260__ $$c2023
001009719 3367_ $$033$$2EndNote$$aConference Paper
001009719 3367_ $$2BibTeX$$aINPROCEEDINGS
001009719 3367_ $$2DRIVER$$aconferenceObject
001009719 3367_ $$2ORCID$$aCONFERENCE_POSTER
001009719 3367_ $$2DataCite$$aOutput Types/Conference Poster
001009719 3367_ $$0PUB:(DE-HGF)24$$2PUB:(DE-HGF)$$aPoster$$bposter$$mposter$$s1704365624_23388$$xAfter Call
001009719 520__ $$aDespite the large number of active neurons in the cortex, the activity of neural populations in different brain regions is expected to live on a low-dimensional manifold [1]. Among the most common tools to estimate the mapping to this manifold, along with its dimension, are variants of principal component analysis (PCA). Although their success is undisputed, these methods have the disadvantage of assuming that the data are well described by a Gaussian distribution; any additional features such as skewness or bimodality are neglected. Their performance when used as generative models is therefore often poor. To fully learn the statistics of neural activity and to generate artificial samples, we use Normalizing Flows (NFs) [2, 3]. These neural networks learn a dimension-preserving estimator of the probability distribution of the data (left part of Fig. 1). They differ from generative adversarial networks (GANs) and variational autoencoders (VAEs) in their simplicity – only one bijective mapping is learned – and in their ability to compute the likelihood exactly, owing to tractable Jacobians at each building block. We adapt the training objective of NFs to discriminate between relevant (in-manifold) and noise (out-of-manifold) dimensions. To do this, we break the original symmetry of the latent space by enforcing that maximal variance of the data be encoded by as few dimensions as possible (right part of Fig. 1) – the same idea underlying PCA, a linear model, adapted here to nonlinear mappings. This allows us to estimate the dimensionality of the neural manifold and even to describe the underlying manifold without discarding any information, a unique feature of NFs. We demonstrate the validity of our adaptation on artificial datasets of varying complexity generated by a hidden manifold model where the underlying dimensionality is known. We illustrate the power of our approach by reconstructing data using only a few latent NF dimensions. In this setting, we show the advantage of such a nonlinear approach over linear methods. Following this approach, we identify manifolds in EEG recordings from a dataset featuring high gamma activity. As described in [4], these recordings were obtained from 128 electrodes during four movement tasks. When plotted along the first principal components obtained by PCA, these data show a heavy-tailed distribution for some PCs. While linear models such as PCA are limited to Gaussian statistics and hence suboptimal in such a case, the nonlinearity of NFs enables them to learn higher-order correlations. Moreover, by flattening out the curvature in latent space, we can better associate features with latent dimensions. In particular, we now have a reduced set of latent dimensions that explains most of the data variance. References: 1. Gallego J, Perich M, Miller L, et al. Neural manifolds for the control of movement. Neuron 2017, 94(5), 978-984. 2. Dinh L, Krueger D, Bengio Y. NICE: Non-linear Independent Components Estimation. ICLR 2015. 3. Dinh L, Sohl-Dickstein J, Bengio S. Density estimation using Real NVP. ICLR 2017. 4. Schirrmeister R, Springenberg J, Fiederer L, et al. Deep learning with convolutional neural networks for EEG decoding and visualization. Hum Brain Mapp 2017, 38(11), 5391-5420.
001009719 536__ $$0G:(DE-HGF)POF4-5231$$a5231 - Neuroscientific Foundations (POF4-523)$$cPOF4-523$$fPOF IV$$x0
001009719 536__ $$0G:(DE-HGF)POF4-5232$$a5232 - Computational Principles (POF4-523)$$cPOF4-523$$fPOF IV$$x1
001009719 536__ $$0G:(DE-HGF)POF4-5234$$a5234 - Emerging NC Architectures (POF4-523)$$cPOF4-523$$fPOF IV$$x2
001009719 536__ $$0G:(GEPRIS)368482240$$aGRK 2416 - GRK 2416: MultiSenses-MultiScales: Neue Ansätze zur Aufklärung neuronaler multisensorischer Integration (368482240)$$c368482240$$x3
001009719 536__ $$0G:(DE-Juel-1)BMBF-01IS19077A$$aRenormalizedFlows - Transparent Deep Learning with Renormalized Flows (BMBF-01IS19077A)$$cBMBF-01IS19077A$$x4
001009719 7001_ $$0P:(DE-Juel1)174585$$aNestler, Sandra$$b1$$ufzj
001009719 7001_ $$0P:(DE-Juel1)180150$$aFischer, Kirsten$$b2$$ufzj
001009719 7001_ $$0P:(DE-Juel1)184900$$aMerger, Claudia Lioba$$b3$$ufzj
001009719 7001_ $$0P:(DE-Juel1)178936$$aRene, Alexandre$$b4$$ufzj
001009719 7001_ $$0P:(DE-Juel1)144806$$aHelias, Moritz$$b5$$ufzj
001009719 8564_ $$uhttps://juser.fz-juelich.de/record/1009719/files/PosterNonlinearDimReduction.pdf$$yOpenAccess
001009719 909CO $$ooai:juser.fz-juelich.de:1009719$$pdriver$$pVDB$$popen_access$$popenaire
001009719 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)178725$$aForschungszentrum Jülich$$b0$$kFZJ
001009719 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)174585$$aForschungszentrum Jülich$$b1$$kFZJ
001009719 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)180150$$aForschungszentrum Jülich$$b2$$kFZJ
001009719 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)184900$$aForschungszentrum Jülich$$b3$$kFZJ
001009719 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)178936$$aForschungszentrum Jülich$$b4$$kFZJ
001009719 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)144806$$aForschungszentrum Jülich$$b5$$kFZJ
001009719 9131_ $$0G:(DE-HGF)POF4-523$$1G:(DE-HGF)POF4-520$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5231$$aDE-HGF$$bKey Technologies$$lNatural, Artificial and Cognitive Information Processing$$vNeuromorphic Computing and Network Dynamics$$x0
001009719 9131_ $$0G:(DE-HGF)POF4-523$$1G:(DE-HGF)POF4-520$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5232$$aDE-HGF$$bKey Technologies$$lNatural, Artificial and Cognitive Information Processing$$vNeuromorphic Computing and Network Dynamics$$x1
001009719 9131_ $$0G:(DE-HGF)POF4-523$$1G:(DE-HGF)POF4-520$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5234$$aDE-HGF$$bKey Technologies$$lNatural, Artificial and Cognitive Information Processing$$vNeuromorphic Computing and Network Dynamics$$x2
001009719 9141_ $$y2023
001009719 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
001009719 920__ $$lyes
001009719 9201_ $$0I:(DE-Juel1)INM-6-20090406$$kINM-6$$lComputational and Systems Neuroscience$$x0
001009719 9201_ $$0I:(DE-Juel1)IAS-6-20130828$$kIAS-6$$lTheoretical Neuroscience$$x1
001009719 9201_ $$0I:(DE-Juel1)INM-10-20170113$$kINM-10$$lJara-Institut Brain structure-function relationships$$x2
001009719 9801_ $$aFullTexts
001009719 980__ $$aposter
001009719 980__ $$aVDB
001009719 980__ $$aUNRESTRICTED
001009719 980__ $$aI:(DE-Juel1)INM-6-20090406
001009719 980__ $$aI:(DE-Juel1)IAS-6-20130828
001009719 980__ $$aI:(DE-Juel1)INM-10-20170113
001009719 981__ $$aI:(DE-Juel1)IAS-6-20130828