001     1007805
005     20250603202305.0
024 7 _ |a 10.12751/NNCN.BC2022.104
|2 doi
024 7 _ |a 10.34734/FZJ-2023-02199
|2 datacite_doi
037 _ _ |a FZJ-2023-02199
041 _ _ |a English
100 1 _ |a Bouss, Peter
|0 P:(DE-Juel1)178725
|b 0
|e Corresponding author
|u fzj
111 2 _ |a Bernstein Conference
|c Berlin
|d 2022-09-13 - 2022-09-16
|w Germany
245 _ _ |a Dimensionality reduction with normalizing flows
260 _ _ |c 2022
336 7 _ |a Conference Paper
|0 33
|2 EndNote
336 7 _ |a INPROCEEDINGS
|2 BibTeX
336 7 _ |a conferenceObject
|2 DRIVER
336 7 _ |a CONFERENCE_POSTER
|2 ORCID
336 7 _ |a Output Types/Conference Poster
|2 DataCite
336 7 _ |a Poster
|b poster
|m poster
|0 PUB:(DE-HGF)24
|s 1748950919_19036
|2 PUB:(DE-HGF)
|x After Call
500 _ _ |a Copyright: © (2022) Bouss P, Nestler S, René A, Helias M
502 _ _ |c RWTH Aachen
520 _ _ |a Despite the large number of active neurons in the cortex, the activity of neural populations in various brain regions is expected to live on a low-dimensional manifold [1]. Among the most common tools to estimate the mapping to this manifold, along with its dimension, are many variants of principal component analysis [2]. Despite their apparent success, these procedures have the disadvantage that they assume only linear correlations and that their performance as generative models is poor. To fully learn the statistics of neural activity and to generate artificial samples, we make use of normalizing flows (NFs) [3, 4, 5]. These neural networks learn a dimension-preserving estimator of the data probability distribution. They stand out from generative adversarial networks (GANs) and variational autoencoders (VAEs) for their simplicity ‒ only one invertible network is learned ‒ and for their exact estimation of the likelihood due to tractable Jacobians at each building block. We aim to modify NFs such that they can discriminate relevant (in-manifold) from noise (out-of-manifold) dimensions. To this end, we penalize the participation of each single latent variable in the reconstruction of the data through the inverse mapping (following a different reasoning than [6]). We can thus not only give an estimate of the dimensionality of the activity sub-space but also describe the underlying manifold without the need to discard any information. We validate our modification on controlled data sets of different complexity. We emphasize, in particular, differences between affine and additive coupling layers in normalizing flows [7], and show that the former lead to pathologies when the data topology is non-trivial or when the data set is composed of classes with different volumes. We further illustrate the power of our modified NFs by reconstructing data using only a few dimensions. We finally apply this technique to identify manifolds in EEG recordings from a dataset showing high gamma activity (described in [8]), obtained from 128 electrodes during four different movement tasks. Acknowledgements: This project is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 368482240/GRK2416; and by the German Federal Ministry of Education and Research (BMBF Grant 01IS19077A to Jülich). References: [1] Gao, P., Trautmann, E., Yu, B., Santhanam, G., Ryu, S., Shenoy, K., & Ganguli, S. (2017). A theory of multineuronal dimensionality, dynamics and measurement. BioRxiv, 214262. 10.1101/214262 [2] Gallego, J. A., Perich, M. G., Miller, L. E., & Solla, S. A. (2017). Neural manifolds for the control of movement. Neuron, 94(5), 978-984. 10.1016/j.neuron.2017.05.025 [3] Dinh, L., Krueger, D., & Bengio, Y. (2014). NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516. 10.48550/arXiv.1410.8516 [4] Dinh, L., Sohl-Dickstein, J., & Bengio, S. (2016). Density estimation using Real NVP. arXiv preprint arXiv:1605.08803. 10.48550/arXiv.1605.08803 [5] Kingma, D. P., & Dhariwal, P. (2018). Glow: Generative flow with invertible 1x1 convolutions. Advances in Neural Information Processing Systems, 31. [6] Cunningham, E., Cobb, A., & Jha, S. (2022). Principal manifold flows. arXiv preprint arXiv:2202.07037. 10.48550/arXiv.2202.07037 [7] Behrmann, J., Vicol, P., Wang, K. C., Grosse, R., & Jacobsen, J. H. (2021). Understanding and mitigating exploding inverses in invertible neural networks. In International Conference on Artificial Intelligence and Statistics (pp. 1792-1800). PMLR. [8] Schirrmeister, R. T., Springenberg, J. T., Fiederer, L. D. J., Glasstetter, M., Eggensperger, K., Tangermann, M., ... & Ball, T. (2017). Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, 38(11), 5391-5420. 10.1002/hbm.23730
536 _ _ |a 5231 - Neuroscientific Foundations (POF4-523)
|0 G:(DE-HGF)POF4-5231
|c POF4-523
|f POF IV
|x 0
536 _ _ |a 5232 - Computational Principles (POF4-523)
|0 G:(DE-HGF)POF4-5232
|c POF4-523
|f POF IV
|x 1
536 _ _ |a GRK 2416 - GRK 2416: MultiSenses-MultiScales: Neue Ansätze zur Aufklärung neuronaler multisensorischer Integration (368482240)
|0 G:(GEPRIS)368482240
|c 368482240
|x 2
536 _ _ |a RenormalizedFlows - Transparent Deep Learning with Renormalized Flows (BMBF-01IS19077A)
|0 G:(DE-Juel-1)BMBF-01IS19077A
|c BMBF-01IS19077A
|x 3
588 _ _ |a Dataset connected to DataCite
650 _ 7 |a Computational Neuroscience
|2 Other
650 _ 7 |a Data analysis, machine learning, neuroinformatics
|2 Other
700 1 _ |a Nestler, Sandra
|0 P:(DE-Juel1)174585
|b 1
|u fzj
700 1 _ |a René, Alexandre
|0 P:(DE-Juel1)178936
|b 2
|u fzj
700 1 _ |a Helias, Moritz
|0 P:(DE-Juel1)144806
|b 3
|e Last author
|u fzj
773 _ _ |a 10.12751/NNCN.BC2022.104
856 4 _ |u https://doi.org/10.12751/nncn.bc2022.104
856 4 _ |u https://juser.fz-juelich.de/record/1007805/files/Abstract.pdf
|y Restricted
856 4 _ |u https://juser.fz-juelich.de/record/1007805/files/Poster.pdf
|y OpenAccess
909 C O |o oai:juser.fz-juelich.de:1007805
|p openaire
|p open_access
|p VDB
|p driver
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 0
|6 P:(DE-Juel1)178725
910 1 _ |a RWTH Aachen
|0 I:(DE-588b)36225-6
|k RWTH
|b 0
|6 P:(DE-Juel1)178725
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 1
|6 P:(DE-Juel1)174585
910 1 _ |a RWTH Aachen
|0 I:(DE-588b)36225-6
|k RWTH
|b 1
|6 P:(DE-Juel1)174585
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 2
|6 P:(DE-Juel1)178936
910 1 _ |a RWTH Aachen
|0 I:(DE-588b)36225-6
|k RWTH
|b 2
|6 P:(DE-Juel1)178936
910 1 _ |a University of Ottawa
|0 I:(DE-HGF)0
|b 2
|6 P:(DE-Juel1)178936
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 3
|6 P:(DE-Juel1)144806
910 1 _ |a RWTH Aachen
|0 I:(DE-588b)36225-6
|k RWTH
|b 3
|6 P:(DE-Juel1)144806
913 1 _ |a DE-HGF
|b Key Technologies
|l Natural, Artificial and Cognitive Information Processing
|1 G:(DE-HGF)POF4-520
|0 G:(DE-HGF)POF4-523
|3 G:(DE-HGF)POF4
|2 G:(DE-HGF)POF4-500
|4 G:(DE-HGF)POF
|v Neuromorphic Computing and Network Dynamics
|9 G:(DE-HGF)POF4-5231
|x 0
913 1 _ |a DE-HGF
|b Key Technologies
|l Natural, Artificial and Cognitive Information Processing
|1 G:(DE-HGF)POF4-520
|0 G:(DE-HGF)POF4-523
|3 G:(DE-HGF)POF4
|2 G:(DE-HGF)POF4-500
|4 G:(DE-HGF)POF
|v Neuromorphic Computing and Network Dynamics
|9 G:(DE-HGF)POF4-5232
|x 1
914 1 _ |y 2023
915 _ _ |a OpenAccess
|0 StatID:(DE-HGF)0510
|2 StatID
920 _ _ |l yes
920 1 _ |0 I:(DE-Juel1)INM-6-20090406
|k INM-6
|l Computational and Systems Neuroscience
|x 0
920 1 _ |0 I:(DE-Juel1)IAS-6-20130828
|k IAS-6
|l Computational and Systems Neuroscience
|x 1
920 1 _ |0 I:(DE-Juel1)INM-10-20170113
|k INM-10
|l JARA-Institut Brain Structure-Function Relationships
|x 2
980 _ _ |a poster
980 _ _ |a VDB
980 _ _ |a I:(DE-Juel1)INM-6-20090406
980 _ _ |a I:(DE-Juel1)IAS-6-20130828
980 _ _ |a I:(DE-Juel1)INM-10-20170113
980 _ _ |a UNRESTRICTED
980 1 _ |a FullTexts
981 _ _ |a I:(DE-Juel1)IAS-6-20130828
