001     906961
005     20240313103131.0
024 7 _ |2 arXiv
|a arXiv:2203.11355
024 7 _ |2 doi
|a 10.48550/arXiv.2203.11355
024 7 _ |2 Handle
|a 2128/31010
037 _ _ |a FZJ-2022-01779
088 _ _ |2 arXiv
|a arXiv:2203.11355
100 1 _ |0 P:(DE-Juel1)171384
|a Keup, Christian
|b 0
|e Corresponding author
|u fzj
245 _ _ |a Origami in N dimensions: How feed-forward networks manufacture linear separability
260 _ _ |b arXiv
|c 2022
336 7 _ |0 PUB:(DE-HGF)25
|2 PUB:(DE-HGF)
|a Preprint
|b preprint
|m preprint
|s 1649410110_12135
336 7 _ |2 ORCID
|a WORKING_PAPER
336 7 _ |0 28
|2 EndNote
|a Electronic Article
336 7 _ |2 DRIVER
|a preprint
336 7 _ |2 BibTeX
|a ARTICLE
336 7 _ |2 DataCite
|a Output Types/Working Paper
520 _ _ |a Neural networks can implement arbitrary functions. But, mechanistically, what are the tools at their disposal to construct the target? For classification tasks, the network must transform the data classes into a linearly separable representation in the final hidden layer. We show that a feed-forward architecture has one primary tool at hand to achieve this separability: progressive folding of the data manifold in unoccupied higher dimensions. The operation of folding provides a useful intuition in low dimensions that generalizes to high ones. We argue that an alternative method based on shear, requiring very deep architectures, plays only a small role in real-world networks. The folding operation, however, is powerful as long as layers are wider than the data dimensionality, allowing efficient solutions by providing access to arbitrary regions in the distribution, such as data points of one class forming islands within the other classes. We argue that a link exists between the universal approximation property in ReLU networks and the fold-and-cut theorem (Demaine et al., 1998) dealing with physical paper folding. Based on the mechanistic insight, we predict that the progressive generation of separability is necessarily accompanied by neurons showing mixed selectivity and bimodal tuning curves. This is validated in a network trained on the poker hand task, showing the emergence of bimodal tuning curves during training. We hope that our intuitive picture of the data transformation in deep networks can help to provide interpretability, and we discuss possible applications to the theory of convolutional networks, loss landscapes, and generalization. TL;DR: Shows that the internal processing of deep networks can be thought of as literal folding operations on the data distribution in the N-dimensional activation space. A link to a well-known theorem in origami theory is provided.
536 _ _ |a 5232 - Computational Principles (POF4-523)
|0 G:(DE-HGF)POF4-5232
|c POF4-523
|x 0
|f POF IV
536 _ _ |a RenormalizedFlows - Transparent Deep Learning with Renormalized Flows (BMBF-01IS19077A)
|0 G:(DE-Juel-1)BMBF-01IS19077A
|c BMBF-01IS19077A
|x 1
536 _ _ |a neuroIC002 - Recurrence and stochasticity for neuro-inspired computation (EXS-SF-neuroIC002)
|0 G:(DE-82)EXS-SF-neuroIC002
|c EXS-SF-neuroIC002
|x 2
536 _ _ |a SDS005 - Towards an integrated data science of complex natural systems (PF-JARA-SDS005)
|0 G:(DE-Juel-1)PF-JARA-SDS005
|c PF-JARA-SDS005
|x 3
536 _ _ |a GRK 2416 - GRK 2416: MultiSenses-MultiScales: Novel approaches to elucidating neuronal multisensory integration (368482240)
|0 G:(GEPRIS)368482240
|c 368482240
|x 4
588 _ _ |a Dataset connected to arXiv
650 _ 7 |2 Other
|a Machine Learning (cs.LG)
650 _ 7 |2 Other
|a Disordered Systems and Neural Networks (cond-mat.dis-nn)
650 _ 7 |2 Other
|a Machine Learning (stat.ML)
650 _ 7 |2 Other
|a FOS: Computer and information sciences
650 _ 7 |2 Other
|a FOS: Physical sciences
700 1 _ |0 P:(DE-Juel1)144806
|a Helias, Moritz
|b 1
|u fzj
773 _ _ |a 10.48550/arXiv.2203.11355
856 4 _ |u https://juser.fz-juelich.de/record/906961/files/Keup2022%20-%20Origami%20in%20N%20dimensions_%20How%20feed-forward%20networks%20manufacture%20linear%20separability.pdf
|y OpenAccess
909 C O |o oai:juser.fz-juelich.de:906961
|p openaire
|p open_access
|p VDB
|p driver
|p dnbdelivery
910 1 _ |0 I:(DE-588b)5008462-8
|6 P:(DE-Juel1)171384
|a Forschungszentrum Jülich
|b 0
|k FZJ
910 1 _ |0 I:(DE-588b)5008462-8
|6 P:(DE-Juel1)144806
|a Forschungszentrum Jülich
|b 1
|k FZJ
913 1 _ |0 G:(DE-HGF)POF4-523
|1 G:(DE-HGF)POF4-520
|2 G:(DE-HGF)POF4-500
|3 G:(DE-HGF)POF4
|4 G:(DE-HGF)POF
|9 G:(DE-HGF)POF4-5232
|a DE-HGF
|b Key Technologies
|l Natural, Artificial and Cognitive Information Processing
|v Neuromorphic Computing and Network Dynamics
|x 0
914 1 _ |y 2022
915 _ _ |0 StatID:(DE-HGF)0510
|2 StatID
|a OpenAccess
920 _ _ |l yes
920 1 _ |0 I:(DE-Juel1)INM-6-20090406
|k INM-6
|l Computational and Systems Neuroscience
|x 0
920 1 _ |0 I:(DE-Juel1)IAS-6-20130828
|k IAS-6
|l Theoretical Neuroscience
|x 1
920 1 _ |0 I:(DE-Juel1)INM-10-20170113
|k INM-10
|l JARA-Institut Brain structure-function relationships
|x 2
980 1 _ |a FullTexts
980 _ _ |a preprint
980 _ _ |a VDB
980 _ _ |a UNRESTRICTED
980 _ _ |a I:(DE-Juel1)INM-6-20090406
980 _ _ |a I:(DE-Juel1)IAS-6-20130828
980 _ _ |a I:(DE-Juel1)INM-10-20170113
981 _ _ |a I:(DE-Juel1)IAS-6-20130828


Marc 21