Preprint FZJ-2022-01779

Origami in N dimensions: How feed-forward networks manufacture linear separability


2022
arXiv [DOI: 10.48550/arXiv.2203.11355]


Please use a persistent id in citations: doi:10.48550/arXiv.2203.11355

Report No.: arXiv:2203.11355

Abstract: Neural networks can implement arbitrary functions. But, mechanistically, what are the tools at their disposal to construct the target? For classification tasks, the network must transform the data classes into a linearly separable representation in the final hidden layer. We show that a feed-forward architecture has one primary tool at hand to achieve this separability: progressive folding of the data manifold in unoccupied higher dimensions. The operation of folding provides a useful intuition in low dimensions that generalizes to high ones. We argue that an alternative method based on shear, requiring very deep architectures, plays only a small role in real-world networks. The folding operation, however, is powerful as long as layers are wider than the data dimensionality, allowing efficient solutions by providing access to arbitrary regions in the distribution, such as data points of one class forming islands within the other classes. We argue that a link exists between the universal approximation property in ReLU networks and the fold-and-cut theorem (Demaine et al., 1998) dealing with physical paper folding. Based on the mechanistic insight, we predict that the progressive generation of separability is necessarily accompanied by neurons showing mixed selectivity and bimodal tuning curves. This is validated in a network trained on the poker hand task, showing the emergence of bimodal tuning curves during training. We hope that our intuitive picture of the data transformation in deep networks can help to provide interpretability, and discuss possible applications to the theory of convolutional networks, loss landscapes, and generalization.

TL;DR: Shows that the internal processing of deep networks can be thought of as literal folding operations on the data distribution in the N-dimensional activation space. A link to a well-known theorem in origami theory is provided.
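The folding mechanism described in the abstract can be sketched in a few lines. The following is a hypothetical illustration (not the authors' code): a 1D dataset in which one class forms an "island" inside the other is not linearly separable on the line, but a single layer of two ReLU units with hand-picked hinge points "folds" the line into a higher-dimensional activation space where a linear readout suffices. All weights and thresholds here are chosen for illustration only.

```python
import numpy as np

# 1D data: class 1 is an island (|x| < 1) surrounded by class 0,
# so no single threshold on the line can separate the classes.
x = np.linspace(-2.0, 2.0, 40).reshape(-1, 1)   # points on the line (avoids x = +/-1 exactly)
y = (np.abs(x[:, 0]) < 1.0).astype(int)         # island membership

# Hand-picked "fold": two ReLU units hinged at x = -1 and x = +1,
# mapping the line into a 2D activation space.
W = np.array([[-1.0, 1.0]])                     # shape (1, 2)
b = np.array([-1.0, -1.0])
h = np.maximum(0.0, x @ W + b)                  # hidden activations, shape (40, 2)

# After the fold, island points land at the origin of the hidden space,
# while both outer arms acquire positive activation, so one linear
# readout separates the classes.
score = h.sum(axis=1)
pred = (score <= 0.0).astype(int)
print((pred == y).mean())                       # fraction correctly separated -> 1.0
```

The key point, matching the abstract, is that the layer is wider (2 units) than the data dimensionality (1), giving the network room to fold the two outer arms onto the same side of a hyperplane.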

Keyword(s): Machine Learning (cs.LG) ; Disordered Systems and Neural Networks (cond-mat.dis-nn) ; Machine Learning (stat.ML) ; FOS: Computer and information sciences ; FOS: Physical sciences


Contributing Institute(s):
  1. Computational and Systems Neuroscience (INM-6)
  2. Theoretical Neuroscience (IAS-6)
  3. JARA-Institute Brain Structure-Function Relationships (INM-10)
Research Program(s):
  1. 5232 - Computational Principles (POF4-523)
  2. RenormalizedFlows - Transparent Deep Learning with Renormalized Flows (BMBF-01IS19077A)
  3. neuroIC002 - Recurrence and stochasticity for neuro-inspired computation (EXS-SF-neuroIC002)
  4. SDS005 - Towards an integrated data science of complex natural systems (PF-JARA-SDS005)
  5. GRK 2416 - GRK 2416: MultiSenses-MultiScales: New approaches to elucidating neuronal multisensory integration (368482240)

Appears in the scientific report 2022
Database coverage: OpenAccess

The record appears in these collections:
Institute Collections > INM > INM-10
Institute Collections > IAS > IAS-6
Institute Collections > INM > INM-6
Document types > Reports > Preprints
Workflow collections > Public records
Publications database
Open Access

 Record created 2022-03-28, last modified 2024-03-13