Journal Article FZJ-2017-03757

Constructing Neuronal Network Models in Massively Parallel Environments

2017
Frontiers Research Foundation, Lausanne

Frontiers in Neuroinformatics 11, 30 (2017) [DOI: 10.3389/fninf.2017.00030]

Please use a persistent id in citations: doi:10.3389/fninf.2017.00030

Abstract: Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.

Contributing Institute(s):
  1. Computational and Systems Neuroscience (INM-6)
  2. Theoretical Neuroscience (IAS-6)
  3. JARA-Institut Brain structure-function relationships (INM-10)
  4. Jülich Supercomputing Centre (JSC)
Research Program(s):
  1. 574 - Theory, modelling and simulation (POF3-574)
  2. 511 - Computational Science and Mathematical Methods (POF3-511)
  3. Brain-Scale Simulations (jinb33_20121101)
  4. SMHB - Supercomputing and Modelling for the Human Brain (HGF-SMHB-2013-2017)
  5. HBP - The Human Brain Project (604102)
  6. HBP SGA1 - Human Brain Project Specific Grant Agreement 1 (720270)
  7. SLNS - SimLab Neuroscience (Helmholtz-SLNS)

Appears in the scientific report 2017
Database coverage:
Medline ; Creative Commons Attribution CC BY 4.0 ; DOAJ ; OpenAccess ; BIOSIS Previews ; DOAJ Seal ; IF < 5 ; JCR ; SCOPUS ; Science Citation Index Expanded ; Thomson Reuters Master Journal List ; Web of Science Core Collection

The record appears in these collections:
Document types > Articles > Journal Article
Institute Collections > INM > INM-10
Institute Collections > IAS > IAS-6
Institute Collections > INM > INM-6
Workflow collections > Public records
Workflow collections > Publication Charges
Institute Collections > JSC
Publications database
Open Access

 Record created 2017-05-23, last modified 2024-03-13