Preparing for exascale computing: Large-scale neuronal network construction through parallel GPU memory instantiation
Poster (After Call) | FZJ-2024-05342
2024
Please use a persistent id in citations: doi:10.34734/FZJ-2024-05342
Abstract: Efficient simulation of large-scale spiking neuronal networks is important for neuroscientific research, and both the simulation speed and the time it takes to instantiate the network in computer memory are key factors. NEST GPU demonstrates high simulation speeds with models of various network sizes on single-GPU and multi-GPU systems [1,2]. Using a single GPU, networks on the order of 10^5 neurons and 10^9 synapses can already be instantiated in less than a second [3]. On the path toward models of the whole brain, neuroscientists show an increasing interest in studying networks that are larger by several orders of magnitude. However, the time needed to construct such large network models has so far been a limiting factor for simulating them. With the aim of fully exploiting available and upcoming computing resources for computational neuroscience, we here propose a novel method to efficiently instantiate large networks on multiple GPUs in parallel. Our approach relies on the determinism of pseudo-random number generators (PRNGs): their output is fully determined by their initial state. Starting from a single common master seed, a two-dimensional array of seeds is generated, with one seed for each possible pair of source and target MPI processes. These seeds are used to generate the connectivity between each such pair. The connections are stored only in the GPU memory of the target MPI process. Because the construction directives are synchronised across processes, no MPI process needs to share information on the resulting connectivity after each instruction; instead, each process generates the same sequence of random states as the others and constructs only the connections relevant to it. The method is evaluated with a two-population recurrently connected network designed for benchmarking a variety of commonly used high-level connection rules [4]. Furthermore, we validate the simulation performance with a multi-area model of macaque vision-related cortex [2,5], comprising about 4 million neurons and 24 billion synapses. Lastly, we compare our results with other state-of-the-art simulation technologies across varying network sizes using a highly scalable network model [6].

References:
[1] Golosio et al. Front. Comput. Neurosci. 15:627620, 2021.
[2] Tiddia et al. Front. Neuroinform. 16:883333, 2022.
[3] Golosio et al. Appl. Sci. 13:9598, 2023.
[4] Senk et al. PLoS Comput. Biol. 18(9):e1010086, 2022.
[5] Schmidt et al. PLoS Comput. Biol. 14(10):e1006359, 2018.
[6] Kunkel et al. Front. Neuroinform. 8:78, 2014.
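The seeding scheme described in the abstract can be illustrated with a short sketch. The Python code below is not the NEST GPU implementation; the function name, the NumPy SeedSequence-based derivation of per-pair seeds, and the pairwise-Bernoulli connection rule are illustrative assumptions. It shows how every MPI process, given only the shared master seed, can deterministically reconstruct the connections targeting its own neurons without any communication.

```python
import numpy as np

def build_local_connections(master_seed, n_procs, my_rank,
                            neurons_per_proc, p_conn):
    """Hypothetical sketch of the communication-free construction scheme.

    Every process derives the same n_procs x n_procs grid of seeds from
    the shared master seed, but instantiates only the connections whose
    target neurons it owns (i.e. the column `my_rank` of the grid).
    """
    # One child seed per (source, target) process pair, derived
    # deterministically from the single master seed.
    pair_seeds = np.random.SeedSequence(master_seed).spawn(n_procs * n_procs)

    local_connections = []
    for src_rank in range(n_procs):
        # The same seed is obtained on every process for this pair, so the
        # generated connectivity is identical everywhere without exchange.
        rng = np.random.default_rng(pair_seeds[src_rank * n_procs + my_rank])
        # Illustrative pairwise-Bernoulli rule: connect each source-target
        # neuron pair independently with probability p_conn.
        mask = rng.random((neurons_per_proc, neurons_per_proc)) < p_conn
        sources, targets = np.nonzero(mask)
        local_connections.append((src_rank, sources, targets))
    return local_connections

# Example: rank 1 of a 4-process run reconstructs its incoming connections.
conns = build_local_connections(master_seed=12345, n_procs=4, my_rank=1,
                                neurons_per_proc=100, p_conn=0.1)
```

Since every process derives the identical seed grid, the process owning the source neurons and the process owning the target neurons of a pair would generate exactly the same connections; storing them only on the target side therefore requires no data exchange.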