Efficient simulations of spiking neural networks using NEST GPU
Lecture (After Call) | FZJ-2025-03047
2025
Please use a persistent id in citations: doi:10.34734/FZJ-2025-03047
Abstract: Efficient simulation of large-scale spiking neuronal networks is important for neuroscientific research, and both the simulation speed and the time it takes to instantiate the network in computer memory are key factors. In recent years, hardware acceleration through highly parallel GPUs has become increasingly popular. NEST GPU is a GPU-based simulator under the NEST Initiative, written in CUDA-C++, that achieves high simulation speeds for models of various network sizes on single-GPU and multi-GPU systems [1,2,3]. Using a single NVIDIA RTX 4090 GPU, we have simulated networks on the order of 80 thousand neurons and 200 million synapses with a real-time factor of 0.4; using 12,000 NVIDIA A100 GPUs on the LEONARDO cluster, we have simulated networks on the order of 3.3 billion neurons and 37 trillion synapses with a real-time factor of 20.

In this showcase, we will demonstrate the capabilities of the GPU simulator and present our roadmap for integrating this technology into the ecosystem of the CPU-based simulator NEST [4]. For this, we will focus on three aspects of the simulation across model scales: network construction speed, state propagation speed, and energy efficiency. Furthermore, we will present our efforts to statistically validate our simulation results against those of NEST (CPU) using established network models. Additionally, we will present NESTML [5], a domain-specific modeling language for creating new neuron models and automatically generating code for the NEST GPU backend. Lastly, we will show the current state of the technology in terms of available features and interfaces, as well as the roadmap for full integration into the NEST ecosystem.

[1] Golosio et al. Front. Comput. Neurosci. 15:627620, 2021.
[2] Tiddia et al. Front. Neuroinform. 16:883333, 2022.
[3] Golosio et al. Appl. Sci. 13:9598, 2023.
[4] Graber, S., et al. NEST 3.8. Zenodo. doi:10.5281/zenodo.12624784
[5] NESTML: https://nestml.readthedocs.io/
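The performance figures above are expressed as real-time factors, i.e. wall-clock time divided by simulated biological time. The sketch below illustrates how such a factor can be measured from a simulation script; it assumes the PyNEST-like Python interface of NEST GPU (nestgpu module with Create/Connect/Simulate, cf. [1] and the NEST GPU documentation), and the model name, connection rule, and network size are illustrative placeholders rather than the benchmark configurations quoted above.

```python
# Hedged sketch: measuring the real-time factor of a NEST GPU run.
# Assumes a PyNEST-like interface ("nestgpu" with Create/Connect/Simulate);
# model name, connection rule, and sizes are illustrative only.
import time
import nestgpu as ngpu

n_neurons = 1000                          # illustrative network size
neurons = ngpu.Create("iaf_psc_exp", n_neurons)
ngpu.Connect(neurons, neurons,
             {"rule": "fixed_indegree", "indegree": 100},
             {"weight": 0.1, "delay": 1.0})

t_model_ms = 1000.0                       # biological model time to simulate (ms)
t_start = time.perf_counter()
ngpu.Simulate(t_model_ms)
t_wall_s = time.perf_counter() - t_start

# Real-time factor: wall-clock time / simulated biological time.
# RTF < 1 means faster than real time (e.g. 0.4 for ~80k neurons on an
# RTX 4090); RTF = 20 means 20x slower than real time.
rtf = t_wall_s / (t_model_ms / 1000.0)
print(f"real-time factor: {rtf:.2f}")
```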