TY - JOUR
AU - Pronold, J.
AU - Jordan, J.
AU - Wylie, B. J. N.
AU - Kitayama, I.
AU - Diesmann, M.
AU - Kunkel, S.
TI - Routing brain traffic through the von Neumann bottleneck: Efficient cache usage in spiking neural network simulation code on general purpose computers
JO - Parallel Computing
VL - 113
SN - 0167-8191
CY - Amsterdam [et al.]
PB - North-Holland, Elsevier Science
M1 - FZJ-2022-02910
SP - 102952
PY - 2022
AB - Simulation is a third pillar next to experiment and theory in the study of complex dynamic systems such as biological neural networks. Contemporary brain-scale networks correspond to directed random graphs of a few million nodes, each with an in-degree and out-degree of several thousand edges, where nodes and edges correspond to the fundamental biological units, neurons and synapses, respectively. The activity in neuronal networks is also sparse. Each neuron occasionally transmits a brief signal, called a spike, via its outgoing synapses to the corresponding target neurons. In distributed computing these targets are scattered across thousands of parallel processes. The spatial and temporal sparsity represents an inherent bottleneck for simulations on conventional computers: irregular memory-access patterns cause poor cache utilization. Using an established neuronal network simulation code as a reference implementation, we investigate how common techniques to recover cache performance, such as software-induced prefetching and software pipelining, can benefit a real-world application. The algorithmic changes reduce simulation time by up to 50%. The study exemplifies that many-core systems assigned an intrinsically parallel computational problem can alleviate the von Neumann bottleneck of conventional computer architectures.
LB - PUB:(DE-HGF)16
UR - <Go to ISI>://WOS:000857033800002
DO - 10.1016/j.parco.2022.102952
UR - https://juser.fz-juelich.de/record/908930
ER -