Journal Article FZJ-2019-03835

A closed-loop toolchain for neural network simulations of learning autonomous agents


2019
Frontiers Research Foundation, Lausanne

Frontiers in Computational Neuroscience 13, 46 (2019) [10.3389/fncom.2019.00046]


Please use a persistent id in citations: doi:10.3389/fncom.2019.00046

Abstract: Neural network simulation is an important tool for generating and evaluating hypotheses on the structure, dynamics and function of neural circuits. For scientific questions addressing organisms operating autonomously in their environments, in particular where learning is involved, it is crucial to be able to operate such simulations in a closed-loop fashion. In such a set-up, the neural agent continuously receives sensory stimuli from the environment and provides motor signals that manipulate the environment or move the agent within it. So far, most studies requiring such functionality have been conducted with custom simulation scripts and manually implemented tasks. This makes it difficult for other researchers to reproduce and build upon previous work and nearly impossible to compare the performance of different learning architectures. In this work, we present a novel approach to solve this problem, connecting benchmark tools from the field of machine learning and state-of-the-art neural network simulators from computational neuroscience. The resulting toolchain enables researchers in both fields to make use of well-tested high-performance simulation software supporting biologically plausible neuron, synapse and network models and allows them to evaluate and compare their approach on the basis of standardized environments with various levels of complexity. We demonstrate the functionality of the toolchain by implementing a neuronal actor-critic architecture for reinforcement learning in the NEST simulator and successfully training it on two different environments from the OpenAI Gym. We compare its performance to a previously suggested neural network model of reinforcement learning in the basal ganglia and a generic Q-learning algorithm.
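The closed-loop pattern the abstract describes, an agent that repeatedly receives an observation from the environment and returns an action that advances it, can be illustrated with a minimal, self-contained sketch. The snippet below implements the "generic Q-learning algorithm" mentioned as a baseline, on a hypothetical toy corridor task with a Gym-like `reset`/`step` interface (all names here are illustrative; the paper's actual toolchain couples the NEST simulator to standard OpenAI Gym environments):

```python
import random

class Corridor:
    """Hypothetical toy environment with a Gym-like API: states 0..n-1, goal at n-1."""
    def __init__(self, n=6):
        self.n = n
    def reset(self):
        self.state = 0
        return self.state
    def step(self, action):
        # action 0 = left, 1 = right; reward only on reaching the goal state
        self.state = max(0, min(self.n - 1, self.state + (1 if action == 1 else -1)))
        done = self.state == self.n - 1
        return self.state, (1.0 if done else 0.0), done

def q_learning(env, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration (the closed loop)."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(env.n)]
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = rng.randrange(2) if rng.random() < eps \
                else max((0, 1), key=lambda x: q[s][x])
            s2, r, done = env.step(a)           # motor signal out, sensory signal in
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

env = Corridor()
q = q_learning(env)
# After training, the greedy policy should move right (action 1) in every
# non-terminal state.
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(env.n - 1)]
print(policy)
```

In the toolchain itself, the same loop structure holds, but the `q_learning` update is replaced by a spiking actor-critic network simulated in NEST, and `Corridor` by a standardized Gym environment, which is what makes cross-architecture benchmarking possible.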

Contributing Institute(s):
  1. Computational and Systems Neuroscience (INM-6)
  2. Theoretical Neuroscience (IAS-6)
  3. JARA-Institut Brain structure-function relationships (INM-10)
Research Program(s):
  1. 574 - Theory, modelling and simulation (POF3-574)
  2. RL-BRD-J - Neural network mechanisms of reinforcement learning (BMBF-01GQ1343)
  3. W2Morrison - W2/W3 Professorinnen Programm der Helmholtzgemeinschaft (B1175.01.12)
  4. SMHB - Supercomputing and Modelling for the Human Brain (HGF-SMHB-2013-2017)
  5. HBP SGA2 - Human Brain Project Specific Grant Agreement 2 (785907)
  6. HBP SGA1 - Human Brain Project Specific Grant Agreement 1 (720270)

Appears in the scientific report 2019
Database coverage:
Medline ; Creative Commons Attribution CC BY 4.0 ; DOAJ ; OpenAccess ; BIOSIS Previews ; Clarivate Analytics Master Journal List ; DOAJ Seal ; IF < 5 ; JCR ; PubMed Central ; SCOPUS ; Science Citation Index Expanded ; Web of Science Core Collection

The record appears in these collections:
Document types > Articles > Journal Article
Institute Collections > INM > INM-10
Institute Collections > IAS > IAS-6
Institute Collections > INM > INM-6
Workflow collections > Public records
Workflow collections > Publication Charges
Publications database
Open Access

 Record created 2019-07-16, last modified 2024-03-13