Conference Presentation (After Call) FZJ-2019-04543

Evaluating neural network models within a formal validation framework


2019

INCF Neuroinformatics 2019, Warsaw, Poland, 1 Sep 2019 - 2 Sep 2019

Abstract: To bridge the gap between the theory of neuronal networks and findings obtained by the analysis of experimental data, advances in computational neuroscience rely heavily on simulations of neuronal network models. The verification of model implementations and the validation of their simulation results are thus an indispensable part of any simulation workflow. Moreover, in the face of the heterogeneity of models and simulators, approaches that enable the comparison between model implementations are of increasing importance and call for the establishment of a formalized validation scheme. Although the bottom-up validation of cell response properties is important, it does not automatically entail the validity of the simulation dynamics on the network scale. Here, we discuss a set of tests that assess the network dynamics and attain a quantified level of agreement with a given reference. We developed NetworkUnit (RRID:SCR_016543; github.com/INM-6/NetworkUnit) as a Python library, built on top of the SciUnit (RRID:SCR_014528) framework [1,2], as a formal implementation of this validation process for network-level validation testing, complementing NeuronUnit (RRID:SCR_015634) on the single-cell level. The toolbox Elephant (RRID:SCR_003833) provides the foundation to extract well-defined and comparable features of the network dynamics. We demonstrate the use of the library in a validation testing workflow [3,4] using a worked example involving the SpiNNaker neuromorphic system.

References:
  1. Omar, C., Aldrich, J., and Gerkin, R. C. (2014). "Collaborative infrastructure for test-driven scientific model validation". Companion Proceedings of the 36th International Conference on Software Engineering - ICSE Companion 2014, pages 524-527. doi:10.1145/2591062.2591129
  2. Sarma, G. P., Jacobs, T. W., Watts, M. D., Ghayoomie, S. V., Larson, S. D., and Gerkin, R. C. (2016). "Unit testing, model validation, and biological simulation". F1000Research, 5:1946. doi:10.12688/f1000research.9315.1
  3. Gutzen, R., von Papen, M., Trensch, G., Quaglio, P., Grün, S., and Denker, M. (2018). "Reproducible Neural Network Simulations: Statistical Methods for Model Validation on the Level of Network Activity Data". Frontiers in Neuroinformatics, 12:90. doi:10.3389/fninf.2018.00090
  4. Trensch, G., Gutzen, R., Blundell, I., Denker, M., and Morrison, A. (2018). "Rigorous Neural Network Simulations: A Model Substantiation Methodology for Increasing the Correctness of Simulation Results in the Absence of Experimental Validation Data". Frontiers in Neuroinformatics, 12:81. doi:10.3389/fninf.2018.00081
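To illustrate the validation pattern described in the abstract, the following is a minimal sketch of a network-level test written in the SciUnit style that NetworkUnit builds on. Only the sciunit base classes (Capability, Model, Test, ZScore) are taken from the real framework; the capability, model, and test classes, the firing-rate statistic, and the reference values are illustrative assumptions and do not reproduce NetworkUnit's actual API.

# Sketch of a network-level validation test in the SciUnit pattern.
# Class names other than the sciunit base classes are hypothetical.
import numpy as np
import sciunit
from sciunit.scores import ZScore


class ProducesSpikeTrains(sciunit.Capability):
    """Capability: the model can deliver its simulated spike trains."""
    def produce_spiketrains(self):
        raise NotImplementedError("Must implement produce_spiketrains.")


class SimulatedNetwork(sciunit.Model, ProducesSpikeTrains):
    """Wraps the recorded activity of one simulation run (e.g. on SpiNNaker)."""
    def __init__(self, spiketrains, duration, name=None):
        self.spiketrains = spiketrains  # list of spike-time arrays, in seconds
        self.duration = duration        # simulation length, in seconds
        super().__init__(name=name)

    def produce_spiketrains(self):
        return self.spiketrains


class MeanFiringRateTest(sciunit.Test):
    """Compares the population mean firing rate against a reference value."""
    required_capabilities = (ProducesSpikeTrains,)
    score_type = ZScore

    def generate_prediction(self, model):
        rates = [len(st) / model.duration for st in model.produce_spiketrains()]
        return {'mean': np.mean(rates)}

    def compute_score(self, observation, prediction):
        z = (prediction['mean'] - observation['mean']) / observation['std']
        return ZScore(z)


# Judge a toy simulated network against reference statistics, e.g. obtained
# from another simulator or from experimental data.
duration = 10.0
spiketrains = [np.sort(np.random.uniform(0.0, duration, size=50))
               for _ in range(100)]
model = SimulatedNetwork(spiketrains, duration, name="example run")
test = MeanFiringRateTest(observation={'mean': 5.0, 'std': 1.0},
                          name="mean firing rate")
score = test.judge(model)
print(score)

In NetworkUnit itself, the extraction of such features of the network dynamics is delegated to Elephant, so that ready-made tests can compare, for example, rate or correlation statistics between a model implementation and a reference data set.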


Contributing Institute(s):
  1. Computational and Systems Neuroscience (INM-6)
  2. Theoretical Neuroscience (IAS-6)
  3. JARA-Institut Brain structure-function relationships (INM-10)
Research Program(s):
  1. 571 - Connectivity and Activity (POF3-571)
  2. 574 - Theory, modelling and simulation (POF3-574)
  3. HBP SGA2 - Human Brain Project Specific Grant Agreement 2 (785907)

Appears in the scientific report 2019

The record appears in these collections:
Document types > Presentations > Conference Presentations
Institute Collections > INM > INM-10
Institute Collections > IAS > IAS-6
Institute Collections > INM > INM-6
Workflow collections > Public records
Publications database

 Record created 2019-09-04, last modified 2024-03-13


