TY - CONF
AU - Yegenoglu, Alper
AU - Diaz, Sandra
AU - Klijn, Wouter
AU - Peyser, Alexander
AU - Subramoney, Anand
AU - Maass, Wolfgang
AU - Visconti, Giuseppe
AU - Herty, Michael
TI - Learning to Learn on High Performance Computing
M1 - FZJ-2019-05385
PY - 2019
AB - The simulation of biological neural networks (BNNs) is essential to neuroscience. The complexity of the brain's structure and activity, combined with the practical limits of in vivo measurements, has led to the development of computational models that allow us to decompose, analyze, and understand its elements and their interactions. Impressive progress has recently been made in non-spiking but brain-like learning capabilities in artificial neural networks (ANNs) [1, 3]. A substantial part of this progress arises from compute-intensive learning-to-learn (L2L) [2, 4, 5] or meta-learning methods. L2L is an algorithm for acquiring constraints that improve learning performance. It can be decomposed into an optimizee program (such as a Kalman filter), which learns specific tasks, and an optimizer algorithm, which searches for generalized hyperparameters for the optimizee. The optimizer learns to improve the optimizee's performance across distinct tasks as measured by a fitness function (Fig. 1). We have developed an implementation of L2L on High Performance Computing (HPC) [6] for hyperparameter optimization of spiking BNNs as well as hyperparameter search for general neuroscientific analytics. This tool takes advantage of large-scale parallelization by deploying an ensemble of optimizees to understand and analyze mathematical models of BNNs. Improved performance for structural plasticity has been found in NEST simulations comparing several optimization techniques, including gradient descent, cross-entropy, and evolutionary strategies.
T2 - Society for Neuroscience Meeting 2019
CY - 19 Oct 2019 - 23 Oct 2019, Chicago (USA)
Y2 - 19 Oct 2019 - 23 Oct 2019
M2 - Chicago, USA
LB - PUB:(DE-HGF)24
UR - https://juser.fz-juelich.de/record/866218
ER -