%0 Conference Paper
%A Yegenoglu, Alper
%A Diaz, Sandra
%A Klijn, Wouter
%A Peyser, Alexander
%A Subramoney, Anand
%A Maass, Wolfgang
%A Visconti, Giuseppe
%A Herty, Michael
%T Learning to Learn on High Performance Computing
%M FZJ-2019-05385
%D 2019
%X The simulation of biological neural networks (BNNs) is essential to neuroscience. The complexity of the brain's structure and activity, combined with the practical limits of in vivo measurements, has led to the development of computational models that allow us to decompose, analyze, and understand its elements and their interactions. Impressive progress has recently been made in non-spiking but brain-like learning capabilities in artificial neural networks (ANNs) [1, 3]. A substantial part of this progress arises from compute-intensive learning-to-learn (L2L) [2, 4, 5] or meta-learning methods. L2L is an algorithm for acquiring constraints that improve learning performance. L2L can be decomposed into an optimizee program (such as a Kalman filter), which learns specific tasks, and an optimizer algorithm, which searches for generalized hyperparameters for the optimizee. The optimizer learns to improve the optimizee's performance over distinct tasks as measured by a fitness function (Fig. 1). We have developed an implementation of L2L on High Performance Computing (HPC) [6] for hyperparameter optimization of spiking BNNs as well as hyperparameter search for general neuroscientific analytics. This tool takes advantage of large-scale parallelization by deploying an ensemble of optimizees to understand and analyze mathematical models of BNNs. Improved performance for structural plasticity has been found in NEST simulations comparing several techniques, including gradient descent, cross-entropy, and evolutionary strategies.
%B Society for Neuroscience Meeting 2019
%C 19 Oct 2019 - 23 Oct 2019, Chicago (USA)
Y2 19 Oct 2019 - 23 Oct 2019
M2 Chicago, USA
%F PUB:(DE-HGF)24
%9 Poster
%U https://juser.fz-juelich.de/record/866218