Learning to Learn on High Performance Computing
Poster (After Call) | FZJ-2019-05385
2019
Please use a persistent id in citations: http://hdl.handle.net/2128/23250
Abstract: The simulation of biological neural networks (BNNs) is essential to neuroscience. The complexity of the brain's structure and activity, combined with the practical limits of in-vivo measurements, has led to the development of computational models that allow us to decompose, analyze, and understand its elements and their interactions.

Impressive progress has recently been made in non-spiking but brain-like learning capabilities in artificial neural networks (ANNs) [1, 3]. A substantial part of this progress arises from compute-intensive learning-to-learn (L2L) [2, 4, 5], or meta-learning, methods. L2L is a method for acquiring constraints that improve learning performance. It can be decomposed into an optimizee program (such as a Kalman filter), which learns specific tasks, and an optimizer algorithm, which searches for generalized hyperparameters for the optimizee. The optimizer learns to improve the optimizee's performance across distinct tasks, as measured by a fitness function (Fig. 1).

We have developed an implementation of L2L on High Performance Computing (HPC) systems [6] for hyperparameter optimization of spiking BNNs as well as hyperparameter search for general neuroscientific analytics. The tool takes advantage of large-scale parallelization by deploying an ensemble of optimizees to understand and analyze mathematical models of BNNs. Improved performance for structural plasticity has been found in NEST simulations comparing several optimization techniques, including gradient descent, cross entropy, and evolutionary strategies.
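The optimizer/optimizee split described above can be summarized in a short sketch. The following Python code is an illustrative, minimal version of such a loop, assuming a simple evolutionary strategy as the optimizer and a hypothetical fitness function (evaluate_optimizee) standing in for a task-scored NEST simulation; it does not reproduce the actual HPC implementation cited as [6].

```python
# Minimal L2L sketch: an evolutionary-strategy optimizer searches for hyperparameters
# that generalize across tasks, while an ensemble of optimizees evaluates them.
# `evaluate_optimizee` is a hypothetical stand-in for a simulation scored on one task.
import numpy as np

def evaluate_optimizee(hyperparams: np.ndarray, task_seed: int) -> float:
    """Hypothetical optimizee: 'learns' one task with the given hyperparameters
    and returns a fitness score (higher is better)."""
    rng = np.random.default_rng(task_seed)
    target = rng.normal(size=hyperparams.shape)   # stand-in for a task-specific optimum
    return -float(np.sum((hyperparams - target) ** 2))

def l2l_evolutionary_strategy(dim=4, pop_size=16, generations=50,
                              sigma=0.3, lr=0.1, n_tasks=5):
    """Outer loop (optimizer): improves hyperparameters over distinct tasks."""
    theta = np.zeros(dim)                         # current hyperparameter estimate
    for _ in range(generations):
        noise = np.random.randn(pop_size, dim)    # one perturbation per optimizee
        candidates = theta + sigma * noise        # ensemble of optimizees (parallelizable)
        # Inner loop: each optimizee is evaluated on several distinct tasks,
        # and its fitness is the mean score across those tasks.
        fitness = np.array([
            np.mean([evaluate_optimizee(c, task_seed=t) for t in range(n_tasks)])
            for c in candidates
        ])
        # ES update: move theta toward perturbations with above-average fitness.
        advantage = (fitness - fitness.mean()) / (fitness.std() + 1e-8)
        theta += lr / (pop_size * sigma) * noise.T @ advantage
    return theta

if __name__ == "__main__":
    best = l2l_evolutionary_strategy()
    print("generalized hyperparameters:", best)
```

In an HPC setting, the inner loop over candidates is what would be distributed across nodes, since each optimizee evaluation is independent of the others.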