001015166 001__ 1015166
001015166 005__ 20240105202146.0
001015166 020__ $$a978-3-95806-719-6
001015166 0247_ $$2datacite_doi$$a10.34734/FZJ-2023-03569
001015166 037__ $$aFZJ-2023-03569
001015166 041__ $$aEnglish
001015166 1001_ $$0P:(DE-Juel1)161462$$aYegenoglu, Alper$$b0$$eCorresponding author
001015166 245__ $$aGradient-Free Optimization of Artificial and Biological Networks using Learning to Learn$$f - 2023-06-23
001015166 260__ $$aJülich$$bForschungszentrum Jülich GmbH Zentralbibliothek, Verlag$$c2023
001015166 300__ $$a149
001015166 3367_ $$2DataCite$$aOutput Types/Dissertation
001015166 3367_ $$0PUB:(DE-HGF)3$$2PUB:(DE-HGF)$$aBook$$mbook
001015166 3367_ $$2ORCID$$aDISSERTATION
001015166 3367_ $$2BibTeX$$aPHDTHESIS
001015166 3367_ $$02$$2EndNote$$aThesis
001015166 3367_ $$0PUB:(DE-HGF)11$$2PUB:(DE-HGF)$$aDissertation / PhD Thesis$$bphd$$mphd$$s1704443976_25361
001015166 3367_ $$2DRIVER$$adoctoralThesis
001015166 4900_ $$aSchriften des Forschungszentrums Jülich IAS Series$$v55
001015166 502__ $$aDissertation, RWTH Aachen University, 2023$$bDissertation$$cRWTH Aachen University$$d2023
001015166 520__ $$aUnderstanding intelligence and how it allows humans to learn, make decisions, and form memories is a long-standing quest in neuroscience. Our brain is formed by networks of neurons and other cells; however, it is not clear how these networks are trained to solve specific tasks. In machine learning and artificial intelligence it is common to train and optimize neural networks with gradient descent and backpropagation. How to transfer this optimization strategy to biological, spiking neural networks (SNNs) is still a matter of research. Due to the binary communication scheme between the neurons of an SNN via spikes, a direct application of gradient descent and backpropagation is not possible without further approximations. In my work, I present gradient-free optimization techniques that are directly applicable to artificial and biological neural networks. I utilize metaheuristics, such as genetic algorithms and the Ensemble Kalman Filter (EnKF), to optimize network parameters and train networks to learn to solve specific tasks. The optimization is embedded into the concept of meta-learning, also known as learning to learn (L2L). The learning to learn concept consists of a two-loop optimization procedure: in the first, inner loop the algorithm or network is trained on a family of tasks, and in the second, outer loop the hyper-parameters and parameters of the network are optimized. First, I apply the EnKF to a convolutional neural network, resulting in high accuracy when classifying digits. Then, I employ the same optimization procedure on a spiking reservoir network within the L2L framework. The L2L framework, an implementation of the learning to learn concept, allows me to easily deploy and execute multiple instances of the network in parallel on high-performance computing systems. In order to understand how the network learning evolves, I analyze the connection weights over multiple generations and investigate the covariance matrix of the EnKF in the principal component space. The analysis shows not only the convergence behaviour of the optimization process, but also how sampling techniques influence the optimization procedure. Next, I embed the EnKF into the L2L inner loop and adapt the hyper-parameters of the optimizer using a genetic algorithm (GA). In contrast to the manual parameter setting, the GA suggests an alternative configuration. Finally, I present a simulation of an ant colony foraging for food while being steered by SNNs. As the networks are trained, self-coordination and self-organization emerge in the colony. I employ various analysis methods to better understand the ants’ behaviour. With my work I leverage optimization for different scientific domains utilizing meta-learning and illustrate how gradient-free optimization can be applied to biological and artificial networks.
001015166 536__ $$0G:(DE-HGF)POF4-5111$$a5111 - Domain-Specific Simulation & Data Life Cycle Labs (SDLs) and Research Groups (POF4-511)$$cPOF4-511$$fPOF IV$$x0
001015166 536__ $$0G:(EU-Grant)945539$$aHBP SGA3 - Human Brain Project Specific Grant Agreement 3 (945539)$$c945539$$fH2020-SGA-FETFLAG-HBP-2019$$x1
001015166 536__ $$0G:(DE-Juel1)JL SMHB-2021-2027$$aJL SMHB - Joint Lab Supercomputing and Modeling for the Human Brain (JL SMHB-2021-2027)$$cJL SMHB-2021-2027$$x2
001015166 536__ $$0G:(DE-Juel1)HDS-LEE-20190612$$aHDS LEE - Helmholtz School for Data Science in Life, Earth and Energy (HDS LEE) (HDS-LEE-20190612)$$cHDS-LEE-20190612$$x3
001015166 536__ $$0G:(DE-Juel1)CSD-SSD-20190612$$aCSD-SSD - Center for Simulation and Data Science (CSD) - School for Simulation and Data Science (SSD) (CSD-SSD-20190612)$$cCSD-SSD-20190612$$x4
001015166 536__ $$0G:(DE-Juel1)PHD-NO-GRANT-20170405$$aPhD no Grant - Doktorand ohne besondere Förderung (PHD-NO-GRANT-20170405)$$cPHD-NO-GRANT-20170405$$x5
001015166 536__ $$0G:(DE-Juel1)Helmholtz-SLNS$$aSLNS - SimLab Neuroscience (Helmholtz-SLNS)$$cHelmholtz-SLNS$$x6
001015166 8564_ $$uhttps://juser.fz-juelich.de/record/1015166/files/IAS_55_Yegenoglu_Alper.pdf$$yOpenAccess
001015166 909CO $$ooai:juser.fz-juelich.de:1015166$$pdnbdelivery$$pec_fundedresources$$popenaire$$pVDB$$pdriver$$popen_access
001015166 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)161462$$aForschungszentrum Jülich$$b0$$kFZJ
001015166 9131_ $$0G:(DE-HGF)POF4-511$$1G:(DE-HGF)POF4-510$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5111$$aDE-HGF$$bKey Technologies$$lEngineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action$$vEnabling Computational- & Data-Intensive Science and Engineering$$x0
001015166 9141_ $$y2023
001015166 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
001015166 915__ $$0LIC:(DE-HGF)CCBY4$$2HGFVOC$$aCreative Commons Attribution CC BY 4.0
001015166 920__ $$lyes
001015166 9201_ $$0I:(DE-Juel1)JSC-20090406$$kJSC$$lJülich Supercomputing Centre$$x0
001015166 980__ $$aphd
001015166 980__ $$aVDB
001015166 980__ $$abook
001015166 980__ $$aI:(DE-Juel1)JSC-20090406
001015166 980__ $$aUNRESTRICTED
001015166 9801_ $$aFullTexts