Ensemble Kalman Filter Optimizing Deep Neural Networks: An Alternative Approach to Non-performing Gradient Descent
Contribution to a conference proceedings / contribution to a book (FZJ-2021-00117)
Springer, Cham, 2020
Please use a persistent ID in citations: http://hdl.handle.net/2128/26777 · doi:10.1007/978-3-030-64580-9_7
Abstract: The successful training of deep neural networks depends on the initialization scheme and the choice of activation function. Poorly chosen parameter settings lead to the well-known problem of exploding or vanishing gradients, which arises when gradient descent and backpropagation are applied. In this setting, the Ensemble Kalman Filter (EnKF) can be used as an alternative optimizer for training neural networks. The EnKF does not require the explicit calculation of gradients or adjoints, and we show that this resolves the exploding and vanishing gradient problem. We analyze different parameter initializations, propose a dynamic change in ensembles, and compare the results to established methods.
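The key point of the abstract is that the EnKF updates an ensemble of candidate weight vectors using only forward evaluations of the network, so no backpropagated gradients are needed. The sketch below illustrates one such ensemble update in plain NumPy; the function name `enkf_step`, the isotropic noise level `gamma`, and the toy linear model are illustrative assumptions for a generic EnKF inversion step, not the authors' implementation.

```python
import numpy as np

def enkf_step(ensemble, forward, y, gamma=1e-2):
    """One gradient-free Ensemble Kalman Filter update (a sketch,
    not the paper's code).

    ensemble : (J, d) array, J particles of d parameters each
    forward  : maps a parameter vector (d,) to a model output (k,)
    y        : (k,) target data (e.g. labels for a mini-batch)
    gamma    : assumed isotropic observation-noise level
    """
    J = ensemble.shape[0]
    G = np.stack([forward(theta) for theta in ensemble])  # (J, k) outputs

    theta_mean = ensemble.mean(axis=0)
    G_mean = G.mean(axis=0)

    # Empirical covariances built from ensemble deviations; no gradients
    # or adjoints are computed anywhere.
    dTheta = ensemble - theta_mean        # (J, d)
    dG = G - G_mean                       # (J, k)
    C_tg = dTheta.T @ dG / J              # (d, k) parameter-output covariance
    C_gg = dG.T @ dG / J                  # (k, k) output covariance

    # Kalman gain; gamma * I regularizes the inversion.
    K = C_tg @ np.linalg.inv(C_gg + gamma * np.eye(len(y)))

    # Shift every particle toward the observed data y.
    return ensemble + (y - G) @ K.T

# Toy usage (hypothetical): recover the weights of a linear model.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

ens = rng.normal(size=(50, 3))            # 50 particles, 3 parameters
for _ in range(100):
    ens = enkf_step(ens, lambda w: X @ w, y)
print(ens.mean(axis=0))                   # approaches w_true
```

Because the update relies only on ensemble statistics of forward passes, it remains well defined even where backpropagation would produce exploding or vanishing gradients, which is the property the paper exploits.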