Optimizing Distributed Deep Learning in Heterogeneous Computing Platforms for Remote Sensing Data Classification
Contribution to a conference proceedings | FZJ-2023-00122
2022
IEEE
Please use a persistent ID in citations: http://hdl.handle.net/2128/33404 · doi:10.1109/IGARSS46834.2022.9883762
Abstract: Remote Sensing (RS) applications pose unique challenges to Deep Learning (DL) due to the high volume and complexity of their data. On the one hand, deep neural network architectures can automatically extract informative features from RS data. On the other hand, these models have massive numbers of tunable parameters and therefore require high computational capabilities. Distributed DL with data parallelism on High-Performance Computing (HPC) systems has proven necessary for meeting the demands of DL models. Nevertheless, a single HPC system can already be highly heterogeneous and include computing resources with uneven processing power. In this context, a standard data parallelism strategy does not partition the data efficiently according to the available computing resources. This paper proposes an alternative approach to computing the gradient, which guarantees that each DL model replica's contribution to the gradient calculation is proportional to its processing speed. The experimental results, obtained on a heterogeneous HPC system with RS data, demonstrate that the proposed approach provides a significant training speed-up and a gain in global accuracy compared to one of the state-of-the-art distributed DL frameworks.
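In a standard data-parallel setting, the global gradient is the plain average of the per-replica gradients; the abstract instead describes weighting each replica's contribution by its processing speed. Below is a minimal sketch of that idea, assuming per-replica speeds are measured in samples per second; the function and variable names are illustrative and this is not the authors' implementation:

```python
import numpy as np

def speed_weighted_gradient(grads, speeds):
    """Average per-replica gradients with weights proportional to each
    replica's measured processing speed, so that faster replicas (which
    process more samples per step) contribute proportionally more."""
    speeds = np.asarray(speeds, dtype=float)
    weights = speeds / speeds.sum()  # normalize so the weights sum to 1
    return sum(w * g for w, g in zip(weights, grads))

# Hypothetical example: three replicas with uneven throughput (samples/s).
grads = [np.array([0.20, -0.10]),
         np.array([0.40, 0.00]),
         np.array([0.10, 0.30])]
speeds = [1200.0, 800.0, 400.0]
print(speed_weighted_gradient(grads, speeds))  # fastest replica dominates
```

With a uniform average, the slowest replica would hold back or skew the update; weighting by throughput lets the contribution to the gradient track how much data each replica actually processes per step.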