TY  - CONF
AU  - Rojas, Elvis
AU  - Knobloch, Michael
AU  - Daoud, Nour
AU  - Meneses, Esteban
AU  - Mohr, Bernd
TI  - Early Experiences of Noise-Sensitivity Performance Analysis of a Distributed Deep Learning Framework
PB  - IEEE
M1  - FZJ-2022-03912
SP  - 516
EP  - 522
PY  - 2022
AB  - Deep Learning (DL) applications are used to solve complex problems efficiently. These applications require complex neural network models composed of millions of parameters and huge amounts of data for proper training. This is only possible by parallelizing the necessary computations with so-called distributed deep learning (DDL) frameworks over many GPUs distributed across multiple nodes of an HPC cluster. These frameworks mostly utilize the compute power of the GPUs and use only a small portion of the available compute power of the CPUs in the nodes for I/O and inter-process communication, leaving many CPU cores idle and unused. The more powerful the base CPU in the cluster nodes, the more compute resources are wasted. In this paper, we investigate how much of these unutilized compute resources could be used for executing other applications without lowering the performance of the DDL frameworks. In our experiments, we executed a noise-generation application, which generates a very high memory, network, or I/O load, in parallel with DDL frameworks, and used HPC profiling and tracing techniques to determine whether and how the generated noise affects the performance of the DDL frameworks. Early results indicate that it might be possible to utilize the idle cores for jobs of other users without negatively affecting the performance of the DDL applications.
T2  - 2022 IEEE International Conference on Cluster Computing
CY  - Heidelberg, Germany
Y2  - 6 Sep 2022 - 9 Sep 2022
M2  - Heidelberg, Germany
LB  - PUB:(DE-HGF)8
AN  - WOS:000920273100051
DO  - 10.1109/CLUSTER51413.2022.00066
UR  - https://juser.fz-juelich.de/record/910530
ER  -