TY - CONF
AU - Inanc, Eray
AU - Albers, Marian
AU - Sarma, Rakesh
AU - Aach, Marcel
AU - Schröder, Wolfgang
AU - Lintermann, Andreas
TI - Parallel and Scalable Deep Learning to Reconstruct Actuated Turbulent Boundary Layer Flows. Part II: Autoencoder Training on HPC Systems
M1 - FZJ-2023-02167
SP - 4 pages
PY - 2022
AB - Convolutional autoencoders are trained on exceptionally large actuated turbulent boundary layer simulation data (8.3 TB) on the high-performance computer JUWELS at the Jülich Supercomputing Centre. The training is parallelized with a distributed data-parallel approach, in which the training dataset is distributed across multiple workers that periodically exchange the trainable parameters of the convolutional autoencoder network. This drastically reduces the training time; nearly linear scaling is achieved when the number of workers is increased (up to 2,048 GPUs). As a consequence of this increase, the total batch size also grows, which directly affects the training accuracy and hence the quality of the trained network. The training error, computed between the reference and the reconstructed turbulent boundary layer fields, increases with the number of workers. This behavior must be accounted for, especially at large worker counts, i.e., a compromise between parallel speedup and accuracy needs to be found.
T2 - 33rd International Conference on Parallel Computational Fluid Dynamics
CY - 25 May 2022 - 27 May 2022, Alba (Italy)
Y2 - 25 May 2022 - 27 May 2022
M2 - Alba, Italy
LB - PUB:(DE-HGF)8
UR - https://juser.fz-juelich.de/record/1007693
ER -