Parallel and Scalable Deep Learning to Reconstruct Actuated Turbulent Boundary Layer Flows. Part II: Autoencoder Training on HPC Systems
Contribution to a conference proceedings | FZJ-2023-02167
2022
Please use a persistent id in citations: http://hdl.handle.net/2128/34556
Abstract: Convolutional autoencoders are trained on exceptionally large actuated turbulent boundary layer simulation data (8.3 TB) on the high-performance computer JUWELS at the Jülich Supercomputing Centre. The parallelization of the training is based on a distributed data-parallel approach: the training dataset is distributed to multiple workers, and the trainable parameters of the convolutional autoencoder network are occasionally exchanged between the workers. This drastically reduces training times, and almost linear scaling performance is achieved when the number of workers is increased (up to 2,048 GPUs). As a consequence of this increase, the total batch size also grows, which directly affects the training accuracy and hence the quality of the trained network. The training error, computed between the reference and the reconstructed turbulent boundary layer fields, becomes larger as the number of workers increases. This behavior must be accounted for, especially at large worker counts, i.e., a compromise between parallel speedup and accuracy needs to be found.
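The abstract's distributed data-parallel scheme can be illustrated with a minimal sketch. The sketch below assumes PyTorch's DistributedDataParallel; the paper's actual framework, network architecture, and data shapes are not specified here, so the ConvAutoencoder class and the 64x64 single-channel snapshots are hypothetical stand-ins. The key mechanics match the abstract: the dataset is sharded across workers by a sampler, gradients are exchanged between workers during the backward pass, and the effective global batch size grows with the number of workers.

```python
# Minimal sketch of distributed data-parallel autoencoder training
# (assumed PyTorch DDP; launch with e.g. torchrun --nproc_per_node=4 train_ddp.py).
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


class ConvAutoencoder(nn.Module):
    """Toy convolutional autoencoder standing in for the paper's network."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def main():
    # One process per GPU; torchrun provides RANK, LOCAL_RANK, WORLD_SIZE.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Synthetic stand-in for the boundary-layer snapshots (hypothetical shape).
    fields = torch.randn(1024, 1, 64, 64)
    dataset = TensorDataset(fields)
    # The sampler shards the dataset across workers, so the global batch size
    # is per-worker batch size * number of workers.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    model = DDP(ConvAutoencoder().cuda(), device_ids=[local_rank])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()  # reconstruction error, as in the abstract

    for epoch in range(5):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for (batch,) in loader:
            batch = batch.cuda(non_blocking=True)
            optimizer.zero_grad()
            loss = loss_fn(model(batch), batch)
            loss.backward()  # gradients are all-reduced across workers here
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Because the sampler shards the data, adding workers at a fixed per-worker batch size enlarges the global batch, which is exactly the accuracy-versus-speedup trade-off the abstract describes at large worker counts.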