Book/Dissertation / PhD Thesis FZJ-2025-02982

Parallel and Scalable Hyperparameter Optimization for Distributed Deep Learning Methods on High-Performance Computing Systems



2025

ISBN: 978-9935-9807-8-6

172 p. [10.34734/FZJ-2025-02982] = Dissertation, University of Iceland, 2025


Please use a persistent id in citations: doi: 10.34734/FZJ-2025-02982

Abstract: The design of Deep Learning (DL) models is a complex task, involving decisions on the general architecture of the model (e.g., the number of layers of the Neural Network (NN)) and on the optimization algorithms (e.g., the learning rate). These so-called hyperparameters significantly influence the performance (e.g., accuracy or error rates) of the final DL model and are, therefore, of great importance. However, optimizing these hyperparameters is a computationally intensive process, as many combinations must be evaluated to identify the best-performing ones; often, this optimization is performed manually. This Ph.D. thesis leverages the power of High-Performance Computing (HPC) systems to perform automatic and efficient Hyperparameter Optimization (HPO) for DL models that are trained on large quantities of scientific data. On modern HPC systems, equipped with a large number of Graphics Processing Units (GPUs), it becomes possible not only to evaluate multiple models with different hyperparameter combinations in parallel but also to distribute the training of the models themselves across multiple GPUs. State-of-the-art HPO methods based on the concept of early stopping have demonstrated significant reductions in the runtime of the HPO process. However, their performance at scale, particularly in HPC environments and when applied to large scientific datasets, has remained unexplored. This thesis therefore researches parallel and scalable HPO methods that leverage the inherent capabilities of HPC systems and innovative workflows incorporating novel computing paradigms. The developed HPO methods are validated on scientific datasets ranging from the Computational Fluid Dynamics (CFD) to the Remote Sensing (RS) domain, spanning several hundred Gigabytes (GBs) to several Terabytes (TBs) in size.
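The early-stopping idea the abstract refers to can be illustrated by successive halving, the core mechanism behind methods such as Hyperband: many hyperparameter configurations start on a small training budget, and only the best fraction survives to the next, larger budget. The following minimal Python sketch is illustrative only and not taken from the thesis; the function names, the toy learning-rate search space, and the noisy surrogate loss are assumptions.

```python
import random

def successive_halving(sample_config, evaluate, n_configs=27, min_budget=1, eta=3):
    """Start n_configs configurations on min_budget; at each rung keep the
    best 1/eta fraction and multiply the budget by eta (early stopping)."""
    configs = [sample_config() for _ in range(n_configs)]
    budget = min_budget
    while len(configs) > 1:
        # Evaluate every surviving configuration on the current budget.
        scores = [(evaluate(cfg, budget), cfg) for cfg in configs]
        scores.sort(key=lambda s: s[0])  # lower score = better (e.g., loss)
        # Discard the worst configurations early; grow the budget for the rest.
        configs = [cfg for _, cfg in scores[: max(1, len(configs) // eta)]]
        budget *= eta
    return configs[0]

# Toy example: the only hyperparameter is the learning rate; the surrogate
# loss is minimized at lr = 0.1, and larger budgets reduce evaluation noise.
random.seed(0)
sample = lambda: {"lr": 10 ** random.uniform(-4, 0)}

def loss(cfg, budget):
    noise = random.gauss(0, 1.0 / budget)
    return (cfg["lr"] - 0.1) ** 2 + noise

best = successive_halving(sample, loss)
print(best)
```

In an HPC setting, the evaluations within one rung are independent and can be dispatched to separate GPUs or nodes, which is what makes this class of methods attractive for parallel HPO at scale.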


Note: Additional Grant: Joint project: NXTAIM - NXT GEN (01.01.2024-31.12.2026)
Note: Dissertation, University of Iceland, 2025

Contributing Institute(s):
  1. Jülich Supercomputing Center (JSC)
Research Program(s):
  1. 5111 - Domain-Specific Simulation & Data Life Cycle Labs (SDLs) and Research Groups (POF4-511)
  2. RAISE - Research on AI- and Simulation-Based Engineering at Exascale (951733)
  3. nxtAIM - NXT GEN AI Methods (19A23014l)

Appears in the scientific report 2025
Database coverage:
OpenAccess

The record appears in these collections:
Document types > University theses > Doctoral theses
Document types > Books > Books
Workflow collections > Public entries
Institute collections > JSC
Publications database
Open Access

Record created 2025-07-07, last modified 2025-07-24


OpenAccess:
Download full text (PDF)
External link:
Download full text