001045002 001__ 1045002
001045002 005__ 20251104202045.0
001045002 0247_ $$2doi$$a10.1016/j.future.2025.108042
001045002 0247_ $$2ISSN$$a0167-739X
001045002 0247_ $$2ISSN$$a1872-7115
001045002 0247_ $$2datacite_doi$$a10.34734/FZJ-2025-03484
001045002 037__ $$aFZJ-2025-03484
001045002 041__ $$aEnglish
001045002 082__ $$a004
001045002 1001_ $$0P:(DE-Juel1)180916$$aAach, Marcel$$b0$$eCorresponding author
001045002 245__ $$aResource-adaptive successive doubling for hyperparameter optimization with large datasets on high-performance computing systems
001045002 260__ $$aAmsterdam [u.a.]$$bElsevier Science$$c2026
001045002 3367_ $$2DRIVER$$aarticle
001045002 3367_ $$2DataCite$$aOutput Types/Journal article
001045002 3367_ $$0PUB:(DE-HGF)16$$2PUB:(DE-HGF)$$aJournal Article$$bjournal$$mjournal$$s1762265627_19022
001045002 3367_ $$2BibTeX$$aARTICLE
001045002 3367_ $$2ORCID$$aJOURNAL_ARTICLE
001045002 3367_ $$00$$2EndNote$$aJournal Article
001045002 520__ $$aThe accuracy of Machine Learning (ML) models is highly dependent on the hyperparameters that have to be chosen by the user before the training. However, finding the optimal set of hyperparameters is a complex process, as many different parameter combinations need to be evaluated, and obtaining the accuracy of each combination usually requires a full training run. It is therefore of great interest to reduce the computational runtime of this process. On High-Performance Computing (HPC) systems, several configurations can be evaluated in parallel to speed up this Hyperparameter Optimization (HPO). State-of-the-art HPO methods follow a bandit-based approach and build on top of successive halving, where the final performance of a combination is estimated from a low-fidelity performance metric obtained before full training, and more promising combinations are assigned more resources over time. Frequently, the number of epochs is treated as a resource, letting more promising combinations train longer. Another option is to use the number of workers as a resource and directly allocate more workers to more promising configurations via data-parallel training. This article proposes a novel Resource-Adaptive Successive Doubling Algorithm (RASDA), which combines a resource-adaptive successive doubling scheme with the plain Asynchronous Successive Halving Algorithm (ASHA). Scalability of this approach is shown on up to 1,024 Graphics Processing Units (GPUs) on modern HPC systems. It is applied to different types of Neural Networks (NNs) and trained on large datasets from the Computer Vision (CV), Computational Fluid Dynamics (CFD), and Additive Manufacturing (AM) domains, where performing more than one full training run is usually infeasible. Empirical results show that RASDA outperforms ASHA by a factor of up to 1.9 with respect to the runtime. At the same time, the solution quality of the final ASHA models is maintained or even surpassed by the implicit batch size scheduling of RASDA. With RASDA, systematic HPO is applied to a terabyte-scale scientific dataset for the first time in the literature, enabling efficient optimization of complex models on massive scientific data.
001045002 536__ $$0G:(DE-HGF)POF4-5111$$a5111 - Domain-Specific Simulation & Data Life Cycle Labs (SDLs) and Research Groups (POF4-511)$$cPOF4-511$$fPOF IV$$x0
001045002 536__ $$0G:(EU-Grant)951733$$aRAISE - Research on AI- and Simulation-Based Engineering at Exascale (951733)$$c951733$$fH2020-INFRAEDI-2019-1$$x1
001045002 588__ $$aDataset connected to CrossRef, Journals: juser.fz-juelich.de
001045002 7001_ $$0P:(DE-Juel1)188513$$aSarma, Rakesh$$b1
001045002 7001_ $$0P:(DE-HGF)0$$aNeukirchen, Helmut$$b2
001045002 7001_ $$0P:(DE-Juel1)132239$$aRiedel, Morris$$b3$$ufzj
001045002 7001_ $$0P:(DE-Juel1)165948$$aLintermann, Andreas$$b4
001045002 773__ $$0PERI:(DE-600)2020551-X$$a10.1016/j.future.2025.108042$$gVol. 175, p. 108042 -$$p108042 -$$tFuture generation computer systems$$v175$$x0167-739X$$y2026
001045002 8564_ $$uhttps://juser.fz-juelich.de/record/1045002/files/1-s2.0-S0167739X25003371-main.pdf$$yOpenAccess
001045002 909CO $$ooai:juser.fz-juelich.de:1045002$$popenaire$$popen_access$$pdriver$$pVDB$$pec_fundedresources$$pdnbdelivery
001045002 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)180916$$aForschungszentrum Jülich$$b0$$kFZJ
001045002 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)188513$$aForschungszentrum Jülich$$b1$$kFZJ
001045002 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)132239$$aForschungszentrum Jülich$$b3$$kFZJ
001045002 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)165948$$aForschungszentrum Jülich$$b4$$kFZJ
001045002 9131_ $$0G:(DE-HGF)POF4-511$$1G:(DE-HGF)POF4-510$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5111$$aDE-HGF$$bKey Technologies$$lEngineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action$$vEnabling Computational- & Data-Intensive Science and Engineering$$x0
001045002 915__ $$0StatID:(DE-HGF)0200$$2StatID$$aDBCoverage$$bSCOPUS$$d2024-12-17
001045002 915__ $$0StatID:(DE-HGF)0300$$2StatID$$aDBCoverage$$bMedline$$d2024-12-17
001045002 915__ $$0StatID:(DE-HGF)1160$$2StatID$$aDBCoverage$$bCurrent Contents - Engineering, Computing and Technology$$d2024-12-17
001045002 915__ $$0LIC:(DE-HGF)CCBY4$$2HGFVOC$$aCreative Commons Attribution CC BY 4.0
001045002 915__ $$0StatID:(DE-HGF)0100$$2StatID$$aJCR$$bFUTURE GENER COMP SY : 2022$$d2024-12-17
001045002 915__ $$0StatID:(DE-HGF)0113$$2StatID$$aWoS$$bScience Citation Index Expanded$$d2024-12-17
001045002 915__ $$0StatID:(DE-HGF)0150$$2StatID$$aDBCoverage$$bWeb of Science Core Collection$$d2024-12-17
001045002 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
001045002 915__ $$0StatID:(DE-HGF)9905$$2StatID$$aIF >= 5$$bFUTURE GENER COMP SY : 2022$$d2024-12-17
001045002 915__ $$0StatID:(DE-HGF)0160$$2StatID$$aDBCoverage$$bEssential Science Indicators$$d2024-12-17
001045002 915__ $$0StatID:(DE-HGF)0199$$2StatID$$aDBCoverage$$bClarivate Analytics Master Journal List$$d2024-12-17
001045002 920__ $$lyes
001045002 9201_ $$0I:(DE-Juel1)JSC-20090406$$kJSC$$lJülich Supercomputing Centre$$x0
001045002 980__ $$ajournal
001045002 980__ $$aVDB
001045002 980__ $$aUNRESTRICTED
001045002 980__ $$aI:(DE-Juel1)JSC-20090406
001045002 9801_ $$aFullTexts