001037349 001__ 1037349
001037349 005__ 20250203124526.0
001037349 0247_ $$2doi$$a10.1016/j.jocs.2024.102229
001037349 0247_ $$2WOS$$aWOS:001185868400001
001037349 037__ $$aFZJ-2025-00662
001037349 082__ $$a004
001037349 1001_ $$0P:(DE-HGF)0$$aFriedemann, Sebastian$$b0$$eCorresponding author
001037349 245__ $$aDynamic load/propagate/store for data assimilation with particle filters on supercomputers
001037349 260__ $$aAmsterdam [u.a.]$$bElsevier$$c2024
001037349 3367_ $$2DRIVER$$aarticle
001037349 3367_ $$2DataCite$$aOutput Types/Journal article
001037349 3367_ $$0PUB:(DE-HGF)16$$2PUB:(DE-HGF)$$aJournal Article$$bjournal$$mjournal$$s1737022774_7816
001037349 3367_ $$2BibTeX$$aARTICLE
001037349 3367_ $$2ORCID$$aJOURNAL_ARTICLE
001037349 3367_ $$00$$2EndNote$$aJournal Article
001037349 520__ $$aSeveral ensemble-based Data Assimilation (DA) methods rely on a propagate/update cycle, where a potentially compute-intensive simulation code propagates multiple states for several consecutive time steps, which are then analyzed to update the states to be propagated in the next cycle. In this paper we focus on DA methods where the update can be computed by gathering only lightweight data obtained independently from each of the propagated states. This encompasses particle filters, where one weight is computed from each state, but also methods such as Approximate Bayesian Computation (ABC) and Markov Chain Monte Carlo (MCMC). Such methods can be very compute-intensive, and running them efficiently at scale on supercomputers is challenging. This paper proposes a framework based on an elastic and fault-tolerant runner/server architecture that minimizes data movement while enabling dynamic load balancing. Our approach relies on runners that load, propagate, and store particles from an asynchronously managed distributed particle cache, permitting particles to move from one runner to another in the background while particle propagation proceeds. The framework is validated with a bootstrap particle filter using the WRF simulation code. We handle up to 2555 particles on 20,442 compute cores. Compared to a file-based implementation, our solution spends up to 2.84× fewer resources (cores×seconds) per particle.
001037349 536__ $$0G:(DE-HGF)POF4-5111$$a5111 - Domain-Specific Simulation & Data Life Cycle Labs (SDLs) and Research Groups (POF4-511)$$cPOF4-511$$fPOF IV$$x0
001037349 536__ $$0G:(EU-Grant)824158$$aEoCoE-II - Energy Oriented Center of Excellence : toward exascale for energy (824158)$$c824158$$fH2020-INFRAEDI-2018-1$$x1
001037349 7001_ $$0P:(DE-HGF)0$$aKeller, Kai$$b1
001037349 7001_ $$0P:(DE-Juel1)164851$$aLu, Yen-Sen$$b2$$ufzj
001037349 7001_ $$0P:(DE-HGF)0$$aRaffin, Bruno$$b3
001037349 7001_ $$0P:(DE-HGF)0$$aBautista-Gomez, Leonardo$$b4
001037349 773__ $$0PERI:(DE-600)2557360-3$$a10.1016/j.jocs.2024.102229$$p102229$$tJournal of computational science$$v76$$x1877-7503$$y2024
001037349 8564_ $$uhttps://juser.fz-juelich.de/record/1037349/files/Dynamic%20load_propagate_store%20for%20data%20assimilation%20with%20particle%20filters%20on%20supercomputers.pdf$$yRestricted
001037349 909CO $$ooai:juser.fz-juelich.de:1037349$$pec_fundedresources$$pVDB$$popenaire
001037349 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)164851$$aForschungszentrum Jülich$$b2$$kFZJ
001037349 915__ $$0StatID:(DE-HGF)0100$$2StatID$$aJCR$$bJ COMPUT SCI-NETH : 2022$$d2025-01-07
001037349 915__ $$0StatID:(DE-HGF)0200$$2StatID$$aDBCoverage$$bSCOPUS$$d2025-01-07
001037349 915__ $$0StatID:(DE-HGF)0300$$2StatID$$aDBCoverage$$bMedline$$d2025-01-07
001037349 915__ $$0StatID:(DE-HGF)0199$$2StatID$$aDBCoverage$$bClarivate Analytics Master Journal List$$d2025-01-07
001037349 915__ $$0StatID:(DE-HGF)1160$$2StatID$$aDBCoverage$$bCurrent Contents - Engineering, Computing and Technology$$d2025-01-07
001037349 915__ $$0StatID:(DE-HGF)0160$$2StatID$$aDBCoverage$$bEssential Science Indicators$$d2025-01-07
001037349 915__ $$0StatID:(DE-HGF)0113$$2StatID$$aWoS$$bScience Citation Index Expanded$$d2025-01-07
001037349 915__ $$0StatID:(DE-HGF)0150$$2StatID$$aDBCoverage$$bWeb of Science Core Collection$$d2025-01-07
001037349 915__ $$0StatID:(DE-HGF)9900$$2StatID$$aIF < 5$$d2025-01-07
001037349 9131_ $$0G:(DE-HGF)POF4-511$$1G:(DE-HGF)POF4-510$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5111$$aDE-HGF$$bKey Technologies$$lEngineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action$$vEnabling Computational- & Data-Intensive Science and Engineering$$x0
001037349 9141_ $$y2024
001037349 920__ $$lyes
001037349 9201_ $$0I:(DE-Juel1)JSC-20090406$$kJSC$$lJülich Supercomputing Center$$x0
001037349 980__ $$ajournal
001037349 980__ $$aVDB
001037349 980__ $$aI:(DE-Juel1)JSC-20090406
001037349 980__ $$aUNRESTRICTED