| 001 | 1052682 | ||
| 005 | 20260220104049.0 | ||
| 024 | 7 | _ | |a 10.1051/epjconf/202533701296 |2 doi |
| 024 | 7 | _ | |a 2100-014X |2 ISSN |
| 024 | 7 | _ | |a 2101-6275 |2 ISSN |
| 024 | 7 | _ | |a 10.34734/FZJ-2026-01050 |2 datacite_doi |
| 037 | _ | _ | |a FZJ-2026-01050 |
| 082 | _ | _ | |a 530 |
| 100 | 1 | _ | |a Ciangottini, Diego |0 P:(DE-HGF)0 |b 0 |e Corresponding author |
| 111 | 2 | _ | |a Conference on Computing in High Energy and Nuclear Physics |g CHEP 2024 |c Kraków |d 2024-10-19 - 2024-10-25 |w Poland |
| 245 | _ | _ | |a Unlocking the compute continuum: Scaling out from cloud to HPC and HTC resources |
| 260 | _ | _ | |a Les Ulis |c 2025 |b EDP Sciences |
| 300 | _ | _ | |a 8 p. |
| 336 | 7 | _ | |a CONFERENCE_PAPER |2 ORCID |
| 336 | 7 | _ | |a Conference Paper |0 33 |2 EndNote |
| 336 | 7 | _ | |a INPROCEEDINGS |2 BibTeX |
| 336 | 7 | _ | |a conferenceObject |2 DRIVER |
| 336 | 7 | _ | |a Output Types/Conference Paper |2 DataCite |
| 336 | 7 | _ | |a Contribution to a conference proceedings |b contrib |m contrib |0 PUB:(DE-HGF)8 |s 1769497240_24660 |2 PUB:(DE-HGF) |
| 336 | 7 | _ | |a Contribution to a book |0 PUB:(DE-HGF)7 |2 PUB:(DE-HGF) |m contb |
| 490 | 0 | _ | |a The European physical journal / Web of Conferences |
| 520 | _ | _ | |a In a geo-distributed computing infrastructure with heterogeneous resources (HPC, HTC and possibly cloud), the key to efficient and user-friendly access is the ability to offload each task to the best-suited location. One of the most critical problems is the logistics of wide-area, multi-stage workflows that move back and forth between multiple resource providers. We envision a model that addresses this challenge by enabling a “transparent offloading” of containerized payloads through the Kubernetes API primitives, creating a common cloud-native interface to any number of external machines and backend types. To this end we created the interLink project, an open-source extension of the Virtual-Kubelet concept whose design aims for a common abstraction over heterogeneous and distributed backends. interLink is developed by INFN in the context of interTwin, an EU-funded project that aims to build a digital-twin platform (Digital Twin Engine) for the sciences, and of the ICSC National Research Center for High Performance Computing, Big Data and Quantum Computing in Italy. In this contribution we first provide a comprehensive overview of the key features and the technical implementation. We then showcase our major case studies, such as the scale-out of an analysis facility and the distribution of ML training processes. We focus on the impact of being able to seamlessly exploit world-class EuroHPC supercomputers with such a technology. |
| 536 | _ | _ | |a 5112 - Cross-Domain Algorithms, Tools, Methods Labs (ATMLs) and Research Groups (POF4-511) |0 G:(DE-HGF)POF4-5112 |c POF4-511 |f POF IV |x 0 |
| 536 | _ | _ | |a interTwin - An interdisciplinary Digital Twin Engine for science (101058386) |0 G:(EU-Grant)101058386 |c 101058386 |f HORIZON-INFRA-2021-TECH-01 |x 1 |
| 588 | _ | _ | |a Dataset connected to CrossRef, Journals: juser.fz-juelich.de |
| 700 | 1 | _ | |a Spiga, Daniele |0 P:(DE-HGF)0 |b 1 |
| 700 | 1 | _ | |a Memon, Ahmed Shiraz |0 P:(DE-Juel1)132191 |b 2 |u fzj |
| 700 | 1 | _ | |a Manzi, Andrea |0 P:(DE-HGF)0 |b 3 |
| 700 | 1 | _ | |a Filipcic, Andrej |0 P:(DE-HGF)0 |b 4 |
| 700 | 1 | _ | |a Troja, Antonino |0 P:(DE-HGF)0 |b 5 |
| 700 | 1 | _ | |a Fanzago, Federica |0 P:(DE-HGF)0 |b 6 |
| 700 | 1 | _ | |a Bianchini, Giulio |0 P:(DE-HGF)0 |b 7 |
| 700 | 1 | _ | |a Sgaravatto, Massimo |0 P:(DE-HGF)0 |b 8 |
| 700 | 1 | _ | |a Prica, Teo |0 P:(DE-HGF)0 |b 9 |
| 700 | 1 | _ | |a Boccali, Tommaso |0 P:(DE-HGF)0 |b 10 |
| 700 | 1 | _ | |a Tedeschi, Tommaso |0 P:(DE-HGF)0 |b 11 |
| 773 | _ | _ | |a 10.1051/epjconf/202533701296 |g Vol. 337, p. 01296 - |0 PERI:(DE-600)2595425-8 |p 01296 |v 337 |y 2025 |x 2100-014X |
| 856 | 4 | _ | |u https://juser.fz-juelich.de/record/1052682/files/epjconf_chep2025_01296.pdf |y OpenAccess |
| 909 | C | O | |o oai:juser.fz-juelich.de:1052682 |p openaire |p open_access |p driver |p VDB |p ec_fundedresources |p dnbdelivery |
| 910 | 1 | _ | |a Forschungszentrum Jülich |0 I:(DE-588b)5008462-8 |k FZJ |b 2 |6 P:(DE-Juel1)132191 |
| 913 | 1 | _ | |a DE-HGF |b Key Technologies |l Engineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action |1 G:(DE-HGF)POF4-510 |0 G:(DE-HGF)POF4-511 |3 G:(DE-HGF)POF4 |2 G:(DE-HGF)POF4-500 |4 G:(DE-HGF)POF |v Enabling Computational- & Data-Intensive Science and Engineering |9 G:(DE-HGF)POF4-5112 |x 0 |
| 914 | 1 | _ | |y 2025 |
| 915 | _ | _ | |a OpenAccess |0 StatID:(DE-HGF)0510 |2 StatID |
| 915 | _ | _ | |a Creative Commons Attribution CC BY 4.0 |0 LIC:(DE-HGF)CCBY4 |2 HGFVOC |
| 915 | _ | _ | |a DBCoverage |0 StatID:(DE-HGF)0300 |2 StatID |b Medline |d 2025-11-11 |
| 915 | _ | _ | |a DBCoverage |0 StatID:(DE-HGF)0501 |2 StatID |b DOAJ Seal |d 2022-08-02T14:13:25Z |
| 915 | _ | _ | |a DBCoverage |0 StatID:(DE-HGF)0500 |2 StatID |b DOAJ |d 2022-08-02T14:13:25Z |
| 915 | _ | _ | |a Peer Review |0 StatID:(DE-HGF)0030 |2 StatID |b DOAJ : Anonymous peer review |d 2022-08-02T14:13:25Z |
| 920 | 1 | _ | |0 I:(DE-Juel1)JSC-20090406 |k JSC |l Jülich Supercomputing Centre |x 0 |
| 980 | _ | _ | |a contrib |
| 980 | _ | _ | |a VDB |
| 980 | _ | _ | |a UNRESTRICTED |
| 980 | _ | _ | |a contb |
| 980 | _ | _ | |a I:(DE-Juel1)JSC-20090406 |
| 980 | 1 | _ | |a FullTexts |
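The "transparent offloading" model described in the abstract (field 520) relies on standard Virtual-Kubelet mechanics: a virtual node represents the remote HPC/HTC backend, and Pods are steered onto it via ordinary scheduling constraints. The following manifest is a minimal sketch of that pattern; the node name and taint key are illustrative assumptions, not values taken from the interLink documentation.

```yaml
# Hypothetical sketch: a Pod offloaded to a remote backend through a
# Virtual-Kubelet-style node. Node name and taint key are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: offloaded-job
spec:
  # Pin the Pod to the virtual node that fronts the remote backend.
  nodeSelector:
    kubernetes.io/hostname: my-virtual-node
  # Virtual nodes are typically tainted so that only Pods which
  # explicitly opt in are scheduled onto them.
  tolerations:
    - key: virtual-node.example/no-schedule
      operator: Exists
  containers:
    - name: payload
      image: busybox:1.36
      command: ["sh", "-c", "echo running on the remote backend"]
```

Because the payload is an ordinary Pod spec, existing cloud-native tooling (controllers, operators, batch frameworks) can target the remote resource without modification, which is the point of exposing the backends behind the Kubernetes API.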