Contribution to a conference proceedings/Contribution to a book FZJ-2026-00960

Hybrid Inference Optimization for AI-Enhanced Turbulent Boundary Layer Simulation on Heterogeneous Systems


2026
ACM New York, NY, USA

Proceedings of the Supercomputing Asia and International Conference on High Performance Computing in Asia Pacific Region Workshops
SCA/HPCAsia 2026 Workshops: Supercomputing Asia and International Conference on High Performance Computing in Asia Pacific Region Workshops, SCA/HPCAsia 2026, Osaka, Japan, 26 Jan 2026 - 29 Jan 2026
New York, NY, USA : ACM, 165-176 [10.1145/3784828.3785255]


Please use a persistent id in citations: doi: 10.1145/3784828.3785255

Abstract: Active drag reduction (ADR) using spanwise traveling surface waves is a promising approach to reduce the drag of airplanes by manipulating the turbulent boundary layer (TBL) around an airfoil, which directly translates into power savings and lower greenhouse gas emissions. However, no analytical solution is known for determining the optimal actuation parameters of these surface waves from given flow conditions. Data-driven deep learning (DL) techniques from artificial intelligence (AI) are a promising alternative approach, but their training requires huge amounts of high-fidelity data from computationally expensive computational fluid dynamics (CFD) simulations. Previous works proposed a TBL-Transformer architecture for the expensive time-marching of turbulent flow fields and coupled it with a finite volume solver from the multi-physics PDE solver framework m-AIA to accelerate the generation of TBL data. To accelerate the computationally expensive inference of the TBL-Transformer, the AIxeleratorService library was used to offload the inference task to GPUs. While this approach significantly accelerates the inference task, it leaves the CPU resources allocated by the solver unutilized during inference. To fully exploit modern heterogeneous computer systems, we introduce a hybrid inference method based on a hybrid work distribution model and implement it in the AIxeleratorService library. Moreover, we present a formal model to derive the optimal hybrid work distribution. To evaluate the computational performance and scalability of hybrid inference, we benchmark the coupled m-AIA solver from previous work on a heterogeneous HPC system comprising Intel Sapphire Rapids CPUs and NVIDIA H100 GPUs. Our results show that hybrid inference achieves a performance speedup that grows as the ratio of allocated CPU cores to GPU devices increases. We further demonstrate that the runtime improvement from hybrid inference also increases the energy efficiency of the coupled solver application. Finally, we highlight that the theoretical hybrid work distribution derived from our formal model yields near-optimal results in practice.
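The core idea of the hybrid work distribution described in the abstract can be illustrated with a minimal sketch: split each inference batch between the CPU and GPU partitions in proportion to their measured throughputs, so that both finish at roughly the same time. All function names and numbers below are illustrative assumptions, not the actual AIxeleratorService API or the paper's formal model.

```python
# Hypothetical sketch of a throughput-proportional hybrid work split.
# Assumption: cpu_throughput and gpu_throughput are measured inference
# rates (samples/second) for the CPU and GPU partitions, respectively.

def hybrid_split(n_samples: int, cpu_throughput: float,
                 gpu_throughput: float) -> tuple[int, int]:
    """Return (cpu_share, gpu_share) such that
    cpu_share / cpu_throughput ~= gpu_share / gpu_throughput,
    i.e. both partitions finish their share at about the same time."""
    total = cpu_throughput + gpu_throughput
    cpu_share = round(n_samples * cpu_throughput / total)
    return cpu_share, n_samples - cpu_share

# Example: the GPU partition is 9x faster than the CPU partition,
# so the CPU receives roughly 10% of the batch.
cpu_n, gpu_n = hybrid_split(1000, cpu_throughput=100.0, gpu_throughput=900.0)
print(cpu_n, gpu_n)  # 100 900
```

A split of this form is what makes the idle CPU cores productive during inference: instead of waiting for the GPU result, each partition processes the fraction of the batch matched to its speed.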


Contributing Institute(s):
  1. Jülich Supercomputing Center (JSC)
Research Program(s):
  1. 5111 - Domain-Specific Simulation & Data Life Cycle Labs (SDLs) and Research Groups (POF4-511)
  2. SDLFSE - SDL Fluids & Solids Engineering (SDLFSE)
  3. RAISE - Research on AI- and Simulation-Based Engineering at Exascale (951733)

Appears in the scientific report 2026
Database coverage:
OpenAccess

The record appears in these collections:
Document types > Events > Contributions to a conference proceedings
Document types > Books > Contribution to a book
Workflow collections > Public records
Institute Collections > JSC
Online First

 Record created 2026-01-23, last modified 2026-01-27


OpenAccess:
Download fulltext PDF