The missing link between massive data and AI: parallel computing with Heat
Conference Presentation (After Call) | FZJ-2023-01239
2022
Please use a persistent ID in citations: doi:10.5281/zenodo.7637978
Abstract: When it comes to enhancing the exploitation of massive data, machine learning and AI methods are very much at the forefront of our awareness. Much less so is the need for, and complexity of, applying these techniques efficiently across memory-distributed data volumes. Heat [1, 2] is an open-source Python library for high-performance data analytics, machine learning, and deep learning. It provides highly optimized algorithms and data structures for tensor computations on CPUs, GPUs, and distributed cluster systems. Heat's NumPy-like API makes writing scalable, GPU-accelerated applications straightforward; at the same time, the parallelism implemented under the hood via MPI yields a significant improvement in efficiency and performance compared to, e.g., Dask. Born out of a large-scale collaboration in the applied sciences, Heat also acts as a platform for collaboration and knowledge transfer within data-intensive science. In this presentation, I will show you the inner workings of the library, tell you about our collaborations with the astrophysics and space science community (among others, massively parallel signal-processing capabilities for the SKA-MPG telescope), and hopefully gain from you some insight into how to best support data-intensive astro operations going forward.
References:
[1] Götz, M., Debus, C., Coquelin, et al.: "HeAT - a Distributed and GPU-accelerated Tensor Framework for Data Analytics"
[2] https://github.com/helmholtz-analytics/heat
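To give a flavour of the NumPy-like, memory-distributed API described in the abstract, here is a minimal sketch (not part of the original record; the array size and script name are illustrative):

```python
# Minimal sketch of Heat's NumPy-like, memory-distributed API.
# Launch under MPI, e.g.: mpirun -n 4 python heat_demo.py
import heat as ht

# Create a tensor distributed across MPI processes along axis 0
# ("split=0"); each process holds only a contiguous local chunk.
x = ht.arange(1_000_000, dtype=ht.float32, split=0)

# Operations mirror NumPy; the MPI communication this requires
# (here, a global reduction) happens under the hood.
print(x.mean())
```

Passing device="gpu" when creating a tensor places the local chunks on GPUs, with PyTorch serving as Heat's node-local compute backend.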
Keyword(s): memory-distributed computing; parallel computing; data-intensive science; Big Data Analytics; Python; Message Passing Interface; PyTorch; NumPy; machine learning