Contribution to a conference proceedings/Contribution to a book FZJ-2021-02430

Detecting Disaster Before It Strikes: On the Challenges of Automated Building and Testing in HPC Environments


2021
Springer International Publishing Cham
ISBN: 978-3-030-66057-4

Tools for High Performance Computing 2018 / 2019 / Mix, Hartmut (Editor) ; Cham : Springer International Publishing, 2021, Chapter 1 ; ISBN: 978-3-030-66056-7 ; doi:10.1007/978-3-030-66057-4
12th International Parallel Tools Workshop, Stuttgart, Germany, 17 Sep 2018 - 18 Sep 2018
Cham : Springer International Publishing, pp. 3-26 (2021) [10.1007/978-3-030-66057-4_1]

Please use a persistent id in citations:   doi:10.1007/978-3-030-66057-4_1

Abstract: Software reliability is one of the cornerstones of any successful user experience. Software needs to build up the users’ trust in its fitness for a specific purpose. Software failures undermine this trust and add to user frustration that will ultimately lead to a termination of usage. Even beyond user expectations of the robustness of a software package, today’s scientific software is more than a temporary research prototype. It also forms the bedrock for successful scientific research in the future. A well-defined software engineering process that includes automated builds and tests is a key enabler for keeping software reliable in an agile scientific environment and should be of vital interest to any scientific software development team. While automated builds and deployment as well as systematic software testing have become common practice when developing software in industry, they are rarely used for scientific software, including tools. Potential reasons are that (1) in contrast to computer scientists, domain scientists from other fields are usually never exposed to such techniques during their training, (2) building up the necessary infrastructure is often considered overhead that distracts from the real science, (3) interdisciplinary research teams are still rare, and (4) high-performance computing systems and their programming environments are less standardized, such that published recipes can often not be applied without heavy modification. In this work, we present the various challenges we encountered while setting up an automated building and testing infrastructure for the Score-P, Scalasca, and Cube projects. We outline our current approaches, alternatives that have been considered, and the remaining open issues that still need to be addressed to further increase the software quality and thus ultimately improve the user experience.


Contributing Institute(s):
  1. Jülich Supercomputing Center (JSC)
Research Program(s):
  1. 511 - Enabling Computational- & Data-Intensive Science and Engineering (POF4-511)
  2. ATMLPP - ATML Parallel Performance (ATMLPP)

Appears in the scientific report 2021
Database coverage:
OpenAccess

The record appears in these collections:
Document types > Events > Contributions to a conference proceedings
Document types > Books > Contribution to a book
Workflow collections > Public records
Institute Collections > JSC
Publications database
Open Access

Record created 2021-05-31, last modified 2025-03-14


OpenAccess:
Download fulltext PDF
External link:
Fulltext via OpenAccess repository