001     841390
005     20210129232018.0
020 _ _ |a 978-9935-9383-2-9
037 _ _ |a FZJ-2017-08465
041 _ _ |a English
100 1 _ |a Götz, Markus
|0 P:(DE-Juel1)162390
|b 0
|e Corresponding author
|u fzj
245 _ _ |a Scalable Data Analysis in High Performance Computing
|f 2014-04-01 - 2017-12-05
260 _ _ |a Reykjavik
|c 2017
|b Háskólaprent, University of Iceland
300 _ _ |a 156 p.
336 7 _ |a Output Types/Dissertation
|2 DataCite
336 7 _ |a Book
|0 PUB:(DE-HGF)3
|2 PUB:(DE-HGF)
|m book
336 7 _ |a DISSERTATION
|2 ORCID
336 7 _ |a PHDTHESIS
|2 BibTeX
336 7 _ |a Thesis
|0 2
|2 EndNote
336 7 _ |a Dissertation / PhD Thesis
|b phd
|m phd
|0 PUB:(DE-HGF)11
|s 1513673730_27837
|2 PUB:(DE-HGF)
336 7 _ |a doctoralThesis
|2 DRIVER
502 _ _ |a Dissertation, University of Iceland, 2017
|c University of Iceland
|b Dissertation
|d 2017
|o 2017-12-05
520 _ _ |a Over the last decades, the generation and storage of data have increased drastically in both industry and science. While the field of data analysis is not new, it now faces the challenge of coping with the growing size, bandwidth and complexity of data, which renders traditional analysis methods and algorithms ineffective. This problem has been coined the Big Data challenge. In science, the major data producers are large-scale monolithic experiments and the outputs of domain simulations. Until now, most of this data has not been fully analyzed but rather stored in data repositories for later consideration, owing to the lack of efficient means of processing it. As a consequence, there is a need for large-scale data analysis frameworks and algorithm libraries that allow these datasets to be studied. In the context of scientific applications, potentially coupled with legacy simulations, the designated target platforms are heterogeneous high-performance computing systems. This thesis proposes a design and prototypical realization of such a framework, based on the experience collected from empirical applications. For this purpose, selected scientific use cases, with an emphasis on the earth sciences, were studied: object segmentation in point-cloud data and biological imagery, outlier detection in oceanographic time-series data, and land-cover type classification in remote-sensing images. In order to cope with the data volumes, two analysis algorithms were parallelized for shared- and distributed-memory systems: HPDBSCAN, a density-based clustering algorithm, and Distributed Max-Trees, a filtering step for images. The presented parallelization strategies have been abstracted into a generalized paradigm that enables the formulation of scalable variants of similar analysis methods. Moreover, the paradigm permits the definition of requirements for the design of a large-scale data analysis framework and algorithm library for heterogeneous, distributed high-performance computing systems. In line with this, the thesis presents a prototypical realization called the Juelich Machine Learning Library (JuML), which provides essential low-level components and readily usable analysis algorithm implementations.
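To make the clustering step named in the abstract concrete, the following is a minimal, sequential sketch of density-based clustering in Python. It illustrates only the standard DBSCAN scheme that HPDBSCAN parallelizes for shared- and distributed-memory systems; it is not the thesis's HPDBSCAN or part of JuML, all function names are illustrative, and only the conventional DBSCAN parameters eps and min_pts are assumed.

# Minimal, sequential DBSCAN sketch (illustrative only; not the thesis's
# HPDBSCAN, which parallelizes this scheme for shared and distributed memory).
from math import dist

NOISE = -1
UNVISITED = None

def region_query(points, idx, eps):
    """Return indices of all points within distance eps of points[idx]."""
    return [j for j, p in enumerate(points) if dist(points[idx], p) <= eps]

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id, or NOISE (-1) for outliers."""
    labels = [UNVISITED] * len(points)
    cluster_id = 0
    for i in range(len(points)):
        if labels[i] is not UNVISITED:
            continue
        neighbours = region_query(points, i, eps)
        if len(neighbours) < min_pts:
            labels[i] = NOISE              # not a core point
            continue
        labels[i] = cluster_id             # start a new cluster at this core point
        seeds = list(neighbours)
        while seeds:
            j = seeds.pop()
            if labels[j] == NOISE:
                labels[j] = cluster_id     # border point reached from a core point
            if labels[j] is not UNVISITED:
                continue
            labels[j] = cluster_id
            j_neighbours = region_query(points, j, eps)
            if len(j_neighbours) >= min_pts:
                seeds.extend(j_neighbours) # expand the cluster through core points
        cluster_id += 1
    return labels

# Example: two dense groups and one isolated outlier.
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (9.0, 9.0)]
print(dbscan(pts, eps=0.5, min_pts=2))     # [0, 0, 0, 1, 1, -1]

The quadratic neighbourhood search above is what makes the naive formulation impractical at scale; HPDBSCAN's contribution, as described in the abstract, lies in distributing and indexing this work across HPC nodes.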
536 _ _ |a 512 - Data-Intensive Science and Federated Computing (POF3-512)
|0 G:(DE-HGF)POF3-512
|c POF3-512
|f POF III
|x 0
536 _ _ |0 G:(DE-Juel1)PHD-NO-GRANT-20170405
|x 1
|c PHD-NO-GRANT-20170405
|a PhD no Grant - doctoral researcher without special funding (PHD-NO-GRANT-20170405)
856 4 _ |u https://hdl.handle.net/20.500.11815/472
909 C O |o oai:juser.fz-juelich.de:841390
|p VDB
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 0
|6 P:(DE-Juel1)162390
913 1 _ |a DE-HGF
|b Key Technologies
|l Supercomputing & Big Data
|1 G:(DE-HGF)POF3-510
|0 G:(DE-HGF)POF3-512
|2 G:(DE-HGF)POF3-500
|v Data-Intensive Science and Federated Computing
|x 0
|4 G:(DE-HGF)POF
|3 G:(DE-HGF)POF3
914 1 _ |y 2017
920 1 _ |0 I:(DE-Juel1)JSC-20090406
|k JSC
|l Jülich Supercomputing Centre
|x 0
980 _ _ |a phd
980 _ _ |a VDB
980 _ _ |a book
980 _ _ |a I:(DE-Juel1)JSC-20090406
980 _ _ |a UNRESTRICTED

