001     904999
005     20220131120444.0
037 _ _ |a FZJ-2022-00310
041 _ _ |a English
100 1 _ |a Wang, Qin
|0 P:(DE-Juel1)190396
|b 0
|e Corresponding author
245 _ _ |a Deep learning for segmentation of 3D-PLI images
|f - 2021-03-16
260 _ _ |c 2021
300 _ _ |a 60
336 7 _ |a Output Types/Supervised Student Publication
|2 DataCite
336 7 _ |a Thesis
|0 2
|2 EndNote
336 7 _ |a MASTERSTHESIS
|2 BibTeX
336 7 _ |a masterThesis
|2 DRIVER
336 7 _ |a Master Thesis
|b master
|m master
|0 PUB:(DE-HGF)19
|s 1641825960_21106
|2 PUB:(DE-HGF)
336 7 _ |a SUPERVISED_STUDENT_PUBLICATION
|2 ORCID
502 _ _ |a Masterarbeit, RWTH Aachen, 2021
|c RWTH Aachen
|b Masterarbeit
|d 2021
520 _ _ |a 3D polarized light imaging (3D-PLI) is a neuroimaging technique used to capture high-resolution images of thin brain sections. Polarizing microscope (PM) images acquired with 3D-PLI are used to construct three-dimensional brain models. Before construction, brain tissue must be discriminated from the background in PM images through image segmentation. Labeling PM images is time-consuming because of their ultra-high resolution. Consequently, purely supervised learning cannot be employed for PM image segmentation, because it requires a large amount of labeled data for training. Recently, self-supervised learning was proposed to alleviate the drawback of insufficient labeled data by utilizing unlabeled data. Self-supervised learning pretrains neural networks to extract image features without labeled data; the pretrained networks are then fine-tuned with supervised learning. This makes it possible to address the problem of insufficient labeled PM images. In self-supervised learning, the tasks used for pretraining are known as "upstream tasks", and the tasks used for fine-tuning are known as "downstream tasks". In this thesis, we explore different self-supervised learning approaches and compare them quantitatively. Before turning to self-supervised learning, we first present a k-means-based image clustering method in which deep neural networks are employed for feature vector extraction. In this way, the clustering method can be used to identify similar images, avoiding the need to manually annotate them. Furthermore, to address the lack of training data and make full use of the unlabeled dataset, we implement several self-supervised learning methods and compare their Dice coefficients against a baseline model. The self-supervised learning methods we present have two parts.
The first is pretext-task learning, for which we describe several upstream tasks (for example, rotation, jigsaw, and inpainting) and run experiments on the Pascal VOC dataset and the PM image dataset. A contrastive learning method is presented in the second part, in which ablation experiments are conducted for evaluation.
536 _ _ |a 5111 - Domain-Specific Simulation & Data Life Cycle Labs (SDLs) and Research Groups (POF4-511)
|0 G:(DE-HGF)POF4-5111
|c POF4-511
|f POF IV
|x 0
536 _ _ |a SLNS - SimLab Neuroscience (Helmholtz-SLNS)
|0 G:(DE-Juel1)Helmholtz-SLNS
|c Helmholtz-SLNS
|x 1
909 C O |o oai:juser.fz-juelich.de:904999
|p VDB
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 0
|6 P:(DE-Juel1)190396
913 1 _ |a DE-HGF
|b Key Technologies
|l Engineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action
|1 G:(DE-HGF)POF4-510
|0 G:(DE-HGF)POF4-511
|3 G:(DE-HGF)POF4
|2 G:(DE-HGF)POF4-500
|4 G:(DE-HGF)POF
|v Enabling Computational- & Data-Intensive Science and Engineering
|9 G:(DE-HGF)POF4-5111
|x 0
914 1 _ |y 2021
920 _ _ |l yes
920 1 _ |0 I:(DE-Juel1)JSC-20090406
|k JSC
|l Jülich Supercomputing Centre
|x 0
980 _ _ |a master
980 _ _ |a VDB
980 _ _ |a I:(DE-Juel1)JSC-20090406
980 _ _ |a UNRESTRICTED