000904999 001__ 904999
000904999 005__ 20220131120444.0
000904999 037__ $$aFZJ-2022-00310
000904999 041__ $$aEnglish
000904999 1001_ $$0P:(DE-Juel1)190396$$aWang, Qin$$b0$$eCorresponding author
000904999 245__ $$aDeep learning for segmentation of 3D-PLI images$$f - 2021-03-16
000904999 260__ $$c2021
000904999 300__ $$a60
000904999 3367_ $$2DataCite$$aOutput Types/Supervised Student Publication
000904999 3367_ $$02$$2EndNote$$aThesis
000904999 3367_ $$2BibTeX$$aMASTERSTHESIS
000904999 3367_ $$2DRIVER$$amasterThesis
000904999 3367_ $$0PUB:(DE-HGF)19$$2PUB:(DE-HGF)$$aMaster Thesis$$bmaster$$mmaster$$s1641825960_21106
000904999 3367_ $$2ORCID$$aSUPERVISED_STUDENT_PUBLICATION
000904999 502__ $$aMasterarbeit, RWTH Aachen, 2021$$bMasterarbeit$$cRWTH Aachen$$d2021
000904999 520__ $$a3D polarized light imaging (3D-PLI) is a neuroimaging technique used to capture high-resolution images of thin brain sections. Polarizing microscope (PM) images are acquired with 3D-PLI in order to construct three-dimensional brain models. Before reconstruction, brain tissue must be discriminated from the background in the PM images through image segmentation. Labeling PM images is time-consuming because of their ultra-high resolution. Consequently, supervised learning cannot readily be employed for PM image segmentation, because it requires a large amount of labeled data for training. Recently, self-supervised learning was proposed to alleviate the drawback of insufficient labeled data by utilizing unlabeled data. Self-supervised learning pretrains neural networks to extract image features without labeled data; the pretrained networks are then fine-tuned with supervised learning. This makes it possible to address the shortage of labeled PM images. In self-supervised learning, the tasks used for pretraining are known as “upstream tasks”, and the tasks used for fine-tuning are known as “downstream tasks”. In this thesis, we explore different self-supervised learning approaches and compare them quantitatively. Before the self-supervised learning, we first present a k-means-based image clustering method in which deep neural networks are employed for feature vector extraction. In this way, the clustering method can be used to identify similar images, avoiding the need to annotate similar images manually. Furthermore, to address the lack of training data and make full use of the unlabeled dataset, we implement several self-supervised learning methods and compare their Dice coefficients to a baseline model. The self-supervised learning methods we present have two parts. The first is pretext-based self-supervised learning, for which we describe several upstream tasks, for example rotation, jigsaw, and inpainting, and run experiments on the Pascal VOC dataset and the PM image dataset. The second part presents a contrastive learning method, which is evaluated through ablation experiments.
000904999 536__ $$0G:(DE-HGF)POF4-5111$$a5111 - Domain-Specific Simulation & Data Life Cycle Labs (SDLs) and Research Groups (POF4-511)$$cPOF4-511$$fPOF IV$$x0
000904999 536__ $$0G:(DE-Juel1)Helmholtz-SLNS$$aSLNS - SimLab Neuroscience (Helmholtz-SLNS)$$cHelmholtz-SLNS$$x1
000904999 909CO $$ooai:juser.fz-juelich.de:904999$$pVDB
000904999 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)190396$$aForschungszentrum Jülich$$b0$$kFZJ
000904999 9131_ $$0G:(DE-HGF)POF4-511$$1G:(DE-HGF)POF4-510$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5111$$aDE-HGF$$bKey Technologies$$lEngineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action$$vEnabling Computational- & Data-Intensive Science and Engineering$$x0
000904999 9141_ $$y2021
000904999 920__ $$lyes
000904999 9201_ $$0I:(DE-Juel1)JSC-20090406$$kJSC$$lJülich Supercomputing Center$$x0
000904999 980__ $$amaster
000904999 980__ $$aVDB
000904999 980__ $$aI:(DE-Juel1)JSC-20090406
000904999 980__ $$aUNRESTRICTED