001     1048764
005     20251217202226.0
020 _ _ |a 978-989-758-728-3
024 7 _ |a 10.5220/0013167900003912
|2 doi
037 _ _ |a FZJ-2025-04879
041 _ _ |a English
100 1 _ |a Wang, Qin
|0 P:(DE-Juel1)190396
|b 0
|u fzj
111 2 _ |a 20th International Conference on Computer Vision Theory and Applications
|c Porto
|d 2025-02-26 - 2025-02-28
|w Portugal
245 _ _ |a Rescuing Easy Samples in Self-Supervised Pretraining
260 _ _ |c 2025
|b SCITEPRESS - Science and Technology Publications
295 1 0 |a Proceedings of the 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - SCITEPRESS - Science and Technology Publications, 2025. - ISBN 978-989-758-728-3 - doi:10.5220/0013167900003912
300 _ _ |a 400-409
336 7 _ |a CONFERENCE_PAPER
|2 ORCID
336 7 _ |a Conference Paper
|0 33
|2 EndNote
336 7 _ |a INPROCEEDINGS
|2 BibTeX
336 7 _ |a conferenceObject
|2 DRIVER
336 7 _ |a Output Types/Conference Paper
|2 DataCite
336 7 _ |a Contribution to a conference proceedings
|b contrib
|m contrib
|0 PUB:(DE-HGF)8
|s 1765992935_19331
|2 PUB:(DE-HGF)
336 7 _ |a Contribution to a book
|0 PUB:(DE-HGF)7
|2 PUB:(DE-HGF)
|m contb
520 _ _ |a Many recent self-supervised pretraining methods use augmented versions of the same image as samples for their learning schemes. We observe that 'easy' samples, i.e. samples that are too similar to each other after augmentation, have only limited value as a learning signal. We therefore propose to rescue easy samples by making them harder. To do so, we select the top-k easiest samples by cosine similarity, strongly augment them, forward-pass them through the model, calculate the cosine similarity of the outputs as a loss, and add it to the original loss in a weighted fashion. This method can be applied to all contrastive or other augmented-pair-based learning methods, whether they involve negative pairs or not, as it only changes the handling of easy positives. This simple but effective approach introduces greater variability into such self-supervised pretraining processes, significantly increasing performance on various downstream tasks, as observed in our experiments. We pretrain models of different sizes, i.e. ResNet-50, ViT-S, ViT-B, and ViT-L, on ImageNet with SimCLR, MoCo v3, or DINOv2 training schemes. Here we consistently find improved ImageNet top-1 accuracy with a linear classifier, establishing a new SOTA for this task.
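520 _ _ |b The abstract describes the rescue step procedurally; the following is a minimal, hypothetical PyTorch sketch of that extra loss term, not the authors' implementation. The names model, strong_augment, k, and weight are assumptions, and the batch is assumed to contain at least k pairs of augmented views.

        # Hedged sketch of the "rescue easy samples" loss described in the abstract.
        import torch
        import torch.nn.functional as F

        def rescued_loss(model, x1, x2, base_loss, strong_augment, k=8, weight=0.5):
            """Add a weighted extra loss for the k easiest (most similar) pairs.

            x1, x2: two augmented views of the same images, shape (B, C, H, W), B >= k.
            base_loss: loss already computed by the underlying scheme (e.g. SimCLR).
            """
            with torch.no_grad():
                z1 = F.normalize(model(x1), dim=1)   # embeddings of view 1
                z2 = F.normalize(model(x2), dim=1)   # embeddings of view 2
                sim = (z1 * z2).sum(dim=1)           # per-pair cosine similarity
                easy = sim.topk(k).indices           # k easiest (most similar) pairs

            # Strongly re-augment only the easy samples and forward-pass them again.
            h1 = F.normalize(model(strong_augment(x1[easy])), dim=1)
            h2 = F.normalize(model(strong_augment(x2[easy])), dim=1)

            # Negative cosine similarity as the extra term: minimizing it pulls the
            # strongly augmented views together, so the easy pairs stay informative.
            extra = -(h1 * h2).sum(dim=1).mean()
            return base_loss + weight * extra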
536 _ _ |a 5112 - Cross-Domain Algorithms, Tools, Methods Labs (ATMLs) and Research Groups (POF4-511)
|0 G:(DE-HGF)POF4-5112
|c POF4-511
|f POF IV
|x 0
536 _ _ |a 5111 - Domain-Specific Simulation & Data Life Cycle Labs (SDLs) and Research Groups (POF4-511)
|0 G:(DE-HGF)POF4-5111
|c POF4-511
|f POF IV
|x 1
536 _ _ |a SLNS - SimLab Neuroscience (Helmholtz-SLNS)
|0 G:(DE-Juel1)Helmholtz-SLNS
|c Helmholtz-SLNS
|x 2
588 _ _ |a Dataset connected to CrossRef Conference
700 1 _ |a Krajsek, Kai
|0 P:(DE-Juel1)129347
|b 1
|u fzj
700 1 _ |a Scharr, Hanno
|0 P:(DE-Juel1)129394
|b 2
|u fzj
770 _ _ |a SCITEPRESS - Science and Technology Publications
773 _ _ |a 10.5220/0013167900003912
|p 400 - 409
|y 2025
856 4 _ |u https://www.scitepress.org/Link.aspx?doi=10.5220/0013167900003912
856 4 _ |u https://juser.fz-juelich.de/record/1048764/files/131679.pdf
|y Restricted
909 C O |o oai:juser.fz-juelich.de:1048764
|p VDB
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 0
|6 P:(DE-Juel1)190396
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 1
|6 P:(DE-Juel1)129347
910 1 _ |a Forschungszentrum Jülich
|0 I:(DE-588b)5008462-8
|k FZJ
|b 2
|6 P:(DE-Juel1)129394
913 1 _ |a DE-HGF
|b Key Technologies
|l Engineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action
|1 G:(DE-HGF)POF4-510
|0 G:(DE-HGF)POF4-511
|3 G:(DE-HGF)POF4
|2 G:(DE-HGF)POF4-500
|4 G:(DE-HGF)POF
|v Enabling Computational- & Data-Intensive Science and Engineering
|9 G:(DE-HGF)POF4-5112
|x 0
913 1 _ |a DE-HGF
|b Key Technologies
|l Engineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action
|1 G:(DE-HGF)POF4-510
|0 G:(DE-HGF)POF4-511
|3 G:(DE-HGF)POF4
|2 G:(DE-HGF)POF4-500
|4 G:(DE-HGF)POF
|v Enabling Computational- & Data-Intensive Science and Engineering
|9 G:(DE-HGF)POF4-5111
|x 1
914 1 _ |y 2025
920 _ _ |l yes
920 1 _ |0 I:(DE-Juel1)IAS-8-20210421
|k IAS-8
|l Data Analytics and Machine Learning
|x 0
920 1 _ |0 I:(DE-Juel1)JSC-20090406
|k JSC
|l Jülich Supercomputing Centre
|x 1
980 _ _ |a contrib
980 _ _ |a VDB
980 _ _ |a contb
980 _ _ |a I:(DE-Juel1)IAS-8-20210421
980 _ _ |a I:(DE-Juel1)JSC-20090406
980 _ _ |a UNRESTRICTED

