001048764 001__ 1048764
001048764 005__ 20251217202226.0
001048764 020__ $$a978-989-758-728-3
001048764 0247_ $$2doi$$a10.5220/0013167900003912
001048764 037__ $$aFZJ-2025-04879
001048764 041__ $$aEnglish
001048764 1001_ $$0P:(DE-Juel1)190396$$aWang, Qin$$b0$$ufzj
001048764 1112_ $$a20th International Conference on Computer Vision Theory and Applications$$cPorto$$d2025-02-26 - 2025-02-28$$wPortugal
001048764 245__ $$aRescuing Easy Samples in Self-Supervised Pretraining
001048764 260__ $$bSCITEPRESS - Science and Technology Publications$$c2025
001048764 29510 $$aProceedings of the 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - SCITEPRESS - Science and Technology Publications, 2025. - ISBN 978-989-758-728-3 - doi:10.5220/0013167900003912
001048764 300__ $$a400-409
001048764 3367_ $$2ORCID$$aCONFERENCE_PAPER
001048764 3367_ $$033$$2EndNote$$aConference Paper
001048764 3367_ $$2BibTeX$$aINPROCEEDINGS
001048764 3367_ $$2DRIVER$$aconferenceObject
001048764 3367_ $$2DataCite$$aOutput Types/Conference Paper
001048764 3367_ $$0PUB:(DE-HGF)8$$2PUB:(DE-HGF)$$aContribution to a conference proceedings$$bcontrib$$mcontrib$$s1765992935_19331
001048764 3367_ $$0PUB:(DE-HGF)7$$2PUB:(DE-HGF)$$aContribution to a book$$mcontb
001048764 520__ $$aMany recent self-supervised pretraining methods use augmented versions of the same image as samples for their learning schemes. We observe that ’easy’ samples, i.e., samples that are too similar to each other after augmentation, have only limited value as a learning signal. We therefore propose to rescue easy samples and make them harder. To do so, we select the top-k easiest samples using cosine similarity, strongly augment them, forward-pass them through the model, calculate the cosine similarity of the output as a loss, and add it to the original loss in a weighted fashion. This method can be applied to all contrastive or other augmented-pair-based learning methods, whether they involve negative pairs or not, as it only changes the handling of easy positives. This simple but effective approach introduces greater variability into such self-supervised pretraining processes, significantly increasing performance on various downstream tasks, as observed in our experiments. We pretrain models of different sizes, i.e. ResNet-50, ViT-S, ViT-B, or ViT-L, using ImageNet with SimCLR, MoCo v3, or DINOv2 training schemes. Here, e.g., we consistently find improved results for ImageNet top-1 accuracy with a linear classifier, establishing new SOTA for this task.
001048764 536__ $$0G:(DE-HGF)POF4-5112$$a5112 - Cross-Domain Algorithms, Tools, Methods Labs (ATMLs) and Research Groups (POF4-511)$$cPOF4-511$$fPOF IV$$x0
001048764 536__ $$0G:(DE-HGF)POF4-5111$$a5111 - Domain-Specific Simulation & Data Life Cycle Labs (SDLs) and Research Groups (POF4-511)$$cPOF4-511$$fPOF IV$$x1
001048764 536__ $$0G:(DE-Juel1)Helmholtz-SLNS$$aSLNS - SimLab Neuroscience (Helmholtz-SLNS)$$cHelmholtz-SLNS$$x2
001048764 588__ $$aDataset connected to CrossRef Conference
001048764 7001_ $$0P:(DE-Juel1)129347$$aKrajsek, Kai$$b1$$ufzj
001048764 7001_ $$0P:(DE-Juel1)129394$$aScharr, Hanno$$b2$$ufzj
001048764 770__ $$aSCITEPRESS - Science and Technology Publications
001048764 773__ $$a10.5220/0013167900003912$$p400 - 409$$y2025
001048764 8564_ $$uhttps://www.scitepress.org/Link.aspx?doi=10.5220/0013167900003912
001048764 8564_ $$uhttps://juser.fz-juelich.de/record/1048764/files/131679.pdf$$yRestricted
001048764 909CO $$ooai:juser.fz-juelich.de:1048764$$pVDB
001048764 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)190396$$aForschungszentrum Jülich$$b0$$kFZJ
001048764 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)129347$$aForschungszentrum Jülich$$b1$$kFZJ
001048764 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)129394$$aForschungszentrum Jülich$$b2$$kFZJ
001048764 9131_ $$0G:(DE-HGF)POF4-511$$1G:(DE-HGF)POF4-510$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5112$$aDE-HGF$$bKey Technologies$$lEngineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action$$vEnabling Computational- & Data-Intensive Science and Engineering$$x0
001048764 9131_ $$0G:(DE-HGF)POF4-511$$1G:(DE-HGF)POF4-510$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5111$$aDE-HGF$$bKey Technologies$$lEngineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action$$vEnabling Computational- & Data-Intensive Science and Engineering$$x1
001048764 9141_ $$y2025
001048764 920__ $$lyes
001048764 9201_ $$0I:(DE-Juel1)IAS-8-20210421$$kIAS-8$$lDatenanalyse und Maschinenlernen$$x0
001048764 9201_ $$0I:(DE-Juel1)JSC-20090406$$kJSC$$lJülich Supercomputing Center$$x1
001048764 980__ $$acontrib
001048764 980__ $$aVDB
001048764 980__ $$acontb
001048764 980__ $$aI:(DE-Juel1)IAS-8-20210421
001048764 980__ $$aI:(DE-Juel1)JSC-20090406
001048764 980__ $$aUNRESTRICTED