001037598 001__ 1037598
001037598 005__ 20250312202213.0
001037598 0247_ $$2doi$$a10.3389/frsen.2024.1480101
001037598 0247_ $$2datacite_doi$$a10.34734/FZJ-2025-00769
001037598 0247_ $$2WOS$$aWOS:001380631400001
001037598 037__ $$aFZJ-2025-00769
001037598 082__ $$a600
001037598 1001_ $$0P:(DE-Juel1)186635$$aPatnala, Ankit$$b0$$eCorresponding author$$ufzj
001037598 245__ $$aBi-modal contrastive learning for crop classification using Sentinel-2 and Planetscope
001037598 260__ $$aLausanne$$bFrontiers Media$$c2024
001037598 3367_ $$2DRIVER$$aarticle
001037598 3367_ $$2DataCite$$aOutput Types/Journal article
001037598 3367_ $$0PUB:(DE-HGF)16$$2PUB:(DE-HGF)$$aJournal Article$$bjournal$$mjournal$$s1737374917_6056
001037598 3367_ $$2BibTeX$$aARTICLE
001037598 3367_ $$2ORCID$$aJOURNAL_ARTICLE
001037598 3367_ $$00$$2EndNote$$aJournal Article
001037598 520__ $$aRemote sensing has enabled large-scale crop classification for understanding agricultural ecosystems and estimating production yields. In recent years, machine learning has become increasingly relevant for automated crop classification. However, existing algorithms require large amounts of annotated data. Self-supervised learning, which enables training on unlabeled data, has great potential to overcome the annotation problem. Contrastive learning, a self-supervised approach based on instance discrimination, has shown promising results on natural as well as remote sensing images. Crop data often consist of field parcels or sets of pixels from small spatial regions, and temporal patterns must be taken into account to label crops correctly. Hence, standard approaches for land-cover classification cannot be applied. In this work, we propose two contrastive self-supervised learning approaches to obtain a pre-trained model for crop classification without the need for labeled data. First, we adopt the uni-modal contrastive method SCARF; second, we use a bi-modal approach that contrasts Sentinel-2 and Planetscope data instead of the standard transformations developed for natural images, thereby accommodating the spectral characteristics of crop pixels. Evaluation in three regions of Germany and France shows that crop classification with the pre-trained bi-modal model is superior to the pre-trained uni-modal method as well as to the supervised baseline models in the majority of test cases.
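The bi-modal pre-training described in the abstract can be pictured with a minimal sketch: each crop pixel yields two views, a Sentinel-2 time series and a Planetscope time series, and two encoders are trained so that matching views agree while non-matching pairs in the batch are pushed apart (an InfoNCE-style objective). The PyTorch code below is an illustrative sketch only; the encoder architecture, embedding size, band/date counts, and the symmetric InfoNCE loss are assumptions made for this example, not details taken from the record.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelEncoder(nn.Module):
    # Hypothetical MLP encoder for one modality's flattened pixel time series.
    def __init__(self, in_dim, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )

    def forward(self, x):
        # L2-normalize so the dot products below are cosine similarities.
        return F.normalize(self.net(x), dim=-1)

def bimodal_info_nce(z_s2, z_ps, temperature=0.07):
    # Symmetric InfoNCE: the matching Sentinel-2/Planetscope pair of each
    # pixel is the positive; all other pairs in the batch act as negatives.
    logits = z_s2 @ z_ps.t() / temperature       # (B, B) similarity matrix
    targets = torch.arange(z_s2.size(0))         # positives on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Toy usage with assumed shapes: 10 Sentinel-2 bands x 12 dates and
# 4 Planetscope bands x 25 dates, flattened per pixel.
enc_s2, enc_ps = PixelEncoder(10 * 12), PixelEncoder(4 * 25)
x_s2, x_ps = torch.randn(32, 120), torch.randn(32, 100)
loss = bimodal_info_nce(enc_s2(x_s2), enc_ps(x_ps))
loss.backward()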
001037598 536__ $$0G:(DE-HGF)POF4-5111$$a5111 - Domain-Specific Simulation & Data Life Cycle Labs (SDLs) and Research Groups (POF4-511)$$cPOF4-511$$fPOF IV$$x0
001037598 588__ $$aDataset connected to CrossRef, Journals: juser.fz-juelich.de
001037598 7001_ $$0P:(DE-HGF)0$$aStadtler, Scarlet$$b1
001037598 7001_ $$0P:(DE-Juel1)6952$$aSchultz, Martin G.$$b2$$ufzj
001037598 7001_ $$0P:(DE-HGF)0$$aGall, Juergen$$b3
001037598 773__ $$0PERI:(DE-600)3091289-1$$a10.3389/frsen.2024.1480101$$gVol. 5, p. 1480101$$p1480101$$tFrontiers in remote sensing$$v5$$x2673-6187$$y2024
001037598 8564_ $$uhttps://juser.fz-juelich.de/record/1037598/files/frsen-1-1480101.pdf$$yOpenAccess
001037598 8767_ $$d2025-03-12$$eAPC$$jDeposit$$zUSD 1806.25
001037598 909CO $$ooai:juser.fz-juelich.de:1037598$$pVDB$$pdriver$$pOpenAPC$$popen_access$$popenaire$$popenCost$$pdnbdelivery
001037598 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)186635$$aForschungszentrum Jülich$$b0$$kFZJ
001037598 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)6952$$aForschungszentrum Jülich$$b2$$kFZJ
001037598 9131_ $$0G:(DE-HGF)POF4-511$$1G:(DE-HGF)POF4-510$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5111$$aDE-HGF$$bKey Technologies$$lEngineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action$$vEnabling Computational- & Data-Intensive Science and Engineering$$x0
001037598 9141_ $$y2024
001037598 915__ $$0StatID:(DE-HGF)0200$$2StatID$$aDBCoverage$$bSCOPUS$$d2024-12-13
001037598 915__ $$0LIC:(DE-HGF)CCBY4$$2HGFVOC$$aCreative Commons Attribution CC BY 4.0
001037598 915__ $$0StatID:(DE-HGF)0501$$2StatID$$aDBCoverage$$bDOAJ Seal$$d2024-01-17T19:06:03Z
001037598 915__ $$0StatID:(DE-HGF)0112$$2StatID$$aWoS$$bEmerging Sources Citation Index$$d2024-12-13
001037598 915__ $$0StatID:(DE-HGF)0500$$2StatID$$aDBCoverage$$bDOAJ$$d2024-01-17T19:06:03Z
001037598 915__ $$0StatID:(DE-HGF)0150$$2StatID$$aDBCoverage$$bWeb of Science Core Collection$$d2024-12-13
001037598 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
001037598 915__ $$0StatID:(DE-HGF)0030$$2StatID$$aPeer Review$$bDOAJ : Anonymous peer review$$d2024-01-17T19:06:03Z
001037598 915__ $$0StatID:(DE-HGF)0561$$2StatID$$aArticle Processing Charges$$d2024-12-13
001037598 915__ $$0StatID:(DE-HGF)0700$$2StatID$$aFees$$d2024-12-13
001037598 915__ $$0StatID:(DE-HGF)0199$$2StatID$$aDBCoverage$$bClarivate Analytics Master Journal List$$d2024-12-13
001037598 915pc $$0PC:(DE-HGF)0000$$2APC$$aAPC keys set
001037598 915pc $$0PC:(DE-HGF)0001$$2APC$$aLocal Funding
001037598 915pc $$0PC:(DE-HGF)0002$$2APC$$aDFG OA Publikationskosten
001037598 915pc $$0PC:(DE-HGF)0003$$2APC$$aDOAJ Journal
001037598 920__ $$lyes
001037598 9201_ $$0I:(DE-Juel1)JSC-20090406$$kJSC$$lJülich Supercomputing Centre$$x0
001037598 9801_ $$aFullTexts
001037598 980__ $$ajournal
001037598 980__ $$aVDB
001037598 980__ $$aUNRESTRICTED
001037598 980__ $$aI:(DE-Juel1)JSC-20090406
001037598 980__ $$aAPC