001033893 001__ 1033893
001033893 005__ 20241217215531.0
001033893 0247_ $$2doi$$a10.48550/ARXIV.2409.17085
001033893 0247_ $$2datacite_doi$$a10.34734/FZJ-2024-06731
001033893 037__ $$aFZJ-2024-06731
001033893 1001_ $$0P:(DE-Juel1)175101$$aPaul, Richard D.$$b0$$eCorresponding author$$ufzj
001033893 245__ $$aParameter-efficient Bayesian Neural Networks for Uncertainty-aware Depth Estimation
001033893 260__ $$barXiv$$c2024
001033893 3367_ $$0PUB:(DE-HGF)25$$2PUB:(DE-HGF)$$aPreprint$$bpreprint$$mpreprint$$s1734418575_31226
001033893 3367_ $$2ORCID$$aWORKING_PAPER
001033893 3367_ $$028$$2EndNote$$aElectronic Article
001033893 3367_ $$2DRIVER$$apreprint
001033893 3367_ $$2BibTeX$$aARTICLE
001033893 3367_ $$2DataCite$$aOutput Types/Working Paper
001033893 500__ $$aPresented as an Extended Abstract at the 3rd Workshop on Uncertainty Quantification for Computer Vision at ECCV 2024.
001033893 520__ $$aState-of-the-art computer vision tasks, like monocular depth estimation (MDE), rely heavily on large, modern Transformer-based architectures. However, their application in safety-critical domains demands reliable predictive performance and uncertainty quantification. While Bayesian neural networks provide a conceptually simple approach to meet those requirements, they suffer from the high dimensionality of the parameter space. Parameter-efficient fine-tuning (PEFT) methods, in particular low-rank adaptations (LoRA), have emerged as a popular strategy for adapting large-scale models to downstream tasks by performing parameter inference on lower-dimensional subspaces. In this work, we investigate the suitability of PEFT methods for subspace Bayesian inference in large-scale Transformer-based vision models. We show that, indeed, combining BitFit, DiffFit, LoRA, and CoLoRA, a novel LoRA-inspired PEFT method, with Bayesian inference enables more robust and reliable predictive performance in MDE.
001033893 536__ $$0G:(DE-HGF)POF4-5112$$a5112 - Cross-Domain Algorithms, Tools, Methods Labs (ATMLs) and Research Groups (POF4-511)$$cPOF4-511$$fPOF IV$$x0
001033893 588__ $$aDataset connected to DataCite
001033893 650_7 $$2Other$$aComputer Vision and Pattern Recognition (cs.CV)
001033893 650_7 $$2Other$$aMachine Learning (stat.ML)
001033893 650_7 $$2Other$$aFOS: Computer and information sciences
001033893 7001_ $$0P:(DE-Juel1)188471$$aQuercia, Alessio$$b1$$ufzj
001033893 7001_ $$0P:(DE-HGF)0$$aFortuin, Vincent$$b2
001033893 7001_ $$0P:(DE-Juel1)129051$$aNöh, Katharina$$b3$$ufzj
001033893 7001_ $$0P:(DE-Juel1)129394$$aScharr, Hanno$$b4$$ufzj
001033893 773__ $$a10.48550/ARXIV.2409.17085
001033893 8564_ $$uhttps://arxiv.org/abs/2409.17085
001033893 8564_ $$uhttps://juser.fz-juelich.de/record/1033893/files/2409.17085v1.pdf$$yOpenAccess
001033893 909CO $$ooai:juser.fz-juelich.de:1033893$$popenaire$$popen_access$$pVDB$$pdriver$$pdnbdelivery
001033893 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)175101$$aForschungszentrum Jülich$$b0$$kFZJ
001033893 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)188471$$aForschungszentrum Jülich$$b1$$kFZJ
001033893 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)129051$$aForschungszentrum Jülich$$b3$$kFZJ
001033893 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)129394$$aForschungszentrum Jülich$$b4$$kFZJ
001033893 9131_ $$0G:(DE-HGF)POF4-511$$1G:(DE-HGF)POF4-510$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5112$$aDE-HGF$$bKey Technologies$$lEngineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action$$vEnabling Computational- & Data-Intensive Science and Engineering$$x0
001033893 9141_ $$y2024
001033893 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
001033893 920__ $$lyes
001033893 9201_ $$0I:(DE-Juel1)IAS-8-20210421$$kIAS-8$$lDatenanalyse und Maschinenlernen$$x0
001033893 9201_ $$0I:(DE-Juel1)IBG-1-20101118$$kIBG-1$$lBiotechnologie$$x1
001033893 980__ $$apreprint
001033893 980__ $$aVDB
001033893 980__ $$aUNRESTRICTED
001033893 980__ $$aI:(DE-Juel1)IAS-8-20210421
001033893 980__ $$aI:(DE-Juel1)IBG-1-20101118
001033893 9801_ $$aFullTexts