001045982 001__ 1045982
001045982 005__ 20251023202106.0
001045982 0247_ $$2arXiv$$aarXiv:2503.08333
001045982 0247_ $$2doi$$a10.48550/arXiv.2503.08333
001045982 037__ $$aFZJ-2025-03641
001045982 088__ $$2arXiv$$aarXiv:2503.08333
001045982 1001_ $$0P:(DE-Juel1)188471$$aQuercia, Alessio$$b0$$ufzj
001045982 245__ $$a1LoRA: Summation Compression for Very Low-Rank Adaptation
001045982 260__ $$barXiv$$c2025
001045982 3367_ $$0PUB:(DE-HGF)25$$2PUB:(DE-HGF)$$aPreprint$$bpreprint$$mpreprint$$s1761201799_20500
001045982 3367_ $$2ORCID$$aWORKING_PAPER
001045982 3367_ $$028$$2EndNote$$aElectronic Article
001045982 3367_ $$2DRIVER$$apreprint
001045982 3367_ $$2BibTeX$$aARTICLE
001045982 3367_ $$2DataCite$$aOutput Types/Working Paper
001045982 520__ $$aParameter-Efficient Fine-Tuning (PEFT) methods have transformed the approach to fine-tuning large models for downstream tasks by enabling the adjustment of significantly fewer parameters than those in the original model matrices. In this work, we study the 'very low rank regime', where we fine-tune the fewest parameters per linear layer for each considered PEFT method. We propose 1LoRA (Summation Low-Rank Adaptation), a compute-, parameter-, and memory-efficient fine-tuning method which uses the feature sum as fixed compression and a single trainable vector as decompression. Unlike state-of-the-art PEFT methods such as LoRA, VeRA, and the recent MoRA, 1LoRA uses fewer parameters per layer, reducing the memory footprint and the computational cost. We extensively evaluate our method against state-of-the-art PEFT methods on multiple fine-tuning tasks and show that our method not only outperforms them, but is also more parameter-, memory-, and computationally efficient. Moreover, thanks to its memory efficiency, 1LoRA allows fine-tuning more evenly across layers, instead of focusing on specific ones (e.g. attention layers), improving performance further.
001045982 536__ $$0G:(DE-HGF)POF4-5112$$a5112 - Cross-Domain Algorithms, Tools, Methods Labs (ATMLs) and Research Groups (POF4-511)$$cPOF4-511$$fPOF IV$$x0
001045982 536__ $$0G:(DE-Juel1)HDS-LEE-20190612$$aHDS LEE - Helmholtz School for Data Science in Life, Earth and Energy (HDS LEE) (HDS-LEE-20190612)$$cHDS-LEE-20190612$$x1
001045982 588__ $$aDataset connected to arXiv
001045982 650_7 $$2Other$$aComputer Vision and Pattern Recognition (cs.CV)
001045982 650_7 $$2Other$$aFOS: Computer and information sciences
001045982 7001_ $$0P:(DE-Juel1)199019$$aCao, Zhuo$$b1$$ufzj
001045982 7001_ $$0P:(DE-Juel1)184644$$aBangun, Arya$$b2$$ufzj
001045982 7001_ $$0P:(DE-Juel1)175101$$aPaul, Richard D.$$b3$$ufzj
001045982 7001_ $$0P:(DE-Juel1)151166$$aMorrison, Abigail$$b4$$ufzj
001045982 7001_ $$0P:(DE-Juel1)188313$$aAssent, Ira$$b5$$ufzj
001045982 7001_ $$0P:(DE-Juel1)129394$$aScharr, Hanno$$b6$$ufzj
001045982 773__ $$a10.48550/arXiv.2503.08333$$tarXiv$$y2025
001045982 909CO $$ooai:juser.fz-juelich.de:1045982$$pVDB
001045982 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)188471$$aForschungszentrum Jülich$$b0$$kFZJ
001045982 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)199019$$aForschungszentrum Jülich$$b1$$kFZJ
001045982 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)184644$$aForschungszentrum Jülich$$b2$$kFZJ
001045982 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)175101$$aForschungszentrum Jülich$$b3$$kFZJ
001045982 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)151166$$aForschungszentrum Jülich$$b4$$kFZJ
001045982 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)188313$$aForschungszentrum Jülich$$b5$$kFZJ
001045982 9101_ $$0I:(DE-588b)5008462-8$$6P:(DE-Juel1)129394$$aForschungszentrum Jülich$$b6$$kFZJ
001045982 9131_ $$0G:(DE-HGF)POF4-511$$1G:(DE-HGF)POF4-510$$2G:(DE-HGF)POF4-500$$3G:(DE-HGF)POF4$$4G:(DE-HGF)POF$$9G:(DE-HGF)POF4-5112$$aDE-HGF$$bKey Technologies$$lEngineering Digital Futures – Supercomputing, Data Management and Information Security for Knowledge and Action$$vEnabling Computational- & Data-Intensive Science and Engineering$$x0
001045982 9141_ $$y2025
001045982 920__ $$lyes
001045982 9201_ $$0I:(DE-Juel1)IAS-8-20210421$$kIAS-8$$lDatenanalyse und Maschinenlernen$$x0
001045982 9201_ $$0I:(DE-Juel1)IAS-6-20130828$$kIAS-6$$lComputational and Systems Neuroscience$$x1
001045982 980__ $$apreprint
001045982 980__ $$aVDB
001045982 980__ $$aI:(DE-Juel1)IAS-8-20210421
001045982 980__ $$aI:(DE-Juel1)IAS-6-20130828
001045982 980__ $$aUNRESTRICTED
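The abstract (field 520) describes the 1LoRA mechanism as a fixed summation compression paired with a single trainable decompression vector per linear layer. The sketch below is a minimal PyTorch reading of that description only; the class name `OneLoRALinear`, the zero initialization of the vector, and the exact placement of the rank-1 update are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class OneLoRALinear(nn.Module):
    """Hypothetical 1LoRA-style adapter around a frozen linear layer.

    Per the abstract: the input features are compressed by a fixed
    summation (equivalent to projecting onto an all-ones vector) and
    decompressed by a single trainable vector, i.e. a rank-1 update
    with only `out_features` trainable parameters per layer.
    """

    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pretrained weights frozen
        # the only trainable parameters: one vector per layer
        # (zero init is an assumption, chosen so training starts at identity)
        self.v = nn.Parameter(torch.zeros(base.out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = x.sum(dim=-1, keepdim=True)   # fixed compression: feature sum
        return self.base(x) + s * self.v  # decompression by trainable vector
```

This trains `out_features` parameters per layer, versus `r * (in_features + out_features)` for a rank-`r` LoRA, which is consistent with the abstract's claim of fewer parameters per layer than LoRA, VeRA, or MoRA.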