Awesome-LoRA is a collection of state-of-the-art (SOTA) and novel low-rank adaptation methods (papers, code, and datasets). Other interesting papers and code are welcome; for any problems, please contact jiyuheng2023@ia.ac.cn. If you find this repository useful for your research or work, a star would be greatly appreciated. ✨

LoRA is a parameter-efficient fine-tuning technique proposed by Microsoft researchers to adapt large models to specific tasks and datasets: the pretrained weights are frozen and small trainable low-rank matrices are injected into the target layers (see the minimal sketch after the table below).

The pioneering paper:

Year | Title | Venue | Paper | Code |
---|---|---|---|---|
2022 | LoRA: Low-Rank Adaptation of Large Language Models | ICLR | Link | Link |
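
Below is a minimal, illustrative PyTorch sketch of the update LoRA trains, y = W x + (alpha / r) * B A x, with the pretrained weight frozen; the class and argument names (LoRALinear, rank, alpha) are ours for illustration, not from any particular library.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank update B @ A."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pretrained weight (and bias)
            p.requires_grad_(False)
        # A gets a small random init, B starts at zero, so B @ A = 0 at the start
        # and the wrapped layer initially behaves exactly like the pretrained one.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


# Example: adapt a 768-dimensional projection; only lora_A and lora_B receive gradients.
layer = LoRALinear(nn.Linear(768, 768), rank=8, alpha=16)
out = layer(torch.randn(2, 768))
```

After training, the low-rank update can be merged into the base weight (W + (alpha / r) * B A), so inference adds no extra latency.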

Survey:

Year | Title | Venue | Paper | Code |
---|---|---|---|---|
2024 | A Survey on LoRA of Large Language Models | arXiv | Link | - |

The papers below are listed by year, with brief keywords summarizing each method:

Year | Title | Venue | Paper | Code | Keywords |
---|---|---|---|---|---|
2024 | ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers under Domain Shifts | arXiv | Link | - | Domain Shifts; ViT; Self-Supervised Learning; |
2024 | RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation | ICML | Link | Link | Robust Adaptation; PCA; |
2024 | FouRA: Fourier Low Rank Adaptation | arXiv | Link | - | Fourier Learning; Diffusion Models; Image Generation; |
2024 | Trans-LoRA: towards data-free Transferable Parameter Efficient Finetuning | arXiv | Link | - | Transferable Module; Deployment; |
2024 | LoRA-drop: Efficient LoRA Parameter Pruning based on Output Evaluation | arXiv | Link | - | Parameter Pruning; Parameter Evaluation; |
2024 | LoRA-Pro: Are Low-Rank Adapters Properly Optimized? | arXiv | Link | - | Optimization Process; Equivalent Gradient; |
2024 | LoRA^2: Multi-Scale Low-Rank Approximations for Fine-Tuning Large Language Models | arXiv | Link | Link | Multi-Scale; Prune; Orthogonal Projection; |
2024 | PC-LoRA: Low-Rank Adaptation for Progressive Model Compression with Knowledge Distillation | arXiv | Link | - | Model Compression; Knowledge Distillation; |
2024 | VeRA: Vector-based Random Matrix Adaptation | ICLR | Link | - | Shared-LoRA; Trainable Vectors; |
2024 | LaMDA: Large Model Fine-Tuning via Spectrally Decomposed Low-Dimensional Adaptation | arXiv | Link | Link | Multi-Step Training; Trainable Vectors; |
2024 | Prompt Tuning Strikes Back: Customizing Foundation Models with Low-Rank Prompt Adaptation | arXiv | Link | - | Prompt-Tuning-based; |
2024 | ROSA: Random Subspace Adaptation for Efficient Fine-Tuning | arXiv | Link | Link | Random Subspace Adaptation; Robust Fine-Tuning |
2024 | LoRA-GA: Low-Rank Adaptation with Gradient Approximation | arXiv | Link | Link | Gradient Approximation; Convergence; |
2024 | Efficient Pareto Manifold Learning with Low-Rank Structure | ICML | Link | - | Multi-task learning; Pareto front; |
2024 | AutoLoRa: An Automated Robust Fine-Tuning Framework | ICLR | Link | Link | Robust Fine-Tuning; Adversarial Robustness; |
2024 | LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters | arXiv | Link | Link | Scaling Language Models; SVD; |
2024 | Matrix-Transformation Based Low-Rank Adaptation (MTLoRA): A Brain-Inspired Method for Parameter-Efficient Fine-Tuning | arXiv | Link | Link | Large Pre-trained Language Models (LPLMs); Geometric Structure; |
2024 | AutoLoRA: Automatically Tuning Matrix Ranks in Low-Rank Adaptation Based on Meta Learning | arXiv | Link | Link | Meta Learning; Rank-1 Matrix |
2024 | RankAdaptor: Hierarchical Dynamic Low-Rank Adaptation for Structural Pruned LLMs | arXiv | Link | - | Structural Pruning; Hierarchical Dynamic Rank Scheduling |
2024 | LoRA-Composer: Leveraging Low-Rank Adaptation for Multi-Concept Customization in Training-Free Diffusion Models | arXiv | Link | Link | Multi-Concept Customization; Concept Injection Constraints |
2024 | Investigating Training Strategies and Model Robustness of Low-Rank Adaptation for Language Modeling in Speech Recognition | arXiv | Link | - | Memory-Efficient Learning; Robust Speech Recognition |
2024 | PRILoRA: Pruned and Rank-Increasing Low-Rank Adaptation | arXiv | Link | - | Pruned and Rank-Increasing |
2024 | LAMPAT: Low-Rank Adaption for Multilingual Paraphrasing Using Adversarial Training | AAAI | Link | Link | Unsupervised Multilingual Paraphrasing |
2024 | LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models | arXiv | Link | Link | Tensor-Train Decomposition; Robust Fine-Tuning |
2024 | Derivative-Free Optimization for Low-Rank Adaptation in Large Language Models | arXiv | Link | Link | Enhance Robustness; Derivative-Free Optimization |
2024 | LORS: Low-rank Residual Structure for Parameter-Efficient Network Stacking | CVPR | Link | - | Reduce Stacking Depth |
2024 | FedLoRA: When Personalized Federated Learning Meets Low-Rank Adaptation | ICLR | Link | Link | Personalized Federated Learning; Data Heterogeneity |
2024 | InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning | CVPR | Link | Link | Continual Learning; Interference-Free |
2024 | Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation | ICML | Link | Link | Inherent Low-dimensional Structures of Data; Compressible Dynamics within The Model Parameters; Overparameterization |
2024 | FLORA: Low-Rank Adapters Are Secretly Gradient Compressors | ICML | Link | Link | High-Rank Updates; Sublinear Space Complexity of Optimization States |
2024 | MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning | arXiv | Link | - | Minor Singular Components; SVD |
2024 | Expressive and Generalizable Low-rank Adaptation for Large Models via Slow Cascaded Learning | arXiv | Link | Link | Cascaded Learning Strategy; Robust Fine-Tuning |
2024 | LoraRetriever: Input-Aware LoRA Retrieval and Composition for Mixed Tasks in the Wild | arXiv | Link | - | Retrieval and Composition; Mixed Tasks |
2024 | Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models | arXiv | Link | Link | R×R Preconditioner; Robust Fine-Tuning |
2024 | CoLoRA: Continuous low-rank adaptation for reduced implicit neural modeling of parameterized partial differential equations | arXiv | Link | Link | Predicting Speed; Robust Fine-Tuning |
2024 | CorDA: Context-Oriented Decomposition Adaptation of Large Language Models | arXiv | Link | Link | Context-Oriented Decomposition; Robust Fine-Tuning |
2024 | LoRAP: Transformer Sub-Layers Deserve Differentiated Structured Compression for Large Language Models | ICML | Link | - | Low-Rank Matrix Approximation; Structured Pruning |
2024 | Asymmetry in Low-Rank Adapters of Foundation Models | arXiv | Link | Link | Unexpected Asymmetry In the Importance of Low-Rank Adapter Matrices |
2024 | SAML: Speaker Adaptive Mixture of LoRA Experts for End-to-End ASR | arXiv | Link | Link | Mixture-of-Experts (MoE); Speaker Adaptation |
2024 | Dataset Size Recovery from LoRA Weights | arXiv | Link | Link | Dataset Size Recovery |
2024 | Towards Federated Low-Rank Adaptation with Rank-Heterogeneous Communication | arXiv | Link | - | Replication-Based Padding Strategy; Federated Learning |
2024 | Retrieval-Augmented Mixture of LoRA Experts for Uploadable Machine Learning | arXiv | Link | - | Heterogeneous Requests; Uploadable Machine Learning (UML) |
2024 | Bayesian-LoRA: LoRA based Parameter Efficient Fine-Tuning using Optimal Quantization levels and Rank Values trough Differentiable Bayesian Gates | arXiv | Link | - | Bayesian; Robust Fine-Tuning |
2024 | Mixture-of-Subspaces in Low-Rank Adaptation | arXiv | Link | Link | Mixture-of-Subspaces; Robust Fine-Tuning |
2024 | ShareLoRA: Parameter Efficient and Robust Large Language Model Fine-tuning via Shared Low-Rank Adaptation | arXiv | Link | - | Shared Low-Rank Adaptation; Transfer Learning |
2024 | ALoRA: Allocating Low-Rank Adaptation for Fine-tuning Large Language Models | arXiv | Link | - | Allocating; Structural Pruning |
2024 | ResLoRA: Identity Residual Mapping in Low-Rank Adaption | arXiv | Link | Link | Residual Paths |
2024 | RST-LoRA: A Discourse-Aware Low-Rank Adaptation for Long Document Abstractive Summarization | arXiv | Link | - | Rhetorical Structure Theory (RST); Long Document |
2024 | Federated LoRA with Sparse Communication | arXiv | Link | Link | Communication-Efficiency in Federated LoRA |
2024 | RoseLoRA: Row and Column-wise Sparse Low-rank Adaptation of Pre-trained Language Model for Knowledge Editing and Fine-tuning | arXiv | Link | - | The Sparsity with Respective to The Matrix Product |
2024 | Task-Aware Low-Rank Adaptation of Segment Anything Model | arXiv | Link | - | Segment Anything Model (SAM); Multi-Task Learning |
2024 | ReLoRA: High-Rank Training Through Low-Rank Updates | ICLR | Link | Link | Low-Rank Updates |
2024 | Low-Rank Few-Shot Adaptation of Vision-Language Models | CVPR | Link | Link | Vision-Language Models (VLMs); Few-Shot |
2024 | MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning | CVPR | Link | Link | Multi-Task Learning (MTL); Pareto-Optimal Trade-Off |
2024 | QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models | arXiv | Link | Link | Quantization and Adaptation; Group-Wise Operators |
2024 | Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning | arXiv | Link | - | Security Vulnerabilities; Poisoned Sample Identification Module (PSIM) |
2024 | Mixture-of-LoRAs: An Efficient Multitask Tuning for Large Language Models | COLING | Link | - | Mixture-of-LoRAs; Robust Fine-Tuning |
2024 | Accurate LoRA-Finetuning Quantization of LLMs via Information Retention | ICML | Link | Link | Quantization; Information Retention; |
2024 | Quantum-informed Tensor Adaptation (QuanTA): Efficient High-Rank Fine-Tuning of Large Language Models | arXiv | Link | Link | Quantum-informed Tensor Adaptation (QuanTA) |
2024 | VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks | arXiv | Link | Link | Shared Vector Bank |
2024 | MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning | arXiv | Link | Link | High-Rank Updating; Non-Parameter Operators |
2024 | FLoRA: Low-Rank Core Space for N-dimension | arXiv | Link | Link | N-Dimensional Parameter Space |
2024 | LOFIT: Localized Fine-tuning on LLM Representations | arXiv | Link | Link | Localized Fine-Tuning |
2024 | Visual Perception by Large Language Model's Weights | arXiv | Link | - | Visual Perception |
2024 | Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning | ICML | Link | Link | Vision-Language (VL); Memory-Space Visual Prompting (MemVP) |
2024 | AdvLoRA: Adversarial Low-Rank Adaptation of Vision-Language Models | arXiv | Link | - | Robust Fine-Tuning; Adversarial Robustness; Vision-Language Models; Clustering; |
2024 | Parameter-Efficient Fine-Tuning with Discrete Fourier Transform | ICML | Link | Link | Discrete Fourier Transform |
2024 | LoNAS: Elastic Low-Rank Adapters for Efficient Large Language | COLING | Link | Link | Neural Architecture Search; Parameter-Efficient Fine-Tuning |
2024 | LoRA Learns Less and Forgets Less | arXiv | Link | - | Learning vs. Forgetting; Comparison with Full Fine-Tuning |
2024 | LoRA+: Efficient Low Rank Adaptation of Large Models | arXiv | Link | Link | Efficient Fine-Tuning |
2024 | PeriodicLoRA: Breaking the Low-Rank Bottleneck in LoRA Optimization | arXiv | Link | - | Low-Rank Bottleneck |
2024 | Sparse Matrix in Large Language Model Fine-tuning | arXiv | Link | - | Sparse Matrix Tuning (SMT); Robust Fine-Tuning |
2024 | Multi-LoRA Composition for Image Generation | arXiv | Link | Link | Multi-LoRA Composition; Text-to-Image Models |
2024 | BiLoRA: A Bi-level Optimization Framework for Overfitting-Resilient Low-Rank Adaptation of Large Pre-trained Models | arXiv | Link | - | Bi-Level Optimization (BLO); Robust Fine-Tuning |
2024 | AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models | arXiv | Link | - | Adaptive Freezing; Robust Fine-Tuning |
2024 | LoRA Meets Dropout under a Unified Framework | arXiv | Link | - | HiddenKey; Dropout; Robust Fine-Tuning |
2024 | GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection | ICML | Link | Link | Gradient Low-Rank Projection (GaLore); Memory-Efficient Training |
2024 | Let's Focus on Neuron: Neuron-Level Supervised Fine-tuning for Large Language Model | arXiv | Link | - | Neuron-Level Fine-Tuning (NeFT); Robust Fine-Tuning |
2024 | LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning | arXiv | Link | - | Layerwise Importance Sampled AdamW (LISA); Robust Fine-Tuning |
2023 | Efficient Low-rank Backpropagation for Vision Transformer Adaptation | NeurIPS | Link | Link | vision transformers (ViT); Robust Fine-Tuning |
2023 | Delta-LoRA: Fine-Tuning High-Rank Parameters with the Delta of Low-Rank Matrices | arXiv | Link | - | High-Rank Update via the Delta of Low-Rank Matrices |
2023 | DyLoRA: Parameter-Efficient Tuning of Pre-trained Models using Dynamic Search-Free Low-Rank Adaptation | EACL | Link | Link | SVD Modules; Pretrained Models (PMs); Robust Fine-Tuning |
2023 | The Expressive Power of Low-Rank Adaptation | ICLR | Link | Link | Expressive Power; Theoretical Analysis |
2023 | Exploring the impact of low-rank adaptation on the performance, efficiency, and regularization of RLHF | arXiv | Link | Link | RLHF; Robust Fine-Tuning |
2023 | Deep Learning Model Compression With Rank Reduction in Tensor Decomposition | TNNLS | Link | - | Rank Reduction in Tensor Decomposition; Robust Fine-Tuning |
2023 | LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment | arXiv | Link | - | Supervised Fine-Tuning (SFT); Mixture of Experts (MoE); Robust Fine-Tuning |
2023 | Bayesian Low-rank Adaptation for Large Language Models | ICLR | Link | Link | Laplace approximation; Robust Fine-Tuning |
2023 | LoRA-FA: Memory-Efficient Low-Rank Adaptation for Large Language Models Fine-Tuning | arXiv | Link | - | Memory of Large Language Models; Robust Fine-Tuning |
2023 | Motion Style Transfer: Modular Low-Rank Adaptation for Deep Motion Forecasting | PMLR | Link | Link | Motion Forecasting; Distribution Shifts; Transfer Learning |
2023 | Sparse Low-Rank Adaptation of Pre-trained Language Models | EMNLP | Link | Link | Sparse Low-Rank; Robust Fine-Tuning |
2023 | Low-Rank Adaptation of Large Language Model Rescoring for Parameter-Efficient Speech Recognition | ASRU | Link | - | Parameter-Efficient Speech Recognition |
2023 | SiRA: Sparse Mixture of Low Rank Adaptation | arXiv | Link | - | Sparse Mixture of Experts (SMoE); Robust Fine-Tuning |
2021 | Compacter: Efficient Low-Rank Hypercomplex Adapter Layers | NeurIPS | Link | Link | Low-Rank Hypercomplex Adapter Layers |
2022 | LoRA: Low-Rank Adaptation of Large Language Models | ICLR | Link | Link | The Pioneering Paper |

Toolbox: Hugging Face PEFT (Link)
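
A minimal sketch of applying LoRA with the PEFT library; the base model name and hyperparameter values below are illustrative, not recommendations.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling factor
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the injected LoRA matrices are trainable
```

The wrapped model can then be trained with a standard training loop, and the adapter weights are saved separately from the frozen base model.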