Integrating RandLoRA, a full-rank PEFT algorithm #2441

@PaulAlbert31

Description

Feature request

Include RandLoRA as an option for PEFT
Code: https://github.com/PaulAlbert31/RandLoRA/tree/main/peft/src/peft/tuners/randlora
Paper: https://openreview.net/pdf?id=Hn5eoTunHN

Motivation

RandLoRA is a recently published method (https://openreview.net/forum?id=Hn5eoTunHN) that challenges the limitations of LoRA's low-rank setting. RandLoRA combines random bases to create full-rank parameter-efficient updates, and a custom gradient function is proposed to optimize memory usage.
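To make the parameterization concrete, here is a minimal PyTorch sketch of the idea as I understand it: the update is a sum of frozen random low-rank bases, each modulated by small trainable diagonal scalings, so that with enough bases the summed update can reach full rank. Class names, variable names, and the exact placement/sharing of the scalings below are illustrative; see the linked repo for the actual implementation.

```python
import torch
import torch.nn as nn

class RandLoRALinearSketch(nn.Module):
    """Illustrative RandLoRA-style layer (not the reference implementation).

    The frozen weight W is augmented with an update built from n frozen
    random low-rank bases (B_i, A_i) of rank r, combined through small
    trainable diagonal scalings. With n * r >= min(d_out, d_in), the
    summed update can be full rank even though only the scalings train.
    """

    def __init__(self, base_linear: nn.Linear, n_bases: int = 16, rank: int = 4):
        super().__init__()
        d_out, d_in = base_linear.weight.shape
        self.base = base_linear
        self.base.weight.requires_grad_(False)
        # Frozen random bases; the paper shares these across layers to
        # save memory, they are kept per-layer here for brevity.
        self.register_buffer("A", torch.randn(n_bases, rank, d_in) / d_in**0.5)
        self.register_buffer("B", torch.randn(n_bases, d_out, rank) / rank**0.5)
        # Trainable diagonal scalings; lam starts at zero so the update
        # is a no-op at initialization, as in LoRA.
        self.lam = nn.Parameter(torch.zeros(n_bases, rank))
        self.gam = nn.Parameter(torch.ones(n_bases, d_in))

    def delta_w(self) -> torch.Tensor:
        # delta_W = sum_i B_i @ diag(lam_i) @ A_i @ diag(gam_i)
        scaled_A = self.lam.unsqueeze(-1) * self.A * self.gam.unsqueeze(1)
        return torch.einsum("nor,nri->oi", self.B, scaled_A)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + nn.functional.linear(x, self.delta_w())
```

For example, wrapping a 768x768 projection with n_bases=192 and rank=4 gives n * r = 768, so the summed update is not rank-constrained the way a single rank-4 LoRA adapter is.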

In summary:

- RandLoRA can outperform LoRA on complex tasks (e.g. vision-language or LLM fine-tuning).
- RandLoRA reduces memory usage relative to LoRA thanks to custom gradient functions that share non-trainable random bases across layers (see the sketch after this list).
- RandLoRA's parameter count is dictated by the rank of the random bases.
- RandLoRA can take slightly longer to train as the number of trainable parameters is scaled up.
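On the memory point above: as I understand the paper, the saving comes from a fused forward/backward in which the large per-basis intermediates are recomputed during the backward pass rather than stored, and gradients only flow to the small diagonal scalings. A hedged torch.autograd.Function sketch (shapes follow the layer sketch above; the actual custom gradient in the repo may be organized differently):

```python
import torch

class RandLoRAUpdate(torch.autograd.Function):
    """Sketch of a fused RandLoRA update with a hand-written backward.

    Plain autograd would keep the (n, r, d_in) intermediate `scaled_A`
    alive between forward and backward. Here only x, the frozen bases
    (shared across layers anyway), and the tiny diagonals are saved,
    and everything large is recomputed on the fly.
    """

    @staticmethod
    def forward(ctx, x, A, B, lam, gam):
        # x: (batch, d_in), A: (n, r, d_in), B: (n, d_out, r)
        scaled_A = lam.unsqueeze(-1) * A * gam.unsqueeze(1)
        delta_w = torch.einsum("nor,nri->oi", B, scaled_A)
        ctx.save_for_backward(x, A, B, lam, gam)
        return x @ delta_w.T

    @staticmethod
    def backward(ctx, grad_out):
        x, A, B, lam, gam = ctx.saved_tensors
        grad_dw = grad_out.T @ x  # (d_out, d_in); assumes 2D x
        # Gradients for the small trainable diagonals only.
        grad_lam = torch.einsum("oi,nor,nri,ni->nr", grad_dw, B, A, gam)
        grad_gam = torch.einsum("oi,nor,nr,nri->ni", grad_dw, B, lam, A)
        # Recompute delta_w for the input gradient instead of caching it.
        scaled_A = lam.unsqueeze(-1) * A * gam.unsqueeze(1)
        delta_w = torch.einsum("nor,nri->oi", B, scaled_A)
        return grad_out @ delta_w, None, None, grad_lam, grad_gam
```

torch.autograd.gradcheck (with double-precision inputs) is an easy way to validate a hand-written backward like this against plain autograd.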

To illustrate RandLoRA's capabilities, here are key results from the paper:

[Image: key results from the paper]

[Image: CLIP results (vision-language classification) with GPU VRAM usage during training]

[Image: loss-landscape connectivity of LoRA vs. RandLoRA vs. standard fine-tuning for CLIP. RandLoRA reaches a deeper minimum than LoRA for an equal number of trainable parameters.]

Your contribution

I have implemented RandLoRA in a local fork of the peft package: https://github.com/PaulAlbert31/RandLoRA
More specifically: https://github.com/PaulAlbert31/RandLoRA/tree/main/peft/src/peft/tuners/randlora

I am happy to help with the integration or to refactor parts of the existing code to fit the current PEFT structure.
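For reference, here is what the user-facing side could look like once integrated, mirroring PEFT's existing get_peft_model / LoraConfig pattern. RandLoraConfig and its fields below are hypothetical placeholders proposed by this feature request, not an existing PEFT API; the final naming would of course follow the maintainers' preference.

```python
from transformers import AutoModelForCausalLM
from peft import get_peft_model
from peft import RandLoraConfig  # hypothetical: the class this request proposes adding

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
config = RandLoraConfig(                  # hypothetical config, mirroring LoraConfig
    r=4,                                  # rank of each random basis
    target_modules=["q_proj", "v_proj"],  # same convention as LoraConfig
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```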
