Low-Rank adapter extraction for fine-tuned transformers models
Note

LoRD-related code has a new home at mergekit. Extract any LoRA with a single command, for example: `mergekit-extract-lora 'teknium/OpenHermes-2.5-Mistral-7B' 'mistralai/Mistral-7B-v0.1' 'extracted_OpenHermes-2.5-LoRA_output_path' --rank=32`

LoRD: Low-Rank Decomposition of finetuned Large Language Models

This repository contains code for extracting LoRA adapters from finetuned transformers models, using Singular Value Decomposition (SVD).

LoRA (Low-Rank Adaptation) is a technique for parameter-efficient fine-tuning of large language models. The technique presented here allows extracting PEFT-compatible low-rank adapters from full fine-tunes or merged models.
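The core idea can be sketched in a few lines: subtract the base weights from the fine-tuned weights and take a truncated SVD of the difference for each adapted layer. The snippet below is an illustrative sketch of that decomposition for a single linear layer, not the notebook's exact code; the function name and the choice to split the singular values evenly between the two factors are assumptions.

```python
import torch

def extract_lora_pair(w_base: torch.Tensor, w_finetuned: torch.Tensor, rank: int = 32):
    # Weight delta between the fine-tuned and base checkpoints for one linear layer.
    delta = (w_finetuned - w_base).float()        # [out_features, in_features]
    # Truncated SVD keeps only the top-`rank` singular directions of the delta.
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    u_r, s_r, vh_r = u[:, :rank], s[:rank], vh[:rank, :]
    # Split singular values between the factors so that delta ≈ lora_B @ lora_A,
    # matching the shapes PEFT expects for a LoRA pair.
    lora_B = u_r * s_r.sqrt()                     # [out_features, rank]
    lora_A = s_r.sqrt().unsqueeze(1) * vh_r       # [rank, in_features]
    return lora_A, lora_B
```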

Getting started

Everything you need to extract and publish your LoRA adapter is available in the LoRD.ipynb notebook.

Running the notebook on Colab is the easiest way to get started.
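Once an adapter has been extracted and published, it can be consumed like any other PEFT adapter. The example below is a generic PEFT usage sketch with placeholder model and adapter paths, not output produced by the notebook itself.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholder identifiers: substitute your base model and your extracted/published adapter.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "your-username/extracted-lora-adapter")
```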

Special thanks

Thanks to @kohya_ss for their prior work on LoRA extraction for Stable Diffusion.
