Feature Request: Loading PEFT LoRA adapters at runtime without prior merging #7788
Labels: enhancement (New feature or request)
Feature Description
Hello @ggerganov, this is a feature request to support loading LoRA adapters at runtime. In the current flow, we need to merge the adapter weights into the base model weights and convert the merged model to GGUF before inference. Instead, it would be useful to convert the base model and the adapters to GGUF independently, so that at runtime the desired adapter can be mounted on top of the base model.
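For reference, the current flow looks roughly like this. This is only a sketch assuming the Hugging Face transformers and peft libraries; the model and adapter paths are placeholders:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model and attach the trained LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("path/to/base-model")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# Bake the adapter weights into the base weights, then save the merged
# model so it can be converted to GGUF as a single checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("path/to/merged-model")
```

The merged checkpoint then has to be converted to GGUF as a whole, duplicating the full base model for every adapter.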
Motivation
This could hugely benefit memory-efficient deployment, not just for developers on limited hardware but also for commercial startups that want to build quick serverless applications. Many developers today train multiple adapters for a wide range of tasks on top of the same base model.
Possible Implementation
I am not a C++ person, but with the basic understanding I have, I think there should be a way to convert the LoRA adapters to GGUF independently, recording the rank, alpha, and other parameters used, as well as the base architecture they were trained on. Then at runtime, all we need to do is mount these adapter parameters onto the base model at the specific layers they target; a rough numerical sketch of the idea follows below.
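To make "mounting" concrete, here is a minimal numerical sketch (not llama.cpp code; the function and variable names are purely illustrative) of what applying a LoRA adapter to a base weight means: the effective weight is the base weight plus the scaled low-rank product of the adapter matrices.

```python
import numpy as np

def apply_lora(base_weight, lora_a, lora_b, rank, alpha):
    """Return the effective weight W + (alpha / rank) * B @ A.

    base_weight: (out_features, in_features) base layer weight
    lora_a:      (rank, in_features)   adapter "A" matrix
    lora_b:      (out_features, rank)  adapter "B" matrix
    """
    scale = alpha / rank
    return base_weight + scale * (lora_b @ lora_a)

# Toy example: a 4x8 base weight with a rank-2 adapter.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
A = rng.standard_normal((2, 8))
B = rng.standard_normal((4, 2))

W_eff = apply_lora(W, A, B, rank=2, alpha=16)
assert W_eff.shape == W.shape
```

Since the adapter only needs the small A and B matrices plus rank/alpha metadata per targeted layer, storing it as a separate GGUF file and applying (or hot-swapping) it at load time should be far cheaper than shipping a fully merged copy of the base model per adapter.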