vllm can't run peft model? #1129
Comments
vLLM does not support PEFT/LoRA yet, but it is on the Development Roadmap.
Ok, thanks!
You have to merge the PEFT adapter with its base model first.
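A minimal sketch of that merge step, assuming a LoRA adapter and the `peft`/`transformers` APIs; the model names and paths below are placeholders, not values from this thread:

```python
# Merge a PEFT/LoRA adapter into its base model so the result can be
# loaded by vLLM like any plain Hugging Face model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder identifiers -- substitute your own base model and adapter.
base = AutoModelForCausalLM.from_pretrained("base-model-name")
model = PeftModel.from_pretrained(base, "path/to/peft-adapter")

# Fold the LoRA weights into the base weights and drop the adapter layers.
merged = model.merge_and_unload()

# Save the merged model; the output directory then contains config.json,
# the weight files, etc., with the same layout as the base model.
merged.save_pretrained("merged-model")
AutoTokenizer.from_pretrained("base-model-name").save_pretrained("merged-model")
```

The saved directory can then be passed to vLLM as the model path, since it looks like an ordinary Hugging Face checkpoint.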
@hllj Hi there, does this mean the merged model, whether saved locally or pushed to Hugging Face, will have the same structure as the base model, i.e. the same files and format as the base model on Hugging Face?
Hi, does vLLM run PEFT models? I'm trying to run them, but I'm getting an error that config.json is not available in the model's main folder.