
vllm can't run peft model? #1129

Open
Stealthwriter opened this issue Sep 21, 2023 · 4 comments

Comments

@Stealthwriter

Hi, does vLLM run PEFT models? I'm trying to run them, but I'm getting an error that config.json is not available in the model's main folder.

@viktor-ferenczi
Contributor

vLLM does not support PEFT/LoRA yet, but it is on the Development Roadmap

Related: #289, #815

@Stealthwriter
Author

Ok, thanks!

@hllj commented Mar 2, 2024

You have to merge the PEFT adapter into its base model first.
This is my codebase for merging; you can see it here: https://github.com/Reasoning-Lab/Elementary-Math-Solving-Zalo-AI-2023/blob/79ed4742d91755b4a00fadd0079279394222928b/merge_peft_adapter.py
After that you will have your merged model, and you can serve it with vLLM from output_dir.
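For reference, a minimal sketch of that merge flow using the `peft` and `transformers` libraries; `BASE_MODEL`, `ADAPTER_DIR`, and `OUTPUT_DIR` are hypothetical placeholders, not paths taken from the linked script:

```python
# Sketch: merge a PEFT/LoRA adapter into its base model so the result
# can be served by vLLM as a plain Hugging Face model directory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "meta-llama/Llama-2-7b-hf"   # assumption: any causal LM base
ADAPTER_DIR = "path/to/peft_adapter"      # directory with adapter_config.json
OUTPUT_DIR = "path/to/merged_model"

# Load the base model, then attach the trained adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, ADAPTER_DIR)

# Fold the adapter weights into the base weights and drop the PEFT wrappers.
merged = model.merge_and_unload()

# Save the merged weights plus tokenizer; this directory now has the same
# file layout as a regular Hugging Face model (config.json, weights, etc.).
merged.save_pretrained(OUTPUT_DIR)
AutoTokenizer.from_pretrained(BASE_MODEL).save_pretrained(OUTPUT_DIR)
```

The merged directory can then be passed to vLLM like any regular model path, e.g. `python -m vllm.entrypoints.openai.api_server --model path/to/merged_model`.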

@rsong0606

@hllj Hi there, does this mean the merged model, whether saved locally or pushed to the Hugging Face Hub, will have the same structure as the base model, i.e., similar files and configuration to the base model on Hugging Face?
