bump version to v0.2.3 #1123
Conversation
Hi @lvhan028, hold on please. After #1085, this doesn't work as before: running `python3 -m lmdeploy serve api_server /workdir/workspace` raises a traceback.

I have to set the model name explicitly, like this: `python3 -m lmdeploy serve api_server /workdir/workspace --model-name llama2`. Is this an expected change or a bug? If it's a bug, may I fix it before the release?
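For context, here is a minimal argparse sketch of the two invocations above. This is hypothetical, not lmdeploy's actual CLI code; the point is that when `--model-name` is omitted, the server has to deduce the name from the workspace path, which is the code path that regressed.

```python
# Hypothetical sketch of the CLI surface, not lmdeploy's real parser.
import argparse

parser = argparse.ArgumentParser(prog='lmdeploy serve api_server')
parser.add_argument('model_path',
                    help='workspace directory holding the converted model')
parser.add_argument('--model-name', default=None,
                    help='chat template name; deduced from model_path if omitted')

args = parser.parse_args(['/workdir/workspace'])
assert args.model_name is None  # deduction (and the reported bug) kicks in

args = parser.parse_args(['/workdir/workspace', '--model-name', 'llama2'])
assert args.model_name == 'llama2'  # an explicit name bypasses deduction
```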
I can't reproduce your error. Are you using the latest main code? According to your traceback, the error is at line 111, but that code in the main branch is at line 109.
Yes, I'm using the latest main code.
@irexyc, please use the llama2 13b chat workspace.
Found the reason. It was because my workspace folder had a name like vicuna, so the deduced model name was wrong.
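To illustrate the failure mode, here is a hedged sketch with made-up names (`KNOWN_MODELS`, `deduce_model_name` are illustrative, not lmdeploy's real identifiers): if the model name is deduced by substring-matching the workspace path against a list of known models, a folder whose path contains `vicuna` resolves to the wrong template.

```python
# Hypothetical sketch of path-based model-name deduction.
from typing import Optional

KNOWN_MODELS = ['internlm', 'llama2', 'vicuna']

def deduce_model_name(workspace_path: str) -> Optional[str]:
    """Return the first known name that appears in the path, else None."""
    path = workspace_path.lower()
    for name in KNOWN_MODELS:
        if name in path:
            return name
    return None

print(deduce_model_name('/workdir/workspace'))         # None -> error path
print(deduce_model_name('/workdir/vicuna-workspace'))  # 'vicuna', even if the
                                                       # weights are llama2
```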
Since `lmdeploy/lmdeploy/turbomind/turbomind.py` (line 374 at 41dd740) already covers this case, we don't need to raise the exception the way `lmdeploy/lmdeploy/serve/async_engine.py` (lines 107 to 109 at 41dd740) does.
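A hedged sketch of the suggested direction (hypothetical helper names, not the actual diff): rather than raising in the async engine when no name was supplied, pass it through and let the turbomind layer resolve it, as it already does around line 374.

```python
from typing import Optional

def resolve_from_workspace(model_path: str) -> str:
    """Stand-in for the resolution turbomind.py performs around line 374.

    Hypothetical: e.g. read the model name recorded in the workspace config.
    """
    ...  # sketch only

def get_model_name(model_path: str, model_name: Optional[str]) -> str:
    # An explicit --model-name always wins.
    if model_name is not None:
        return model_name
    # Old behavior (async_engine.py lines 107-109): raise here and force the
    # user to pass --model-name.
    # Suggested behavior: defer to the turbomind layer's own resolution.
    return resolve_from_workspace(model_path)
```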
Fine, may I send the PR?

It's up to you.
I added …
(branch updated from e012ab5 to 5192d1d)
LGTM