
Serve VLM by gradio #1293

Merged: 9 commits into InternLM:main on Mar 18, 2024
Conversation

irexyc (Collaborator) commented on Mar 15, 2024

Motivation

Add a VLM gradio demo based on VLAsyncEngine.

Use cases (Optional)

lmdeploy serve gradio /nvme/chenxin/download/tmp/llava-v1.6-vicuna-7b/ --server-port 7008
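
For reference, the effect of this command can be approximated in a few lines of Python. This is not the PR's implementation, just a minimal sketch that wraps lmdeploy's vision-language pipeline (which VLAsyncEngine backs) in a Gradio UI; the (prompt, image) call convention follows lmdeploy's public docs, and the model path is a placeholder.

# Minimal sketch (not this PR's code): serve a VLM through Gradio by
# wrapping lmdeploy's vision-language pipeline.
import gradio as gr
from lmdeploy import pipeline

# Placeholder model; substitute a local llava-v1.6-vicuna-7b checkpoint
# as in the CLI example above.
pipe = pipeline('liuhaotian/llava-v1.6-vicuna-7b')

def chat(image, prompt):
    # The VLM pipeline accepts a (prompt, image) pair for a single turn.
    return pipe((prompt, image)).text

gr.Interface(
    fn=chat,
    inputs=[gr.Image(type='pil'), gr.Textbox(label='Prompt')],
    outputs=gr.Textbox(label='Response'),
    title='lmdeploy VLM demo',
).launch(server_port=7008)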

lvhan028 added the "enhancement" (New feature or request) label on Mar 15, 2024
lvhan028 changed the title from "add Vl gradio" to "Serve VLM by gradio" on Mar 15, 2024
Review comment on lmdeploy/cli/serve.py (outdated; resolved)
lvhan028 merged commit 12ef4eb into InternLM:main on Mar 18, 2024
3 of 5 checks passed
Labels: enhancement (New feature or request)
3 participants