
Issues: vllm-project/vllm

[Roadmap] vLLM Roadmap Q2 2024
#3861 opened Apr 4, 2024 by simon-mo · Open


Issues list

v0.4.3 Release Tracker (label: release)
#4895 opened May 18, 2024 by simon-mo
[Bug]: assert parts[0] == "base_model" AssertionError (label: bug)
#4883 opened May 17, 2024 by Edisonwei54
[RFC]: Add control panel support for vLLM (label: RFC)
#4873 opened May 17, 2024 by leiwen83 · 7 of 11 tasks
[Usage]: distributed inference with kuberay (label: usage)
#4865 opened May 16, 2024 by hetian127
[Bug]: No CUDA GPUs are available on 'CPU' use (label: bug)
#4858 opened May 16, 2024 by mcr-ksh
[Bug]: Running vllm docker image with neuron fails (label: bug)
#4836 opened May 15, 2024 by yaronr
[New Model]: Google's Paligemma family of models (label: new model)
#4833 opened May 15, 2024 by nfplay
[Usage]: Passing image to the vllm api endpoint (label: usage)
#4826 opened May 15, 2024 by davidramous
Tip: exclude everything labeled bug by adding the search qualifier -label:bug to the issues search.
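
The same qualifier works against GitHub's public issue search API. Below is a minimal Python sketch, assuming the third-party requests package is installed; it lists open, non-bug issues in vllm-project/vllm. Only the repository name and the -label:bug qualifier come from this page; the rest is illustrative.

    import requests

    # Query GitHub's issue search endpoint for open issues in
    # vllm-project/vllm, excluding anything labeled "bug".
    # Unauthenticated requests are rate-limited by GitHub.
    query = "repo:vllm-project/vllm is:issue is:open -label:bug"
    resp = requests.get(
        "https://api.github.com/search/issues",
        params={"q": query},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    resp.raise_for_status()

    # Each search result item carries the issue number and title,
    # matching the entries shown in the list above.
    for issue in resp.json()["items"]:
        print(f"#{issue['number']} {issue['title']}")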