# [Bug]: vLLM running on Unspecified Platform raises NotImplementedError when using podman/docker-compose #14954
## Comments
I'm also facing the same issue:
```text
INFO 03-17 06:04:15 [__init__.py:211] No platform detected, vLLM is running on UnspecifiedPlatform
```
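For anyone else hitting this: the `UnspecifiedPlatform` fallback generally means vLLM could not find a usable accelerator inside the container. A quick sanity check, assuming the image ships PyTorch (the `vllm/vllm-openai` image does), is to ask torch whether it can see CUDA:

```bash
# Run inside the (failing) container. If this prints "False 0", the GPU is
# not being passed through, which matches vLLM falling back to
# UnspecifiedPlatform.
python3 -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"
```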
+1
Hi @BastianBN. If you cannot run `nvidia-smi` inside the container, you can follow https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html to configure Podman with the NVIDIA Container Toolkit, and then make sure the sample CUDA workload runs successfully. After that, the vLLM container should be able to detect the GPU.
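As a rough sketch, the Podman setup from that install guide looks like this (commands are from the NVIDIA Container Toolkit docs; adjust paths for your distro):

```bash
# Generate the CDI spec so Podman can expose the NVIDIA devices
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# List the device names that are now available (e.g. nvidia.com/gpu=all)
nvidia-ctk cdi list

# Sanity check: the sample CUDA workload from the install guide
# (--security-opt is needed on SELinux hosts such as Rocky Linux)
podman run --rm --device nvidia.com/gpu=all --security-opt=label=disable ubuntu nvidia-smi -L
```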
Hello @yankay
It seems the same as containers/podman#25196. If it's a Podman issue, it's better to discuss it in the Podman issue tracker.
No, this is not a Podman-only issue. I use plain Docker / Docker Compose and have run into the same error.
## Your current environment
## 🐛 Describe the bug
Hi,

I'm trying to run vLLM through podman-compose using the docker-compose file below, but I get an "Unspecified platform" message and the pod crashes on startup after raising a `NotImplementedError`.

I get the same error whether I use docker-compose or podman-compose as the backend, using the right GPU definition for each (`deploy:` or `device:`).
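For reference, the GPU stanza I mean is something like the sketch below (service name, ports, and model are placeholders; the `deploy.resources` form is the documented Docker Compose syntax, and the commented-out CDI-style `devices:` entry is what Podman setups typically use instead):

```yaml
# Hypothetical compose sketch -- adapt names and values to your setup.
services:
  vllm:
    image: vllm/vllm-openai:latest
    ipc: host
    ports:
      - "8002:8000"
    command: >
      --model Qwen/Qwen2.5-Coder-3B-Instruct
      --gpu-memory-utilization 0.4
      --max-model-len 8192
      --max-num-seq 64
    # Docker Compose GPU reservation:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    # Podman/CDI alternative (instead of the deploy: block above):
    # devices:
    #   - nvidia.com/gpu=all
```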
I'm running all of this on a GCP Rocky Linux 9.5 VM. It does work normally (CUDA is detected) when I run the container directly with:

```bash
podman run -d --name model2 --gpus all --ipc=host -p 8002:8000 --network monitoring-net \
  vllm/vllm-openai:latest \
  --model Qwen/Qwen2.5-Coder-3B-Instruct \
  --gpu-memory-utilization 0.4 \
  --api-key "<...>" \
  --max-model-len 8192 \
  --max-num-seq 64
```
I made a debug container that doesn't exit when vLLM crashes and tried running `nvidia-smi` inside it, but it told me the command doesn't exist, which feels somewhat weird? I don't know what else I can test.

docker_compose.yml
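(Side note for anyone debugging the same thing: one way to poke around the image interactively is to override the entrypoint; the flags below are standard podman/docker options, and the GPU flag is an assumption about your setup.)

```bash
# Hypothetical debugging session: drop into a shell instead of starting vLLM.
podman run --rm -it --entrypoint /bin/bash --gpus all vllm/vllm-openai:latest

# Inside the container, check whether the NVIDIA userspace tools and device
# nodes were injected by the runtime -- they come from the host via the
# NVIDIA Container Toolkit, not from the image itself, which is why
# nvidia-smi is "missing" when GPU passthrough is not working.
nvidia-smi
ls /dev/nvidia*
```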
## Python/vLLM logs