forked from vllm-project/vllm
Insights: ROCm/vllm
Overview
- 0 Active issues
- 2 Merged pull requests
- 1 Open pull request
- 0 Closed issues
- 0 New issues
2 Pull requests merged by 2 people
- Upstream merge 2025 06 16 (#580, merged Jun 17, 2025)
- MI350 enablement for fp16 and fp8 models V0/V1 (#576, merged Jun 16, 2025)
1 Pull request opened by 1 person
- Update test-template.j2 (#579, opened Jun 16, 2025)
4 Unresolved conversations
Conversations sometimes continue on older items that are not yet closed. Below is a list of all Issues and Pull Requests with unresolved conversations.
- [Bug]: An error occurred when deploying DeepSeek-R1-Channel-INT8 on two MI250*8 machines using Ray (#555, commented on Jun 16, 2025; 0 new comments)
- [Bug]: Multi-GPU AMD Setup Hangs without NCCL_P2P_DISABLE=1 and --disable-custom-all-reduce (#390, commented on Jun 19, 2025; 0 new comments)
- EXPERIMENTING WITH K8S // NO NEED TO MERGE // Rocm vllm ci fix nd k8 osci (#477, commented on Jun 19, 2025; 0 new comments)
- Handling input dim size greater than 3 in tuned_gemm.py (#482, commented on Jun 20, 2025; 0 new comments)