Qwen2.5-72B-Instruct-Q4_K_M running with llama.cpp
curl -fsSL https://raw.githubusercontent.com/sulla-ai/runpod-installation-scripts/main/install-llama-cpp-Qwen2.5-72B-Instruct-Q4_K_M.sh | bash

Qwen2.5-72B-Instruct-Q6_K running with llama.cpp
curl -fsSL https://raw.githubusercontent.com/sulla-ai/runpod-installation-scripts/main/install-llama-cpp-Qwen2.5-72B-Instruct-Q6_K.sh | bash

Qwen3.5-27B-UD-Q4_K_XL running with llama.cpp
curl -fsSL https://raw.githubusercontent.com/sulla-ai/runpod-installation-scripts/main/Qwen3.5-27B-UD-Q4_K_XL.gguf.sh | bash

Qwen3.5-35B-A3B running with llama.cpp
curl -fsSL https://raw.githubusercontent.com/sulla-ai/runpod-installation-scripts/main/install-llama-cpp-Qwen3.5-35B-A3B.sh | bash
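If you would rather not pipe a remote script straight into bash, the same pattern works with a download-and-inspect step first. A minimal sketch, using the Q4_K_M script as the example; any of the URLs above can be substituted, and the local filename is arbitrary:

# Download the script, review it, then run it.
curl -fsSL -o install-llama-cpp.sh https://raw.githubusercontent.com/sulla-ai/runpod-installation-scripts/main/install-llama-cpp-Qwen2.5-72B-Instruct-Q4_K_M.sh
less install-llama-cpp.sh   # inspect before executing
bash install-llama-cpp.sh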
After launch, follow the server log:
tail -f /tmp/vllm.log
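Loading a large quant can take a while, so the endpoint may not answer immediately. A small poll loop (assuming the default 127.0.0.1:8000 bind address used below) avoids querying too early:

# Wait up to ~2 minutes for the server to start answering.
for i in $(seq 1 60); do
  curl -fsS http://127.0.0.1:8000/v1/models >/dev/null && break
  sleep 2
done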
Once it is up, confirm the OpenAI-compatible endpoint reports the loaded model:
curl http://127.0.0.1:8000/v1/models
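As a final smoke test, send a single chat-completion request through the standard OpenAI-compatible route that llama.cpp's server exposes. The model id below is a placeholder; substitute whatever id /v1/models returned:

# "model" is a hypothetical id; use the one reported by /v1/models.
curl http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen2.5-72B-Instruct-Q4_K_M",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64
      }'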