The GGUF model has been quantized, but unfortunately its performance cannot be guaranteed, so we have decided to deprecate it.
💡 Alternative Solution: You can use Cloud Deployment or Local Deployment with vLLM (if you have enough GPU resources) instead.
We appreciate your understanding and patience as we work to ensure the best possible experience.
We recommend using HuggingFace Inference Endpoints for fast deployment. We provide two docs for users to refer to:
English version: GUI Model Deployment Guide
Chinese version: GUI模型部署教程
We recommend using vLLM for fast deployment and inference. You need to use `vllm>=0.6.1`.
```bash
pip install -U transformers
VLLM_VERSION=0.6.6
CUDA_VERSION=cu124
pip install vllm==${VLLM_VERSION} --extra-index-url https://download.pytorch.org/whl/${CUDA_VERSION}
```
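After installing, a quick check like the one below can confirm that vLLM is importable and meets the minimum version requirement (a minimal sketch; it assumes the standard `vllm.__version__` attribute and the `packaging` helper library are available):

```python
# Sanity-check the vLLM installation against the minimum version (vllm>=0.6.1).
from packaging.version import Version

import vllm

print(vllm.__version__)
assert Version(vllm.__version__) >= Version("0.6.1")
```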
We provide three model sizes on Hugging Face: 2B, 7B, and 72B. To achieve the best performance, we recommend using the 7B-DPO or 72B-DPO model, depending on your hardware configuration.
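As a sketch, a checkpoint can be pre-downloaded with `huggingface_hub` so that vLLM can serve it from a local path (the repo ID below is an assumption; verify the exact name on the Hugging Face model page):

```python
# Pre-download a checkpoint so it can be served from a local path.
# NOTE: the repo ID is an assumption -- check it on the model page before use.
from huggingface_hub import snapshot_download

model_path = snapshot_download("bytedance-research/UI-TARS-7B-DPO")
print(model_path)  # pass this directory to vLLM as <path to your model>
```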
Run the command below to start an OpenAI-compatible API service:
```bash
python -m vllm.entrypoints.openai.api_server --served-model-name ui-tars --model <path to your model>
```
Note: the VLM Base URL is an OpenAI-compatible API endpoint (see the OpenAI API protocol documentation for more details).
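As a sketch of how a client talks to this endpoint, the snippet below uses the official `openai` Python package; the host, port, and placeholder API key are assumptions based on vLLM's defaults (`localhost:8000`, key unchecked):

```python
# Minimal client for the OpenAI-compatible server started above.
# Host/port are vLLM's defaults; vLLM ignores the API key, so any
# placeholder string works.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="empty")

response = client.chat.completions.create(
    model="ui-tars",  # must match --served-model-name above
    messages=[{"role": "user", "content": "Describe the next GUI action."}],
)
print(response.choices[0].message.content)
```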