diff --git a/docs/deployment/frameworks/streamlit.md b/docs/deployment/frameworks/streamlit.md
index af0f0690c68e..c119878f137a 100644
--- a/docs/deployment/frameworks/streamlit.md
+++ b/docs/deployment/frameworks/streamlit.md
@@ -6,35 +6,76 @@ It can be quickly integrated with vLLM as a backend API server, enabling powerfu
 
 ## Prerequisites
 
-- Setup vLLM environment
-
-## Deploy
-
-- Start the vLLM server with the supported chat completion model, e.g.
+Set up the vLLM environment by installing all required packages:
 
 ```bash
-vllm serve qwen/Qwen1.5-0.5B-Chat
+pip install vllm streamlit openai
 ```
 
-- Install streamlit and openai:
+## Deploy
 
-```bash
-pip install streamlit openai
-```
+1. Start the vLLM server with a supported chat completion model, e.g.
 
-- Use the script:
+    ```bash
+    vllm serve Qwen/Qwen1.5-0.5B-Chat
+    ```
 
-- Start the streamlit web UI and start to chat:
+1. Use the script (a minimal sketch of it appears after these steps):
 
-```bash
-streamlit run streamlit_openai_chatbot_webserver.py
+1. Start the Streamlit web UI and start chatting:
 
-# or specify the VLLM_API_BASE or VLLM_API_KEY
-VLLM_API_BASE="http://vllm-server-host:vllm-server-port/v1" \
-streamlit run streamlit_openai_chatbot_webserver.py
-
-# start with debug mode to view more details
-streamlit run streamlit_openai_chatbot_webserver.py --logger.level=debug
-```
+    ```bash
+    streamlit run streamlit_openai_chatbot_webserver.py
+
+    # or specify the VLLM_API_BASE or VLLM_API_KEY
+    VLLM_API_BASE="http://vllm-server-host:vllm-server-port/v1" \
+    streamlit run streamlit_openai_chatbot_webserver.py
+
+    # start with debug mode to view more details
+    streamlit run streamlit_openai_chatbot_webserver.py --logger.level=debug
+    ```
 
-![](../../assets/deployment/streamlit-chat.png)
+    ![Chat with vLLM assistant in Streamlit](../../assets/deployment/streamlit-chat.png)
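+
+A minimal sketch of such a script (an illustration, not the bundled example, which also adds session management) connects Streamlit's chat elements to the server from step 1 through the OpenAI client:
+
+```python
+import os
+
+import streamlit as st
+from openai import OpenAI
+
+# vLLM exposes an OpenAI-compatible API; no real key is needed unless the
+# server was started with --api-key. The defaults below are illustrative.
+client = OpenAI(
+    base_url=os.getenv("VLLM_API_BASE", "http://localhost:8000/v1"),
+    api_key=os.getenv("VLLM_API_KEY", "EMPTY"),
+)
+
+st.title("vLLM chatbot")
+
+# Keep the conversation across Streamlit reruns.
+if "messages" not in st.session_state:
+    st.session_state.messages = []
+
+# Replay the history, then handle the next user turn.
+for message in st.session_state.messages:
+    with st.chat_message(message["role"]):
+        st.markdown(message["content"])
+
+if prompt := st.chat_input("Ask me anything"):
+    st.session_state.messages.append({"role": "user", "content": prompt})
+    with st.chat_message("user"):
+        st.markdown(prompt)
+
+    # Stream the reply token by token into the chat window.
+    with st.chat_message("assistant"):
+        stream = client.chat.completions.create(
+            model="Qwen/Qwen1.5-0.5B-Chat",
+            messages=st.session_state.messages,
+            stream=True,
+        )
+        reply = st.write_stream(stream)
+    st.session_state.messages.append({"role": "assistant", "content": reply})
+```
+
+Here `st.session_state` preserves the chat history across Streamlit's script reruns, and `st.write_stream` renders tokens as they arrive from the server.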