38 changes: 18 additions & 20 deletions docs/deployment/frameworks/streamlit.md

It can be quickly integrated with vLLM as a backend API server, enabling powerful…

## Prerequisites

Set up the vLLM environment by installing all required packages:

```bash
pip install vllm streamlit openai
```
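
To confirm the installation succeeded, you can optionally check that all three packages import cleanly. This is a quick sanity check, not part of the official setup:

```python
# Optional sanity check: confirm the required packages are installed
# and print their versions.
import openai
import streamlit
import vllm

print(openai.__version__, streamlit.__version__, vllm.__version__)
```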

## Deploy

1. Start the vLLM server with a supported chat completion model, e.g.

    ```bash
    vllm serve Qwen/Qwen1.5-0.5B-Chat
    ```

1. Use the script: <gh-file:examples/online_serving/streamlit_openai_chatbot_webserver.py> (a minimal sketch of what such a script does is shown after this list)

1. Start the Streamlit web UI and begin chatting:

    ```bash
    streamlit run streamlit_openai_chatbot_webserver.py

    # Or point the app at a specific server via VLLM_API_BASE (and VLLM_API_KEY)
    VLLM_API_BASE="http://vllm-server-host:vllm-server-port/v1" \
    streamlit run streamlit_openai_chatbot_webserver.py

    # Start in debug mode to view more details
    streamlit run streamlit_openai_chatbot_webserver.py --logger.level=debug
    ```
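
The example script in step 2 essentially wires Streamlit's chat widgets to the vLLM server through the OpenAI-compatible client. Below is a minimal sketch of what such a script can look like, assuming the server from step 1 at its default address `http://localhost:8000/v1`; it is an illustration, not the verbatim contents of the example file:

```python
import os

import streamlit as st
from openai import OpenAI

# The vLLM server speaks the OpenAI API; point the client at it.
# VLLM_API_BASE / VLLM_API_KEY mirror the environment variables above;
# the localhost default and the "EMPTY" key are assumptions.
client = OpenAI(
    base_url=os.getenv("VLLM_API_BASE", "http://localhost:8000/v1"),
    api_key=os.getenv("VLLM_API_KEY", "EMPTY"),
)

st.title("vLLM Chatbot")

# Keep the conversation across Streamlit reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the history so the chat survives each rerun.
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

if prompt := st.chat_input("Ask me anything"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    with st.chat_message("assistant"):
        # Stream tokens from the vLLM server as they arrive.
        stream = client.chat.completions.create(
            model="Qwen/Qwen1.5-0.5B-Chat",
            messages=st.session_state.messages,
            stream=True,
        )
        reply = st.write_stream(
            chunk.choices[0].delta.content or "" for chunk in stream
        )
    st.session_state.messages.append({"role": "assistant", "content": reply})
```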

![Chat with vLLM assistant in Streamlit](../../assets/deployment/streamlit-chat.png)
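
To verify the deployment outside the UI, you can send a single chat completion with the same `openai` client the app relies on. A minimal check, assuming the server runs locally on vLLM's default port 8000 and serves the model from step 1:

```python
from openai import OpenAI

# Assumes the server from step 1 is running locally on vLLM's default port 8000.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen1.5-0.5B-Chat",
    messages=[{"role": "user", "content": "Say hello!"}],
)
print(response.choices[0].message.content)
```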