Description
A user should be able to use deepseek r1 as their LLM for dockershrink.
By default, dockershrink will only try to connect to deepseek on 127.0.0.1 (using the openai SDK).
(We assume that the user would like to run their own r1 locally)
However, they have the option to override the model's address.
Adding r1 means dockershrink gets SOTA code optimization basically for free (local deepseek + dockershrink) 🚀
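As a rough sketch of how this could be wired up (the get_llm_client helper and its parameters below are hypothetical, not existing dockershrink code; only the default ollama address comes from this proposal):

import openai

# Hypothetical helper: build an OpenAI-compatible client for a local deepseek-r1 server.
# Defaults to ollama's address (127.0.0.1:11434); host/port can be overridden by the user.
def get_llm_client(host: str = "127.0.0.1", port: int = 11434) -> openai.OpenAI:
    return openai.OpenAI(
        base_url=f"http://{host}:{port}/v1",
        api_key="ollama",  # ollama ignores the key, but the openai SDK requires one
    )

# default: local ollama
client = get_llm_client()
# override: user-supplied address (would map to --host / --port in the usage below)
client = get_llm_client(host="localhost", port=8080)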
Usage
# assumes address=http://localhost:11434 (ollama default)
dockershrink optimize --model deepseek-r1
dockershrink optimize --model deepseek-r1 --host localhost --port 8080

Developer notes
Minimal example code that connects to ollama and talks to deepseek (or basically any other model ollama serves; just change the model name):
import openai

# ollama exposes an OpenAI-compatible API on port 11434 by default
client = openai.OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # ollama does not check the key, but the SDK requires one
)

response = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is your name?",
        }
    ],
    model="deepseek-r1",
)

print(response.choices[0].message.content)

Run deepseek-r1 on ollama
ollama serve
ollama pull deepseek-r1
ollama run deepseek-r1
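As a quick sanity check (not part of dockershrink, just assuming ollama's default address), you can list the models ollama serves through its OpenAI-compatible endpoint:

import openai

# Assumes ollama is running on its default address http://localhost:11434
client = openai.OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

for model in client.models.list():
    print(model.id)  # "deepseek-r1" (possibly with a tag suffix) should appear here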