[FEATURE] Add support for Deepseek-r1 #37

@duaraghav8

Description

A user should be able to use Deepseek-r1 as their LLM for dockershrink.
By default, dockershrink will only try to connect to Deepseek on 127.0.0.1 (via the OpenAI SDK), since we assume the user wants to run their own r1 instance locally.
However, they have the option to override the model's address.

Adding r1 means dockershrink gets SOTA code optimization and is basically free to run (local Deepseek + dockershrink) 🚀

Usage

# assumes address=http://localhost:11434 (ollama default)
dockershrink optimize --model deepseek-r1

dockershrink optimize --model deepseek-r1 --host localhost --port 8080
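
A minimal sketch of how the --host/--port flags above could be turned into an OpenAI-SDK client pointed at Ollama. The helper name build_llm_client is hypothetical and not existing dockershrink code:

import openai


def build_llm_client(host: str = "localhost", port: int = 11434) -> openai.OpenAI:
    # Ollama exposes an OpenAI-compatible API under /v1
    return openai.OpenAI(
        base_url=f"http://{host}:{port}/v1",
        api_key="ollama",  # Ollama ignores the key, but the SDK requires a non-empty value
    )


# dockershrink optimize --model deepseek-r1 --host localhost --port 8080
client = build_llm_client(host="localhost", port=8080)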

Developer notes

Minimal example code that connects to Ollama and talks to Deepseek (or any other model Ollama serves; just change the model name):

import openai

# Ollama exposes an OpenAI-compatible API, so the standard OpenAI SDK works as-is.
client = openai.OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # Ollama does not check the key, but the SDK requires one
)

response = client.chat.completions.create(
    model="deepseek-r1",
    messages=[
        {
            "role": "user",
            "content": "What is your name?",
        }
    ],
)

print(response.choices[0].message.content)

Run deepseek-r1 on ollama

# start the Ollama server (listens on localhost:11434 by default)
ollama serve
ollama pull deepseek-r1

# optional: chat with the model interactively to verify it works
ollama run deepseek-r1
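
A quick sanity check (a hedged sketch, not part of dockershrink) that the Ollama server is reachable and already has deepseek-r1 pulled, using the same OpenAI-compatible endpoint as above:

import openai

client = openai.OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Ollama lists pulled models (e.g. "deepseek-r1:latest") through its /v1/models endpoint
available = [m.id for m in client.models.list().data]
print("models served by ollama:", available)

if not any(name.startswith("deepseek-r1") for name in available):
    print("deepseek-r1 not found - run `ollama pull deepseek-r1` first")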
