
Add OpenAIRecorder #91

Merged: 4 commits into docker:main, Jun 25, 2025

Conversation

@doringeman (Contributor) commented Jun 24, 2025

Adds recording functionality for OpenAI inference requests and responses:

  • record last 10 request/response pairs per model
  • convert streaming responses to single JSON format
  • add GET /engines/requests?model=<model> endpoint
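
For illustration, the core of such a bounded per-model recorder might look roughly like the sketch below. OpenAIRecord and OpenAIRecorder are names that appear in this PR, but the internals and the NewOpenAIRecorder/Record methods here are assumptions, not the actual implementation; the record fields mirror the JSON shown in the demo further down.

package recorder

import (
	"sync"
	"time"
)

const maxRecordsPerModel = 10 // the PR keeps the last 10 request/response pairs per model

// OpenAIRecord is one captured request/response pair.
type OpenAIRecord struct {
	ID         string    `json:"id"`
	Model      string    `json:"model"`
	Method     string    `json:"method"`
	URL        string    `json:"url"`
	Request    string    `json:"request"`
	Response   string    `json:"response"`
	UserAgent  string    `json:"user_agent,omitempty"`
	Timestamp  time.Time `json:"timestamp"`
	StatusCode int       `json:"status_code"`
}

// OpenAIRecorder keeps a bounded history of records per model name.
type OpenAIRecorder struct {
	mu      sync.Mutex
	records map[string][]OpenAIRecord
}

func NewOpenAIRecorder() *OpenAIRecorder {
	return &OpenAIRecorder{records: make(map[string][]OpenAIRecord)}
}

// Record appends a record, dropping the oldest entry once the
// per-model limit is exceeded (hypothetical method name).
func (r *OpenAIRecorder) Record(rec OpenAIRecord) {
	r.mu.Lock()
	defer r.mu.Unlock()
	recs := append(r.records[rec.Model], rec)
	if len(recs) > maxRecordsPerModel {
		recs = recs[len(recs)-maxRecordsPerModel:]
	}
	r.records[rec.Model] = recs
}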

Remaining work, addressed in the later commits of this PR:

  • remove records on model eviction (2nd commit)
  • include the configured context size for the model (3rd commit)

$ MODEL_RUNNER_PORT=8080 make run # in a separate terminal

$ curl http://localhost:8080/engines/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
   "model": "ai/smollm2",
   "messages": [
     {"role": "user", "content": "Capital of Romania?"}
   ]
 }'
{"choices":[{"finish_reason":"stop","index":0,"message":{"role":"assistant","content":"Romania's capital city is Bucharest."}}],"created":1750775388,"model":"ai/smollm2","system_fingerprint":"b1-1e8659e","object":"chat.completion","usage":{"completion_tokens":10,"prompt_tokens":33,"total_tokens":43},"id":"chatcmpl-T9Pa73gpAAAcgXp2EzckiZ0fTjUJtEIk","timings":{"prompt_n":33,"prompt_ms":34.326,"prompt_per_token_ms":1.040181818181818,"prompt_per_second":961.3703897919944,"predicted_n":10,"predicted_ms":59.546,"predicted_per_token_ms":5.9546,"predicted_per_second":167.937392939912}}

$ MODEL_RUNNER_HOST=http://localhost:8080 docker model run ai/smollm2 hi
Hello! How can I help you today?

$ curl -s http://localhost:8080/engines/requests\?model\=ai/smollm2 | jq .
{
  "count": 2,
  "model": "ai/smollm2",
  "records": [
    {
      "id": "ai/smollm2_1750775387439166000",
      "model": "ai/smollm2",
      "method": "POST",
      "url": "/engines/v1/chat/completions",
      "request": "{\n    \"model\": \"ai/smollm2\",\n    \"messages\": [\n      {\"role\": \"user\", \"content\": \"Capital of Romania?\"}\n    ]\n  }",
      "response": "{\"choices\":[{\"finish_reason\":\"stop\",\"index\":0,\"message\":{\"role\":\"assistant\",\"content\":\"Romania's capital city is Bucharest.\"}}],\"created\":1750775388,\"model\":\"ai/smollm2\",\"system_fingerprint\":\"b1-1e8659e\",\"object\":\"chat.completion\",\"usage\":{\"completion_tokens\":10,\"prompt_tokens\":33,\"total_tokens\":43},\"id\":\"chatcmpl-T9Pa73gpAAAcgXp2EzckiZ0fTjUJtEIk\",\"timings\":{\"prompt_n\":33,\"prompt_ms\":34.326,\"prompt_per_token_ms\":1.040181818181818,\"prompt_per_second\":961.3703897919944,\"predicted_n\":10,\"predicted_ms\":59.546,\"predicted_per_token_ms\":5.9546,\"predicted_per_second\":167.937392939912}}",
      "timestamp": "2025-06-24T17:29:47.43917+03:00",
      "status_code": 200
    },
    {
      "id": "ai/smollm2_1750775394431273000",
      "model": "ai/smollm2",
      "method": "POST",
      "url": "/engines/v1/chat/completions",
      "request": "{\"model\":\"ai/smollm2\",\"messages\":[{\"role\":\"user\",\"content\":\"hi\"}],\"stream\":true}",
      "response": "{\"choices\":[{\"finish_reason\":\"stop\",\"index\":0,\"message\":{\"content\":\"Hello! How can I help you today?\",\"role\":\"assistant\"}}],\"created\":1750775394,\"id\":\"chatcmpl-xJpM5U2j8AtQaHphIMXkRC6EaFp7dDdl\",\"model\":\"ai/smollm2\",\"object\":\"chat.completion\",\"system_fingerprint\":\"b1-1e8659e\",\"timings\":{\"predicted_ms\":57.182,\"predicted_n\":10,\"predicted_per_second\":174.88020705816513,\"predicted_per_token_ms\":5.7182,\"prompt_ms\":46.706,\"prompt_n\":7,\"prompt_per_second\":149.87367790005567,\"prompt_per_token_ms\":6.672285714285715},\"usage\":{\"completion_tokens\":10,\"prompt_tokens\":30,\"total_tokens\":40}}",
      "timestamp": "2025-06-24T17:29:54.431274+03:00",
      "status_code": 200
    }
  ]
}
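
A minimal sketch of how the query endpoint could be served, building on the recorder sketch above (handleGetRequests is a hypothetical name; the actual route wiring lives in the PR's diff). Note that encoding/json sorts map keys alphabetically, which matches the count/model/records order shown above.

package recorder

import (
	"encoding/json"
	"net/http"
)

// Responds to GET /engines/requests?model=<model> with the recorded
// request/response pairs for that model, shaped like the JSON above.
func (r *OpenAIRecorder) handleGetRequests(w http.ResponseWriter, req *http.Request) {
	model := req.URL.Query().Get("model")
	if model == "" {
		http.Error(w, "missing model query parameter", http.StatusBadRequest)
		return
	}
	r.mu.Lock()
	recs := r.records[model]
	r.mu.Unlock()
	w.Header().Set("Content-Type", "application/json")
	_ = json.NewEncoder(w).Encode(map[string]any{
		"count":   len(recs),
		"model":   model,
		"records": recs,
	})
}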

With the backend configuration included (the 3rd commit):

$ MODEL_RUNNER_PORT=8080 make run # in a separate terminal

$ cat dc.yaml
services:
  model1:
    provider:
      type: model
      options:
        model: ai/smollm2
        context-size: 8192
        runtime-flags: "--no-prefill-assistant"

$ MODEL_RUNNER_HOST=http://localhost:8080 docker compose -f dc.yaml --progress plain up

$ MODEL_RUNNER_HOST=http://localhost:8080 docker model run ai/smollm2 hi
Hi there, I'm SmolLM. I'm here to help with any questions or issues related to Natural Language Processing (NLP) or machine learning. What can I help you with today?

$ curl -s http://localhost:8080/engines/requests\?model\=ai/smollm2 | jq .
{
  "config": {
    "context_size": 8192,
    "flags": [
      "--no-prefill-assistant"
    ]
  },
  "count": 1,
  "model": "ai/smollm2",
  "records": [
    {
      "id": "ai/smollm2_1750852528701065000",
      "model": "ai/smollm2",
      "method": "POST",
      "url": "/engines/v1/chat/completions",
      "request": "{\"model\":\"ai/smollm2\",\"messages\":[{\"role\":\"user\",\"content\":\"hi\"}],\"stream\":true}",
      "response": "{\"choices\":[{\"finish_reason\":\"stop\",\"index\":0,\"message\":{\"content\":\"Hi there, I'm SmolLM. I'm here to help with any questions or issues related to Natural Language Processing (NLP) or machine learning. What can I help you with today?\",\"role\":\"assistant\"}}],\"created\":1750852530,\"id\":\"chatcmpl-298syWcQ7fC0v02baVJ2fyFXO5UtI9Fg\",\"model\":\"ai/smollm2\",\"object\":\"chat.completion\",\"system_fingerprint\":\"b1-1e8659e\",\"timings\":{\"predicted_ms\":322.368,\"predicted_n\":41,\"predicted_per_second\":127.18383958705579,\"predicted_per_token_ms\":7.862634146341463,\"prompt_ms\":33.975,\"prompt_n\":30,\"prompt_per_second\":883.0022075055188,\"prompt_per_token_ms\":1.1325},\"usage\":{\"completion_tokens\":41,\"prompt_tokens\":30,\"total_tokens\":71}}",
      "timestamp": "2025-06-25T14:55:28.701065+03:00",
      "status_code": 200
    }
  ]
}
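
The config block in that response presumably maps to a small struct along these lines (BackendConfig is an assumed name; the json tags match the keys shown above):

// Sketch: per-model backend configuration as surfaced in the response.
type BackendConfig struct {
	ContextSize int64    `json:"context_size"`
	Flags       []string `json:"flags"` // runtime-flags from the compose service options
}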

With the User-Agent captured in records when it's non-empty (the 4th commit):

$ MODEL_RUNNER_PORT=8080 make run # in a separate terminal

$ # no User-Agent for the following curl:
$ curl -A "" http://localhost:8080/engines/v1/chat/completions ...
...

$ curl http://localhost:8080/engines/v1/chat/completions ...
...

$ MODEL_RUNNER_HOST=http://localhost:8080 docker model run ai/smollm2 hi
Hello! How can I help you today?

$ curl -s http://localhost:8080/engines/requests\?model\=ai/smollm2 | jq '.records[] | {request: .request, user_agent: .user_agent}'
{
  "request": "{\n    \"model\": \"ai/smollm2\",\n    \"messages\": [\n      {\"role\": \"user\", \"content\": \"Capital of Romania?\"}\n    ]\n  }",
  "user_agent": null
}
{
  "request": "{\n    \"model\": \"ai/smollm2\",\n    \"messages\": [\n      {\"role\": \"user\", \"content\": \"Capital of Romania?\"}\n    ]\n  }",
  "user_agent": "curl/8.7.1"
}
{
  "request": "{\"model\":\"ai/smollm2\",\"messages\":[{\"role\":\"user\",\"content\":\"hi\"}],\"stream\":true}",
  "user_agent": "docker-model-cli/dev"
}
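
Capturing this is a small addition in the recording path. A sketch, reusing the hypothetical OpenAIRecord from the first sketch: an empty User-Agent header yields an empty string, which the user_agent,omitempty tag then drops from the JSON (buildRecord and its parameters are assumptions).

package recorder

import (
	"net/http"
	"strconv"
	"time"
)

// buildRecord assembles a record inside the recording middleware; model,
// status, and the bodies are assumed to be resolved earlier in the handler.
func buildRecord(req *http.Request, model string, status int, reqBody, respBody string) OpenAIRecord {
	return OpenAIRecord{
		// IDs in the output above look like "<model>_<unix-nanos>".
		ID:         model + "_" + strconv.FormatInt(time.Now().UnixNano(), 10),
		Model:      model,
		Method:     req.Method,
		URL:        req.URL.Path,
		Request:    reqBody,
		Response:   respBody,
		UserAgent:  req.Header.Get("User-Agent"), // "" when the client sent none
		Timestamp:  time.Now(),
		StatusCode: status,
	}
}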

@doringeman requested a review from a team, June 24, 2025 14:36
@xenoscopic (Collaborator) left a comment:

LGTM. Only thing I can think of is maybe we also want to record User-Agent in case the user has multiple components interacting with the model runner (e.g. in an agentic app) and wants to be able to distinguish between them.

@ilopezluna (Contributor) left a comment:

Nice! I added a couple of comments, nothing blocking; they can be addressed in follow-up PRs if needed.

(inline comment on this diff hunk:)

}
}

func (r *OpenAIRecorder) convertStreamingResponse(streamingBody string) string {
Contributor:

Maybe we could use the OpenAI Go SDK to get this conversion.
They have the acc := openai.ChatCompletionAccumulator{} which might be useful:
https://github.com/openai/openai-go/blob/main/examples/chat-completion-accumulating/main.go

@doringeman (Author):

I can do that, but I would still need to split the full body into chunks and pass them to the accumulator one at a time, as it is designed to work with individual streaming chunks 🤔.

Contributor:

I think I used the SDK to convert chunks into a full response in the past, but I don't remember whether I used the accumulator.
In any case, it's not a blocker, so we can revisit eventually.
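
For reference, feeding the accumulator as discussed might look roughly like this: a sketch that assumes the stored streaming body is the raw SSE "data:" line format and that openai-go's chunk type unmarshals cleanly from each payload (untested against the real wire format).

package recorder

import (
	"encoding/json"
	"strings"

	"github.com/openai/openai-go"
)

// accumulateSSE splits a raw SSE body into its data: payloads and feeds
// each chunk to the accumulator, returning the assembled completion.
func accumulateSSE(streamingBody string) (openai.ChatCompletion, error) {
	acc := openai.ChatCompletionAccumulator{}
	for _, line := range strings.Split(streamingBody, "\n") {
		line = strings.TrimSpace(line)
		if !strings.HasPrefix(line, "data:") {
			continue
		}
		payload := strings.TrimSpace(strings.TrimPrefix(line, "data:"))
		if payload == "[DONE]" {
			break
		}
		var chunk openai.ChatCompletionChunk
		if err := json.Unmarshal([]byte(payload), &chunk); err != nil {
			return openai.ChatCompletion{}, err
		}
		acc.AddChunk(chunk)
	}
	return acc.ChatCompletion, nil
}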

@doringeman marked this pull request as ready for review, June 25, 2025 11:57

@doringeman (Author) commented:

> Only thing I can think of is maybe we also want to record User-Agent in case the user has multiple components interacting with the model runner (e.g. in an agentic app) and wants to be able to distinguish between them.

Good point, thanks @xenoscopic!
Done in 3a55005.

@p1-0tr (Member) left a comment:

LGTM

(inline comment on this diff hunk:)

UserAgent string `json:"user_agent,omitempty"`
}

type ModelData struct {
Contributor:

nit: OpenAIRecord

@doringeman (Author):

I agree it's a bit confusing, but the backend configuration is backend-specific, not OpenAI-specific.

@doringeman merged commit a6db262 into docker:main, Jun 25, 2025
1 check passed