Include x-coding-assistant=open-interpreter header in litellm calls #1586
Describe the changes you have made:
Proxies that inspect traffic between the development environment and an
LLM might want to know whether it is OI or another tool making the call,
so that they can inspect and/or modify the payload accordingly.
The most common way to solve this would be to add a `user-agent`
header. However, litellm, which OI uses, calls into the OpenAI libraries
directly when making the request, and it seems the only way to customize
headers there is to set a custom `http_client`. That seemed like something
that might have unforeseen consequences (timeouts? retries?). For other
LLMs, litellm appears to use its own httpx wrapper, which would presumably
be easier to customize, but I have not tried.
To make things easier, let's just add an OI-specific header. I put the
string OI followed by the version there, but the value - and indeed the key -
can be anything, as long as proxies are able to tell OI calls apart.
Reference any relevant issues (e.g. "Fixes #000"):
N/A
Pre-Submission Checklist (optional but appreciated):
- docs/CONTRIBUTING.md
- docs/ROADMAP.md
OS Tests (optional but appreciated):