Mictern/llm-relay-Xapi

GLM OpenAI Forwarder

A local forwarding service that lets you call your upstream model API with standard OpenAI-style requests, without setting extra_headers on every call.

It forwards /v1/* requests to your upstream endpoint and automatically injects:

  • X-Api-Key: <upstream.x_api_key>
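As a rough illustration of that injection step (a sketch only, not the project's actual code; `X_API_KEY` stands in for the configured `upstream.x_api_key`), the header handling amounts to:

```python
# Hypothetical sketch of the forwarder's header injection.
# The client's original headers (including Authorization: Bearer ...)
# are preserved, and the upstream X-Api-Key is added on every request.
X_API_KEY = "sk-your-upstream-x-api-key"  # placeholder value

def inject_headers(incoming: dict) -> dict:
    """Return the headers to send upstream: the original client
    headers plus the injected X-Api-Key."""
    outgoing = dict(incoming)          # keep Authorization untouched
    outgoing["X-Api-Key"] = X_API_KEY  # injected automatically
    return outgoing
```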

1. Setup

```bash
cd /mnt/d/Github/glm-openai-forwarder
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
cp config.example.toml config.toml
```

Edit config.toml:

```toml
[server]
host = "127.0.0.1"
port = 8010

[upstream]
base_url = "http://7.150.8.95:28082/glm5/v1"
x_api_key = "sk-your-upstream-x-api-key"
timeout_seconds = 300
```

Run:

```bash
./run.sh
```

Health check:

```bash
curl -s http://127.0.0.1:8010/healthz
```

Optional: use a non-default config path via env:

```bash
FORWARDER_CONFIG=/path/to/my-forwarder.toml ./run.sh
```
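Inside the service, resolving such an override typically reduces to an env-var lookup with a default. A minimal sketch (hypothetical helper, not the repo's actual code):

```python
import os

def config_path() -> str:
    """Resolve the config file path: the FORWARDER_CONFIG
    environment variable if set, otherwise config.toml."""
    return os.environ.get("FORWARDER_CONFIG", "config.toml")
```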

2. Standard client call (no extra_headers)

```python
import os
from openai import OpenAI

# optional: disable system proxy so requests reach the local forwarder
os.environ["http_proxy"] = ""
os.environ["https_proxy"] = ""

client = OpenAI(
    api_key="sk-your-openai-style-api-key",  # sent as Authorization: Bearer ...
    base_url="http://127.0.0.1:8010/v1",
)

completion = client.chat.completions.create(
    model="glmmoedsa",
    messages=[{"role": "user", "content": "hi"}],
)

print(completion.choices[0].message.content)
```

3. Notes

  • Keep the /v1 suffix on upstream.base_url so forwarded /v1/* paths resolve correctly upstream.
  • The proxy keeps your original Authorization header and adds X-Api-Key automatically.
  • Streaming requests (stream=True) are supported.
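With the OpenAI client, passing stream=True is all that's needed. Under the hood, streamed responses use OpenAI-style server-sent events; a minimal sketch of consuming the raw `data:` lines (assumed SSE framing, not the proxy's actual code):

```python
import json

def iter_sse_payloads(lines):
    """Yield the JSON payload of each OpenAI-style SSE `data:` line,
    stopping at the `[DONE]` sentinel."""
    for line in lines:
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return
        yield json.loads(payload)
```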

About

llm Xapi distribution
