
Help Wanted: Community Provider Testing & Contributions #1


Claudex routes Claude Code to multiple AI providers through its translation proxy. However, we don't have API keys for every platform and can't test all providers ourselves.

What We Need

We're looking for community members who can help with:

  1. Testing existing providers — Verify that profiles in `config.example.toml` work correctly with real API keys
  2. Adding new providers — Contribute profile configs and translation fixes for untested platforms
  3. OAuth subscription testing — Test `claudex auth login` with different subscription plans (ChatGPT Plus/Pro, Gemini Pro, GitHub Copilot, etc.)
  4. Fixing translation edge cases — Some providers have slight API differences that need special handling in `src/proxy/translation.rs`
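To illustrate the kind of edge-case handling item 4 refers to, here is a minimal, hypothetical sketch of clamping a requested `max_tokens` to a provider-specific ceiling. The function name and shape are illustrative only; the actual logic in `src/proxy/translation.rs` may look quite different:

```rust
/// Hypothetical helper: clamp the client's requested `max_tokens` to a
/// provider-declared ceiling, if the profile defines one. Illustrative
/// sketch only, not the real code in `src/proxy/translation.rs`.
fn clamp_max_tokens(requested: u32, provider_limit: Option<u32>) -> u32 {
    match provider_limit {
        // Provider caps output tokens (e.g. GPT-4o at 16384): take the minimum.
        Some(limit) => requested.min(limit),
        // No declared limit: forward the requested value unchanged.
        None => requested,
    }
}

fn main() {
    // A 32k request against a 16384-token cap gets clamped.
    println!("{}", clamp_max_tokens(32_768, Some(16_384))); // prints 16384
}
```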

Currently Tested (v0.1.2)

| Provider | Route | Status | Notes |
| --- | --- | --- | --- |
| OpenRouter → Claude | `openrouter-claude` | ✅ Working | Correctly identifies as Claude by Anthropic |
| OpenRouter → GPT | `openrouter-gpt` | ✅ Working | Responds via the Claude Code system prompt |
| OpenRouter → Gemini | `openrouter-gemini` | ✅ Working | Correctly identifies as Gemini by Google |
| OpenRouter → DeepSeek | `openrouter-deepseek` | ✅ Working | Correctly identifies as DeepSeek-V3 |
| OpenRouter → Grok | `openrouter-grok` | ✅ Working | Correctly identifies as Grok by xAI |
| OpenRouter → Qwen | `openrouter-qwen` | ✅ Working | Correctly identifies as Qwen3 by Alibaba |
| OpenRouter → Llama | `openrouter-llama` | ✅ Working | Correctly identifies model as llama-4-maverick |
| MiniMax (Claude proxy) | `minimax` | ✅ Working | DirectAnthropic passthrough |
| OpenAI (direct API) | `openai` | ✅ Working | Requires `max_tokens = 16384` in profile config |
| Z.AI / GLM (direct API) | `glm` | ✅ Working | GLM-4.6 via api.z.ai |
| Kimi / Moonshot (direct API) | `kimi` | ✅ Working | Kimi-K2 via api.moonshot.ai |
| Ollama (local) | `ollama` | ✅ Working | Tested with gpt-oss:20b, localhost:11434 |
| OpenAI (Codex CLI OAuth) | `codex-sub` | ✅ Working | OAuth subscription, auto token refresh |
| Anthropic (direct API) | `anthropic` | ❌ Key issue | Organization disabled (not a Claudex bug) |

Needs Testing / Contributions

Ranked by developer adoption (based on real-world usage data).

High Priority — Most requested by developers, large user base:

| Provider | Type | Base URL | Notes |
| --- | --- | --- | --- |
| Groq | OpenAICompatible | `https://api.groq.com/openai/v1` | Ultra-fast inference, free tier available |
| Mistral | OpenAICompatible | `https://api.mistral.ai/v1` | Open-weight models, strong in EU market |
| Google AI Studio / Gemini | OpenAICompatible | `https://generativelanguage.googleapis.com/v1beta/openai` | Direct Gemini API (not via OpenRouter) |
| Azure OpenAI | OpenAICompatible | `https://{resource}.openai.azure.com/openai/deployments/{model}/v1` | Enterprise deployments |
| Perplexity AI | OpenAICompatible | `https://api.perplexity.ai` | Search-augmented generation |
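If you want to try one of these, a profile for Groq might look like the following. This is an untested sketch: the model id is an assumption — check Groq's current model list before using it.

```toml
[[profiles]]
name = "groq"
provider_type = "OpenAICompatible"
base_url = "https://api.groq.com/openai/v1"
api_key = "your-groq-api-key"
default_model = "llama-3.3-70b-versatile"  # assumed model id; verify against Groq's docs
priority = 70
enabled = true
```

Run the quick-test command from "How to Contribute" against the new profile and report the result either way.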

Medium Priority — Growing adoption, active developer communities:

| Provider | Type | Base URL | Notes |
| --- | --- | --- | --- |
| Cohere | OpenAICompatible | `https://api.cohere.com/v2` | Command R+, RAG-focused |
| Cerebras | OpenAICompatible | `https://api.cerebras.ai/v1` | Ultra-fast inference hardware |
| Together AI | OpenAICompatible | `https://api.together.xyz/v1` | Open model hosting, fine-tuning |
| Fireworks AI | OpenAICompatible | `https://api.fireworks.ai/inference/v1` | Fast open model inference |
| GitHub Models | OpenAICompatible | `https://models.inference.ai.azure.com` | Free tier with GitHub account |

Low Priority — Niche or region-specific:

| Provider | Type | Base URL | Notes |
| --- | --- | --- | --- |
| Amazon Bedrock | OpenAICompatible | Requires SDK adapter | Enterprise AWS integration |
| Cloudflare Workers AI | OpenAICompatible | `https://api.cloudflare.com/client/v4/accounts/{id}/ai/v1` | Edge inference |
| Nvidia NIM | OpenAICompatible | `https://integrate.api.nvidia.com/v1` | GPU-optimized inference |
| Yi / 零一万物 | OpenAICompatible | `https://api.lingyiwanwu.com/v1` | Yi-Lightning |
| Baichuan / 百川 | OpenAICompatible | `https://api.baichuan-ai.com/v1` | Chinese LLM |
| Volcengine / 豆包 | OpenAICompatible | `https://ark.cn-beijing.volces.com/api/v3` | ByteDance Doubao |
| SiliconFlow | OpenAICompatible | `https://api.siliconflow.cn/v1` | Chinese model aggregator |

Known Issues & Tips

  • OpenAI direct API: GPT-4o only supports `max_tokens = 16384`. Add `max_tokens = 16384` to the profile config (v0.1.2+).
  • `--no-chrome`: Claudex v0.1.2+ automatically injects `--no-chrome` to avoid Chrome integration conflicts.
  • OAuth token refresh: OpenAI (Codex CLI) tokens auto-refresh when expired (v0.1.2+).
  • Ollama: Use `api_key = "ollama"` (any non-empty value works; Ollama doesn't validate keys).
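Putting the Ollama tip into a concrete profile might look like this. It is a sketch: the `/v1` path assumes Ollama's OpenAI-compatible endpoint, the model id is the one tested above, and `priority` reuses the template's example value.

```toml
[[profiles]]
name = "ollama"
provider_type = "OpenAICompatible"
base_url = "http://localhost:11434/v1"  # assumes Ollama's OpenAI-compatible endpoint
api_key = "ollama"                      # any non-empty value; Ollama does not validate keys
default_model = "gpt-oss:20b"           # model tested in the table above
priority = 70
enabled = true
```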

How to Contribute

  1. Quick test: Add a profile to your config, then run:
     `claudex run <profile> -p "hello" --dangerously-skip-permissions --no-session-persistence --no-chrome --disable-slash-commands --tools "" --output-format text`
  2. Config PR: Add a working profile to `config.example.toml`
  3. Translation fix: Fix API format differences in `src/proxy/translation.rs`
  4. OAuth flow: Test `claudex auth login <provider>` and report results

Profile Config Template

```toml
[[profiles]]
name = "provider-name"
provider_type = "OpenAICompatible"
base_url = "https://api.example.com/v1"
api_key = "your-api-key"
default_model = "model-name"
# max_tokens = 16384  # uncomment if the provider has a lower limit
priority = 70
enabled = true

[profiles.models]
haiku = "small-model"
sonnet = "default-model"
opus = "large-model"

[profiles.custom_headers]
[profiles.extra_env]
```

See CONTRIBUTING.md for development setup.


