Description
Problem (one or two sentences)
When using the Anthropic API provider with a custom URL, I want to be able to set a custom model ID as well, instead of being locked to the current model list, which only contains claude-xxx models.
This would let me use an endpoint like https://api.z.ai/api/anthropic with models such as glm-4.6-cc-max or glm-4.5v, which are noticeably faster than the OpenAI-compatible endpoint at https://api.z.ai/api/coding/paas/v4.
It would also let me create a profile to quickly switch to a vision-capable model, instead of relying on an MCP server for that.
The benchmark below runs the same underlying model (glm-4.6) against both endpoints:
ruby .\llm_benchmark.rb --provider z-ai --model glm-new
=== LLM Benchmark ===
Provider: z-ai (https://api.z.ai/api/coding/paas/v4)
Model: glm-new (glm-4.6)
Starting benchmark...
Start time: 2025-10-03 14:27:56.436
End time: 2025-10-03 14:28:08.199
=== Results ===
Duration: 11.763 seconds
Input tokens: 45
Output tokens: 1000
Total tokens: 1045
Tokens per second: 88.84
PS C:\Users\cobra.claude> ruby .\llm_benchmark.rb --provider z-ai-anthropic --model glm-new
=== LLM Benchmark ===
Provider: z-ai-anthropic (https://api.z.ai/api/anthropic/v1)
Model: glm-new (glm-4.6)
Starting benchmark...
Start time: 2025-10-03 14:28:25.890
End time: 2025-10-03 14:28:34.620
=== Results ===
Duration: 8.729 seconds
Input tokens: 43
Output tokens: 1000
Total tokens: 1043
Tokens per second: 119.48
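For reference, here is a minimal sketch of how such a throughput measurement can be done against the Anthropic-compatible endpoint. The actual llm_benchmark.rb script was not shared, so this only approximates it: the /v1/messages path, headers, and usage fields follow the public Anthropic Messages API and are assumed to behave the same way on z.ai, and the ZAI_API_KEY environment variable name is made up for the example.

```ruby
#!/usr/bin/env ruby
# Sketch of a throughput benchmark against the Anthropic-compatible z.ai endpoint.
# Assumes the endpoint accepts standard Anthropic Messages API requests and
# returns the usual usage.input_tokens / usage.output_tokens fields.
require "net/http"
require "json"
require "uri"

BASE_URL = "https://api.z.ai/api/anthropic/v1"   # Anthropic-compatible base URL
MODEL    = ARGV[0] || "glm-4.6"                  # custom (non-claude) model ID
API_KEY  = ENV.fetch("ZAI_API_KEY")              # assumed env var for the key

uri = URI("#{BASE_URL}/messages")
req = Net::HTTP::Post.new(uri)
req["x-api-key"]         = API_KEY
req["anthropic-version"] = "2023-06-01"
req["content-type"]      = "application/json"
req.body = {
  model: MODEL,
  max_tokens: 1000,
  messages: [{ role: "user", content: "Write a short essay about benchmarking." }]
}.to_json

start    = Time.now
res      = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(req) }
duration = Time.now - start

usage      = JSON.parse(res.body).fetch("usage", {})
out_tokens = usage["output_tokens"].to_i

puts "Model:             #{MODEL}"
puts "Duration:          #{duration.round(3)} s"
puts "Input tokens:      #{usage["input_tokens"]}"
puts "Output tokens:     #{out_tokens}"
puts "Tokens per second: #{(out_tokens / duration).round(2)}" if duration > 0
```

Tokens per second here is simply output tokens divided by wall-clock duration, which is how the two runs above are compared.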

Context (who is affected and when)
Users of the z.ai coding plan.
Desired behavior (conceptual, not technical)
Be able to enter any model ID instead of choosing only from the initial list.
Constraints / preferences (optional)
No response
Request checklist
- I've searched existing Issues and Discussions for duplicates
- This describes a specific problem with clear context and impact
Roo Code Task Links (optional)
No response
Acceptance criteria (optional)
No response
Proposed approach (optional)
No response
Trade-offs / risks (optional)
No response