feat: add Novita AI as a native LLM provider#4939
Alex-wuhu wants to merge 1 commit into crewAIInc:main from
Conversation
Add native support for Novita AI's OpenAI-compatible API (https://api.novita.ai/openai) as a first-class LLM provider.

- New `NovitaCompletion` class extending `OpenAICompletion`
- API key via `NOVITA_API_KEY` env var or constructor param
- Supported models: `moonshotai/kimi-k2.5` (default), `zai-org/glm-5`, `minimax/minimax-m2.5`
- Auto-detection of vendor/model IDs (e.g. `"moonshotai/kimi-k2.5"`) routes to Novita without requiring the `"novita/"` prefix

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Cursor Bugbot has reviewed your changes and found 1 potential issue.
```python
if self.client_params:
    client_params.update(self.client_params)

return client_params
```
Novita models get wrong context window size
Medium Severity
NovitaCompletion inherits get_context_window_size() from OpenAICompletion, which only checks OpenAI-specific model prefixes (e.g. gpt-4, o1). Since Novita models like moonshotai/kimi-k2.5 don't match any OpenAI prefix, the method returns the default ~6,963 tokens instead of the intended ~111,411 tokens (131072 × 0.85). The entries added to LLM_CONTEXT_WINDOW_SIZES are dead code here — that dict is only consumed by the LLM (litellm) class's override, not by native provider classes. Every other native provider (Anthropic, Gemini, Bedrock) overrides get_context_window_size() with its own model-specific windows; NovitaCompletion needs the same.
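A minimal sketch of the override Bugbot is asking for. The 131072-token raw window and 0.85 usable fraction come from the comment above; only `kimi-k2.5`'s window is stated, so the other two entries are placeholder assumptions:

```python
# Assumed raw context windows; only kimi-k2.5's 131072 is stated above,
# the other two entries are placeholders.
NOVITA_CONTEXT_WINDOWS = {
    "moonshotai/kimi-k2.5": 131072,
    "zai-org/glm-5": 131072,
    "minimax/minimax-m2.5": 131072,
}
DEFAULT_CONTEXT_WINDOW = 8192  # 8192 * 0.85 ~= the 6,963-token default
USABLE_FRACTION = 0.85         # fraction of the window actually budgeted

def get_context_window_size(model: str) -> int:
    """Model-specific window, scaled to the usable fraction."""
    raw = NOVITA_CONTEXT_WINDOWS.get(model, DEFAULT_CONTEXT_WINDOW)
    return int(raw * USABLE_FRACTION)
```

Keeping the per-provider dict inside the override mirrors what the Anthropic, Gemini, and Bedrock native providers already do, rather than relying on the litellm-only `LLM_CONTEXT_WINDOW_SIZES` dict.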
This PR is stale because it has been open for 45 days with no activity.
Summary
- Adds a `NovitaCompletion` class that extends `OpenAICompletion` (backed by https://api.novita.ai/openai) — minimal code, inherits all chat completions, streaming, tool-calling, and structured output support
- Registers `novita` in the provider routing (`SUPPORTED_NATIVE_PROVIDERS`, `provider_mapping`, `_get_native_provider`, etc.)
- Supported models: `moonshotai/kimi-k2.5` (default), `zai-org/glm-5`, `minimax/minimax-m2.5`

Usage
API key is read from the `NOVITA_API_KEY` environment variable or passed as the `api_key` constructor parameter.

Design decisions
- Forces `api="completions"` — the Responses API is OpenAI-specific and not supported by Novita
- Vendor-prefixed model IDs (e.g. `moonshotai/kimi-k2.5`) are auto-detected via a `NOVITA_MODELS` lookup before the generic `/`-split logic kicks in, so users don't need the `novita/` prefix

Test plan
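The lookup-before-split ordering from the design decisions can be sketched as follows; the function name, fallback behavior, and exact constants are assumptions, not the PR's code:

```python
# Assumed constants; the real NOVITA_MODELS lives in crewAI's constants module.
NOVITA_MODELS = {
    "moonshotai/kimi-k2.5",
    "zai-org/glm-5",
    "minimax/minimax-m2.5",
}
SUPPORTED_NATIVE_PROVIDERS = {"openai", "anthropic", "gemini", "bedrock", "novita"}

def infer_provider(model: str) -> tuple[str, str]:
    """Return (provider, model_id) for a model string."""
    # 1. Explicit "novita/" prefix always wins.
    if model.startswith("novita/"):
        return "novita", model[len("novita/"):]
    # 2. Vendor-prefixed Novita IDs are matched BEFORE the generic "/"-split,
    #    so "moonshotai/kimi-k2.5" is not mis-read as provider "moonshotai".
    if model in NOVITA_MODELS:
        return "novita", model
    # 3. Generic provider/model split for other native providers.
    if "/" in model:
        provider, _, rest = model.partition("/")
        if provider in SUPPORTED_NATIVE_PROVIDERS:
            return provider, rest
    # 4. Fallback (assumed): treat bare model names as OpenAI.
    return "openai", model
```

Checking the exact-match set before splitting on `/` is the key design choice: it keeps the generic routing intact while still letting multi-slash Novita IDs resolve without a prefix.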
- `LLM(model="novita/moonshotai/kimi-k2.5")` routes to `NovitaCompletion`
- `LLM(model="moonshotai/kimi-k2.5")` auto-detects the Novita provider
- `NOVITA_API_KEY` env var is picked up correctly

Note
Medium Risk
Introduces a new native provider and modifies LLM provider inference/routing, which could change how some `model` strings are resolved and instantiated at runtime. Low algorithmic complexity, but affects core LLM initialization paths and requires correct API key/base URL configuration.

Overview
Adds Novita AI as a native LLM provider backed by Novita’s OpenAI-compatible endpoint.
Updates `LLM` provider routing to recognize `novita` (including vendor-prefixed model IDs like `moonshotai/kimi-k2.5`), adds `NOVITA_MODELS` constants, and registers context window sizes for the supported Novita models.

Introduces `NovitaCompletion`, a thin wrapper over `OpenAICompletion` that defaults the base URL to https://api.novita.ai/openai, requires `NOVITA_API_KEY`, and forces use of the OpenAI Completions API.

Written by Cursor Bugbot for commit 2bef65c. This will update automatically on new commits.
NovitaCompletion, a thin wrapper overOpenAICompletionthat defaults the base URL tohttps://api.novita.ai/openai, requiresNOVITA_API_KEY, and forces use of the OpenAI Completions API.Written by Cursor Bugbot for commit 2bef65c. This will update automatically on new commits. Configure here.