Description
What kind of feedback?
Suggestion for new custom mode
Item Type (if applicable)
Custom Mode
Item Name (if applicable)
"CometAPI"
Description
What specific problem does this solve?
Adds CometAPI as a model provider, giving users access to a wide range of models, including GPT-5, Claude-4, Gemini-2.5, Grok-4, DeepSeek-v3.1, and the Qwen3 series, through a single OpenAI-compatible API with competitive pricing.
Additional context (optional)
CometAPI (https://www.cometapi.com/) offers:
- Latest Models: GPT-5 series, Claude-4 series, Gemini-2.5, Grok-4, DeepSeek-v3.1, and more
- OpenAI Compatible API: Seamless integration with existing OpenAI-compatible endpoints
- Competitive Pricing: Cost-effective access to premium models
- Reliable Service: Stable API with good uptime and performance
- Comprehensive Documentation: Well-documented API at https://api.cometapi.com/doc
This would give Roo Code users access to some of the most advanced AI models currently available, including the newest GPT-5 series and Claude-4 models that aren't widely available elsewhere.
Roo Code Task Links (Optional)
No response
Request checklist
- I've searched existing Issues and Discussions for duplicates
- This describes a specific problem with clear impact and context
Interested in implementing this?
Yes, I'd like to help implement this feature
Implementation requirements
I understand this needs approval before implementation begins
How should this be solved? (REQUIRED if contributing, optional otherwise)
Implement CometAPI as a new provider by following the same pattern used for existing providers like DeepInfra. The implementation should:
- Extend BaseOpenAiCompatibleProvider for consistency with existing patterns
- Use the OpenAI-compatible base URL https://api.cometapi.com/v1/
- Add API key support through environment variables and settings
- Fetch models dynamically from CometAPI's models endpoint: https://api.cometapi.com/v1/models
- Route completions through CometAPI's OpenAI-compatible chat completions endpoint
- Include fallback models for when the API is unavailable
Key implementation details:
- Base URL: https://api.cometapi.com/v1/
- Models endpoint: https://api.cometapi.com/v1/models
- Chat completions: https://api.cometapi.com/v1/chat/completions
- Authentication: Bearer token via the `Authorization: Bearer sk-xxx` header
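Putting the endpoints and fallback behavior above together, the model-fetching flow could look roughly like this. This is a sketch only: it assumes Node 18+'s global `fetch`, and the names (`COMETAPI_FALLBACK_IDS`, `fetchCometApiModels`, etc.) are illustrative, not existing Roo Code identifiers.

```typescript
// Sketch: fetch CometAPI's model list, falling back to a static set when
// the endpoint is unreachable. Names here are illustrative.
const COMETAPI_BASE_URL = "https://api.cometapi.com/v1"

// Minimal fallback set used when the models endpoint cannot be reached.
const COMETAPI_FALLBACK_IDS = ["gpt-5-mini", "claude-sonnet-4-20250514"]

function buildCometApiHeaders(apiKey: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`,
    Accept: "application/json",
  }
}

async function fetchCometApiModels(apiKey: string): Promise<string[]> {
  try {
    const res = await fetch(`${COMETAPI_BASE_URL}/models`, {
      headers: buildCometApiHeaders(apiKey),
    })
    if (!res.ok) throw new Error(`HTTP ${res.status}`)
    // OpenAI-compatible list shape: { data: [{ id: "..." }, ...] }
    const body = (await res.json()) as { data: Array<{ id: string }> }
    return body.data.map((m) => m.id)
  } catch {
    // Network failure or bad response: fall back to the static list.
    return COMETAPI_FALLBACK_IDS
  }
}
```

In this sketch the fallback is returned on any failure, which matches the "fallback models for when the API is unavailable" requirement above.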
How will we know it works? (Acceptance Criteria - REQUIRED if contributing, optional otherwise)
Given CometAPI is selected and an API key is set,
When I open the model picker,
Then I see CometAPI's models loaded dynamically (including GPT-5, Claude-4, Gemini-2.5, etc.),
And I can select one and run a chat without errors,
But I don't see the picker if no API key is set.
Specific acceptance criteria:
- CometAPI appears as a selectable provider in the settings
- API key configuration is available in the settings UI
- Models are fetched dynamically from CometAPI's API when a valid key is provided
- Chat completions work correctly with selected CometAPI models
- Invalid API keys and network errors are handled gracefully
- Fallback models are available when API is unreachable
- Provider is hidden/disabled when no API key is configured
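The last criterion reduces to a small visibility check. A tiny sketch (the settings field name `cometApiKey` is an assumption for illustration):

```typescript
// The provider (and its model picker) is only visible once an API key is
// configured. `cometApiKey` is a hypothetical settings field name.
function isCometApiVisible(settings: { cometApiKey?: string }): boolean {
  return Boolean(settings.cometApiKey && settings.cometApiKey.trim().length > 0)
}
```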
Technical considerations (REQUIRED if contributing, optional otherwise)
- Use existing provider interface to maintain consistency with other providers
- Leverage fetcher interface and model cache to avoid performance issues
- Map chat messages to CometAPI's OpenAI-compatible endpoints format
- Handle rate limiting and error responses appropriately
- Support model parameters like temperature, max_tokens, etc.
- Implement proper typing for CometAPI-specific model definitions
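The message-mapping and parameter considerations above could be sketched as a pure translation function. The `InternalMessage` shape is an assumption for illustration, not Roo Code's actual message type:

```typescript
// Hedged sketch: translate an internal chat history into the
// OpenAI-compatible request body CometAPI expects.
type InternalMessage = { author: "system" | "user" | "assistant"; text: string }

interface ChatCompletionRequest {
  model: string
  messages: Array<{ role: string; content: string }>
  temperature?: number
  max_tokens?: number
}

function toCompletionRequest(
  model: string,
  history: InternalMessage[],
  opts: { temperature?: number; maxTokens?: number } = {},
): ChatCompletionRequest {
  return {
    model,
    // Internal roles map 1:1 onto OpenAI-compatible roles here.
    messages: history.map((m) => ({ role: m.author, content: m.text })),
    temperature: opts.temperature,
    max_tokens: opts.maxTokens,
  }
}
```

Keeping this mapping pure makes it easy to unit-test independently of any network calls.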
Recommended models to include by default:
```typescript
const COMETAPI_MODELS = [
  // GPT series
  "gpt-5-chat-latest",
  "chatgpt-4o-latest",
  "gpt-5-mini",
  "gpt-5-nano",
  "gpt-4.1-mini",
  "gpt-4o-mini",
  // Claude series
  "claude-opus-4-1-20250805",
  "claude-sonnet-4-20250514",
  "claude-3-7-sonnet-latest",
  "claude-3-5-haiku-latest",
  // Gemini series
  "gemini-2.5-pro",
  "gemini-2.5-flash",
  "gemini-2.0-flash",
  // DeepSeek series
  "deepseek-v3.1",
  "deepseek-r1-0528",
  "deepseek-reasoner",
  // Other popular models
  "grok-4-0709",
  "qwen3-30b-a3b",
  "qwen3-coder-plus-2025-07-22",
];
```
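Following the typing consideration above, each default model would likely carry metadata alongside its id, mirroring how other providers define defaults. A sketch with placeholder values (the context windows and capabilities below are illustrative, not CometAPI's published limits):

```typescript
// Placeholder metadata: contextWindow, maxTokens, and supportsImages values
// are illustrative, not CometAPI's published limits.
interface CometApiModelInfo {
  contextWindow: number
  maxTokens: number
  supportsImages: boolean
}

const cometApiDefaultModelInfo: Record<string, CometApiModelInfo> = {
  "gpt-5-mini": { contextWindow: 128_000, maxTokens: 16_384, supportsImages: true },
  "claude-sonnet-4-20250514": { contextWindow: 200_000, maxTokens: 8_192, supportsImages: true },
  "deepseek-v3.1": { contextWindow: 64_000, maxTokens: 8_192, supportsImages: false },
}
```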
No major blockers identified: CometAPI follows the OpenAI-compatible API standard, making integration straightforward.
Trade-offs and risks (REQUIRED if contributing, optional otherwise)
Trade-offs:
- Additional dependency on another external API service
- Need to maintain API key management for another provider
- Potential cost implications for users depending on CometAPI pricing
Risks:
- Low risk: CometAPI uses standard OpenAI-compatible API format
- Mitigation: Implement proper error handling and fallback mechanisms
- API availability: Monitor CometAPI service reliability (current status appears stable)
- Rate limiting: Implement appropriate retry logic and respect API limits
Benefits outweigh risks:
- Access to cutting-edge models (GPT-5, Claude-4) not available elsewhere
- Competitive pricing benefits for users
- Diversifies model provider options for better resilience
Additional Information
CometAPI Resources:
- Website: https://www.cometapi.com/
- Documentation: https://api.cometapi.com/doc
- Pricing: https://api.cometapi.com/pricing
- API Key: https://api.cometapi.com/console/token
- Base URL: https://api.cometapi.com/v1/
Example API Call:
```shell
curl -X GET "https://api.cometapi.com/v1/models" \
  -H "Authorization: Bearer sk-YOUR_API_KEY" \
  -H "Accept: application/json"
```
This integration would significantly expand Roo Code's model capabilities and give users access to some of the most advanced AI models available today.
Additional Details (optional)
No response
Checklist
- I've searched existing issues for duplicates