When configuring codebase indexing to use an OpenAI-compatible embedding provider like Featherless AI, the API key is never saved, the model list is hardcoded to only OpenAI's own models (ignoring the provider's actual models), and the configuration silently resets or fails to persist.
Summary
Roo Code's codebase indexing feature fails to properly configure and persist settings when using an OpenAI-compatible embedding provider (Featherless AI). Three distinct bugs have been identified:
- API key is not saved/persisted for the code indexing embedder
- OpenAI-compatible model list is hardcoded and ignores models actually available from the provider
- Configuration silently resets when attempting to set up a custom embedding provider
Bug 1: API Key Not Saved for Codebase Indexing
The codebase indexing configuration does not store an API key field. When the user configures an OpenAI-compatible embedding provider, there is no mechanism to save the API key, causing authentication failures when the indexer attempts to call the embedding API.
The main API configs section does store an API key:
```json
"apiConfigs": {
  "default": {
    "openAiApiKey": "<key>",
    ...
  }
}
```
But the codebase indexing config section has no API key field at all:
```json
"codebaseIndexConfig": {
  "codebaseIndexEnabled": true,
  "codebaseIndexQdrantUrl": "http://localhost:6333",
  "codebaseIndexEmbedderProvider": "openai-compatible",
  "codebaseIndexEmbedderBaseUrl": "https://api.featherless.ai/v1",
  "codebaseIndexEmbedderModelId": "Qwen/Qwen3-Embedding-8B",
  "codebaseIndexEmbedderModelDimension": 4096,
  ...
}
```
There should be a codebaseIndexEmbedderApiKey (or similar) field that stores the API key for the embedding provider, separate from the main chat API key. Currently no such field exists, so the indexer either calls the embedding API without authentication (and fails) or falls back to some default behavior silently.
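A minimal sketch of what the fixed config could look like (the `codebaseIndexEmbedderApiKey` field name is a suggestion, not an existing setting; in practice the key may belong in secret storage rather than the exported settings JSON):

```json
"codebaseIndexConfig": {
  "codebaseIndexEnabled": true,
  "codebaseIndexEmbedderProvider": "openai-compatible",
  "codebaseIndexEmbedderBaseUrl": "https://api.featherless.ai/v1",
  "codebaseIndexEmbedderModelId": "Qwen/Qwen3-Embedding-8B",
  "codebaseIndexEmbedderModelDimension": 4096,
  "codebaseIndexEmbedderApiKey": "<embedding-provider-key>"
}
```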
Bug 2: Hardcoded OpenAI-Compatible Model List
The codebaseIndexModels section contains a hardcoded list of models for the openai-compatible provider that does not reflect what the actual provider (Featherless AI) offers:
```json
"codebaseIndexModels": {
  "openai-compatible": {
    "text-embedding-3-small": { "dimension": 1536 },
    "text-embedding-3-large": { "dimension": 3072 },
    "text-embedding-ada-002": { "dimension": 1536 },
    "nomic-embed-code": { "dimension": 3584 }
  }
}
```
The user's actual model (Qwen/Qwen3-Embedding-8B with dimension 4096) is not in this list. Compare this to the openrouter provider which does include Qwen models:
```json
"openrouter": {
  ...
  "qwen/qwen3-embedding-0.6b": { "dimension": 1024 },
  "qwen/qwen3-embedding-4b": { "dimension": 2560 },
  "qwen/qwen3-embedding-8b": { "dimension": 4096 }
}
```
The openai-compatible provider should either dynamically fetch available models from the provider's /models endpoint, or allow the user to manually specify any model ID and dimension without requiring it to be in a hardcoded list. Instead, the hardcoded list only contains OpenAI's own embedding models plus nomic-embed-code, and any model outside this list is not recognized even though the user has manually entered it.
Bug 3: Configuration Silently Resets / Duplicate Fields
When the user attempts to configure the codebase indexing with a custom OpenAI-compatible provider, the settings appear to reset or fail to persist properly:
- The `codebaseIndexOpenAiCompatibleBaseUrl` value has a leading space (" https://api.featherless.ai/v1"); the field was not trimmed/sanitized on save
- There are two separate base URL fields for what should be the same setting:
  - `codebaseIndexEmbedderBaseUrl` (correct: "https://api.featherless.ai/v1")
  - `codebaseIndexOpenAiCompatibleBaseUrl` (leading space: " https://api.featherless.ai/v1")
This duplication suggests the settings UI and the backend code are out of sync about which field to use, and input sanitization (trimming whitespace) is not being applied.
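The sanitization half of this bug is simple to illustrate (a hypothetical helper, not Roo Code's code): trim the URL on save, reject empty values, and write the result to a single canonical field so the two copies cannot drift apart.

```python
def sanitize_base_url(raw: str) -> str:
    """Trim surrounding whitespace and a trailing slash before persisting the base URL."""
    url = raw.strip().rstrip("/")
    if not url:
        raise ValueError("base URL must not be empty")
    return url

# The exported value with the leading space normalizes to the correct one:
print(sanitize_base_url(" https://api.featherless.ai/v1"))  # https://api.featherless.ai/v1
```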
Context (who is affected and when)
Any user trying to use a third-party OpenAI-compatible embedding provider (like Featherless AI, LocalAI, Together AI, etc.) with Roo Code's codebase indexing feature. The only workaround is to use a provider that has a hardcoded entry in the model list (OpenAI, Ollama, Gemini, Mistral, Vercel AI Gateway, OpenRouter, or Bedrock). This effectively locks users out of using their own embedding infrastructure.
Reproduction steps
- Open Roo Code settings
- Navigate to the codebase indexing configuration section
- Set the embedder provider to "OpenAI Compatible"
- Enter a custom base URL (e.g., https://api.featherless.ai/v1)
- Enter a custom model ID (e.g., Qwen/Qwen3-Embedding-8B)
- Enter the model dimension (e.g., 4096)
- Note that there is no field to enter an API key for the embedding provider
- Save the settings
- Export settings to JSON and observe:
  - No API key field exists in `codebaseIndexConfig`
  - The model Qwen/Qwen3-Embedding-8B is not in `codebaseIndexModels["openai-compatible"]`
  - Two conflicting base URL fields exist, one with a leading space
- Attempt to use codebase indexing — it fails because the API cannot authenticate
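For reference, the request the indexer would need to make against an OpenAI-compatible endpoint looks roughly like this (a sketch only, not sent here; the Authorization header is exactly what the missing API key field would have to populate):

```python
import json

base_url = "https://api.featherless.ai/v1"
api_key = "<embedding-provider-key>"  # nowhere to enter this in the current settings UI

# Standard OpenAI-style embeddings request; without the Authorization
# header the provider rejects it as unauthenticated.
request = {
    "url": f"{base_url}/embeddings",
    "headers": {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    "body": json.dumps({
        "model": "Qwen/Qwen3-Embedding-8B",
        "input": ["def hello():\n    return 'world'"],
    }),
}
print(request["url"])  # https://api.featherless.ai/v1/embeddings
```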
Expected result
- An API key field is available and saved for the codebase indexing embedder
- The openai-compatible model list either dynamically fetches models from the provider or allows free-form model ID entry
- A single, unambiguous base URL field exists with proper input sanitization
- Settings persist correctly across saves and restarts
Actual result
- No API key field exists for the codebase indexing embedder, so authentication is impossible
- The openai-compatible model list is hardcoded to only OpenAI's models; custom models are ignored
- Two conflicting base URL fields exist (`codebaseIndexEmbedderBaseUrl` and `codebaseIndexOpenAiCompatibleBaseUrl`), one with a leading space from lack of input sanitization
- Configuration silently fails to work, with no error message shown to the user
Variations tried (optional)
- Tried using the same API key as the main chat provider — no field exists to enter it
- Tried entering the model ID manually; it is saved in `codebaseIndexEmbedderModelId` but not recognized because it is missing from the hardcoded `codebaseIndexModels["openai-compatible"]` list
- Tried different model IDs — same result, only hardcoded models are recognized
App Version
v3.53.0
API Provider (optional)
Featherless AI
Model Used (optional)
Qwen/Qwen3-Embedding-8B (for embeddings), deepseek-ai/DeepSeek-V4-Pro (for chat)
Roo Code Task Links (optional)
No response
Relevant logs or errors (optional)