Describe the bug
The `@genkit-ai/compat-oai` plugin mis-parses model names whose actual model ID contains slashes (`/`). The `defineCompatOpenAIModel` function is supposed to extract the model name by removing only the first segment (the plugin name), but model IDs that themselves contain slashes end up truncated.
This issue affects users working with:
- Ollama with models from HuggingFace (e.g., `hf.co/namespace/model:tag`)
- Docker Model Runner with namespaced models (e.g., `ai/model:tag`, `mistral/model:tag`)
- ...
To Reproduce
🟢 Works: Simple model name

```ts
const response = await ai.generate({
  model: "openai/qwen2.5:0.5b",
  prompt: 'Who is Jean-Luc Picard?',
  config: {
    temperature: 0.9,
  },
});
```

🔴 Fails: Model name with slashes
```ts
const response = await ai.generate({
  model: "openai/hf.co/Menlo/Jan-nano-gguf:Q4_K_M",
  prompt: 'Who is Jean-Luc Picard?',
  config: {
    temperature: 0.9,
  },
});
```

- Parsed model name: `hf.co/Menlo/Jan-nano-gguf:Q4_K_M`
- But somewhere in the process, only `Menlo/Jan-nano-gguf:Q4_K_M` is sent to the API
- Error: `NotFoundError: 404 model 'Menlo/Jan-nano-gguf:Q4_K_M' not found`
Workaround: Override model in config

```ts
const response = await ai.generate({
  model: "openai/hf.co/Menlo/Jan-nano-gguf:Q4_K_M",
  prompt: 'Who is Jean-Luc Picard?',
  config: {
    temperature: 0.9,
    model: 'hf.co/Menlo/Jan-nano-gguf:Q4_K_M', // Override with full model ID
  },
});
```

Expected behavior
The plugin should correctly handle model names containing multiple slashes and send the complete model ID to the API without requiring the config.model workaround.
Runtime (please complete the following information):
- OS: macOS 26.0.1
- Node version: v23.10.0