🐛 Bug description
Bug
MineContext fails to validate a custom local embedding server even when the server correctly implements the OpenAI-compatible /v1/embeddings API.
What happens
Custom VLM validation passes
Embedding validation fails and is reported as a timeout in the UI
The backend appears to call:
client.multimodal_embeddings(...)
This crashes with:
AttributeError: 'OpenAI' object has no attribute 'multimodal_embeddings'
Why this is a problem
multimodal_embeddings() is not part of the standard OpenAI Python client. MineContext appears to use a Doubao-specific method even in the Custom provider flow.
As a result, OpenAI-compatible local embedding servers cannot be used, even if they are working correctly.
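For reference, the standard OpenAI Python client exposes embeddings only through `client.embeddings.create(...)`. The stub below (not the real SDK and not MineContext's code; it only mirrors the client's attribute surface) shows why the attribute lookup fails:

```python
# Stub mirroring the attribute surface of the standard OpenAI Python client.
# NOT the real SDK: it exists only to show which attributes a standard client
# has, so the AttributeError from the traceback above is reproducible offline.

class _Embeddings:
    def create(self, model: str, input: list) -> dict:
        # The real client returns a typed response object; a dict stands in here.
        inputs = input if isinstance(input, list) else [input]
        return {
            "object": "list",
            "data": [{"object": "embedding", "index": i, "embedding": [0.0]}
                     for i, _ in enumerate(inputs)],
        }

class OpenAI:
    """Stand-in client: embeddings live under .embeddings, nothing else."""
    def __init__(self, base_url: str = "", api_key: str = ""):
        self.embeddings = _Embeddings()

client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-local")

client.embeddings.create(model="some-model", input=["hello"])  # standard path, works
# client.multimodal_embeddings(...)  -> AttributeError: 'OpenAI' object has no
# attribute 'multimodal_embeddings' (the crash reported above)
```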
🧑‍💻 Steps to reproduce
Repro
Start a local server with OpenAI-compatible /v1/embeddings
In MineContext, choose Custom
Set Base URL to http://localhost:8000
Enter API key + model name
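The local server in step 1 looks roughly like the sketch below. It uses only the Python standard library instead of FastAPI/MLX so it is self-contained; the route and response shape follow the OpenAI embeddings format, while the 8-dimensional zero vectors and fallback model name are placeholders, not the real Qwen3-VL-Embedding backend:

```python
# Minimal OpenAI-compatible /v1/embeddings server (stdlib only).
# Placeholder vectors; the real repro server ran MLX + Qwen3-VL-Embedding.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class EmbeddingsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/v1/embeddings":
            self.send_error(404)
            return
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        # "input" may be a single string or a list of strings.
        inputs = body["input"] if isinstance(body["input"], list) else [body["input"]]
        payload = {
            "object": "list",
            "model": body.get("model", "placeholder-model"),
            "data": [
                {"object": "embedding", "index": i, "embedding": [0.0] * 8}
                for i, _ in enumerate(inputs)
            ],
            "usage": {"prompt_tokens": 0, "total_tokens": 0},
        }
        out = json.dumps(payload).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(out)))
        self.end_headers()
        self.wfile.write(out)

    def log_message(self, *args):
        pass  # keep the demo quiet

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), EmbeddingsHandler).serve_forever()
```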
👾 Expected result
MineContext should validate the custom endpoint using the standard embeddings API.
Instead, validation crashes internally and the UI reports a timeout/failure.
🚑 Any additional information
Suggested fix
Do not call Doubao-specific SDK methods in the Custom provider path
Use the standard embeddings client/API for custom endpoints
Catch and surface backend exceptions instead of failing silently
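The three fix points above can be sketched as one validation helper. This is a hedged illustration, not MineContext's actual code: the function name, exception type, and stdlib `urllib` transport are all made up for the example. It hits the standard `/v1/embeddings` route and re-raises any failure with a readable reason instead of letting the UI show a generic timeout:

```python
# Sketch of validating a Custom embedding endpoint via the standard
# /v1/embeddings API, surfacing failures instead of swallowing them.
# Names here (validate_embedding_endpoint, EmbeddingValidationError)
# are illustrative, not part of MineContext.
import json
import urllib.error
import urllib.request

class EmbeddingValidationError(Exception):
    """Raised with a human-readable reason when validation fails."""

def validate_embedding_endpoint(base_url: str, api_key: str, model: str,
                                timeout: float = 10.0) -> bool:
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/v1/embeddings",
        data=json.dumps({"model": model, "input": ["ping"]}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            payload = json.loads(resp.read())
    except (urllib.error.URLError, json.JSONDecodeError) as exc:
        # Surface the real cause rather than reporting a bare timeout.
        raise EmbeddingValidationError(f"embeddings check failed: {exc}") from exc
    if not payload.get("data") or "embedding" not in payload["data"][0]:
        raise EmbeddingValidationError("response is not OpenAI-embeddings shaped")
    return True
```

Because the helper only depends on the standard embeddings contract, it works identically against OpenAI, Doubao, or any local OpenAI-compatible server.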
🛠️ MineContext Version
Latest
💻 Platform Details
App: MineContext desktop app
OS: macOS
Hardware: Apple Silicon Mac
Local model server: FastAPI at http://localhost:8000
Embedding backend: MLX + Qwen3-VL-Embedding
API format: OpenAI-compatible /v1/embeddings
VLM stub endpoint: OpenAI-compatible /v1/chat/completions