I tried to integrate the German embedding model joanfm/jina-embeddings-v2-base-de into my LlamaIndex RAG application. During the creation of the embeddings, the Ollama process fails with error 500: llama runner process has terminated: exit status 0xc0000409.
When calling:
pass_embedding = Settings.embed_model.get_text_embedding_batch(
    ["This is a passage!", "This is another passage"], show_progress=True
)
Ollama doesn't currently support Jina Embeddings v2. It should be supported after #4414 gets merged, so you'd likely have to wait for the next Ollama release or build from source once the PR has been merged.
Thanks for your reply.
Does anyone here know what the status of batch processing of embeddings with Ollama is?
Without it, the feature is useless for my intended use.
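Until server-side batch embedding is available in Ollama, batching can be emulated on the client by chunking the passages and embedding them one at a time. The sketch below assumes only a generic per-text embedding callable (`embed_fn`); the name is an illustration and stands in for whatever single-text call your stack provides (e.g. LlamaIndex's `Settings.embed_model.get_text_embedding`), not an Ollama or LlamaIndex API.

```python
# Client-side batching sketch: emulate get_text_embedding_batch by chunking
# the input and embedding each passage individually. Small batches make it
# easier to localize a single failing passage (e.g. a 500 from the llama
# runner) than one monolithic call would.
from typing import Callable, Iterable, List


def chunked(items: List[str], size: int) -> Iterable[List[str]]:
    """Yield successive chunks of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i : i + size]


def embed_in_batches(
    passages: List[str],
    embed_fn: Callable[[str], List[float]],
    batch_size: int = 8,
) -> List[List[float]]:
    """Embed all passages, grouped into batches of `batch_size`."""
    vectors: List[List[float]] = []
    for batch in chunked(passages, batch_size):
        for text in batch:
            vectors.append(embed_fn(text))
    return vectors


# Usage with a dummy length-based embedder in place of a real model call:
fake_embed = lambda text: [float(len(text))]
vecs = embed_in_batches(
    ["This is a passage!", "This is another passage"], fake_embed, batch_size=2
)
```

Swapping `fake_embed` for a real per-text embedding call gives the same result shape as a true batch call, just without any server-side parallelism.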
What is the issue?
The error occurs when calling the get_text_embedding_batch snippet shown above. With mxbai-embed-large:latest this works without an error.
OS: Windows
GPU: Nvidia
CPU: Intel
Ollama version: 0.1.37