📦 Platform
Self hosting Docker
📦 Deployment mode
server db (lobe-chat-database image)
📌 Version
1.70.11
💻 Operating System
Other Linux
🌐 Browser
Chrome
🐛 Bug Description
Requests using semantic search started failing recently, even though the same `DEFAULT_FILES_CONFIG=embedding_model=ollama/bge-m3` used to work. It seems to be 404'ing somewhere along the way, but the logs don't help very much:
Getting text from response
2025/03/13 10:54PM 30 pid=28 hostname=lobe msg=Error in tRPC handler (lambda) on path: chunk.semanticSearchForChat, type: mutation
Error [TRPCError]:
at o (.next/server/chunks/54689.js:1:1504)
at f (.next/server/chunks/54689.js:4:286)
at async f (.next/server/chunks/54689.js:4:68)
at async f (.next/server/chunks/54689.js:4:68)
at async f (.next/server/chunks/54689.js:4:68)
at async f (.next/server/chunks/54689.js:4:68)
at async r (.next/server/chunks/54689.js:1:8297)
at async (.next/server/chunks/78604.js:1:25947) {
code: 'INTERNAL_SERVER_ERROR',
[cause]: Error:
at <unknown> (.next/server/chunks/54689.js:1:1815)
at new a (.next/server/chunks/54689.js:1:1857)
at o (.next/server/chunks/54689.js:1:1504)
at f (.next/server/chunks/54689.js:4:286)
at async f (.next/server/chunks/54689.js:4:68)
at async f (.next/server/chunks/54689.js:4:68)
at async f (.next/server/chunks/54689.js:4:68)
at async f (.next/server/chunks/54689.js:4:68)
at async r (.next/server/chunks/54689.js:1:8297) {
error: {
message: '<html>\r\n' +
'<head><title>404 Not Found</title></head>\r\n' +
'<body>\r\n' +
'<center><h1>404 Not Found</h1></center>\r\n' +
'<hr><center>nginx</center>\r\n' +
'</body>\r\n' +
'</html>\r\n',
name: 'ResponseError',
status_code: 404
},
errorType: 'OllamaBizError',
provider: 'ollama'
}
}
I've confirmed that the `bge-m3` model is downloaded, and that Ollama chat still works. This is a newly introduced bug.
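One way to isolate where the 404 originates (LobeChat's request construction vs. the nginx in front of Ollama) is to hit the embeddings route directly. A minimal sketch, assuming Ollama's default address `http://localhost:11434` and its legacy `/api/embeddings` route; adjust the host if nginx fronts Ollama in your deployment:

```ts
// Check whether the embeddings route itself answers, independent of LobeChat.
// Assumes Node 18+ (global fetch) and an ESM context for top-level await.
const res = await fetch("http://localhost:11434/api/embeddings", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ model: "bge-m3", prompt: "ping" }),
});

console.log(res.status); // a 404 here implicates the route/proxy, not LobeChat

if (res.ok) {
  // Ollama's legacy embeddings endpoint returns { embedding: number[] }
  const { embedding } = (await res.json()) as { embedding: number[] };
  console.log(`embedding length: ${embedding.length}`);
}
```

If this returns the same nginx 404 page seen in the logs above, the request is landing on a path nginx doesn't proxy, which points at the base URL configuration rather than the model.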
📷 Recurrence Steps
No response
🚦 Expected Behavior
No response
📝 Additional Information
No response
Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.
Hey @ianks! I'm here to help you with any bugs, questions, or contributions. Let's tackle this issue together!
A similar issue was previously reported and closed: "[Bug] Can't use Ollama custom model" [1]. That issue was resolved by a pull request that fully supported Ollama in browser request mode [2].
For your current issue, a 404 error during semantic search with an Ollama model can be caused by an invalid proxy URL configuration. This can happen when Ollama is configured with an HTTP proxy URL that leads to URL construction errors; removing the Ollama proxy URL configuration might resolve the issue [3] (see the sketch after this reply for how that failure mode looks).
Ensure that your configuration settings, especially related to the proxy, are correct. If the problem persists, reviewing the changes made in the pull request that addressed the previous issue might provide additional insights [2].
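To make the URL-construction failure mode concrete, here is a sketch using hypothetical proxy URLs (not LobeChat's actual client code): WHATWG `URL` resolution drops path segments from a base URL unless the base ends with a trailing slash, so a path-bearing proxy URL can silently route requests to a path nginx answers with 404.

```ts
// Hypothetical proxy URL that carries a path prefix.
const base = "http://my-proxy.example/ollama";

// An absolute path replaces the base's path entirely:
console.log(new URL("/api/embeddings", base).href);
// -> http://my-proxy.example/api/embeddings   ("/ollama" is lost)

// A relative path against a base without a trailing slash drops the last segment too:
console.log(new URL("api/embeddings", base).href);
// -> http://my-proxy.example/api/embeddings

// Only a trailing slash on the base plus a relative path keeps the prefix:
console.log(new URL("api/embeddings", base + "/").href);
// -> http://my-proxy.example/ollama/api/embeddings
```

If the configured proxy URL includes a path prefix, the joined request URL can miss that prefix entirely, and nginx returns 404 for the unknown route, matching the HTML error page in the logs above.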