-
Thanks for an amazing project. Great to see how fast it's evolving. I'm trying to use web search for RAG with SearXNG. My SearXNG instance seems to be working well, with output provided in JSON and no rate limiting. Search Result Count is set to 3 and Concurrent Requests is set to 10.

Most of the time, Open WebUI eventually says "No results found" and the LLM (in my case llama3-8b) doesn't provide a response. I'm not sure what's happening. When debugging, I see the search show up in the SearXNG terminal output, and Open WebUI also quickly receives the crawled results, visible in the Open WebUI terminal output. The last line in the log is something like [...]

There have been a couple of instances (out of dozens) where it has worked (on 0.2.3), but I'm not really sure what's different. In those cases, despite the search count parameters above, it usually crawled about 50 websites. (It seems 0.2.4 might have addressed the count parameter issues?) Also, there are a couple of instances where Open WebUI took the chat prompt and generated a better search query. In most cases it does not, and just uses the chat prompt verbatim, which I don't think is how this is intended to work?

I've also tried the 0.2.4 release, and I've tried adjusting result counts in case some sort of timeout is happening. I'm running on an M2 Mac, and in general llama3-8b is snappy, even with documents provided by RAG. On 0.2.4, I have not been able to get it to work at all yet.

Appreciate any guidance to help set this new feature up correctly and/or troubleshoot. Thanks.
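For reference, here's how I enabled JSON output on my SearXNG instance (this is my own `settings.yml`, not something from Open WebUI's docs — SearXNG only serves `/search?format=json` if `json` is listed under `search.formats`, and I also disabled the built-in rate limiter so Open WebUI's concurrent requests don't get throttled):

```yaml
# settings.yml (SearXNG) — relevant excerpts only
use_default_settings: true

server:
  limiter: false        # avoid 429s when Open WebUI fires concurrent requests
  secret_key: "change-me"

search:
  formats:              # without "json" here, /search?format=json returns 403
    - html
    - json
```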
-
While I didn't have problems using nginx before, apparently this feature triggered them. The config described in #2380 (comment) solved my issues.
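For anyone who can't follow the link: I won't reproduce that comment verbatim here, but the usual fix for Open WebUI behind nginx is making sure WebSocket upgrades and streaming responses pass through the proxy. A typical server block looks something like this (the port and server name are placeholders for your own setup):

```nginx
server {
    listen 80;
    server_name openwebui.example.com;   # placeholder

    location / {
        proxy_pass http://127.0.0.1:3000;   # adjust to your Open WebUI port

        # WebSocket support — required for streaming chat responses
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # don't buffer streamed tokens, and allow long-running requests
        proxy_buffering off;
        proxy_read_timeout 300s;
    }
}
```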