For some reason the RAG pipeline stopped working out of nowhere: TypeError: SimpleHybridRetriever._aretrieve() missing 1 required positional argument: 'query' #1222
Comments
Is it reproducible? Could you provide more details, such as the code that was executed?
Hello, I solved it; it seems the dependencies for the RAG were not updated properly. But now the bigger issue is that it gives this:
I remember that I had fixed it somehow in the utils.py from the llama_core package, but after updating I can't remember what the issue was or how I had fixed it. I am prompting the engine with quite a long prompt, so maybe that is what is causing its malformed response.
This is because the answer from the LLM is incorrect.
I am using gpt-4-turbo-preview, which I have always used, and it used to work properly. I remember fixing this once, and that this specific code caused the issue, since the answer could sometimes be a weird, problematic response. After some testing, I found the issue is when the answer from the query contains a list with colons in it; that breaks the process. So it doesn't matter what LLM you are using: if it returns something like `Value1: key1, key2` followed by `value2: key, key`, it breaks and leads to an empty response.
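For context, here is a minimal sketch of the parsing logic that chokes on such answers. This is not the exact library source (which varies by version), just an approximation of how `default_parse_choice_select_answer_fn` in `llama_index.core.indices.utils` expects each line to look like `Doc: <number>, Relevance: <number>`, and why a colon-laden list answer blows up:

```python
# Sketch (assumption: approximates the library's default parser, not verbatim).
def parse_choice_select_answer(answer: str, num_choices: int):
    answer_nums, answer_relevances = [], []
    for line in answer.split("\n"):
        tokens = line.split(",")
        if len(tokens) != 2:
            continue  # lines without exactly one comma are silently dropped
        # For "Value1: key1, key2": tokens[0].split(":")[1] is " key1",
        # and int(" key1") raises ValueError; a line with no colon at all
        # raises IndexError instead.
        answer_num = int(tokens[0].split(":")[1].strip())
        if answer_num > num_choices:
            continue
        answer_nums.append(answer_num)
        answer_relevances.append(float(tokens[1].split(":")[1].strip()))
    return answer_nums, answer_relevances

# A well-formed reranker answer parses fine:
print(parse_choice_select_answer("Doc: 1, Relevance: 7\nDoc: 2, Relevance: 3", 5))
# -> ([1, 2], [7.0, 3.0])

# A colon-laden list answer like the one described above fails:
# parse_choice_select_answer("Value1: key1, key2\nvalue2: key, key", 5)
# -> ValueError: invalid literal for int() with base 10: 'key1'
```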
Actually, my hardcoded rule never to use ":" doesn't work at all, it seems, now that I actually test it. The issue is when it creates multiple lines from a single answer, i.e. creates multiple answers for a single query, and that always breaks the process. And I can't for the life of me remember how to fix it now. I keep getting either empty responses or:

File "E:\Project\MetaStocky\env\Lib\site-packages\llama_index\core\indices\utils.py", line 104, in default_parse_choice_select_answer_fn
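One possible workaround is to pass a more lenient parser to the reranker instead of patching utils.py. This is a sketch, assuming your installed llama-index version exposes the `parse_choice_select_answer_fn` hook on `LLMRerank` (check your version); the function name `lenient_parse_choice_select_answer_fn` is my own:

```python
import re

from llama_index.core.postprocessor import LLMRerank

def lenient_parse_choice_select_answer_fn(answer: str, num_choices: int):
    """Skip malformed lines instead of raising on them."""
    answer_nums, answer_relevances = [], []
    # Only accept fragments matching the expected "Doc: N, Relevance: M" shape;
    # everything else (extra colons, prose, multi-line chatter) is ignored.
    for match in re.finditer(
        r"Doc:\s*(\d+)\s*,\s*Relevance:\s*(\d+(?:\.\d+)?)", answer
    ):
        num = int(match.group(1))
        if 1 <= num <= num_choices:
            answer_nums.append(num)
            answer_relevances.append(float(match.group(2)))
    return answer_nums, answer_relevances

# Uses the LLM configured in Settings.llm unless one is passed explicitly.
reranker = LLMRerank(
    top_n=5,
    parse_choice_select_answer_fn=lenient_parse_choice_select_answer_fn,
)
```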
When using an LLM to rerank, it's not always guaranteed that the output will be parseable for reranking.
[Examples shown: correct format vs. wrong format]
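For reference (the original examples were images), the default choice-select prompt asks the model to answer in roughly the first shape below, whose exact wording varies by version; a reply in the second shape is what the parser cannot handle:

```
Correct (parseable):
Doc: 1, Relevance: 7
Doc: 3, Relevance: 4

Wrong (unparseable: extra prose, stray colons, multi-line explanation):
Here are the relevant documents:
Value1: key1, key2
value2: key, key
```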