[Bug]: Running the chatbot customization tutorial doesn't load context data #7623
Comments
🤖 Hello, thank you for reaching out and providing detailed information about your issue. Based on the context provided, the issue might be related to the way streaming is toggled for the query engine. In the LlamaIndex codebase, there is a temporary hack to toggle streaming on and off: it checks whether the query engine is an instance of `RetrieverQueryEngine`. Here is the relevant code snippet:

```python
from llama_index.query_engine.retriever_query_engine import RetrieverQueryEngine

if isinstance(self._query_engine, RetrieverQueryEngine):
    is_streaming = self._query_engine._response_synthesizer._streaming
    self._query_engine._response_synthesizer._streaming = False
```

This hack exists because the query engine currently uses a class attribute to configure streaming, and the developers are in the process of moving towards separate streaming and non-streaming methods. If the query engine you are using is not an instance of `RetrieverQueryEngine`, this toggle will not apply.

Without more specific information about the error or issue you are experiencing, it's difficult to provide a more precise answer. Could you please provide more details about the error message or the incorrect response you are getting? This will help us better understand and address the issue. Thank you for your patience and cooperation.
@robgon-art The default chat engine is an agent, and an agent decides whether to query the index or not based on the user message and the description of the index. Tbh, this default mode could be improved. But you can force the index to be queried at least once on every user message by setting `chat_mode="context"`. Or, you can set up the agent from scratch with a better name/description for the index tool.
Hi Logan, OK, thanks for the info. Setting `chat_mode="context"` worked just fine. This also led me to the page below, which basically says the same thing: https://gpt-index.readthedocs.io/en/v0.7.18/examples/chat_engine/chat_engine_context.html Maybe add this to the sample code here?
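For anyone hitting the same thing, here is a minimal sketch of the fix discussed above. It assumes the starter-tutorial setup (documents in a local `data/` directory and `OPENAI_API_KEY` set in the environment); the question string is just a placeholder.

```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader

# Build the index exactly as in the starter tutorial.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# chat_mode="context" retrieves from the index before every response,
# instead of letting an agent decide whether to consult the index at all.
chat_engine = index.as_chat_engine(chat_mode="context")
response = chat_engine.chat("What did the author do growing up?")
print(response)
```

With the default chat mode, the agent may skip retrieval entirely; `"context"` makes retrieval unconditional on every turn.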
Bug Description
Hello, great project! I was able to run the Starter Tutorial just fine on a Google Colab running llama_index v0.8.23.post1 using my OpenAI API token.
But when I tried to run the Customization Tutorial called "I want a chatbot instead of Q&A," it didn't seem to load any context from the index, and the response was wrong.
Am I missing something? Or is there a bug with loading a context using the chat engine?
Version
0.8.23.post1
Steps to Reproduce
First, here's my code that works using `query_engine.query()`.
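The reporter's original code block was not captured in this page; a minimal sketch of a working query-engine version, assuming the starter-tutorial setup (documents in `data/`, `OPENAI_API_KEY` set), would look like:

```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader

# Load documents and build a vector index, as in the starter tutorial.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# One-shot Q&A: the query engine always retrieves context from the index.
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
```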
The above worked fine. But this doesn't seem to work using `query_engine.chat()`.
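The failing code block was also lost in extraction; a sketch of the chat-engine variant from the customization tutorial (same assumptions as above) that exhibits the reported behavior:

```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Default chat engine: under the hood this builds an agent that decides
# on its own whether to consult the index, which is why a response can
# come back without any context from the documents.
chat_engine = index.as_chat_engine()
response = chat_engine.chat("What did the author do growing up?")
print(response)
```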
Relevant Logs/Tracebacks