Hanging forever if the context is too large #5
Comments
Hi @Happiness777, thanks for trying Open Interpreter and for the kind words! It just did the same for me. Let's add LLM-powered summarizing to tokentrim:

```python
import tokentrim as tt

response = openai.ChatCompletion.create(
    model=model,
    messages=tt.trim(messages, model, summarize=True)  # Summarizes old messages to fit under the model's max token count
)
```

Then we could default to this way of doing it in Open Interpreter. Anyway, will add `--debug` so we can see what's wrong and fix it 👍
Yes, I've added a debug message whenever the trimming happens, and it hangs exactly at that moment. Not on sending the message, not on anything else, but on the token trimming. I've also added a debug message counting all the tokens, and I didn't even get to 4k: I only had about 3k (2917 to be exact) tokens of history when this happened.
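For what it's worth, a debug count like the one described above can be approximated without any dependencies. This is a crude sketch using the rough "≈4 characters per token" heuristic (a real count would use `tiktoken`'s encoding for the model; the function name is ours, not Open Interpreter's):

```python
def approx_token_count(messages):
    """Very rough token estimate for an OpenAI-style chat history:
    about 4 characters per token. A real implementation would use
    tiktoken.encoding_for_model(model) instead of this heuristic."""
    total_chars = sum(len(m.get("content") or "") for m in messages)
    return total_chars // 4
```

Logging this number before each trim makes it easy to confirm whether the hang correlates with the history size.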
Btw, what happens if a file/output is too large for the context? Does it try to summarize it, or just truncate it to fit the context limit?
Yeesh. Will add more accurate tests. If the output of any code block is too large, it gets truncated to a fixed number of characters. Thinking of adding an option for users to change that number, and maybe even for the LLM to change it (I could imagine some "cell magic" at the top of a code block). Summarizing cell outputs is a fantastic idea. I could imagine more cell magic for that too.
Not just summarization, though. The summarization could be guided by a certain message/goal that the LLM sets for the summarizer, for example "find the error" or something like that. I imagine we could even go beyond the context limit, because the LLM could write the desired output down in a file, enabling it to remember it and produce a lengthier final output. It could work with data in chunks, step by step, while not forgetting about the bigger picture. This could be interesting for story writing too, I think.

Once, a few days ago, I had GPT-4 making smart decisions with a file, like looking for a specific string and grabbing the chunk of the file around that string to check what was happening there. That was really interesting to watch.

P.S.
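The goal-directed, chunked approach described above could be sketched like this. Everything here is illustrative: `summarize_fn` stands in for an LLM call, and the prompt format and chunk size are assumptions:

```python
def summarize_in_chunks(text, summarize_fn, goal, chunk_size=4000):
    """Process text chunk by chunk, carrying a running summary forward
    so the 'bigger picture' survives past the context limit.

    summarize_fn: callable taking a prompt string and returning a summary
    goal: instruction guiding the summarizer, e.g. "find the error"
    """
    summary = ""
    for i in range(0, len(text), chunk_size):
        chunk = text[i:i + chunk_size]
        prompt = (
            f"Goal: {goal}\n"
            f"Summary so far: {summary}\n"
            f"New chunk: {chunk}"
        )
        summary = summarize_fn(prompt)  # one LLM call per chunk
    return summary
```

This is essentially a rolling (refine-style) summarization loop; the goal string is what makes it more than plain summarization.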
First and foremost, amazing job @KillianLucas and anyone else involved with this project. For what it's worth, I have zero coding knowledge whatsoever, but with AI I have learned over the last few months how to vaguely navigate things. I have been using this a lot in the last day, and really had some fun and good success with it!

I do still have it hang on me at that max context length, and I'm thinking it's the same topic you are discussing here. I was able to make a bigger project by prompting it to do something similar to what you are suggesting. I asked it for the specific steps needed to make a project and had it break the overall project into smaller project files that would need to be written. I had it save all of that info into a text file in a project folder. I then started fresh, pointed it to that folder and that file, and told it to check whether the file names listed in the instruction file were present in the folder; if a name was not present, it was to start coding that file. Every time it would hang, I would close it and rerun it with the same or a similar prompt, and it would start making the additional missing files. At the end I restarted and prompted it to go through all the files in the folder and suggest any changes (I then took those changes, put them in a file, and reran this process, changing the prompt slightly), and with a little tinkering I was able to get some really impressive results.

Not sure if this could be useful to anyone, but I figured I would share. Again, keep up the great work! I just bought Baldur's Gate 3 and Starfield and paid extra for early access... yet here I am, playing with this for the last 24 hours.
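The "check which files from the instruction list already exist" step in the workflow above doesn't even need an LLM. A minimal sketch, assuming the instruction file lists one filename per line (the function name and file format are our assumptions):

```python
from pathlib import Path

def missing_files(instruction_file, project_dir):
    """Return filenames listed (one per line) in instruction_file
    that are not yet present in project_dir."""
    names = [
        line.strip()
        for line in Path(instruction_file).read_text().splitlines()
        if line.strip()
    ]
    existing = {p.name for p in Path(project_dir).iterdir()}
    return [n for n in names if n not in existing]
```

Running this between restarts would show exactly which files the interpreter still needs to generate.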
This is the greatest review of all time lmao. Thank you so much for using it! I really want this to be a 100% no-code experience soon, not even having to open a terminal. Great to hear it's still fun to use; that's exactly the goal.

I'm happy to report that this issue should be patched as of v0.0.297. It was a problem with tokentrim's ability to handle messages with function calls, which has now been resolved. Let me know if you're still having any issues whatsoever.
Could you leave a CLI option for it when you do, though?
Hello. Today I somehow hit this issue again. It's just hanging right now, forever.
Yes, the bug is certainly still present |
Interestingly enough, I managed to get past it this time via a keyboard interrupt, lol. But I got this error:
Hi. I am using Open Interpreter on my Windows machine, and I definitely and repeatedly have this problem too. It may have something to do with the context getting overrun. I am investigating further.
Hello, I'm having a blast experimenting with this thing, and in my opinion it has a lot of potential.
I'd like to report one annoying issue, though: the program hangs forever on the response if the context grows too large. Are there any plans to make it summarize the context when it gets too big?