Gets stuck in a loop #1994
Comments
Monitored a similar behaviour in my own ChatGPT frontend on GPT 3.5. Sometimes it helps to add a message like
Post your ai_settings.yaml in the prompt field. The bug template is misleading.
I have the same issue. 😳
same here

NEXT ACTION: COMMAND = google ARGUMENTS = {'input': 'how to install python on my system'}
THOUGHTS: Based on the search results, it appears that I need to download and install Python before proceeding with the installation of chromedriver. I will use the 'google' command to search for the correct syntax for installing Python on my system.
REASONING: Based on the search results, it appears that I need to download and install Python before proceeding with the installation of chromedriver. To do this, I will use the 'google' command to search for the correct syntax for installing Python on my system.
PLAN
CRITICISM: I need to ensure that the search results are relevant and that the information is accurate before proceeding with the installation of Python.

(restarts the loop)...
Having the same issue, does anybody know if changing the memory backend helps with this?
same here
even with the simplest prompt it gets stuck querying the same thing over and over, I am using
Same here, stuck in a loop for a while now. SYSTEM: Command google returned: Error: 'input' — maybe Google blocked it because it detected it was a bot?
I'm stuck in a loop too, I'm getting this error
Regarding loops getting stuck: there's the idea of hashing the LLM parameters and treating those as lookup keys into a hash table/dictionary. That way, we can easily increment a counter for any call that's being invoked over and over again with the same parameters without making progress (i.e. the return value from the LLM being the same too), and bail out: #3444 (comment)
I am not a programmer, but I have tried this and it seems to work when it gets into a loop: tell another AI, for example the one that comes with Edge, this: Paste in input:
What is your temperature set at for the AI? If it's set at 0, this could be expected: the closer to 0 the temperature is, the more deterministic the AI is. When you then consider that we are including the previous contexts in the embeddings for each interaction, it makes sense that this could be happening: every time it loops, it reinforces that it should keep doing the same thing.
@SuperYeti I used the settings that were already preset there. And yes, it was 0. But no matter what the value is, it should not perform the same action twice (and it did perform the same action a lot of times). It already has an answer to the question, so it should read it from "memory". Also, if it already has an answer, then it needs to search for something else or "give up" and show the results. The definition of insanity is doing the same thing over and over and expecting different results :D
There is a relatively straightforward way to accomplish this by hashing each action/response and using that as an index into a hash table/dictionary to increment a counter; once that number is > 1, the loop is running again. Also, hashing the LLM response would even let us detect an "infinite loop" (input params and return values always being the same).

The hard part is detecting that it already finished a task, which is why it helps to add explicit state to disk (like a file) and conditional checks to determine whether a step can be skipped or needs to be executed. Basically, you would need one agent to execute an outer unit test (that is persistent/using serialization) and use that to run an inner agent to come up with a solution to your problem. Absent that, it's extremely difficult for it to "memorize" having a "solution" — keep in mind the open-ended nature of potential solutions.
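The hash-table/counter idea above could be sketched roughly like this. This is a minimal illustration, not Auto-GPT's actual code; the `LoopDetector` class name, the threshold, and the hashing scheme are my own assumptions:

```python
import hashlib
import json


class LoopDetector:
    """Counts identical (command, arguments, response) triples to spot loops."""

    def __init__(self, max_repeats: int = 1):
        self.counts: dict[str, int] = {}
        self.max_repeats = max_repeats

    def _key(self, command: str, arguments: dict, response: str) -> str:
        # Serialize deterministically so equal inputs always produce the same hash.
        payload = json.dumps([command, arguments, response], sort_keys=True)
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

    def record(self, command: str, arguments: dict, response: str) -> bool:
        """Return True once this exact step has repeated too often (i.e. a loop)."""
        key = self._key(command, arguments, response)
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key] > self.max_repeats


detector = LoopDetector()
args = {"input": "personal income tax rates in countries with low business tax rates"}
assert detector.record("google", args, "same results") is False  # first time: fine
assert detector.record("google", args, "same results") is True   # repeat: bail out
```

Because the response is part of the key, a command that is retried but returns different results each time would not trip the detector — only a genuinely stuck loop (same input, same output) would.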
Is there a way I can accomplish this with settings, or is it for the developers to implement?
I am currently looking at it as a user trying a shiny new tool, and I do not understand why it gets into a loop. I did not look at the code yet. But if I were looking at it as a programmer, I would assume that any agent has access to a shared database of questions/answers, so any agent could check whether a question was already answered and not ask it again.
Yes, there are probably some changes that need to be made in the code to handle loop detection, and also to ensure context isn't getting duplicated. In the interim, you can try increasing temperature to 0.5 or higher (up to a maximum of 1); it will allow the AI to be more creative, and it may not get stuck in a loop as frequently.
Temperature is one of those constraints that the parent agent should be aware of (#3466), and be able to adjust as needed.
The rest of those constraints/quota-awareness ideas sound good, but since GPT temperature directly controls the "creativeness" of the outputs you get, I'm not sure I would want it changing that on its own. I haven't had the opportunity to test Auto-GPT against GPT-4 yet, but I suspect part of these loop issues is the dumbing down of 3.5-Turbo, which seems to be happening as it gets busier.
I get your point, but most people reporting endless/useless "looping" do end up watching an agent/sub-agent that is obviously lacking "creativity"... thus, providing the option for an agent to control its sub-agents' "temperature" may be more useful than we think.
That's a fair point. I think if it were explained more clearly what temperature is and how it works, and people were helped to configure it for their use case at the beginning, it still might not be needed.
Can you tell me where I can set the temperature? I'm not seeing where this is done. Thanks!
@burtonsports it's in the .env file
Nevermind, found it.
Thank you! Totally missed it on first glance lol
Please report back with your results as well. Thanks!
I suppose that would be one of the most straightforward pull requests: a change to add a comment to the .env file explaining what the TEMPERATURE setting does.
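Such a comment might look something like this. The wording and the default value below are just a suggestion; check the repository's actual `.env.template` for the real variable name and default:

```
# TEMPERATURE controls how random/creative the model's output is, from 0.0 to 1.0.
# 0.0 is fully deterministic and prone to repeating the same action (loops);
# 1.0 is maximum randomness. If Auto-GPT keeps looping, try 0.5 or higher.
TEMPERATURE=0.5
```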
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.
This issue was closed automatically because it has been stale for 10 days with no activity. |
GPT-3 or GPT-4
Steps to reproduce 🕹
Provide the prompt to Auto-GPT along with goals (mentioned in "Your prompt").
Then just keep accepting the next action (y) or the next N actions.
It will get stuck in a loop.
Current behavior 😯
It loops the same queries even though it successfully got the Google results:
2023-04-16 21:38:56,496 INFO NEXT ACTION: COMMAND = google ARGUMENTS = {'input': 'personal income tax rates in countries with low business tax rates'}
2023-04-16 21:39:21,536 INFO NEXT ACTION: COMMAND = google ARGUMENTS = {'input': 'personal income tax rates in countries with low business tax rates'}
2023-04-16 21:39:46,623 INFO NEXT ACTION: COMMAND = google ARGUMENTS = {'input': 'personal income tax rates in countries with low business tax rates'}
When I provided input to break the loop, it tried something else, but then got back into the loop.
Expected behavior 🤔
It should remember previous searches/queries and skip or change them.
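Combining the expected behavior above with the earlier suggestion of writing explicit state to disk, a query-skipping mechanism could be sketched like this. This is a hypothetical illustration, not Auto-GPT code; the file name and function names are my own:

```python
import json
from pathlib import Path

# Hypothetical file for persisting already-executed queries across steps/runs.
STATE_FILE = Path("completed_queries.json")


def load_completed() -> set[str]:
    """Load the set of already-executed queries from disk, if any."""
    if STATE_FILE.exists():
        return set(json.loads(STATE_FILE.read_text()))
    return set()


def mark_completed(query: str) -> None:
    """Persist a query so later steps (or later runs) can skip it."""
    done = load_completed()
    done.add(query)
    STATE_FILE.write_text(json.dumps(sorted(done)))


def should_run(query: str) -> bool:
    """Only run a query that hasn't been executed before."""
    return query not in load_completed()


query = "personal income tax rates in countries with low business tax rates"
if should_run(query):
    # ... execute the google command here ...
    mark_completed(query)
assert should_run(query) is False  # a second attempt would be skipped
```

Because the state lives in a file rather than in the LLM's context window, it survives context truncation — which is exactly the situation where the agent otherwise "forgets" it already ran a search.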
Your prompt 📝