Collection of tips for AutoGPT 💡 #1351
Replies: 6 comments
-
7) Here I am talking about the context memory of LLMs, in the hope of contributing to a correct understanding of the exact bottleneck of LLMs and its implications for AutoGPT's development to continually make progress on complex projects:
Task Decomposition: The idea is to break the project down into individual parts that do not need to be specified in the overarching context of the whole project. The AI system would just need to know what it has to do at a given step, rather than keeping the entire project's context in mind.
External Reminders / Prompts: In this strategy, the AI system would rely on reminders or prompts to guide its actions. These prompts can be elaborate instructions or notes that are quickly read and processed by the AI when needed. Because these AI systems can comprehend and pick up tasks quickly, the lack of long-term memory can be compensated by reminders that are activated at the right time: a hyper-organized, hyper-structured approach to projects, with freedom inside a tightly regulated, highly structured system that has multiple layers of fail-safes and self-reflection cycles, triggered by the frequency of individual reports and by markers of loops, progress stagnation, or loss. The result is a cycle of simple actions, stacked and linked by rules that dynamically adjust to the LM's capabilities within the short span of its context memory.
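A minimal sketch of the task-decomposition idea above: each step carries only its own instruction plus a short hand-off note from the previous step, so no call ever needs the whole project's context. All names here (`Step`, `run_project`, the plan contents) are invented for illustration, not AutoGPT's actual API.

```python
# Illustrative sketch: task decomposition with per-step context only.
from dataclasses import dataclass

@dataclass
class Step:
    instruction: str          # everything the agent needs for this step
    note_for_next: str = ""   # short hand-off written for the next step

def run_project(steps):
    """Execute steps sequentially; each call sees only its own
    instruction plus the previous step's short note, never the
    whole project history."""
    carry = ""
    log = []
    for step in steps:
        prompt = f"{carry}\n{step.instruction}".strip()
        log.append(prompt)              # stand-in for an LLM call
        carry = step.note_for_next      # context shrinks back to one note
    return log

plan = [
    Step("Outline the module", note_for_next="Outline saved to outline.md"),
    Step("Write tests for the outline", note_for_next="Tests in test_mod.py"),
    Step("Implement until tests pass"),
]
```

The point of the design is that the prompt size stays bounded by one instruction plus one note, no matter how long the project runs.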
-
6.) Example of how a planning cycle could overcome context memory limitations in AutoGPT:
Goal 1:
Goal 2:
Goal 3:
Goal 4:
Goal 5:
-
GoMightyAlgorythmGo! — 24.06.2023 14:06
samdcbu made an amazing post from the Voyager paper that definitely should go here into the suggestions, for the record/inspiration. #helpAutoGPTtofinish #planning #context-memory
GoMightyAlgorythmGo! — 27.06.2023 17:19
AutoGPT internal command prompt: "LM, you can make timers by telling another instance of an LM, and it will remind you after event Y happened or X tokens have passed. [write what you need to be reminded of here], [method of recall (e.g. time / token number / task finished / etc.)], [anything else you want the reminder model to tell you in case you forgot about the timer?]"
💡: Timers for reminders / externalized information retrieval.
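The timer idea above could be sketched as a small watcher that counts tokens per cycle and re-injects the note once the budget has elapsed. `TokenReminder` and its fields are hypothetical names, not part of AutoGPT.

```python
# Hypothetical sketch: a reminder that fires after X tokens have passed.
class TokenReminder:
    def __init__(self, note, fire_after_tokens, extra=""):
        self.note = note                  # what to be reminded of
        self.fire_after = fire_after_tokens
        self.extra = extra                # context in case the timer was forgotten
        self.seen = 0
        self.fired = False

    def observe(self, tokens_this_cycle):
        """Call once per agent cycle; returns the reminder text
        once the token budget has elapsed, else None."""
        self.seen += tokens_this_cycle
        if not self.fired and self.seen >= self.fire_after:
            self.fired = True
            return f"REMINDER: {self.note}. {self.extra}".strip()
        return None

r = TokenReminder("check whether goal 2 is finished",
                  fire_after_tokens=4000,
                  extra="You set this timer before starting goal 2.")
```

A method-of-recall other than token count (time, task finished) would just swap what `observe` is fed.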
-
Ideas / rough outlines:
for complex tasks under limited working/context memory and the challenges of long-term planning:
-It is important that AutoGPT, with its limited working memory, follows a sequence of steps by relying on written notes or external databases to guide it through its work across time. This can be referred to as a "persistent sequential workflow" or a "stateful linear workflow." These terms emphasize maintaining and accessing persistent information to overcome memory limitations while following a predetermined, step-by-step order. Example in "6)".
So AutoGPT focuses on remembering the system (and creating it), but after that the work is mostly sequential: it does not have to remember much, yet it still continues on its complex way to the end goal.
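A stateful linear workflow like the one described can be sketched with a tiny persisted pointer: the next-step index lives in a file, so a restarted agent with no memory resumes in the right place. The file name and step list are made up for illustration.

```python
# Minimal sketch of a "persistent sequential workflow": progress is
# stored in a JSON file so an agent with no long-term memory can
# resume at the correct step after any restart.
import json
import os

STATE_FILE = "workflow_state.json"
STEPS = ["collect requirements", "draft design", "implement", "review"]

def load_step():
    """Read the persisted pointer; a fresh run starts at step 0."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)["next_step"]
    return 0

def complete_step():
    """Mark the current step done and persist the new pointer."""
    step = load_step() + 1
    with open(STATE_FILE, "w") as f:
        json.dump({"next_step": step}, f)
    return step

def current_task():
    """The only thing the agent needs to recall: its current step."""
    step = load_step()
    return STEPS[step] if step < len(STEPS) else None
```

The agent's "memory" is reduced to one integer on disk; everything else can be forgotten between cycles.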
-Implement a multi-agent system with specialized agent-teams to enhance efficiency and collaboration. Each team should focus on a specific task or area of expertise, and communicate effectively with other teams. This division of labor, inspired by company structures, will enable AutoGPT to tackle complex projects more effectively and make better use of its resources.
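The team structure above could look roughly like this: each team owns one specialty and posts results to a shared board that other teams read. All names (`Board`, `Team`, the topics) are invented for illustration.

```python
# Rough sketch of specialized agent teams communicating via a shared board.
from collections import defaultdict

class Board:
    """Shared message board; topics stand in for areas of expertise."""
    def __init__(self):
        self.messages = defaultdict(list)
    def post(self, topic, msg):
        self.messages[topic].append(msg)
    def read(self, topic):
        return list(self.messages[topic])

class Team:
    """One team, one specialty, inspired by company departments."""
    def __init__(self, name, specialty, board):
        self.name, self.specialty, self.board = name, specialty, board
    def work(self, task):
        # a real team would dispatch to its own agents; here we just record it
        result = f"{self.name} handled: {task}"
        self.board.post(self.specialty, result)
        return result

board = Board()
research = Team("ResearchTeam", "research", board)
coding = Team("CodingTeam", "coding", board)
```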
-You can write "You can decide to continue in JSON" (that will make it go into the "thoughts"/"reasoning"/plan/criticism fields if it has nothing important to say). Alternatively you can make it more of a command, which saves time.
-The "do_nothing" repetition loop might occur because it is planning, but the plan changes, then it forgets, and it goes on in circles: not a complete loop but a moving loop. (Maybe it saves the plans to memory, but I'm not sure.)
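Such a "moving loop" could be caught with a sliding window over recent actions: even if the surface plan keeps shifting, one recurring action (like `do_nothing`) dominating the window suggests stagnation. `LoopDetector` is a hypothetical name, not an AutoGPT component.

```python
# Sketch: flag a moving loop when one action dominates the recent window.
from collections import deque, Counter

class LoopDetector:
    def __init__(self, window=8, threshold=3):
        self.recent = deque(maxlen=window)  # sliding window of action names
        self.threshold = threshold

    def record(self, action_name):
        """Call after each cycle. Returns True when one action recurs
        at least `threshold` times in the window, suggesting the agent
        is cycling rather than progressing."""
        self.recent.append(action_name)
        most_common_count = Counter(self.recent).most_common(1)[0][1]
        return most_common_count >= self.threshold
```

On a positive result, the agent could trigger one of the self-reflection cycles described in "7)".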
25.06.2023:
💡: AUTO-DETECT crashes in AutoGPT and send the traceback/error message back to the AutoGPT team, then use the simplest, quickest-to-set-up similarity sorting to see what percentage of reports the most common problem accounts for at any given time, and auto-update to fix it. This should increase AutoGPT engagement, which would help funding, excitement and engagement. Too hard? Alternative: "If error -> send error msg to server -> save in txt with time and date -> now you can check out errors in real time; you can also send whether the person has plugins enabled, what operating system they use, and so on."
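The fallback "save in txt with time and date" could be as small as a global exception hook that appends every unhandled traceback, timestamped, to a local text file (a real build would additionally POST it to a collection server). The file name is invented for this sketch.

```python
# Sketch: append every unhandled traceback, with a timestamp, to a txt file.
import datetime
import sys
import traceback

LOG_FILE = "autogpt_errors.txt"

def log_crash(exc_type, exc, tb):
    """Append the full traceback with an ISO timestamp header."""
    with open(LOG_FILE, "a") as f:
        f.write(f"--- {datetime.datetime.now().isoformat()} ---\n")
        f.write("".join(traceback.format_exception(exc_type, exc, tb)))

# install as the global hook for uncaught exceptions
sys.excepthook = log_crash
```

With timestamps in place, the similarity sorting mentioned above could simply group log entries by their final exception line.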
Example of how a planning cycle could overcome context memory limitations in AutoGPT (scroll down to "6.)").
Planning for someone who has a 3-minute memory (current LLMs: GPT-4, GPT-3.5-turbo). (Scroll down to "7"; it's long, so it's an individual post.)
Timer for autonomously made reminders by runtime / specific events / token length (scroll to "8)" below).
[Picture: visualization of the LLM context window breaking tasks down]
If you are not going to patent it, then why bother hiding it when you can just open-source it? Starting point for a conversation about why we might not need/want to do extra work or be mad if big company X copies your ideas: (scroll to 10) (might end up in the philosophy Discord; feel free to copy-paste or paraphrase it).
Suggestion 💡: a way to interrupt AutoGPT so it stops at the next cycle and waits for input:
(I made this suggestion a few months ago and it gained a lot of upvotes in the forum.) Example: you type "y -100", and now AutoGPT will go for 100 rounds where you have no chance to stop it. Suddenly you see that it googles ice cream because it hallucinated; you try to stop it, but there is no command where you press CTRL+<key> and AutoGPT says "usercommand: stopping at next cycle", so that you could then fix the issue, give input to AutoGPT, and so forth. 🌈 (Great job on consolidating some of your focus!)
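The "stop at the next cycle" mechanic could be implemented with a background watcher that sets a flag, which the main loop checks between cycles instead of being interrupted mid-action. This is purely illustrative, not AutoGPT's real loop.

```python
# Sketch: cooperative interruption checked at cycle boundaries.
import threading

stop_requested = threading.Event()

def watch_for_stop(trigger):
    """Run in a background thread; trigger() blocks until the user
    asks to stop (e.g. a bare input() waiting for Enter)."""
    trigger()
    stop_requested.set()

def run_cycles(max_cycles, do_cycle):
    """Run up to max_cycles, but honor a stop request between cycles."""
    done = 0
    for _ in range(max_cycles):
        if stop_requested.is_set():
            print("usercommand: stopping at next cycle")
            break
        do_cycle()
        done += 1
    return done
```

In a real session, `threading.Thread(target=watch_for_stop, args=(input,), daemon=True).start()` would arm the watcher before `run_cycles` begins.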
Suggestion 💡: Spotting and marking 🏷️ significant non-code contributions:
Maybe badges 🎖️ or some post marker to show that something was significantly helpful would be nice for good suggestions or inputs. Otherwise a person who reads and copy-pastes because they have [drug that increases dopamine] to withstand the boredom of coding would be hailed as the sole hero, and the others would not be heard for their other recommendations, since they officially still have 0 contributions. It could be a threshold thing: when you make multiple small ideas, a few medium ones, or one gigantic idea/thread that helps revolutionize part of the development, the core team would agree and give you a forum or Discord sticker, and maybe pay some attention to your other contributions.