- Clone the repository:

  ```bash
  git clone https://github.com/Arvid-pku/Godel_Agent.git
  cd Godel_Agent
  ```

- Install dependencies via `pip`:

  ```bash
  pip install -r requirements.txt
  ```

- Navigate to the source directory and run the main script:

  ```bash
  cd src
  python main.py  # You can adjust the task in the agent_module.py file.
  ```
- `datasets/`: This folder contains the datasets used in the experiments.
- `results/`: Stores the self-optimized code generated by the model during each task, as well as the output generated by the model during tests.
- `src/`: This folder contains the code implementation.
  - `main.py`: The entry point for running the agent.
  - `agent_module.py`: Core implementation of the Gödel Agent, including the self-awareness, self-modification, and action execution logic.
  - `task_*.py`: Evaluation scripts for each task/environment (adapted from ADAS).
  - `logic.py`: Stores the generated agent code.
  - `wrap.py`: Used for debugging.
  - `goal_prompt.md`: Contains the goal prompt for the agent.

Make sure to configure your OpenAI API key in the `key.env` file before running the code.
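As a point of reference, a minimal sketch of what `key.env` might contain, assuming the standard `OPENAI_API_KEY` variable name (the exact name the code loads is an assumption; check the source):

```
# key.env — minimal sketch; the variable name expected by the code is an assumption
OPENAI_API_KEY=sk-your-key-here
```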
- When you want to add a new action, you need to:

  a. Add an entry to the `agent.action_functions` list, including the name, parameters, return value, and functional description of the action you are adding, to facilitate LLM invocation.

  b. Implement the action function.

  c. (Optional) Emphasize it in the `goal_prompt` to encourage the agent to call this action under certain conditions.

  A minimal sketch of these steps is given after this list.

- When you want to use the Gödel Agent in a new environment, you need to:

  a. Define some reward functions, where the input is the agent's policy and the output is feedback to the agent, such as scores, suggestions, or scenario descriptions.

  b. Implement the `action_evaluate_on_task` function for that environment.

  c. Implement the initial policy, which can be simple. The current implementation is the `solver` function (you can reflect the environment in which the policy is applied within the `solver`, or describe the environment in the `goal_prompt`).

  A sketch of a simple environment setup is also given after this list.

- If you want the Gödel Agent to perform multiple tasks in a complex environment, you need to implement different initial solvers and different `action_evaluate_on_task` functions. Note that `action_evaluate_on_task` is also an action and needs to be added as described in the first point.
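To make the first point concrete, here is a minimal sketch of registering a new action. The `agent.action_functions` list and `goal_prompt.md` come from the repository; the entry schema (OpenAI function-calling style) and the `action_list_directory` / `register_list_directory` names are illustrative assumptions and may differ from what `agent_module.py` actually expects.

```python
import os

# b. Implement the action function (hypothetical example).
def action_list_directory(path: str = ".") -> list[str]:
    """List files under `path` so the agent can inspect the repository layout."""
    return sorted(os.listdir(path))

# a. Describe the action in agent.action_functions so the LLM can invoke it.
#    The dict schema below is an assumption, not necessarily the repo's exact format.
def register_list_directory(agent) -> None:
    agent.action_functions.append({
        "name": "action_list_directory",
        "description": "List the files under a directory.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "Directory to list."},
            },
            "required": [],
        },
    })

# c. (Optional) Mention action_list_directory in goal_prompt.md so the agent
#    knows under which conditions this action is worth calling.
```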
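And a sketch for the second point: a deliberately simple initial `solver` plus an `action_evaluate_on_task` reward function. The two function names come from the notes above; the task format, sampling, and scoring logic are assumptions made for illustration only.

```python
import random

# c. Initial policy: can be trivial; the agent rewrites it during self-improvement.
def solver(question: str) -> str:
    """Answer a single task instance (placeholder implementation)."""
    return "I don't know yet: " + question

# a./b. A reward function exposed as an action: run the current policy on a small
# sample of (question, answer) pairs and return feedback (score plus failure cases).
def action_evaluate_on_task(task_instances: list[tuple[str, str]]) -> dict:
    sample = random.sample(task_instances, k=min(10, len(task_instances)))
    if not sample:
        return {"score": 0.0, "failures": []}
    failures, correct = [], 0
    for question, answer in sample:
        prediction = solver(question)
        if prediction.strip() == answer.strip():
            correct += 1
        else:
            failures.append({"question": question, "expected": answer, "got": prediction})
    return {"score": correct / len(sample), "failures": failures}
```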
If you find our repository useful in your research, please consider citing:
```bibtex
@misc{yin2024godelagentselfreferentialagent,
      title={G\"odel Agent: A Self-Referential Agent Framework for Recursive Self-Improvement},
      author={Xunjian Yin and Xinyi Wang and Liangming Pan and Xiaojun Wan and William Yang Wang},
      year={2024},
      eprint={2410.04444},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2410.04444},
}
```