Merge branch 'cn-dev' into cn
* cn-dev: (64 commits)
  Translate into Chinese for users in mainland China
  Update challenge scores
  Update version numbers for v0.4.0 release
  Add `replace_in_file` command (Significant-Gravitas#4565)
  Update bulletin with highlights for v0.4.0 release (Significant-Gravitas#4576)
  Skip flaky challenges (Significant-Gravitas#4573)
  Fix `test_web_selenium` (Significant-Gravitas#4554)
  Clean up CI git logic
  remove information retrieval challenge b from beaten challenges
  Fix CI git authentication and cassettes
  debug
  Fix CI git diff
  Fix CI git authorization
  Update submodule reference
  Update current score
  Cache Python Packages in the CI pipeline (Significant-Gravitas#4488)
  Fix pushing cassettes in CI
  Remove news about config (Significant-Gravitas#4553)
  Fix CI for internal PRs with CI changes (Significant-Gravitas#4552)
  Update BULLETIN.md
  ...

# Conflicts:
#	BULLETIN.md
#	CONTRIBUTING.md
#	autogpt/agent/agent.py
#	autogpt/app.py
#	autogpt/llm/llm_utils.py
kuwork committed Jun 12, 2023
1 parent 25a7957 commit 7dbedd3
Showing 13 changed files with 177 additions and 183 deletions.
79 changes: 34 additions & 45 deletions BULLETIN.md
Original file line number Diff line number Diff line change
@@ -1,47 +1,36 @@
# Website and Documentation Site 📰📖
Check out *https://agpt.co*, the official news & updates site for Auto-GPT!
The documentation also has a place here, at *https://docs.agpt.co*

# For contributors 👷🏼
Since releasing v0.3.0, we are working on re-architecting the Auto-GPT core to make
it more extensible and to make room for structural performance-oriented R&D.
In the meantime, we have less time to process incoming pull requests and issues,
so we focus on high-value contributions:
* significant bugfixes
* *major* improvements to existing functionality and/or docs (so no single-typo fixes)
* contributions that help us with re-architecture and other roadmapped items
We have to be somewhat selective in order to keep making progress, but this does not
mean you can't contribute. Check out the contribution guide on our wiki:
# Website and Documentation Site 📰📖
Check out *https://agpt.co*, the official news & updates site for Auto-GPT! The documentation is also hosted there, at *https://docs.agpt.co*

# For contributors 👷🏼
Since releasing v0.3.0, we have been re-architecting the Auto-GPT core to make it more extensible and to make room for structural, performance-oriented R&D.
In the meantime, we have less time to process incoming pull requests and issues, so we focus on high-value contributions:
* significant bugfixes
* *major* improvements to existing functionality and/or docs (so no single-typo fixes)
* contributions that help us with re-architecture and other roadmapped items
We have to be somewhat selective in order to keep making progress, but this does not mean you can't contribute. Check out the contribution guide on our wiki:
https://github.com/Significant-Gravitas/Auto-GPT/wiki/Contributing

# 🚀 v0.4.0 Release 🚀
Two weeks and 76 pull requests have passed since v0.3.1, and we are happy to announce
the release of v0.4.0!

Highlights and notable changes since v0.3.0:

## ⚠️ Command `send_tweet` is REMOVED
Twitter functionality (and more) is now covered by plugins.

## ⚠️ Memory backend deprecation 💾
The Milvus, Pinecone and Weaviate memory backends were rendered incompatible
by work on the memory system, and have been removed in `master`. The Redis
memory store was also temporarily removed; we will merge a new implementation ASAP.
Whether built-in support for the others will be added back in the future is subject to
discussion, feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280

## Document support in `read_file` 📄
Auto-GPT can now read text from document files, with support added for PDF, DOCX, CSV,
HTML, TeX and more!

## Managing Auto-GPT's access to commands ❌🔧
You can now disable a set of built-in commands through the *DISABLED_COMMAND_CATEGORIES*
variable in .env. Specific shell commands can also be disabled using *DENY_COMMANDS*,
or selectively enabled using *ALLOW_COMMANDS*.
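These variables are read from the `.env` file; a minimal sketch of such a configuration (the category and command names below are illustrative, not a verified list — see the bundled `.env.template` for the options your version actually supports):

```shell
# .env -- sketch; category and command names are illustrative
DISABLED_COMMAND_CATEGORIES=autogpt.commands.execute_code
DENY_COMMANDS=rm,sudo,mv
ALLOW_COMMANDS=ls,cat,grep
```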

## Further fixes and changes 🛠️
Other highlights include improvements to self-feedback mode and continuous mode,
documentation, docker and devcontainer setups, and much more. Most of the improvements
that were made are not yet visible to users, but will pay off in the long term.
Take a look at the Release Notes on Github for the full changelog!
https://github.com/Significant-Gravitas/Auto-GPT/releases
# 🚀 v0.4.0 Release 🚀
Two weeks and 76 merged pull requests have passed since v0.3.1, and we are happy
to announce the release of v0.4.0!

Highlights and notable changes since v0.3.0:

## ⚠️ Command `send_tweet` is REMOVED
Twitter functionality (and more) is now covered by plugins.

## ⚠️ Memory backend deprecation 💾
The Milvus, Pinecone and Weaviate memory backends were rendered incompatible by work on the memory system, and have been removed in `master`. The Redis
memory store was also temporarily removed; we will merge a new implementation ASAP. Whether built-in support for the others will be added back in the future is still under discussion, feel free to pitch in: https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280

## Document support in `read_file` 📄
Auto-GPT can now read text from document files, with support for PDF, DOCX, CSV, HTML, TeX and more!

## Managing Auto-GPT's access to commands ❌🔧
You can now disable a set of built-in commands through the *DISABLED_COMMAND_CATEGORIES* variable in .env. Specific shell commands can also be disabled using *DENY_COMMANDS*,
or selectively enabled using *ALLOW_COMMANDS*.

## Further fixes and changes 🛠️
Other highlights include improvements to self-feedback mode and continuous mode, documentation, Docker and devcontainer setups, and much more.
Most of the improvements are not yet visible to users, but will pay off in the long term. Take a look at the Release Notes on GitHub for the full changelog!
https://github.com/Significant-Gravitas/Auto-GPT/releases
16 changes: 8 additions & 8 deletions CONTRIBUTING.md
@@ -1,14 +1,14 @@
We maintain a knowledgebase at this [wiki](https://github.com/Significant-Gravitas/Nexus/wiki)
We maintain a knowledge base at this [wiki](https://github.com/Significant-Gravitas/Nexus/wiki).

We would like to say "We value all contributions". After all, we are an open-source project, so we should say something fluffy like this, right?
We would like to say "We value all contributions". After all, we are an open-source project, so we should say something fluffy like this, right?

However the reality is that some contributions are SUPER-valuable, while others create more trouble than they are worth and actually _create_ work for the core team.
However, the reality is that some contributions are SUPER-valuable, while others create more trouble than they are worth and actually _create_ work for the core team.

If you wish to contribute, please look through the wiki [contributing](https://github.com/Significant-Gravitas/Nexus/wiki/Contributing) page.
If you wish to contribute, please look through the wiki [contributing](https://github.com/Significant-Gravitas/Nexus/wiki/Contributing) page.

If you wish to get involved with the project (beyond just contributing PRs), please read the wiki [catalyzing](https://github.com/Significant-Gravitas/Nexus/wiki/Catalyzing) page.
If you wish to get involved with the project (beyond just submitting PRs), please read the wiki [catalyzing](https://github.com/Significant-Gravitas/Nexus/wiki/Catalyzing) page.

In fact, why not just look through the whole wiki (it's only a few pages) and hop on our discord (you'll find it in the wiki).
In fact, why not just look through the whole wiki (it's only a few pages) and hop on our Discord (you'll find it in the wiki)?

❤️ & 🔆
The team @ Auto-GPT
❤️ & 🔆
The team @ Auto-GPT
10 changes: 7 additions & 3 deletions Dockerfile
Expand Up @@ -2,7 +2,7 @@
ARG BUILD_TYPE=dev

# Use an official Python base image from the Docker Hub
FROM python:3.10-slim AS autogpt-base
FROM python:3.11-slim AS autogpt-base

# Install browsers
RUN apt-get update && apt-get install -y \
@@ -15,7 +15,9 @@ RUN apt-get install -y curl jq wget git
# Set environment variables
ENV PIP_NO_CACHE_DIR=yes \
PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1
PYTHONDONTWRITEBYTECODE=1 \
HTTP_PROXY=http://192.168.1.211:7890 \
HTTPS_PROXY=http://192.168.1.211:7890

# Install the required python packages globally
ENV PATH="$PATH:/root/.local/bin"
@@ -26,13 +28,15 @@ ENTRYPOINT ["python", "-m", "autogpt", "--install-plugin-deps"]

# dev build -> include everything
FROM autogpt-base as autogpt-dev
RUN pip install --no-cache-dir -r requirements.txt
RUN pip config set global.index-url https://mirrors.aliyun.com/pypi/simple && \
pip install --no-cache-dir -r requirements.txt
WORKDIR /app
ONBUILD COPY . ./

# release build -> include bare minimum
FROM autogpt-base as autogpt-release
RUN sed -i '/Items below this point will not be included in the Docker Image/,$d' requirements.txt && \
pip config set global.index-url https://mirrors.aliyun.com/pypi/simple && \
pip install --no-cache-dir -r requirements.txt
WORKDIR /app
ONBUILD COPY autogpt/ ./autogpt
52 changes: 26 additions & 26 deletions autogpt/agent/agent.py
@@ -122,11 +122,11 @@ def signal_handler(signum, frame):
and self.cycle_count > cfg.continuous_limit
):
logger.typewriter_log(
"Continuous Limit Reached: ", Fore.YELLOW, f"{cfg.continuous_limit}"
"Continuous Limit Reached: ", Fore.YELLOW, f"{cfg.continuous_limit}"
)
break
# Send message to AI, get response
with Spinner("Thinking... ", plain_output=cfg.plain_output):
with Spinner("Thinking... ", plain_output=cfg.plain_output):
assistant_reply = chat_with_ai(
cfg,
self,
@@ -152,7 +152,7 @@ def signal_handler(signum, frame):
)
command_name, arguments = get_command(assistant_reply_json)
if cfg.speak_mode:
say_text(f"I want to execute {command_name}")
say_text(f"I want to execute {command_name}")

arguments = self._resolve_pathlike_command_args(arguments)

@@ -167,10 +167,10 @@ def signal_handler(signum, frame):
)

logger.typewriter_log(
"NEXT ACTION: ",
"NEXT ACTION: ",
Fore.CYAN,
f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL} "
f"ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}",
f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL} "
f"ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}",
)

if not cfg.continuous_mode and self.next_action_count == 0:
@@ -179,23 +179,23 @@ def signal_handler(signum, frame):
# to exit
self.user_input = ""
logger.info(
"Enter 'y' to authorise command, 'y -N' to run N continuous commands, 's' to run self-feedback commands, "
"'n' to exit program, or enter feedback for "
"Enter 'y' to authorise command, 'y -N' to run N continuous commands, 's' to run self-feedback commands, "
"'n' to exit program, or enter feedback for "
f"{self.ai_name}..."
)
while True:
if cfg.chat_messages_enabled:
console_input = clean_input("Waiting for your response...")
console_input = clean_input("Waiting for your response...")
else:
console_input = clean_input(
Fore.MAGENTA + "Input:" + Style.RESET_ALL
Fore.MAGENTA + "Input:" + Style.RESET_ALL
)
if console_input.lower().strip() == cfg.authorise_key:
user_input = "GENERATE NEXT COMMAND JSON"
break
elif console_input.lower().strip() == "s":
logger.typewriter_log(
"-=-=-=-=-=-=-= THOUGHTS, REASONING, PLAN AND CRITICISM WILL NOW BE VERIFIED BY AGENT -=-=-=-=-=-=-=",
"-=-=-=-=-=-=-= THOUGHTS, REASONING, PLAN AND CRITICISM WILL NOW BE VERIFIED BY AGENT -=-=-=-=-=-=-=",
Fore.GREEN,
"",
)
@@ -204,15 +204,15 @@ def signal_handler(signum, frame):
thoughts, cfg.fast_llm_model
)
logger.typewriter_log(
f"SELF FEEDBACK: {self_feedback_resp}",
f"SELF FEEDBACK: {self_feedback_resp}",
Fore.YELLOW,
"",
)
user_input = self_feedback_resp
command_name = "self_feedback"
break
elif console_input.lower().strip() == "":
logger.warn("Invalid input format.")
logger.warn("Invalid input format.")
continue
elif console_input.lower().startswith(f"{cfg.authorise_key} -"):
try:
@@ -222,8 +222,8 @@ def signal_handler(signum, frame):
user_input = "GENERATE NEXT COMMAND JSON"
except ValueError:
logger.warn(
"Invalid input format. Please enter 'y -n' where n is"
" the number of continuous tasks."
"Invalid input format. Please enter 'y -n' where n is"
" the number of continuous tasks."
)
continue
break
@@ -244,26 +244,26 @@ def signal_handler(signum, frame):

if user_input == "GENERATE NEXT COMMAND JSON":
logger.typewriter_log(
"-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=",
"-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=",
Fore.MAGENTA,
"",
)
elif user_input == "EXIT":
logger.info("Exiting...")
logger.info("Exiting...")
break
else:
# Print authorized commands left value
logger.typewriter_log(
f"{Fore.CYAN}AUTHORISED COMMANDS LEFT: {Style.RESET_ALL}{self.next_action_count}"
f"{Fore.CYAN}AUTHORISED COMMANDS LEFT: {Style.RESET_ALL}{self.next_action_count}"
)

# Execute command
if command_name is not None and command_name.lower().startswith("error"):
result = f"Could not execute command: {arguments}"
result = f"Could not execute command: {arguments}"
elif command_name == "human_feedback":
result = f"Human feedback: {user_input}"
result = f"Human feedback: {user_input}"
elif command_name == "self_feedback":
result = f"Self feedback: {user_input}"
result = f"Self feedback: {user_input}"
else:
for plugin in cfg.plugins:
if not plugin.can_handle_pre_command():
@@ -278,7 +278,7 @@ def signal_handler(signum, frame):
self.config.prompt_generator,
config=cfg,
)
result = f"Command {command_name} returned: " f"{command_result}"
result = f"Command {command_name} returned: " f"{command_result}"

result_tlength = count_string_tokens(
str(command_result), cfg.fast_llm_model
@@ -287,8 +287,8 @@ def signal_handler(signum, frame):
str(self.history.summary_message()), cfg.fast_llm_model
)
if result_tlength + memory_tlength + 600 > cfg.fast_token_limit:
result = f"Failure: command {command_name} returned too much output. \
Do not execute this command again with the same arguments."
result = f"Failure: command {command_name} returned too much output. \
Do not execute this command again with the same arguments."

for plugin in cfg.plugins:
if not plugin.can_handle_post_command():
@@ -303,9 +303,9 @@ def signal_handler(signum, frame):
self.history.add("system", result, "action_result")
logger.typewriter_log("SYSTEM: ", Fore.YELLOW, result)
else:
self.history.add("system", "Unable to execute command", "action_result")
self.history.add("system", "Unable to execute command", "action_result")
logger.typewriter_log(
"SYSTEM: ", Fore.YELLOW, "Unable to execute command"
"SYSTEM: ", Fore.YELLOW, "Unable to execute command"
)

def _resolve_pathlike_command_args(self, command_args):
10 changes: 5 additions & 5 deletions autogpt/app.py
@@ -184,21 +184,21 @@ def start_agent(name: str, task: str, prompt: str, config: Config, model=None) -
# Remove underscores from name
voice_name = name.replace("_", " ")

first_message = f"""You are {name}. Respond with: "Acknowledged"."""
agent_intro = f"{voice_name} here, Reporting for duty!"
first_message = f"""You are {name}. Respond with: "Acknowledged"."""
agent_intro = f"{voice_name} here, Reporting for duty!"

# Create agent
if config.speak_mode:
say_text(agent_intro, 1)
key, ack = agent_manager.create_agent(task, first_message, model)

if config.speak_mode:
say_text(f"Hello {voice_name}. Your task is as follows. {task}.")
say_text(f"Hello {voice_name}. Your task is as follows. {task}.")

# Assign task (prompt), get response
agent_response = agent_manager.message_agent(key, prompt)

return f"Agent {name} created with key {key}. First response: {agent_response}"
return f"Agent {name} created with key {key}. First response: {agent_response}"


@command("message_agent", "Message GPT Agent", '"key": "<key>", "message": "<message>"')
@@ -208,7 +208,7 @@ def message_agent(key: str, message: str, config: Config) -> str:
if is_valid_int(key):
agent_response = AgentManager().message_agent(int(key), message)
else:
return "Invalid key, must be an integer."
return "Invalid key, must be an integer."

# Speak response
if config.speak_mode:
10 changes: 5 additions & 5 deletions autogpt/logs.py
@@ -269,11 +269,11 @@ def print_assistant_thoughts(
assistant_thoughts_criticism = assistant_thoughts.get("criticism")
assistant_thoughts_speak = assistant_thoughts.get("speak")
logger.typewriter_log(
f"{ai_name.upper()} THOUGHTS:", Fore.YELLOW, f"{assistant_thoughts_text}"
f"{ai_name.upper()} THOUGHTS:", Fore.YELLOW, f"{assistant_thoughts_text}"
)
logger.typewriter_log("REASONING:", Fore.YELLOW, f"{assistant_thoughts_reasoning}")
logger.typewriter_log("REASONING:", Fore.YELLOW, f"{assistant_thoughts_reasoning}")
if assistant_thoughts_plan:
logger.typewriter_log("PLAN:", Fore.YELLOW, "")
logger.typewriter_log("PLAN:", Fore.YELLOW, "")
# If it's a list, join it into a string
if isinstance(assistant_thoughts_plan, list):
assistant_thoughts_plan = "\n".join(assistant_thoughts_plan)
@@ -285,10 +285,10 @@ def print_assistant_thoughts(
for line in lines:
line = line.lstrip("- ")
logger.typewriter_log("- ", Fore.GREEN, line.strip())
logger.typewriter_log("CRITICISM:", Fore.YELLOW, f"{assistant_thoughts_criticism}")
logger.typewriter_log("CRITICISM:", Fore.YELLOW, f"{assistant_thoughts_criticism}")
# Speak the assistant's thoughts
if assistant_thoughts_speak:
if speak_mode:
say_text(assistant_thoughts_speak)
else:
logger.typewriter_log("SPEAK:", Fore.YELLOW, f"{assistant_thoughts_speak}")
logger.typewriter_log("SPEAK:", Fore.YELLOW, f"{assistant_thoughts_speak}")