autogpt/llm_utils.py line 137 could use some optimization #3
Comments
Worth considering. I looked into it and the upstream project has the same problem; if they don't fix it by tomorrow, I'll fix it myself.
The response is whatever the remote side returns, and there are many possible cases: the network may be down, or the remote site may be misbehaving. Leaving it unhandled for now.
kaqijiang pushed a commit that referenced this issue on Apr 24, 2023:
* refactor: translate the prompt into Chinese to reduce conflicts in GPT-generated replies (#1) * refactor: optimize triggering_prompt 1. Modify triggering_prompt to lower the error rate of GPT's reply format 2. Minor code syntax fixes
```python
return openai.Embedding.create(
    input=[text], model="text-embedding-ada-002"
)["data"][0]["embedding"]
```
may cause:

```
File "/media/Data/data/bone_seg/Auto-GPT-ZH-0.2.1/autogpt/memory/local.py", line 75, in add
  embedding = create_embedding_with_ada(text)
File "/media/Data/data/bone_seg/Auto-GPT-ZH-0.2.1/autogpt/llm_utils.py", line 137, in create_embedding_with_ada
  return openai.Embedding.create(
File "/home/anaconda3/lib/python3.9/site-packages/openai/api_resources/embedding.py", line 33, in create
  response = super().create(*args, **kwargs)
File "/home/anaconda3/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
  response, _, api_key = requestor.request(
File "/home/anaconda3/lib/python3.9/site-packages/openai/api_requestor.py", line 226, in request
  resp, got_stream = self._interpret_response(result, stream)
File "/home/anaconda3/lib/python3.9/site-packages/openai/api_requestor.py", line 619, in _interpret_response
  self._interpret_response_line(
File "/home/anaconda3/lib/python3.9/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
  raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 11300 tokens (11300 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
```
Proposed fixes:
1. Limit the length of `text` here (add a length cap)
2. Wrap this call in a try/except so the error doesn't crash the run
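The two ideas above could be sketched roughly as follows. This is an illustrative sketch, not code from the repo: the function names are made up, the chars-per-token ratio is an assumption (a rough average for English text; `tiktoken` would give an exact count), and the `openai` call follows the pre-1.0 API style shown in the traceback.

```python
# Sketch of both proposed fixes; names and constants are illustrative.
MAX_TOKENS = 8191       # text-embedding-ada-002 limit, from the error message
CHARS_PER_TOKEN = 4     # rough heuristic; use tiktoken for an exact count

def truncate_for_embedding(text: str, max_tokens: int = MAX_TOKENS) -> str:
    """Idea 1: cap the input length before sending it to the API."""
    max_chars = max_tokens * CHARS_PER_TOKEN
    return text[:max_chars]

def create_embedding_with_ada_safe(text: str):
    """Idea 2: catch InvalidRequestError instead of letting it crash the run."""
    import openai  # openai<1.0, matching the traceback above
    try:
        return openai.Embedding.create(
            input=[truncate_for_embedding(text)],
            model="text-embedding-ada-002",
        )["data"][0]["embedding"]
    except openai.error.InvalidRequestError:
        return None  # caller must tolerate a missing embedding
```

Character-based truncation over-trims for CJK text (where one token often covers fewer characters), so token-exact truncation with `tiktoken` would be the more robust variant of idea 1.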