

Reproducible crash when summarizing multiple chunks of a large file #3224

Closed
1 task done
SvenMeyer opened this issue Apr 25, 2023 · 7 comments
Labels
bug Something isn't working needs investigation

Comments

@SvenMeyer

SvenMeyer commented Apr 25, 2023

⚠️ Search for existing issues first ⚠️

  • I have searched the existing issues, and there is no existing issue for my problem

Which Operating System are you using?

Linux

Which version of Auto-GPT are you using?

Latest Release

GPT-3 or GPT-4?

GPT-3.5

Steps to reproduce 🕹

see prompt

Current behavior 😯

crash

NEXT ACTION:  COMMAND = browse_website ARGUMENTS = {'url': 'https://docs.centrifuge.io/build/tinlake/', 'question': 'What is the overall architecture of the Tinlake system?'}
Text length: 50733 characters
Adding chunk 1 / 5 to memory
Summarizing chunk 1 / 5 of length 12070 characters, or 2990 tokens
Added chunk 1 summary to memory, of length 846 characters
Adding chunk 2 / 5 to memory
Summarizing chunk 2 / 5 of length 12788 characters, or 2930 tokens
Added chunk 2 summary to memory, of length 692 characters
Adding chunk 3 / 5 to memory
Summarizing chunk 3 / 5 of length 11341 characters, or 2974 tokens
SYSTEM:  Command browse_website returned: Error: The server is overloaded or not ready yet.
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/sum/DEV/AI/Auto-GPT/Auto-GPT-0.2.2/autogpt/__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "/usr/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3.10/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3.10/site-packages/click/core.py", line 1635, in invoke
    rv = super().invoke(ctx)
  File "/usr/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3.10/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/home/sum/DEV/AI/Auto-GPT/Auto-GPT-0.2.2/autogpt/cli.py", line 151, in main
    agent.start_interaction_loop()
  File "/home/sum/DEV/AI/Auto-GPT/Auto-GPT-0.2.2/autogpt/agent/agent.py", line 75, in start_interaction_loop
    assistant_reply = chat_with_ai(
  File "/home/sum/DEV/AI/Auto-GPT/Auto-GPT-0.2.2/autogpt/chat.py", line 159, in chat_with_ai
    assistant_reply = create_chat_completion(
  File "/home/sum/DEV/AI/Auto-GPT/Auto-GPT-0.2.2/autogpt/llm_utils.py", line 93, in create_chat_completion
    response = openai.ChatCompletion.create(
  File "/home/sum/.local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/home/sum/.local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/home/sum/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/home/sum/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/home/sum/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 662, in _interpret_response_line
    raise error.ServiceUnavailableError(
openai.error.ServiceUnavailableError: The server is overloaded or not ready yet.
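The crash happens because the transient `ServiceUnavailableError` propagates all the way out of the interaction loop instead of being retried. A minimal sketch of the kind of retry wrapper that would absorb such errors is shown below — the exception class here is a stand-in (Auto-GPT's actual code raises `openai.error.ServiceUnavailableError`), and the function names are illustrative, not Auto-GPT's real API:

```python
import time
import random


class ServiceUnavailableError(Exception):
    """Stand-in for openai.error.ServiceUnavailableError (for illustration only)."""


def retry_on_overload(max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry a callable with exponential backoff when the API reports overload."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except ServiceUnavailableError:
                    if attempt == max_retries - 1:
                        raise  # give up after the final attempt
                    # Exponential backoff with a little jitter: ~1s, ~2s, ~4s, ...
                    sleep(base_delay * 2 ** attempt + random.uniform(0, 0.5))
        return wrapper
    return decorator


# Demo: a call that fails twice with the overload error, then succeeds.
calls = {"n": 0}

@retry_on_overload(max_retries=5, sleep=lambda s: None)  # skip real sleeping in the demo
def flaky_completion():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ServiceUnavailableError("The server is overloaded or not ready yet.")
    return "summary chunk"

result = flaky_completion()  # succeeds on the third attempt instead of crashing
```

With a wrapper like this around the `create_chat_completion` call site, a temporarily overloaded server would cost a few seconds of waiting rather than aborting the whole run.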

Expected behavior 🤔

no crash

Your prompt 📝

can you clone the tinlake repository from centrifuge https://github.com/centrifuge/tinlake and have a look at the .sol files in the src folder. That may give you an idea how centrifuge has implemented RWA tokenization and using them as collateral. Thereafter continue with your previous research. I think this webpage which you found just recently is a great resource to continue with, if that wasn't the plan anyway: https://www.whitecase.com/insight-our-thinking/rise-digital-finance-tokenising-mining-metals-assets

Your Logs 📒

see attachment
activity.log
error.log

@k-boikov k-boikov added bug Something isn't working needs investigation labels Apr 25, 2023
@Pwuts

This comment was marked as outdated.

@Pwuts Pwuts marked this as a duplicate of #2937 Apr 26, 2023
@Pwuts Pwuts closed this as not planned Won't fix, can't repro, duplicate, stale Apr 26, 2023
@Pwuts
Member

Pwuts commented Apr 26, 2023

This is probably the result of using a free account. Please follow the instructions and set up a paid account. Free accounts can only make 3 requests per minute, which is what breaks the bot in this case.
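A cap of 3 requests per minute can also be respected client-side instead of tripping the server. The sliding-window limiter below is only a sketch — Auto-GPT does not ship this class, and the names are invented for illustration:

```python
import time
from collections import deque


class RateLimiter:
    """Allow at most `max_calls` calls per `period` seconds (sliding window).

    Illustrative sketch of how a 3-requests-per-minute free-tier cap
    could be respected before each API call.
    """

    def __init__(self, max_calls=3, period=60.0, clock=time.monotonic):
        self.max_calls = max_calls
        self.period = period
        self.clock = clock
        self.calls = deque()  # timestamps of recent calls

    def wait_time(self):
        """Seconds to wait before the next call is allowed (0 if allowed now)."""
        now = self.clock()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            return 0.0
        return self.period - (now - self.calls[0])

    def record_call(self):
        self.calls.append(self.clock())


# Demo with a fake clock: three calls pass immediately, the fourth must wait.
fake_now = [0.0]
rl = RateLimiter(max_calls=3, period=60.0, clock=lambda: fake_now[0])
for _ in range(3):
    assert rl.wait_time() == 0.0
    rl.record_call()
wait = rl.wait_time()  # fourth call inside the same minute has to wait
```

In practice the caller would `time.sleep(rl.wait_time())` before each request, then `rl.record_call()` — turning the hard failure into a slower but uninterrupted run.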

@SvenMeyer
Author

@Pwuts actually I have a paid account, although only with access to GPT-3.5 as of now, and thus start it in GPT-3-only mode.
I also can't see any error message indicating that it's not running on a paid account.
Did I miss anything?

@Pwuts
Member

Pwuts commented Apr 27, 2023

Do you have ChatGPT Plus, or also a paid API account? (Those are separate.)

@SvenMeyer
Author

@Pwuts I have a paid API key account.

@javableu
Contributor

+1. Same error.

@davidsolal

+1, same here. ChatGPT Plus user and paid API account:
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/dev/Setup/Auto-GPT/autogpt/__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1635, in invoke
    rv = super().invoke(ctx)
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/home/dev/Setup/Auto-GPT/autogpt/cli.py", line 96, in main
    run_auto_gpt(
  File "/home/dev/Setup/Auto-GPT/autogpt/main.py", line 197, in run_auto_gpt
    agent.start_interaction_loop()
  File "/home/dev/Setup/Auto-GPT/autogpt/agent/agent.py", line 130, in start_interaction_loop
    assistant_reply = chat_with_ai(
  File "/home/dev/Setup/Auto-GPT/autogpt/llm/chat.py", line 193, in chat_with_ai
    assistant_reply = create_chat_completion(
  File "/home/dev/Setup/Auto-GPT/autogpt/llm/utils/__init__.py", line 53, in metered_func
    return func(*args, **kwargs)
  File "/home/dev/Setup/Auto-GPT/autogpt/llm/utils/__init__.py", line 87, in _wrapped
    return func(*args, **kwargs)
  File "/home/dev/Setup/Auto-GPT/autogpt/llm/utils/__init__.py", line 235, in create_chat_completion
    response = api_manager.create_chat_completion(
  File "/home/dev/Setup/Auto-GPT/autogpt/llm/api_manager.py", line 61, in create_chat_completion
    response = openai.ChatCompletion.create(
  File "/usr/local/lib/python3.10/dist-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py", line 662, in _interpret_response_line
    raise error.ServiceUnavailableError(
openai.error.ServiceUnavailableError: The server is overloaded or not ready yet.


5 participants