
Maximum context length exceeded after get_hyperlinks #2906

Closed
1 task done
bobinson opened this issue Apr 22, 2023 · 4 comments · Fixed by #3222
Labels: bug (Something isn't working), function: browse

Comments

bobinson commented Apr 22, 2023

  • I have searched the existing issues, and there is no existing issue for my problem

Which Operating System are you using?

MacOS

macOS on M2

Python 3.11.3 (Darwin MacBook-Pro.local 22.4.0 Darwin Kernel Version 22.4.0: Mon Mar 6 21:01:02 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T8112 arm64)

Which version of Auto-GPT are you using?

Master (branch)

Git commit hash b4bd11d708e076c67e426776a76cee8f27d04327 on master

GPT-3 or GPT-4?

GPT-3.5

Steps to reproduce 🕹

  • Install all the dependencies and run as python -m autogpt -c

  • Give the prompt

  • The software connects to Redis, fetches the previous session, and continues.

The original prompts were:

Goal 1: Please analyze the home page of http://mathrubhumi.com
Goal 2: Provide feedback on the website's design and functionality

Similar issues were reported and fixed in:

  1. why it shows " This model's maximum context length is 8191 tokens ? " #2366
  2. This model's maximum context length is 8191 tokens, however you requested 89686 tokens (89686 in your prompt) #1639
  3. and a possible related issue is Maximum context length exceeded after browse_website #796

Current behavior 😯

Ungracefully crashes complaining maximum context length exceeded

Expected behavior 🤔

If the token limit is exceeded, an informational message should be printed and the program should exit gracefully; i.e., we need better error handling.
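A minimal sketch of what graceful handling could look like (the wrapper name and the print call are illustrative, not actual Auto-GPT code; `openai.error.InvalidRequestError` is the exception type raised in the traceback below):

```python
import openai


def add_to_memory_safely(memory, text: str) -> bool:
    """Try to store text in memory; warn and continue instead of crashing
    when the embedding request exceeds the model's context window."""
    try:
        memory.add(text)
        return True
    except openai.error.InvalidRequestError as e:
        # Hypothetical handling: report the overflow and keep the agent loop alive.
        print(f"WARNING: skipped memory add, text too long for the embedding model: {e}")
        return False
```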

Your prompt 📝

It's a continuation of a previous session.


[03:38 PM] bobinson@MacBook-Pro 🖖  [~/src/Auto-GPT]python -m autogpt -c
Error creating Redis search index:  Index already exists
Continuous Mode:  ENABLED
WARNING:  Continuous mode is not recommended. It is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorise. Use at your own risk.
NEWS:  Welcome to Auto-GPT! We'll keep you informed of the latest news and features by printing messages here. If you don't wish to see this message, you can run Auto-GPT with the --skip-news flag
WARNING:  You are running on `master` branch - this is not a supported branch.
Welcome back!  Would you like me to return to being Entrepreneur-GPT?
Continue with the last settings?
Name:  Entrepreneur-GPT
Role:  an AI designed to autonomously develop and run businesses with the
ge of the website http://mathrubhumi.com and provide feedback']
Continue (y/n): y
Using memory of type:  RedisMemory
Using Browser:  chrome
 THOUGHTS:  I will start by analyzing the home page of http://mathrubhumi.com to provide feedback.
REASONING:  Analyzing the home page will give me an idea of the website's design, layout, and content, which will help me determine if there are any areas that need improvement.
PLAN:
-  Analyze the home page of http://mathrubhumi.com
-  Provide feedback on the website's design, layout, and content
CRITICISM:  I need to ensure that my feedback is constructive and actionable.
NEXT ACTION:  COMMAND = browse_website ARGUMENTS = {'url': 'http://mathrubhumi.com', 'question': "Provide feedback on the website's design, layout, and content."}
Text length: 8180 characters
Adding chunk 1 / 3 to memory
Summarizing chunk 1 / 3 of length 3078 characters, or 2975 tokens

Your Logs 📒

Environment:

macOS on M2

Python 3.11.3 (Darwin MacBook-Pro.local 22.4.0 Darwin Kernel Version 22.4.0: Mon Mar 6 21:01:02 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T8112 arm64)

Git commit hash b4bd11d708e076c67e426776a76cee8f27d04327 on master

SYSTEM:  Command browse_website returned: ("Answer gathered from website: The text does not provide information about the design and user experience of the Mathrubhumi website. It contains news articles and features on various topics such as politics, environment, sports, and entertainment. Some articles are available for free, while others require a premium subscription. The website also includes special pages for events like Vishu, Ramzan, and IPL 2023. \n \n Links: ['\\n\\n (javascript:void(0))', '\\n\\n\\n (https://www.mathrubhumi.com/)', '\\nMALAYALAM (http://mathrubhumi.com/)', '\\nENGLISH (https://english.mathrubhumi.com/)', '\\nNewspaper (https://newspaper.mathrubhumi.com/)']", <selenium.webdriver.chrome.webdriver.WebDriver (session="f00359f317ac17df84fd2258bd7da2ef")>)
- Thinking... an read more here: https://github.com/Significant-Gravitas/Auto-GPT#openai-api-keys-configuration
 THOUGHTS:  Based on the information gathered, I suggest we use the 'get_hyperlinks' command to get a list of hyperlinks on the Mathrubhumi website.
REASONING:  Getting a list of hyperlinks will allow us to explore the website in more detail and gain a better understanding of its design and user experience.
PLAN:
-  Use the 'get_hyperlinks' command to get a list of hyperlinks on the Mathrubhumi website.
CRITICISM:  I need to ensure that I am thorough in my exploration of the website and not overlook any important information.
NEXT ACTION:  COMMAND = get_hyperlinks ARGUMENTS = {'url': 'http://mathrubhumi.com'}
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/Users/bbpbsa/src/Auto-GPT/autogpt/__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "/opt/homebrew/lib/python3.11/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/click/core.py", line 1635, in invoke
    rv = super().invoke(ctx)
         ^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/bbpbsa/src/Auto-GPT/autogpt/cli.py", line 177, in main
    agent.start_interaction_loop()
  File "/Users/bbpbsa/src/Auto-GPT/autogpt/agent/agent.py", line 213, in start_interaction_loop
    self.memory.add(memory_to_add)
  File "/Users/bbpbsa/src/Auto-GPT/autogpt/memory/redismem.py", line 91, in add
    vector = create_embedding_with_ada(data)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/bbpbsa/src/Auto-GPT/autogpt/llm_utils.py", line 170, in create_embedding_with_ada
    return openai.Embedding.create(
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_resources/embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 11945 tokens (11945 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
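The traceback shows the whole command result being embedded in a single `create_embedding_with_ada` call. A rough sketch of one way to keep each embedding request under the limit, assuming the `tiktoken` cl100k_base encoding used by text-embedding-ada-002 (the helper below is hypothetical, not existing Auto-GPT code):

```python
import tiktoken

MAX_EMBEDDING_TOKENS = 8191  # limit reported in the error above


def split_for_embedding(text: str, max_tokens: int = MAX_EMBEDDING_TOKENS) -> list[str]:
    """Split text into pieces that each fit the embedding model's context window."""
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    return [
        enc.decode(tokens[i : i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]
```

Each chunk could then be embedded and stored separately instead of failing on the full string.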
Pwuts (Member) commented Apr 22, 2023

Fixed in #2542 Nevermind, not for the exact command you're having issues with.

Pwuts closed this as completed Apr 22, 2023
Pwuts reopened this Apr 22, 2023
Pwuts (Member) commented Apr 22, 2023

Prompt overflow handling:

Pwuts changed the title from "Ungraceful crash on exceeding model's maximum context length" to "get_hyperlinks prompt overflow" Apr 22, 2023
bobinson (Author) commented:

> Fixed in #2542 Nevermind, not for the exact command you're having issues with.

I had seen that PR but thought this was a different issue, so I opened this one :)

Pwuts changed the title from "get_hyperlinks prompt overflow" to "Maximum context length exceeded after get_hyperlinks" Apr 22, 2023
Pwuts added the bug (Something isn't working) label Apr 22, 2023
rocks6 (Contributor) commented Apr 23, 2023

We may want to add an offset param to the command, such that offset=0 gets the first N records, offset=1 gets the next N records, etc. I think current LLMs would be able to understand this.
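A rough sketch of how such an offset parameter might look (a hypothetical helper using `requests` and BeautifulSoup, not the existing `get_hyperlinks` implementation):

```python
import requests
from bs4 import BeautifulSoup


def get_hyperlinks_page(url: str, offset: int = 0, page_size: int = 50) -> list[str]:
    """Return page_size links starting at offset * page_size, so the agent
    can request hyperlinks in slices that fit the model's context window."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    links = [a["href"] for a in soup.find_all("a", href=True)]
    start = offset * page_size
    return links[start : start + page_size]
```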
