
The model: gpt-4 does not exist #12

Closed
stan-voo opened this issue Mar 31, 2023 · 15 comments · Fixed by #45
Labels
good first issue Good for newcomers

Comments

@stan-voo

Hi,
Amazing work you're doing here! ❤
I'm getting this message:

Traceback (most recent call last):
File "/Users/stas/Auto-GPT/AutonomousAI/main.py", line 154, in
assistant_reply = chat.chat_with_ai(prompt, user_input, full_message_history, mem.permanent_memory, token_limit)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stas/Auto-GPT/AutonomousAI/chat.py", line 48, in chat_with_ai
response = openai.ChatCompletion.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 619, in _interpret_response
self._interpret_response_line(
File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 679, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: The model: gpt-4 does not exist

Does it mean I don't have API access to gpt-4 yet? How can I use previous versions?

@Torantulino
Member

You can apply for access to the GPT-4 API here:

https://openai.com/waitlist/gpt-4-api

Unfortunately, in my testing, when I've tried to run Auto-GPT using GPT-3.5 it doesn't function at all, as the model doesn't understand its task.
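
In the meantime, you can check whether your key already has gpt-4 access. A minimal sketch, using the pre-1.0 openai Python SDK shown in the traceback above:

```python
# Minimal sketch (pre-1.0 openai SDK, as in the traceback above):
# list the model IDs this API key can actually see.
import openai

openai.api_key = "sk-..."  # your API key

model_ids = [m.id for m in openai.Model.list().data]
print("gpt-4 available:", "gpt-4" in model_ids)
print("gpt-3.5-turbo available:", "gpt-3.5-turbo" in model_ids)
```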

@stan-voo
Author

I'm playing with v3.5 thanks to @Koobah's fork: https://github.com/Koobah/Auto-GPT
It seems to be able to do some things: https://www.loom.com/share/9ddf4b806dfb4a948c7d728669ebac28

Got stopped by this error:

Traceback (most recent call last):
File "/Users/stas/dev/Auto-GPT/AutonomousAI/main.py", line 134, in
assistant_reply = chat.chat_with_ai(prompt, user_input, full_message_history, mem.permanent_memory, token_limit)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stas/dev/Auto-GPT/AutonomousAI/chat.py", line 50, in chat_with_ai
response = openai.ChatCompletion.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 619, in _interpret_response
self._interpret_response_line(
File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 679, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4121 tokens. Please reduce the length of the messages.
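
That error means the prompt plus message history exceeded gpt-3.5-turbo's 4,097-token window. A minimal sketch of one workaround, dropping the oldest messages until the request fits, counted with tiktoken; the function and names here are illustrative, not Auto-GPT's actual code:

```python
# Illustrative sketch, not Auto-GPT's actual code: drop the oldest
# messages until the history fits the model's context window.
import tiktoken

def trim_history(messages, model="gpt-3.5-turbo", token_limit=4097, reserve=1000):
    # Rough count: sums content tokens only, ignoring the small
    # per-message overhead the API adds.
    enc = tiktoken.encoding_for_model(model)
    def count(msgs):
        return sum(len(enc.encode(m["content"])) for m in msgs)
    trimmed = list(messages)
    while len(trimmed) > 1 and count(trimmed) > token_limit - reserve:
        trimmed.pop(0)  # discard the oldest message first
    return trimmed
```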

@stan-voo
Author

It did it! Completed a task on v3.5!
Here's a screencast if anybody is curious: https://www.loom.com/share/9bf888d9c925474899257d072f1a562f

@Torantulino
Member

Torantulino commented Apr 1, 2023

Wow! 🤯
I didn't know that was possible, great work guys! @stan-voo @Koobah

I'd tried getting 3.5 to work in the past and it refused to acknowledge the prompt. Great idea asking it to parse its final output as JSON.

If you want to submit a pull request, that would be a huge help:

  - Add a model argument (if none is provided, default to gpt-4).
  - Create different prompt.txt files for each model (this appears to be necessary), and load the appropriate one when building the prompt.
  - Add model to Config, set it on start-up, and read it everywhere gpt-4 is currently hard-coded (see the sketch below).
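
A minimal sketch of what that could look like (argparse plus a Config object; names are illustrative, not a final implementation):

```python
# Illustrative sketch of the checklist above; names are hypothetical.
import argparse

class Config:
    """Holds the model choice: set once at start-up, read wherever
    "gpt-4" is currently hard-coded."""
    def __init__(self, model="gpt-4"):
        self.model = model
        # one prompt file per model, e.g. prompts/gpt-3.5-turbo.txt
        self.prompt_file = f"prompts/{model}.txt"

parser = argparse.ArgumentParser()
parser.add_argument("--model", default="gpt-4",
                    help="OpenAI model to use, e.g. gpt-3.5-turbo")
args = parser.parse_args()
cfg = Config(model=args.model)
```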

GPT3.5 is so much cheaper, allowing for much more testing and development.

In the future, we could even get GPT-4 instances of Auto-GPT to cheaply spin up entire sub-instances of Auto-GPT running GPT-3.5 for multi-step tasks...

Torantulino added the "good first issue" (Good for newcomers) label Apr 1, 2023
@Koobah

Koobah commented Apr 1, 2023

Glad that it helped you, guys. Just be very cautious when using my code. This is my very first programming attempt. I have no clue what I am doing :)

@Koobah

Koobah commented Apr 1, 2023

Btw, the idea I'm working on is to create specialist GPT instances (project managers, marketers, operators), where each bot would have its own complex prompt replicating what a person in such a role would do.
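
Roughly like this (purely illustrative, not my actual code):

```python
# Purely illustrative: one complex system prompt per specialist role.
ROLE_PROMPTS = {
    "project_manager": "You are a project manager. Break the objective "
                       "into tasks, set priorities, and track progress.",
    "marketer": "You are a marketer. Propose positioning, channels, "
                "and copy for the objective.",
}

def build_messages(role, objective):
    # Pair the role's system prompt with the user's objective.
    return [
        {"role": "system", "content": ROLE_PROMPTS[role]},
        {"role": "user", "content": objective},
    ]
```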

@Torantulino
Member

Brilliant! Make sure you're up-to-date, as I'm adding features all the time.

Please share your journey with us over at Discussions, I'd love to see how things progress as you go.

@0xcha05
Contributor

0xcha05 commented Apr 2, 2023

#19

Torantulino mentioned this issue Apr 2, 2023
@xSNYPSx

xSNYPSx commented Apr 2, 2023

> I'm playing with v3.5 thanks to @Koobah's fork: https://github.com/Koobah/Auto-GPT It seems to be able to do some things: https://www.loom.com/share/9ddf4b806dfb4a948c7d728669ebac28
>
> Got stopped by this error: [quotes the "maximum context length is 4097 tokens" traceback from @stan-voo's comment above]

I don't understand it. I downloaded Koobah's fork, but how do I run it with a GPT-3.5 key? I used the command git clone https://github.com/Koobah/Auto-GPT, but I can't find main.py in his repository.

Update: I got it running. You just need to create keys.py (read the readme) in the AutonomousAI path and run main.py!
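
For anyone else following along, keys.py is just a module holding your key, roughly like this (the exact variable name is an assumption here; check the fork's README for what main.py actually imports):

```python
# keys.py -- sketch only; the variable name is an assumption, check
# the fork's README for what main.py actually imports.
OPENAI_API_KEY = "sk-..."  # your OpenAI API key
```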

@alreadydone

alreadydone commented Apr 2, 2023

Note that another "autonomous agent" experiment by Yohei (which is quite popular on Twitter, but not open source) has produced impressive demos using GPT-3. Yohei has recently publicized the architecture of the system, and I think there are things to learn from it, e.g. using a task queue, and using a vector store for long-term memory rather than files. But that system doesn't implement code execution yet, let alone code improvement.
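
The core of that architecture is roughly a loop like the following (a purely illustrative sketch with stub helpers, not Yohei's actual code):

```python
# Purely illustrative sketch of the task-queue architecture described
# above, with stub helpers standing in for LLM calls.
from collections import deque

def execute(task, context):
    return f"result of: {task}"  # stand-in for an LLM call doing the task

def plan_next_tasks(result):
    return []  # stand-in for an LLM call proposing follow-up tasks

task_queue = deque(["research the objective"])
memory = []  # stand-in for a vector store used as long-term memory

while task_queue:
    task = task_queue.popleft()
    result = execute(task, context=memory)
    memory.append(result)  # persist results to long-term memory
    task_queue.extend(plan_next_tasks(result))
```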

@jcp
Contributor

jcp commented Apr 2, 2023

@Taytay and @0xcha05, an alternative solution involves PR #39. Rather than using command line arguments, you could introduce an environment variable that overrides the default model. This approach offers more flexibility, especially as the codebase becomes more modular.
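
A minimal sketch of that approach (the variable name OPENAI_MODEL is an assumption here, not necessarily what PR #39 uses):

```python
# Sketch of an env-var override; the variable name OPENAI_MODEL is an
# assumption, not necessarily what PR #39 uses.
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # read a local .env file if present
MODEL = os.getenv("OPENAI_MODEL", "gpt-4")  # fall back to gpt-4
```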

@Taytay
Contributor

Taytay commented Apr 2, 2023

Nice!

I think we should use both. I prefer dotenv, and was thrilled to see that PR, but was trying to keep my PR from fixing all the things at once. ;)

@0xcha05
Contributor

0xcha05 commented Apr 2, 2023

I agree, @jcp. I think this decision makes my PRs no longer useful. Lessons learned: don't format on save (when contributing to OSS), keep PRs small, and address each issue separately.

@Torantulino
Member

Absolutely agree with the "keep PRs small" part.
Big pull requests are actually slowing things down right now; there's a lot to get through.

@PurrsianMilkman

I would love to see local model support soon, because I now have my API set up and would love to use my own language model. OpenAI gets expensive.
