
Makes it possible to use gpt-3.5 #45

Merged: 33 commits into Significant-Gravitas:master on Apr 3, 2023

Conversation

@Taytay (Contributor) commented Apr 2, 2023

(This is such a neat experiment. Thanks for open-sourcing!)

Fixes #12
Fixes #40

Makes gpt-3.5-turbo the default model (faster and cheaper)
Allows for model configuration

I made JSON parsing a lot more forgiving, including using GPT to fix up the JSON if necessary (a sketch of the idea follows below).
I also improved the prompt so the model follows instructions a bit better.
I also refactored a bit.
I also made it save my inputs to a YAML file by default when it runs. (This could be a LOT better, it's very rough, but I wanted to get this in.)
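
The forgiving JSON parsing works roughly like this: try a normal `json.loads` first, and only ask the model to repair the string if that fails. Below is a minimal sketch of the idea using the legacy 0.x `openai` client the project used at the time; the `fix_json_with_gpt` helper name and the prompt wording are illustrative, not the PR's exact code.

```python
import json

import openai  # legacy 0.x client, as used by the project at the time


def fix_json_with_gpt(bad_json: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask the model to rewrite a malformed string as valid JSON (illustrative)."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Fix the following string so that it is valid JSON. "
                        "Respond with the corrected JSON only."},
            {"role": "user", "content": bad_json},
        ],
        temperature=0,
    )
    return response.choices[0].message.content


def fix_and_parse_json(json_str: str) -> dict:
    """Parse the assistant's reply, falling back to a GPT-assisted repair."""
    try:
        return json.loads(json_str)
    except json.JSONDecodeError:
        return json.loads(fix_json_with_gpt(json_str))
```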

P.S. Mind adding an MIT license to this?

I made the JSON parsing more forgiving. I improved the prompt, using things I learned from Koobah/Auto-GPT.
@zorrobyte

Tried this, however

```
playsound is relying on a python 2 subprocess. Please use `pip3 install PyObjC` if you want playsound to run more efficiently.
Traceback (most recent call last):
  File "/Users/rossfisher/Auto-GPT/scripts/main.py", line 3, in <module>
    import commands as cmd
  File "/Users/rossfisher/Auto-GPT/scripts/commands.py", line 8, in <module>
    import ai_functions as ai
  File "/Users/rossfisher/Auto-GPT/scripts/ai_functions.py", line 11, in <module>
    def call_ai_function(function, args, description, model=cfg.smart_llm_model):
                                                            ^^^^^^^^^^^^^^^^^^^
AttributeError: 'Config' object has no attribute 'smart_llm_model'
```

@Taytay (Contributor, Author) commented Apr 2, 2023

Woah - lemme see what I missed...

@Taytay (Contributor, Author) commented Apr 2, 2023

Apparently I was moving way too fast. I missed some pretty basic stuff in my commit, and I'm in the process of fixing it now. (Not at my desk though, so it's taking me a bit.)

This adds fix_and_parse_json
Also, add requirements-alternative.txt to help install reqs in a different environment
@Taytay (Contributor, Author) commented Apr 2, 2023

Okay - pushed up fixes! Lemme know how that works for you.

@zorrobyte

Seems to work! I just need to add my key.

You may want to update the requirements:

  File "/Users/rossfisher/Auto-GPT/scripts/main.py", line 15, in <module>
    from ai_config import AIConfig
  File "/Users/rossfisher/Auto-GPT/scripts/ai_config.py", line 1, in <module>
    import yaml
ModuleNotFoundError: No module named 'yaml'
rossfisher@Rosss-MBP scripts % pip3 install yaml
ERROR: Could not find a version that satisfies the requirement yaml (from versions: none)
ERROR: No matching distribution found for yaml
rossfisher@Rosss-MBP scripts % pip3 install pyyaml
Collecting pyyaml
  Using cached PyYAML-6.0-cp311-cp311-macosx_11_0_arm64.whl (167 kB)
Installing collected packages: pyyaml
Successfully installed pyyaml-6.0```

@Torantulino (Member)

Great work, taking a look now.

@Taytay (Contributor, Author) commented Apr 2, 2023

@zorrobyte: Hmmm - I added pyyaml to requirements.txt:
`pip install -r ./requirements.txt` should have picked it up...

@Torantulino (Member)
This is amazing!
So much faster and cheaper, excellent work!

@Taytay (Contributor, Author) commented Apr 2, 2023

I'm fixing merge conflicts now

@Taytay (Contributor, Author) commented Apr 2, 2023

Okay - pushed some fixes. It looks like the bot is inserting its thoughts before the JSON response. I have some ideas for fixing that, but in the meantime, the JSON parser is doing its job.

I think we need to give it a better example of the response we want. In the meantime, we are seeing stuff like this:

assistant reply: As the first step, I need to clone the repository locally so that I can analyze and test the existing code. I will go with the following command:

{
    "command": {
        "name": "execute_python_file",
<snip>...
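
In the meantime, the parser copes with that kind of preamble by slicing out the JSON object embedded in the reply before parsing it. A rough sketch of the approach (not the repository's exact code):

```python
import json


def extract_json_from_reply(reply: str) -> dict:
    """Pull the first {...} block out of a reply that has prose around it."""
    start = reply.find("{")
    end = reply.rfind("}")
    if start == -1 or end == -1 or end < start:
        raise ValueError("No JSON object found in the assistant reply")
    return json.loads(reply[start:end + 1])
```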

@Taytay (Contributor, Author) commented Apr 2, 2023

@anslex : Yeah, if you aren't in the scripts folder when you run it, it fails. We'll get that fixed up soon I'm sure.

In the meantime:

cd ./scripts
python ./main.py speak-mode

@0xcha05 (Contributor) commented Apr 3, 2023

Great work @Taytay !

@Torantulino (Member)
@Taytay Eager to merge this, just checking you didn't miss my review comments above! 😊

@Taytay (Contributor, Author) commented Apr 3, 2023

Which ones, actually? I don't see them.

@xSNYPSx commented Apr 3, 2023

Guys, please add some kind of quick switch between the 3.5 and 4 models, so there is one repository but, for example, a model_config file where we select the model once and everything else stays in the shared repository.

@yousefissa (Contributor)
It should be configured in the .env file
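
For context, the smart/fast model selection can be driven entirely from `.env`. This is a minimal sketch of that idea; the `SMART_LLM_MODEL` / `FAST_LLM_MODEL` variable names and defaults are assumptions, so check the project's `.env` template for the actual ones.

```python
import os

from dotenv import load_dotenv  # python-dotenv reads the .env file

load_dotenv()


class Config:
    """Sketch of driving model selection from environment variables."""

    def __init__(self):
        # Variable names and defaults here are assumed for illustration.
        self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4")
        self.fast_llm_model = os.getenv("FAST_LLM_MODEL", "gpt-3.5-turbo")


cfg = Config()
```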


Review thread on `call_ai_function` in scripts/ai_functions.py:

 # This is a magic function that can do anything with no-code. See
 # https://github.com/Torantulino/AI-Functions for more info.
-def call_ai_function(function, args, description, model="gpt-4"):
+def call_ai_function(function, args, description, model=cfg.smart_llm_model):
@Torantulino (Member) commented:

Have you found AI_Functions to work on gpt-3.5-turbo?

These are the results of my testing:

| Description | GPT-4 Result | GPT-3.5-turbo Result | Reason |
|---|---|---|---|
| Generate fake people | PASSED | FAILED | Incorrect response format |
| Generate Random Password | FAILED | FAILED | Incorrect response format |
| Calculate area of triangle | FAILED | FAILED | Incorrect float value (GPT-4), incorrect response format (GPT-3.5-turbo) |
| Calculate the nth prime number | PASSED | PASSED | N/A |
| Encrypt text | PASSED | PASSED | N/A |
| Find missing numbers | PASSED | PASSED | N/A |

@Taytay (Contributor, Author) replied:

Good point - let's get these added to the tests and see if we can get them working with some prompt finagling?

@Torantulino (Member) commented Apr 3, 2023:

If you like! Submit a pull request over at AI-Functions.
For now, let's make sure we only use 3.5 for AI functions such as JSON parsing, using your smart/fast model property.
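
For reference, AI functions work by having the chat model act as the given Python function and answer with only its return value. A minimal sketch of that pattern follows; the prompt wording is assumed rather than copied from the AI-Functions repository.

```python
import openai  # legacy 0.x client, matching the rest of the project


def call_ai_function(function: str, args: list, description: str,
                     model: str = "gpt-3.5-turbo") -> str:
    """Have the model act as `function` and answer with only its return value."""
    # Prompt wording below is illustrative, not the repository's exact text.
    system_prompt = (
        f"You are now the following Python function:\n"
        f"# {description}\n{function}\n"
        f"Respond only with what the function would return."
    )
    user_prompt = ", ".join(str(arg) for arg in args)
    response = openai.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        temperature=0,
    )
    return response.choices[0].message.content
```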

Outdated review threads on scripts/chat.py and scripts/config.py (resolved).
@Torantulino (Member)

Ah sorry @Taytay, I'm new to this; they weren't "sent".

@Torantulino (Member) commented Apr 3, 2023

Multiple fixes inbound.

  • "Maximum context length reached" error
  • Broken Google search command
  • Vastly improved JSON parsing for gpt-3.5

@Torantulino merged commit ea91201 into Significant-Gravitas:master on Apr 3, 2023
@SirGunthix commented Apr 9, 2023

What do we need to do to fix the JSON error? I am still getting it. Do I need to do anything different from what's in the README? If so, can we update the README to include details? Or you can just tell me here and I'll try to push a new commit - this will be my first commit on GitHub ever :) @Torantulino

tgonzales pushed a commit to tgonzales/Auto-GPT that referenced this pull request Apr 19, 2023
@Apollyon81

> @anslex: Yeah, if you aren't in the scripts folder when you run it, it fails. We'll get that fixed up soon I'm sure.
>
> In the meantime:
>
> cd ./scripts
> python ./main.py speak-mode

How do I solve this?
python ./main.py speak-mode
python: can't open file '/workspaces/Auto-GPT-0.2.2/scripts/./main.py': [Errno 2] No such file or directory

@caffo commented May 6, 2023

@Torantulino, any idea when we will see this on Stable (the Docker image in particular)?

@Torantulino (Member)

Sorry @caffo, I'm not sure what you're referring to; can you elaborate?

@caffo commented May 7, 2023

@Torantulino Sorry, I think I got confused. This is already in the stable Docker image as the --gpt3only flag, right?

@Torantulino (Member)

Ah, I see. Yes, this PR was actually about making GPT-3 possible to use at all in the first place, because at the start of this project it only worked with GPT-4.

You are correct that gpt3only is available to use, and the current default is a mix where GPT-3 is used the vast majority of the time.

Labels: enhancement (New feature or request), high priority

Successfully merging this pull request may close these issues:

  • Error: Invalid JSON
  • The model: gpt-4 does not exist