Makes it possible to use gpt-3.5 #45
Conversation
I made the JSON parsing more forgiving. I also improved the prompt, using things I learned from Koobah/Auto-GPT.
Tried this, however
Woah - lemme see what I missed...
Apparently I was moving way too fast. I missed some pretty basic stuff in my commit, and am in the process of fixing it now. (Not at my desk though, so it's taking me a bit.)
This adds fix_and_parse_json. It also adds requirements-alternative.txt to help install the requirements in a different environment.
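A forgiving parser along these lines might look like the sketch below. The name `fix_and_parse_json` comes from this PR, but the specific cleanup steps (stripping markdown fences, removing trailing commas) are illustrative assumptions, not the PR's exact implementation:

```python
import json
import re


def fix_and_parse_json(raw: str):
    """Try strict JSON first, then a few lenient cleanups (sketch)."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # Strip markdown code fences the model sometimes wraps around JSON.
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        pass
    # Remove trailing commas before closing braces/brackets.
    cleaned = re.sub(r",\s*([}\]])", r"\1", cleaned)
    # Let the final attempt raise if the JSON is still unparseable.
    return json.loads(cleaned)
```

Each fallback is cheap and local; only if all of them fail would you escalate to asking the model itself to repair the JSON.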
Okay - pushed up fixes! Lemme know how that works for you.
Seems to work! I needed to add my key. We may want to upgrade the requirements.
Great work, taking a look now.
@zorrobyte : Hmmm - I added pyyaml to requirements.txt.
This is amazing!
I'm fixing merge conflicts now
For some reason the bot keeps prefacing its JSON. This fixes it for now.
Okay - pushed some fixes. It looks like the bot is inserting its thoughts before the JSON response. I have some ideas to fix, but in the meantime, the JSON parser is doing its job. I think we need to give it a better example of the response we want. In the meantime, we are seeing stuff like this:
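Until the prompt reliably suppresses that preamble, one stopgap (a sketch of the general technique, not this PR's actual parser) is to slice out the first balanced-brace region of the response:

```python
import json


def extract_json(text: str):
    """Return the first top-level {...} object embedded in text (sketch).

    Scans for balanced braces so any "thoughts" the model emits before
    the JSON are ignored. Naive: braces inside string values would
    miscount, but it handles the plain preamble case.
    """
    start = text.find("{")
    if start == -1:
        raise ValueError("no JSON object found")
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return json.loads(text[start : i + 1])
    raise ValueError("unbalanced braces")
```

For example, `extract_json('I should search first.\n{"command": "google"}')` ignores the leading sentence and parses only the object.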
@anslex : Yeah, if you aren't in the scripts folder when you run it, it fails. We'll get that fixed up soon, I'm sure. In the meantime:
cd ./scripts
python ./main.py speak-mode
Great work @Taytay !
@Taytay Eager to merge this, just checking you didn't miss my review comments above! 😊
Which ones, exactly? I don't see them.
Guys, please add a quick way to switch between the 3.5 and 4 models, so there is a single repository with, for example, a model_config file where we select the model once and everything else stays shared.
It should be configured in scripts/ai_functions.py:
# This is a magic function that can do anything with no-code. See
# https://github.com/Torantulino/AI-Functions for more info.
- def call_ai_function(function, args, description, model="gpt-4"):
+ def call_ai_function(function, args, description, model=cfg.smart_llm_model):
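The `cfg.smart_llm_model` in the diff implies a config object exposing smart/fast model fields. A minimal sketch of such a config (the environment-variable names here are assumptions for illustration, not necessarily what the PR uses) could be:

```python
import os


class Config:
    """Minimal model-selection config (sketch; env var names assumed)."""

    def __init__(self):
        # "Smart" model for reasoning-heavy calls,
        # "fast" model for cheap utility calls like JSON repair.
        self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4")
        self.fast_llm_model = os.getenv("FAST_LLM_MODEL", "gpt-3.5-turbo")


cfg = Config()
```

With one shared `cfg` object, every call site picks up a model change from a single place, which is exactly the "select once" behavior requested above.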
Have you found AI-Functions to work on gpt-3.5-turbo?
These are the results of my testing:
| Description | GPT-4 Result | GPT-3.5-turbo Result | Reason |
|---|---|---|---|
| Generate fake people | PASSED | FAILED | Incorrect response format |
| Generate Random Password | FAILED | FAILED | Incorrect response format |
| Calculate area of triangle | FAILED | FAILED | Incorrect float value (GPT-4); incorrect response format (GPT-3.5-turbo) |
| Calculate the nth prime number | PASSED | PASSED | N/A |
| Encrypt text | PASSED | PASSED | N/A |
| Find missing numbers | PASSED | PASSED | N/A |
Good point - let's get these added to the tests and see if we can get them working with some prompt finagling?
If you like! Submit a pull request over at AI Functions.
For now let's make sure we only use 3.5 for AI functions such as json parsing, using your smart/fast model property.
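Concretely, routing a cheap utility task to the fast model just means passing the fast model through. The helper below is a hypothetical sketch (the `fix_json_with_ai` name and the prompt text are illustrative, and `call_ai_function`/`cfg` are passed in so the routing decision is visible):

```python
def fix_json_with_ai(broken_json: str, call_ai_function, cfg) -> str:
    """Route the cheap JSON-repair task to the fast model (sketch)."""
    return call_ai_function(
        function="def fix_json(json_str: str) -> str:",
        args=[broken_json],
        description="Fix the provided JSON string and return valid JSON.",
        model=cfg.fast_llm_model,  # utility task: fast/cheap model suffices
    )
```

The point is simply that `model=` is explicit at the call site, so JSON parsing never silently burns GPT-4 tokens.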
Ah sorry @Taytay, new to this - they weren't "sent".
Multiple fixes inbound.
Now uses tokens and biggest context possible.
This makes gpt3.5 turbo fully possible! 🚀
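"Uses tokens and biggest context possible" suggests trimming conversation history to fit a token budget. A model-agnostic sketch is below; it uses a rough 4-characters-per-token heuristic rather than a real tokenizer (in practice a library like tiktoken would count exactly):

```python
def count_tokens(text: str) -> int:
    """Very rough token estimate (~4 chars/token); a real tokenizer is better."""
    return max(1, len(text) // 4)


def trim_to_context(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent messages that fit within max_tokens (sketch)."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):  # walk newest-first
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order
```

Dropping the oldest messages first is the usual choice, since the most recent turns matter most for the agent's next action.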
What do we need to do to fix the JSON error? I am still getting it. Do I need to do anything different from what's in the readme? If so, can we update the readme to include details? Or you can just tell me here and I'll try to push a new commit (this will be my first commit on GitHub ever :) @Torantulino
How do I solve this?
@Torantulino, any idea when we will see this on Stable (the Docker image in particular)?
Sorry @caffo, I'm not sure what it is you're referring to, can you elaborate?
@Torantulino Sorry, I think I got confused. This is already on the stable Docker image as the flag --gpt3only, right?
Ah, I see. Yes, this issue was actually to make GPT-3 possible to use at all in the first place, because at the start of this project it only worked with GPT-4. You are correct that --gpt3only is available to use, and the current default is a mix where GPT-3 is used the vast majority of the time.
(This is such a neat experiment. Thanks for open-sourcing!)
Fixes #12
Fixes #40
Makes gpt-3.5-turbo the default model (faster and cheaper)
Allows for model configuration
I made JSON parsing a lot more forgiving, including using GPT to fix up the JSON if necessary.
I also improved the prompt to make it follow instructions a bit better.
I also refactored a bit.
I also made it save my inputs to a yml file by default when it runs. (This could be a LOT better - very rough, but wanted to get this in)
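Saving the run's inputs to YAML can be sketched like this (the function names, field names, and default path are hypothetical; it requires PyYAML, which this PR added to requirements.txt):

```python
import yaml  # PyYAML, added to requirements.txt in this PR


def save_ai_settings(settings: dict, path: str = "ai_settings.yaml") -> None:
    """Persist the user's answers so the next run can reuse them (sketch)."""
    with open(path, "w") as f:
        yaml.safe_dump(settings, f, default_flow_style=False)


def load_ai_settings(path: str = "ai_settings.yaml") -> dict:
    """Load previously saved settings; empty dict if the file is blank."""
    with open(path) as f:
        return yaml.safe_load(f) or {}
```

Using `safe_dump`/`safe_load` keeps the file limited to plain data types, which is all a settings file needs.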
P.S. Mind adding an MIT license to this?