Error parsing JSON response with literal_eval invalid syntax (<unknown>, line 1) JSON Validation Error: 'thoughts' is a required property #4752
Comments
Same here:
Failed validating 'required' in schema: On instance:
NEXT ACTION: COMMAND = None ARGUMENTS = None |
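For context, the validator is reporting that required top-level keys are missing from the model's JSON reply. A minimal stdlib sketch of that kind of check (the key names are assumptions based on the error messages in this thread, not taken from the project's schema):

```python
import json

# Key names assumed from the "'thoughts' is a required property" error above
REQUIRED_KEYS = {"thoughts", "command"}

def validate_response(raw: str) -> dict:
    """Parse the model's reply and check the assumed required top-level keys."""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"{sorted(missing)} are required properties")
    return data
```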
Fixed on my side by creating a fresh ai_settings.yaml. |
Hmm, got it again a bit later in another request (master branch). |
Not sure what is causing this. I am using the stable branch. Would you mind sharing how you "fixed" it previously? Sorry again, I'm fairly new to this. |
I keep getting that error no matter what I do as well! Update: it seems the workaround for this issue is to use Pinecone or Redis as the memory backend (MEMORY_BACKEND=). If you're using AutoGPT via Git or a local install, you will need to create an account with one of the above, or use the Docker setup, which includes Redis. I personally made an account with Pinecone, as the Docker setup was a headache for a novice like me. Hope it helps! |
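The workaround described above boils down to a couple of `.env` settings. A sketch of what that might look like (variable names and values are placeholders based on this thread, not verified against any particular release):

```
# Choose one backend: redis or pinecone
MEMORY_BACKEND=redis
REDIS_HOST=localhost
REDIS_PORT=6379

# Or, for Pinecone (placeholder values):
# MEMORY_BACKEND=pinecone
# PINECONE_API_KEY=your-api-key
# PINECONE_ENV=your-environment
```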
Much of this will be fixed when we move to OpenAI functions. In the meantime we can try to figure out a solution. |
I am also getting this error. I'm not a developer and am not familiar with what Redis or Pinecone are. |
How do you create a fresh ai_settings.yaml, and what exactly does that do? |
FWIW, I too am experiencing this. Worked well for 20ish iterations before bombing out. This is my first time running it, so I'm not sure if it's a recent commit issue, or a bug that has existed for a while. |
Same error. |
For what it's worth, I reverted from the latest version ( |
@Dids Same here, reverting to v0.4.0 worked for me. |
@Dids @kelteseth How did you revert the version and run it? Sorry, I'm very new to all this tooling. |
I got this fixed on Ubuntu by running `sudo chmod -R 777 autogpt/` from inside the Auto-GPT folder. |
I actually changed docker-compose.yml, but then I got a new error: FAILED FILE VALIDATION. I copied the file from the repo, and it still doesn't work. ¯\_(ツ)_/¯ |
Same here. macOS Big Sur / Docker / GPT-3.5 / Auto-GPT 4.1
2023-06-23 20:38:34,651 ERROR logs:_log:143 Error parsing JSON response with literal_eval invalid syntax (<unknown>, line 1)
Failed validating 'required' in schema: On instance: |
Looks like reverting should work.
Same issue as @lucasmocellin. I was able to revert to 0.4.0 and then apply this hotfix: |
Not sure if this is the same, but I am getting a lot of these as well: "Error parsing JSON response with literal_eval '{' was never closed (<unknown>, line 1)" |
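The "'{' was never closed" variant suggests the model's reply was cut off mid-object. A quick way to see what the stdlib parser reports for such a truncated reply (the input string here is illustrative only):

```python
import json

# An illustrative reply that was cut off before its closing braces
truncated = '{"thoughts": {"text": "hi"'

error_message = None
try:
    json.loads(truncated)
except json.JSONDecodeError as exc:
    # e.g. "Expecting ',' delimiter: line 1 column 27 (char 26)"
    error_message = str(exc)
```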
Have the same issue. If anyone in the future finds a way to solve it, please kick my ass. |
I'm also lining up here for ass-kicking like @QvQQ |
Same issue, after it was working for a day or two. |
I hope it helps someone:
Hope it helps! 🥇 |
The problem is that GPT sometimes responds in Markdown format and starts the response with "```json". To force bare JSON responses, you can add the beginning of the expected response to the request. |
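Building on that observation, a common mitigation is to strip a Markdown fence off the reply before parsing it. A minimal sketch of that idea (not the project's actual code):

```python
import json

FENCE = "`" * 3  # the Markdown code-fence delimiter, built up to avoid nesting issues

def strip_markdown_fences(response: str) -> str:
    """Remove a leading fence line (possibly tagged, e.g. json) and a trailing fence."""
    text = response.strip()
    if text.startswith(FENCE):
        # Drop the opening fence line, which may carry a language tag like "json"
        text = text.split("\n", 1)[1] if "\n" in text else ""
        if text.rstrip().endswith(FENCE):
            # Drop the closing fence
            text = text.rstrip()[: -len(FENCE)]
    return text.strip()

raw = FENCE + 'json\n{"thoughts": {"text": "hi"}, "command": {"name": "noop"}}\n' + FENCE
parsed = json.loads(strip_markdown_fences(raw))
```

Replies without a fence pass through unchanged, so the helper is safe to apply unconditionally before parsing.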
The problem is the OPENAI_FUNCTIONS setting. Set OPENAI_FUNCTIONS=False and SMART_LLM=gpt-4-0314. |
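As an `.env` fragment, that suggestion would look roughly like this (setting names taken from the comment above, not verified against any particular release):

```
OPENAI_FUNCTIONS=False
SMART_LLM=gpt-4-0314
```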
Sorry this is a bit long, but I am trying to give a comprehensive explanation of what I tried. I have the exact same error as mentioned at the top of this thread, occurring intermittently. Below I've explained the different things I've tried, but nothing seems to work. Any thoughts or ideas are greatly appreciated.
I am trying to follow the tutorial at https://lablab.ai/t/autogpt-tutorial-how-to-use-and-create-agent-for-coding-game (though I did not follow their setup instructions, because my previous setup had been working for a week or so of experiments, albeit all in manual mode, without errors). Since I started getting issues, I've tried all sorts of setup routes, but everything breaks with the above parsing error, usually after one or two attempts at the prompt mentioned below.
Here is the prompt, initial results, and the error. At the prompt I enter: AI Agent for coding. Then I get the following output:
Since encountering this, I've wiped out my install and tried a few things, as below. I have a paid OpenAI API plan and key. First attempt: set up with Docker:
Create and save the following into docker-compose.yml
Save a .env file into the same location with the following changes from the .env.template
Note, I’ve tried this with JSON file default and got similar issues.
then
So I think that’s ok.
Just gives:
Then do:
And that will seem to start up:
So at the prompt I enter: AI Agent for coding. Then I get the long error above. I then tried changing the .env by commenting out all the REDIS-related lines. Running it multiple times, I found the error is intermittent. So I then wiped all that and started over through the Git route from the agpt docs:
This gives the error:
Tried a few things to get to the stable branch, but probably did it wrong, so I wanted to start fresh with what I thought was more likely to work:
Seemed to work. Then: `docker compose build auto-gpt`
Then I intermittently get the same error. As per someone's suggestion above, I tried: Any ideas are greatly appreciated. |
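The docker-compose.yml contents referenced in the walkthrough above did not survive into this thread. For orientation only, a minimal sketch of what such a file might look like (service and image names are assumptions, not taken from the repo):

```yaml
version: "3.9"
services:
  auto-gpt:
    image: significantgravitas/auto-gpt  # image name assumed
    env_file:
      - .env
    depends_on:
      - redis
  redis:
    image: redis:7
```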
Tried many of the same steps and suggestions and ended up here as well. Given the number of people using AutoGPT successfully versus the number encountering this same error, there is clearly a missing link somewhere. I can't even get --help to load all the way. |
When following exactly what you said, I am able to run on LM_STUDIO, but I still have a little problem: Using memory of type: RedisMemory |
Yo man, I am trying localai too now |
I cannot get this to work; same thing. Here is an example --> All packages are installed.
Failed validating 'required' in schema: On instance: |
I figured this out --> Failed validating 'required' in schema: On instance: If you are using a local server (or whatever your server settings for your model are), make sure that the prompt formatting is not inserting new lines. It worked for me; give it a try! |
Seems to help |
Maybe this is working because it is related to:
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days. |
This issue was closed automatically because it has been stale for 10 days with no activity. |
Which Operating System are you using?
Other
Which version of Auto-GPT are you using?
Latest Release
Do you use OpenAI GPT-3 or GPT-4?
GPT-3.5
Which area covers your issue best?
Installation and setup
Describe your issue.
Hi all!
I have been playing around with AutoGPT for a few days now, and it had been running quite well until yesterday afternoon. I am very much unsure what changed, but this is the error I keep getting now:
Not sure why, as it started fairly randomly. I am no developer whatsoever, so if anyone could help, I would very much appreciate it!
Thanks
Upload Activity Log Content
No response
Upload Error Log Content
No response