Hello gpt-pilot team :)
Firstly, to introduce myself: I'm a systems, network, IP, and infrastructure engineer with limited coding skills. I decided to try this out today during my relaxing Sunday :) I really love the videos I've watched of this working, I love the idea, and I think it's absolutely fantastic. A lot of great work has gone into this!
I noticed a small problem that I wanted to feed back to the community here. I managed to get this working with another API that is not actually OpenAI; however, the response from the API is a 400 with this message:

```
API responded with status code: 400. Request token size: 1964 tokens. Response text: {"error":{"message":"After the (optional) system message(s), user and assistant roles should be alternating.","type":"invalid_message","code":400}}
```
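To illustrate the constraint the backend is enforcing (the message contents here are made up; only the role sequence matters):

```python
# Accepted: after the optional system message(s), user/assistant roles alternate.
valid = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Describe the app you want."},
    {"role": "assistant", "content": "Sure, what should it do?"},
    {"role": "user", "content": "A todo list with a REST API."},
]

# Rejected with the 400 above: two 'user' messages back to back.
invalid = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "first user message"},
    {"role": "user", "content": "second user message"},  # breaks alternation
]
```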
When I checked the log file, I can see the API calls do the following:

1. send the 'system' prompt as the defined system message
2. send the 'user' prompt, which is the prompt template under 'prompts/'
3. finally, send another 'user' prompt, which is the beginning of my own writing as the app description
This triggers the 400 response because the second and third messages both use the 'user' role, so the user and assistant messages are not alternating; see the sketch below.
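One low-touch way to satisfy strict backends (a minimal sketch only; `merge_consecutive_roles` is a hypothetical helper, not something that exists in gpt-pilot) would be to collapse consecutive same-role messages just before the request goes out:

```python
def merge_consecutive_roles(messages):
    """Collapse runs of messages sharing a role into one message,
    so the sequence satisfies strict user/assistant alternation."""
    merged = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"]:
            # Join the contents; a blank line keeps the two prompts readable.
            merged[-1]["content"] += "\n\n" + msg["content"]
        else:
            merged.append(dict(msg))
    return merged
```

Run over the sequence above, this would fold the prompt template and the app description into a single 'user' message, keeping the content the model sees essentially identical while satisfying the alternation rule.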
When I was trying out AutoGen, I noticed it uses a combination of system, 'agent', and user prompts there.
I took a look at AgentConvo.py and utils.py, but I would need to spend much more time reverse-engineering the code to figure out how to either ensure the roles alternate or add an 'agent' prompt.
I was previously able to build a web app implementing a full chat interface for any backend LLM, including the hosted ones that require alternating roles. The way I achieved it there was to cache the response from the API and then send it back in the history with my next user prompt appended. In gpt-pilot's case, though, we have two consecutive user prompts.
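Roughly, that pattern looked like this (a sketch assuming an OpenAI-style messages format; `call_llm` is a placeholder for the real request function, not an actual API):

```python
history = [{"role": "system", "content": "You are a helpful assistant."}]

def call_llm(messages):
    # Placeholder for the real API request (e.g. POST /v1/chat/completions);
    # it must return the assistant's reply text.
    raise NotImplementedError

def chat(user_prompt):
    # Append the new user turn...
    history.append({"role": "user", "content": user_prompt})
    reply = call_llm(history)
    # ...then cache the assistant reply in the history, so the next
    # user turn alternates with it.
    history.append({"role": "assistant", "content": reply})
    return reply
```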
I also tried with my local LM Studio, but when I run main.py, LM Studio logs the message "[ERROR] Unexpected endpoint or method. (POST /v1). Returning 200 anyway" (lol), so unfortunately I couldn't get that working.
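That log line suggests the request was POSTed to /v1 itself rather than to the chat completions path. If anyone wants to verify their LM Studio server independently of gpt-pilot, something like this should work (assuming LM Studio's default port 1234 and its OpenAI-compatible endpoint):

```python
import requests

# LM Studio's local server (default port 1234) exposes an OpenAI-compatible
# chat endpoint at /v1/chat/completions, not at /v1 itself.
resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",  # placeholder; LM Studio serves whichever model is loaded
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=60,
)
print(resp.status_code, resp.json())
```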