[Bug]: [2024-02-02 20:12:37.871] [ERROR] [Server Error] {"title":"'messages' array must only contain objects with a 'content' field that is not empty"} #1521

Open · dipteshbosedxc opened this issue Feb 3, 2024 · 23 comments
Labels: 0.2 (Issues which were filed before re-arch to 0.4) · bug (Something isn't working) · proj-studio (Related to AutoGen Studio)

@dipteshbosedxc

Describe the bug

AutoGen Studio, when using an LM Studio local LLM, throws this error when trying to respond through the AutoGen UI:

[2024-02-02 20:12:37.871] [ERROR] [Server Error] {"title":"'messages' array must only contain objects with a 'content' field that is not empty"}

Steps to reproduce

  1. In Autogen playground, the prompt I use is: Provide 2 research paper on Quantum vibrations. Just the link will suffice
  2. LM Studio Generates the content
  3. However, Autogen Studio is not able to display the output message. It throws the error message instead.

Expected Behavior

The JSON output should be displayed in the Autogen playground window.

Screenshots and logs


Additional Information

Autogen Studio Version: 0.0.42a
OS: Windows 11
Python: 3.11
Related Issues: None
JSON Response from LM Studio (Error Message is at the end of the text):

[2024-02-02 20:12:37.858] [INFO] [LM STUDIO SERVER] Processing queued request...
[2024-02-02 20:12:37.865] [INFO] Received POST request to /v1/chat/completions with body: {
"messages": [
{
"content": "You are a helpful AI assistant.\nSolve tasks using your coding and language skills.\nIn the following cases, suggest python code (in a python coding block) or shell script (in a sh coding block) for the user to execute.\n 1. When you need to collect info, use the code to output the info you need, for example, browse or search the web, download/read a file, print the content of a webpage or a file, get the current date/time, check the operating system. After sufficient info is printed and the task is ready to be solved based on your language skill, you can solve the task by yourself.\n 2. When you need to perform some task with code, use the code to perform the task and output the result. Finish the task smartly.\nSolve the task step by step if you need to. If a plan is not provided, explain your plan first. Be clear which step uses code, and which step uses your language skill.\nWhen using code, you must indicate the script type in the code block. The user cannot provide any other feedback or perform any other action beyond executing the code you suggest. The user can't modify your code. So do not suggest incomplete code which requires users to modify. Don't use a code block if it's not intended to be executed by the user.\nIf you want the user to save the code in a file before executing it, put # filename: inside the code block as the first line. Don't include multiple code blocks in one response. Do not ask users to copy and paste the result. Instead, use 'print' function for the output when relevant. Check the execution result returned by the user.\nIf the result indicates there is an error, fix the error and output the code again. Suggest the full code instead of partial code or code changes. If the error can't be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumption, collect additional info you need, and think of a different approach to try.\nWhen you find an answer, verify the answer carefully. Include verifiable evidence in your response if possible.\nReply "TERMINATE" in the end when everything is done.\n \n\nYou are a helpful assistant that can use available functions when needed to solve problems. At each point, do your best to determine if the user's request has been addressed. IF THE REQUEST HAS NOT BEEN ADDRESSED, RESPOND WITH CODE TO ADDRESS IT. IF A FAILURE OCCURRED (e.g., due to a missing library) AND SOME ADDITIONAL CODE WAS WRITTEN (e.g. code to install the library), ENSURE THAT THE ORIGINAL CODE TO ADDRESS THE TASK STILL GETS EXECUTED. If the request HAS been addressed, respond with a summary of the result. The summary must be written as a coherent helpful response to the user request e.g. 'Sure, here is result to your request ' or 'The tallest mountain in Africa is ..' etc. The summary MUST end with the word TERMINATE. If the user request is pleasantry or greeting, you should respond with a pleasantry or greeting and TERMINATE.\n\n\n\nWhile solving the task you may use functions below which will be available in a file called skills.py .\nTo use a function skill.py in code, IMPORT THE FUNCTION FROM skills.py and then use the function.\nIf you need to install python packages, write shell code to\ninstall via pip and use --quiet option.\n\n \n\n##### Begin of find_papers_arxiv #####\n\nimport os\nimport re\nimport json\nimport hashlib\n\n\ndef search_arxiv(query, max_results=10):\n """\n Searches arXiv for the given query using the arXiv API, then returns the search results. 
This is a helper function. In most cases, callers will want to use 'find_relevant_papers( query, max_results )' instead.\n\n Args:\n query (str): The search query.\n max_results (int, optional): The maximum number of search results to return. Defaults to 10.\n\n Returns:\n jresults (list): A list of dictionaries. Each dictionary contains fields such as 'title', 'authors', 'summary', and 'pdf_url'\n\n Example:\n >>> results = search_arxiv("attention is all you need")\n >>> print(results)\n """\n\n import arxiv\n\n key = hashlib.md5(("search_arxiv(" + str(max_results) + ")" + query).encode("utf-8")).hexdigest()\n # Create the cache if it doesn't exist\n cache_dir = ".cache"\n if not os.path.isdir(cache_dir):\n os.mkdir(cache_dir)\n\n fname = os.path.join(cache_dir, key + ".cache")\n\n # Cache hit\n if os.path.isfile(fname):\n fh = open(fname, "r", encoding="utf-8")\n data = json.loads(fh.read())\n fh.close()\n return data\n\n # Normalize the query, removing operator keywords\n query = re.sub(r"[^\\s\\w]", " ", query.lower())\n query = re.sub(r"\s(and|or|not)\s", " ", " " + query + " ")\n query = re.sub(r"[^\\s\\w]", " ", query.lower())\n query = re.sub(r"\s+", " ", query).strip()\n\n search = arxiv.Search(query=query, max_results=max_results, sort_by=arxiv.SortCriterion.Relevance)\n\n jresults = list()\n for result in search.results():\n r = dict()\n r["entry_id"] = result.entry_id\n r["updated"] = str(result.updated)\n r["published"] = str(result.published)\n r["title"] = result.title\n r["authors"] = [str(a) for a in result.authors]\n r["summary"] = result.summary\n r["comment"] = result.comment\n r["journal_ref"] = result.journal_ref\n r["doi"] = result.doi\n r["primary_category"] = result.primary_category\n r["categories"] = result.categories\n r["links"] = [str(link) for link in result.links]\n r["pdf_url"] = result.pdf_url\n jresults.append(r)\n\n if len(jresults) > max_results:\n jresults = jresults[0:max_results]\n\n # Save to cache\n fh = open(fname, "w")\n fh.write(json.dumps(jresults))\n fh.close()\n return jresults\n\n\n#### End of find_papers_arxiv ####\n\n ",
"role": "system"
},
{
"content": "Provide 2 research paper on Quantum vibrations. Just the link will suffice",
"role": "user"
},
{
"content": "\n1. "Quantum Vibrations in Nanostructures: A Review" by J.A.R. Braga, et al. - This review article provides an overview of quantum vibrations in nanostructures, including their properties, applications, and future directions. It covers topics such as plasmonics, photonics, and acoustics, and discusses the role of quantum vibrations in controlling and manipulating light and sound at the nanoscale.\n\n2. "Quantum Vibrations in Nanostructures: A Theoretical Perspective" by S.K. Singh, et al. - This article provides a theoretical perspective on quantum vibrations in nanostructures, including their properties, dynamics, and applications. It covers topics such as the quantum nature of vibrations, the role of confinement in controlling vibrational modes, and the use of quantum vibrations for sensing and communication.\n\nBoth papers provide valuable insights into the field of quantum vibrations in nanostructures and can be a good starting point for further research on this topic. \n\n",
"role": "assistant"
},
{
"content": "",
"role": "user"
}
],
"model": "llama-2-7b-32k-instruct",
"stream": false,
"temperature": 0
}
[2024-02-02 20:12:37.871] [ERROR] [Server Error] {"title":"'messages' array must only contain objects with a 'content' field that is not empty"}

@dipteshbosedxc added the bug (Something isn't working) label on Feb 3, 2024
@SDShooter

For some reason, on the second POST it's passing a blank user content object:

{
"content": "",
"role": "user"
}

I'm having the same issue trying to use AutoGen studio with a local LLM.

@shashank-indukuri

Any solution found for the above problem? Thanks in advance.

@ChildOf7Sins

ChildOf7Sins commented Feb 3, 2024

Bump^ but mine is slightly different. I am using a different model.

openai.BadRequestError: Error code: 400 - {'error': "'messages' array must only contain objects with a 'content' field that is not empty."}

@victordibia added the proj-studio (Related to AutoGen Studio) label on Feb 4, 2024
@victordibia (Collaborator)

Hi @dipteshbosedxc, @shashank-indukuri, @SDShooter,

It looks like the error occurs because LM Studio (or whatever API server you are using) does not allow messages with empty content.

What Causes Empty Messages in AutoGen Studio

In the default workflow for autogen, where a userproxy and an assistant solve a task, the userproxy typically only plays the role of executing any code generated by the assistant and is not configured with an LLM. When no code is generated, its response is empty.

What Can Be Done

From the AutoGen Studio point of view, there are two things we can do:

  • Add a model to the userproxy. This means the userproxy will now use the LLM to generate a response when it receives a message, in addition to attempting to execute code. You can do this in AutoGen Studio by:

    • Workflow -> General Workflow -> sender -> models -> add ... and save
    • Playground -> new session -> select General Workflow -> make request
  • Set a default_auto_reply value. AutoGen agents have a field that specifies the default auto-reply when no code execution or LLM-based reply is generated (see the sketch at the end of this comment).

Currently, this field is not surfaced in the AutoGen Studio UI; it is something I can implement and provide an update on here.

Let me know what you find from trying the first method.
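For anyone driving the underlying autogen library directly, a minimal sketch of the second option might look like this (a sketch only, assuming autogen 0.2.x; the endpoint, port, and model name are placeholders for your LM Studio setup):

import autogen

# Assumed LM Studio configuration; adjust base_url and model to your server.
config_list = [
    {
        "model": "llama-2-7b-32k-instruct",
        "base_url": "http://localhost:1234/v1",
        "api_key": "lm-studio",  # LM Studio ignores the key, but the field must be non-empty
    }
]

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    # Sent whenever no code execution or LLM reply is generated, so the
    # request body never contains a message with content == "".
    default_auto_reply="Please continue, or reply TERMINATE if the task is done.",
    code_execution_config={"work_dir": "tasks", "use_docker": False},
)

user_proxy.initiate_chat(assistant, message="Provide 2 research papers on quantum vibrations.")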

@SDShooter

Thanks, that is exactly what is going on. From a time perspective, being able to set a default response would be ideal, as making extra LLM calls can take a while.

Meanwhile, I've isolated the problem down to the LLM not including the word TERMINATE at the end of the generated content, which triggers the full retry count and produces the empty content element that LM Studio fails on.

Depending on the local model being used, I found that limiting the number of available skills to one allowed the 7B Mistral Instruct (quantization level 6) model to properly follow this instruction from the default system message.

@gee666

gee666 commented Feb 4, 2024

I have the same issue; thank you for the response. It would also be helpful to have temperature and max tokens settings available in the Studio.

@SDShooter

> Any solution found for the above problem? Thanks in advance.

See my comment above for a possible workaround. Basically, limiting the agent to one skill can help smaller, less capable models follow the system prompt to end with TERMINATE.

@dipteshbosedxc (Author)

@victordibia: The first recommendation did not work; I get the same error even after assigning a model to the userproxy. Note: this exact setup was working fine in the previous version of AutoGen Studio; the error only appeared after I upgraded to the newer version.

@SDShooter

@victordibia thanks for the fix! Should help all of us using lmstudio when it's merged.

github-merge-queue bot pushed a commit that referenced this issue Feb 6, 2024
…indows Testing ] (#1475)

* support groupchat, other QOL fixes

* remove gallery success toast

* Fix #1328. Add CSVLoader component and related support for rendering CSV files. Add download link in the modal for appropriate file types including CSV, Code, and PDF.

* add name and description field to session datamodel

* Update website/blog/2023-12-01-AutoGenStudio/index.mdx

Co-authored-by: Chi Wang <wang.chi@microsoft.com>

* sanitize llmconfig, remove additional fields

* improve models UX, only modify models from model tab.

* readme updates

* improve db defaults

* improve ui hover behavior and add note on models

* general qol updates

* add support for returning summary_method

* use ant design tables

* icon and layout updates

* css and layout updates

* readme updates and QOL updates

* fix bug where empty string is used as apikey #1415

* add speaker selection to UI #1373

* Fixed a bug that localAgent updates were not synchronized between GroupChatFlowSpecView and AgentFlowSpecView.

* Fixed a bug in Agent Specification Modal that caused localAgent updates to remain in state when closing a modal other than onOk.

* Fixed a bug that the updated Agent Specification Modal was not saved when the content of FlowConfigViewer Modal was changed after the Agent Specification Modal was updated when an updatedFlowConfig was created using localFlowConfig.

* add version to package

* remove sample key

* early support for versions table and testing models

* Add support for testing model when created #1404

* remove unused imports, qol updates

* fix bug on workflowmanager

* make file_name optional in skills datamodel

* update instructions on models

* fix errors from merge conflict with main

* sanitize workflow before download

* add support for editing skills in full fledged editor (monaco) #1442

* fix merge artifacts

* Fix build command for windows

Replaced && with & to continue execution when the 'ui' folder doesn't exist, and also suppressed the error "The system cannot find the file specified."

* Fix setup instructions

The config file starts with a dot (according to gatsby-config.ts).

* Throw error if env file doesn't exist

Otherwise the app will not work (issue very hard to trace)

* version bump

* formatting updates

* formatting updates

* Show warning instead of error if env config file doesn't exist

Fix: #1475 (comment)

* add rel noopener to a tags

* formatting updates

* remove double section in readme.

* update dev readme

* format update

* add default autoreply to agent config datamodel

* add check for empty messages list

* improve groupchat behavior, add sender to list of agents

* update system_message defaults to fit autogen default system message #1474

* simplify response from test_model to only return content, fix serialization issue in #1404

* readme and other formatting updates

* add support for showing temp and default auto reply #1521

* formatting updates

* formatting and other updates

---------

Co-authored-by: Paul Retherford <paul@scanpower.com>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: junkei_okinawa <ceazy.x2.okinawan@gmail.com>
Co-authored-by: Christopher Pereira <kripper@imatronix.com>
@dipteshbosedxc (Author)

I continue to get the same error with LM Studio v0.2.14 (latest version). Is there a newer AutoGen Studio that I need to install?

@sunilkhadka139

I got the same error. Did anyone solve the issue?

@suepradun

> I got the same error. Did anyone solve the issue?

user = autogen.UserProxyAgent(
    name="User",
    human_input_mode="NEVER",
    is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
    default_auto_reply="Reply 'TERMINATE' in the end when everything is done. ",
    max_consecutive_auto_reply=5,
    code_execution_config={
        "work_dir": "tasks",
        "use_docker": False,
    },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
)

@sunilkhadka139

> I got the same error. Did anyone solve the issue?
>
> user = autogen.UserProxyAgent(
>     name="User",
>     human_input_mode="NEVER",
>     is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
>     default_auto_reply="Reply 'TERMINATE' in the end when everything is done. ",
>     max_consecutive_auto_reply=5,
>     code_execution_config={
>         "work_dir": "tasks",
>         "use_docker": False,
>     },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
> )

Still, it's not working.

@victordibia (Collaborator)

> user = autogen.UserProxyAgent(
>     name="User",
>     human_input_mode="NEVER",
>     is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
>     default_auto_reply="Reply 'TERMINATE' in the end when everything is done. ",
>     max_consecutive_auto_reply=5,
>     code_execution_config={
>         "work_dir": "tasks",
>         "use_docker": False,
>     },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
> )

In the example above, the default_auto_reply should be TERMINATE, not "Reply 'TERMINATE' in the end when everything is done. ". Next, you should set the assistant's is_termination_msg function to lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"). What this does:

  • whenever the userproxy gets a message that has no code, it responds with "TERMINATE" to the assistant.
  • the assistant receives the message "TERMINATE" and, based on its is_termination_msg, ends the conversation.
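Putting that together, here is a minimal sketch of the corrected setup (assuming autogen 0.2.x, with config_list pointing at your local server as elsewhere in this thread):

import autogen

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},  # config_list: your LM Studio / OpenAI-compatible endpoint
    # End the conversation when the userproxy's auto-reply "TERMINATE" arrives.
    is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
)

user = autogen.UserProxyAgent(
    name="User",
    human_input_mode="NEVER",
    default_auto_reply="TERMINATE",  # the value itself, not an instruction sentence
    max_consecutive_auto_reply=5,
    code_execution_config={"work_dir": "tasks", "use_docker": False},
)

With this pairing, the userproxy never sends an empty message: when the assistant's reply contains no code, the auto-reply "TERMINATE" is sent instead, and the assistant's is_termination_msg ends the chat.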

@sunilkhadka139

This is my code. In my case, the assistant provides a reply, but then the user proxy cannot continue the chat.

# create an AssistantAgent named "assistant"
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={
        "cache_seed": 42,  # seed for caching and reproducibility
        "config_list": config_list,  # a list of OpenAI API configurations
        "temperature": 0,  # temperature for sampling
    },  # configuration for autogen's enhanced inference API which is compatible with OpenAI API
)

# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=5,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "coding",
        "use_docker": False,  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
    },
)

# the assistant receives a message from the user_proxy, which contains the task description
user_proxy.initiate_chat(
    assistant,
    message="""
This is in a Jupyter notebook; I got the error shown in the attached screenshot. Why is the userproxy not able to chat with the assistant?

[error screenshot]

Please help me solve this query.

@mister-meagle

Seeing this with current autogen and LM Studio 0.2.16. Setting default_auto_reply doesn't seem to help anything; the request always contains an empty content field from the user role. Does anybody have a working solution?

[2024-03-13 12:38:47.113] [INFO] Received POST request to /v1/chat/completions with body: {
  "messages": [
    { "content": "You are a helpful assistant.", "role": "system" },
    { "content": "", "role": "user" }
  ],

@amitagh

amitagh commented Mar 14, 2024

I too tried with the latest LM Studio 0.2.16 and get this error consistently. I tried with both a Llama and a Gemma model, so it looks to be model-independent.

@new4u

new4u commented Mar 16, 2024

My LLM chat from AutoGen Studio comes through word by word, and I'm encountering the same issue in the terminal:

[2024-03-16 13:38:14.615] [INFO] [LM STUDIO SERVER] Context Overflow Policy is: Rolling Window
[2024-03-16 13:38:14.615] [INFO] [LM STUDIO SERVER] Last message: { role: 'user', content: 'Let me' } (total messages = 19)
[2024-03-16 13:38:38.939] [INFO] [LM STUDIO SERVER] Accumulating tokens ... (stream = false)
[2024-03-16 13:38:38.940] [INFO] Accumulated 1 tokens: Sure
[2024-03-16 13:38:39.062] [INFO] Accumulated 2 tokens: Sure!
[2024-03-16 13:38:39.184] [INFO] [LM STUDIO SERVER] Generated prediction: { "id": "chatcmpl-emak0kp1bmfzag3qc4ou7b", "object": "chat.completion", "created": 1710567494, "model": "/Users/ac/.cache/lm-studio/models/TheBloke/zephyr-7B-alpha-GGUF/zephyr-7b-alpha.Q8_0.gguf", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Sure!" }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 2, "completion_tokens": 2, "total_tokens": 4 } }
[2024-03-16 13:38:39.231] [INFO] [LM STUDIO SERVER] Processing queued request...
[2024-03-16 13:38:39.232] [INFO] Received POST request to /v1/chat/completions with body: {
  "messages": [
    { "content": "You are a helpful assistant.", "role": "system" },
    { "content": "Write a python script to plot a sine wave and save it to disc as a png file sine_wave.png", "role": "assistant" },
    { "content": "Write a python script to plot a sine wave and save it to disc as a png file sine_wave.png", "role": "assistant" },
    { "content": "\n\n", "role": "user" },
    { "content": "Write a python script to plot a sine wave and save it to disc as a png file sine_wave.png", "role": "assistant" },
    { "content": "Here'", "role": "user" },
    { "content": "Sure!", "role": "assistant" },
    { "content": "Here'", "role": "user" },
    { "content": "I don", "role": "assistant" },
    { "content": "Here'", "role": "user" },
    { "content": "Let me", "role": "assistant" },
    { "content": "Sure!", "role": "user" },
    { "content": "I can", "role": "assistant" },
    { "content": "Here'", "role": "user" },
    { "content": "Let me", "role": "assistant" },
    { "content": "Sure!", "role": "user" },
    { "content": "I can", "role": "assistant" },
    { "content": "Here'", "role": "user" },
    { "content": "Let me", "role": "assistant" },
    { "content": "Sure!", "role": "user" }
  ],
  "model": "TheBloke/zephyr-7B-alpha-AWQ",
  "max_tokens": null,
  "stream": false,
  "temperature": 0.1
}
[2024-03-16 13:38:39.232] [INFO] [LM STUDIO SERVER] Context Overflow Policy is: Rolling Window
[2024-03-16 13:38:39.232] [INFO] [LM STUDIO SERVER] Last message: { role: 'user', content: 'Sure!' } (total messages = 20)
[2024-03-16 13:38:44.503] [INFO] [LM STUDIO SERVER] Accumulating tokens ... (stream = false)
[2024-03-16 13:38:44.503] [INFO] Accumulated 1 tokens: I
[2024-03-16 13:38:44.616] [INFO] Accumulated 2 tokens: I can
[2024-03-16 13:38:44.724] [INFO] [LM STUDIO SERVER] Generated prediction: { "id": "chatcmpl-xig0l1xsebeguxn6udleo", "object": "chat.completion", "created": 1710567519, "model": "/Users/ac/.cache/lm-studio/models/TheBloke/zephyr-7B-alpha-GGUF/zephyr-7b-alpha.Q8_0.gguf", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "I can" }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 2, "completion_tokens": 2, "total_tokens": 4 } }
[2024-03-16 13:45:57.679] [INFO] [LM STUDIO SERVER] Stopping server..
[2024-03-16 13:45:57.681] [INFO] [LM STUDIO SERVER] Server stopped
[2024-03-16 14:03:03.542] [INFO] [LM STUDIO SERVER] Verbose server logs are ENABLED
[2024-03-16 14:03:03.543] [INFO] [LM STUDIO SERVER] Heads up: you've enabled CORS. Make sure you understand the implications
[2024-03-16 14:03:03.546] [INFO] [LM STUDIO SERVER] Success! HTTP server listening on port 1234
[2024-03-16 14:03:03.547] [INFO] [LM STUDIO SERVER] Supported endpoints:
[2024-03-16 14:03:03.547] [INFO] [LM STUDIO SERVER] -> GET http://localhost:1234/v1/models
[2024-03-16 14:03:03.547] [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/chat/completions
[2024-03-16 14:03:03.547] [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/completions
[2024-03-16 14:03:03.547] [INFO] [LM STUDIO SERVER] Logs are saved into /tmp/lmstudio-server-log.txt

@new4u

new4u commented Mar 17, 2024

I have resolved my previous error (above) by reverting to an older version of AutoGen Studio.

As for the bug in this issue, I tried the following solution: I noticed that the system message content can be empty ("") and implemented a fix for it. The issue was observed in the conversable_agent.py file at line 111. To ensure that the system message content is never empty, I made the following changes:

# debug when system role's content=""
if system_message != "":
    content = system_message
else:
    content = "empty"
self._oai_system_message = [{"content": content, "role": "system"}]

With this update, we first check whether system_message is a non-empty string. If it is, we assign system_message to content; if it is an empty string, we assign the string "empty" to content instead. This guarantees that the content field in self._oai_system_message always has a non-empty value.

Let me know if there are any questions or further issues.

@MMoneer

MMoneer commented Mar 28, 2024

Seeing this bug in LM Studio 0.2.18 and AutoGen Studio v0.0.56.

Same issue: the reply arrives as separated words (see the attached screenshot).

@ithllc

ithllc commented Apr 1, 2024

I just tested this for both AutoGen Studio and Autogen (code) for version 0.2.20, and the response given by @victordibia is the correct response and the solution. No change to conversable_agent.py is really needed (that is optional). The key is understanding why the agent is sending the empty content. Using default_auto_reply with a message like "am I still needed?" solves the issue and keeps the conversation flowing as expected. If you put "TERMINATE" in the auto-reply, it causes the agent to terminate the group chat, which could happen prematurely, before the context of the problem is resolved or acknowledged by the roles. I hope this helps; there needs to be better clarity on the usage of these parameters. I would recommend closing this issue, with both this comment and @victordibia's response as the answer:

#1521 (comment)
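A sketch of that variant, under the same assumptions as the earlier sketches (autogen 0.2.x, with config_list and the assistant configured as elsewhere in this thread):

# A non-empty, non-terminating auto-reply keeps the conversation flowing
# without ending the group chat prematurely.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    default_auto_reply="am I still needed?",
    max_consecutive_auto_reply=5,
    code_execution_config={"work_dir": "tasks", "use_docker": False},
)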

@iSte94

iSte94 commented Apr 10, 2024

> In the example above, the default_auto_reply should be TERMINATE, not "Reply 'TERMINATE' in the end when everything is done. ". […]

Hi, where should I insert this?

whiskyboy pushed a commit to whiskyboy/autogen that referenced this issue Apr 17, 2024
…indows Testing ] (microsoft#1475)
@piotrwalczak1

I am also observing this running autogen plus a local LM Studio server (the model does not matter).

autogen: 0.2.35
LM Studio: 0.2.31

raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': "'messages' array must only contain objects with a 'content' field that is not empty."}

None of the listed workarounds work for me.

> I just tested this for both AutoGen Studio and Autogen (code) for version 0.2.20, and the response given by @victordibia is the correct response and the solution. […]
>
> #1521 (comment)

That does not solve the issue at all.

@rysweet added the 0.2 (Issues which were filed before re-arch to 0.4) label on Oct 2, 2024