
[Feature Request]: autogen/conversable_agent.py ----- summary_args lack of options for "reflection and self-criticism" #2621

Closed
@wangruxun

Description

Is your feature request related to a problem? Please describe.

DEFAULT_SUMMARY_PROMPT = "Summarize the takeaway from the conversation. Do not add any introductory phrases." in the summary_args parameter produces only a plain summary of the conversation. However, reflection and self-criticism are core capabilities of an LLM-based agent, yet this capability is not built into the conversable agent, which is unreasonable.
  The name "summary_method": "reflection_with_llm" is therefore misleading, because it is just a summary without any reflection or self-criticism. It should be renamed "summary_with_llm", and a new option should be introduced that actually performs "reflection_with_llm".

Describe the solution you'd like

  1. I suggest adding:
    DEFAULT_REFLECTION_SELF_CRITICISM_SUMMARY_PROMPT = "Explain why you reached this conclusion, in around 150 words. As a super agent, give constructive self-criticism of the current evaluation, covering its weaknesses and strengths, and summarize."

(2.1) Before modification:
Supported strings are "last_msg" and "reflection_with_llm":

  • when set to "last_msg", it returns the last message of the dialog as the summary.
  • when set to "reflection_with_llm", it returns a summary extracted using an llm client.
    llm_config must be set in either the recipient or sender.

    The description of "reflection_with_llm" is inaccurate: currently it is just a summary, so the name should be changed to "summary_with_llm".

(2.2) After modification:
Supported strings are "last_msg", "summary_with_llm" and "reflection_with_llm":

  • when set to "last_msg", it returns the last message of the dialog as the summary.
  • when set to "summary_with_llm", it returns a summary extracted using an llm client.
  • when set to "reflection_with_llm", it returns a reflection and self-criticism extracted using an llm client.
    llm_config must be set in either the recipient or sender.
3. For example:

    chat_results = await user.a_initiate_chats(
        [
            {
                "chat_id": 1,
                "recipient": financial_assistant,
                "message": financial_tasks[0],
                "silent": False,
                "summary_method": "summary_with_llm",  # this produces only a summary
            },
            {
                "chat_id": 2,
                "prerequisites": [1],
                "recipient": research_assistant,
                "message": financial_tasks[1],
                "silent": False,
                "summary_method": "reflection_with_llm",  # this produces reflection, self-criticism and a summary
            },
        ]
    )

Additional context

summary_with_llm

Labels: 0.2 (issues which are related to the pre-0.4 codebase), needs-triage