Releases: microsoft/autogen
python-v0.6.4
What's New
More help from @copilot-swe-agent in this release.
Improvements to GraphFlow
GraphFlow now behaves the same way as RoundRobinGroupChat, SelectorGroupChat, and other teams after a termination condition hits: it retains its execution state and can be resumed with a new task or an empty task. The execution state is reset only when the graph finishes execution, i.e., when there is no next available agent to choose from.
Also, the inner StopAgent has been removed, so there will be no last message coming from it. Instead, the stop_reason field in the TaskResult will carry the stop message.
- Fix GraphFlow to support multiple task execution without explicit reset by @copilot-swe-agent in #6747
- Fix GraphFlowManager termination to prevent _StopAgent from polluting conversation context by @copilot-swe-agent in #6752
Improvements to Workbench implementations
McpWorkbench and StaticWorkbench now support overriding tool names and descriptions. This allows client-side customization of server-side tools, for better adaptability.
- Add tool name and description override functionality to Workbench implementations by @copilot-swe-agent in #6690
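Conceptually, an override swaps the server-advertised name and description for client-chosen ones before the tools reach the model. Here is a minimal sketch in plain Python (the dict shapes and helper are hypothetical illustrations, not the actual Workbench API):

```python
# Tool metadata as a server might advertise it (hypothetical example).
server_tools = [
    {"name": "repo_search_v2_internal", "description": "Search."},
]

# Client-side overrides: clearer names and richer descriptions for the model.
overrides = {
    "repo_search_v2_internal": {
        "name": "search_repository",
        "description": "Search the code repository by keyword.",
    },
}

def apply_overrides(tools: list, overrides: dict) -> list:
    # Merge each tool with its override, if any; override fields win on conflict.
    return [{**tool, **overrides.get(tool["name"], {})} for tool in tools]

for tool in apply_overrides(server_tools, overrides):
    print(tool["name"], "-", tool["description"])
# search_repository - Search the code repository by keyword.
```

The server-side tool is untouched; only the schema presented to the model changes.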
All Changes
- Update documentation version by @ekzhu in #6737
- Fix function calling support for Llama3.3 by @Z1m4-blu3 in #6750
- Fix GraphFlow to support multiple task execution without explicit reset by @copilot-swe-agent in #6747
- Fix GraphFlowManager termination to prevent _StopAgent from polluting conversation context by @copilot-swe-agent in #6752
- Add tool name and description override functionality to Workbench implementations by @copilot-swe-agent in #6690
- Added DuckDuckGo Search Tool and Agent in AutoGen Extensions by @varadsrivastava in #6682
- Add script to automatically generate API documentation by @ekzhu in #6755
- Move docs from python/packages/autogen-core to python/docs by @ekzhu in #6757
- Add reflection for claude model in AssistantAgent by @ekzhu in #6763
- Add autogen-ext-yepcode project to community projects by @marcos-muino-garcia in #6764
- Update GitHub Models url to the new url by @sgoedecke in #6759
- SingleThreadedAgentRuntime to use subclass check for factory_wrapper instead of equality by @ZenWayne in #6731
- feat: add qwen2.5vl support by @rfsousa in #6650
- Remove otel semcov package from core dependencies by @ekzhu in #6775
- Update tracing doc by @ekzhu in #6776
- Update version to 0.6.3 by @ekzhu in #6781
- Update website to 0.6.3 by @ekzhu in #6782
- Remove duckduckgo search tools and agents by @ekzhu in #6783
- Update to version 0.6.4 by @ekzhu in #6784
New Contributors
- @Z1m4-blu3 made their first contribution in #6750
- @varadsrivastava made their first contribution in #6682
- @marcos-muino-garcia made their first contribution in #6764
- @sgoedecke made their first contribution in #6759
- @rfsousa made their first contribution in #6650
Full Changelog: python-v0.6.2...python-v0.6.4
python-v0.6.2
What's New
Streaming Tools
This release introduces streaming tools and updates AgentTool and TeamTool to support run_json_stream. The new interface exposes the inner events of tools when calling run_stream on agents and teams. AssistantAgent is also updated to use run_json_stream when the tool supports streaming. So, when using AgentTool or TeamTool with AssistantAgent, you can receive the inner agent's or team's events through the main agent.
To create a new streaming tool, subclass autogen_core.tools.BaseStreamTool and implement run_stream. To create a new streaming workbench, subclass autogen_core.tools.StreamWorkbench and implement call_tool_stream.
tool_choice parameter for ChatCompletionClient and subclasses
Introduces a new tool_choice parameter to the ChatCompletionClient create and create_stream methods.
This is also the first PR by @copilot-swe-agent!
- Add tool_choice parameter to ChatCompletionClient create and create_stream methods by @copilot-swe-agent in #6697
AssistantAgent's inner tool calling loop
Now you can enable AssistantAgent with an inner tool calling loop by setting the max_tool_iterations parameter through its constructor. The new implementation calls the model and executes tools until (1) the model stops generating tool calls, or (2) max_tool_iterations has been reached. This change simplifies the usage of AssistantAgent.
- Feat/tool call loop by @tejas-dharani in #6651
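The control flow of such a loop can be sketched in plain Python (illustrative only; the helper names and dict shapes are hypothetical, not the actual AssistantAgent internals):

```python
def run_tool_loop(call_model, execute_tool, task: str, max_tool_iterations: int = 5) -> str:
    """Call the model, execute any requested tool calls, and repeat until the
    model produces a final answer or the iteration budget is exhausted."""
    messages = [task]
    reply = call_model(messages)
    for _ in range(max_tool_iterations):
        if not reply.get("tool_calls"):  # (1) model stopped generating tool calls
            break
        for call in reply["tool_calls"]:
            messages.append(execute_tool(call))  # feed tool results back in
        reply = call_model(messages)
    # (2) otherwise we stopped because max_tool_iterations was reached
    return reply["content"]

# Toy stand-ins: the "model" requests a tool once, then answers with its result.
def call_model(messages):
    if "42" in str(messages):
        return {"content": "The answer is 42.", "tool_calls": []}
    return {"content": "", "tool_calls": ["lookup_answer"]}

def execute_tool(call):
    return "lookup_answer -> 42"

print(run_tool_loop(call_model, execute_tool, "What is the answer?"))  # The answer is 42.
```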
OpenTelemetry GenAI Traces
This release adds new traces create_agent, invoke_agent, and execute_tool from the GenAI Semantic Conventions.
You can also disable agent runtime traces by setting the environment variable AUTOGEN_DISABLE_RUNTIME_TRACING=true.
output_task_messages flag for run and run_stream
You can use the new flag to control whether the input task messages are emitted as part of run_stream of agents and teams.
- Fix output task messages 6150 by @tejas-dharani in #6678
Mem0 Extension
Added a Mem0 memory extension so you can use it as memory for AutoGen agents.
- Add mem0 Memory Implementation by @alpha-xone in #6510
Improvement to GraphFlow
uv update
We have removed the uv version limit, so you can use the latest version to develop AutoGen.
Other Python Related Changes
- SK KernelFunction from ToolSchemas by @peterychang in #6637
- docs: fix shell command with escaped brackets in pip install by @roharon in #6464
- Use yaml safe_load instead of load by @ekzhu in #6672
- Feature/chromadb embedding functions #6267 by @tejas-dharani in #6648
- docs: Memory and RAG: add missing backtick for class reference by @roysha1 in #6656
- fix: fix devcontainer issue with AGS by @victordibia in #6675
- fix: fix self-loop in workflow by @ZenWayne in #6677
- update: openai response api by @bassmang in #6622
- fix serialization issue in streamablehttp mcp tools by @victordibia in #6721
- Fix completion tokens none issue 6352 by @tejas-dharani in #6665
- Fix/broad exception handling #6280 by @tejas-dharani in #6647
- fix: enable function_calling for o1-2024-12-17 by @jeongsu-an in #6725
- Add support for Gemini 2.5 flash stable by @DavidSchmidt00 in #6692
- Feature/agentchat message id field 6317 by @tejas-dharani in #6645
- Fix mutable default in ListMemoryConfig by @mohiuddin-khan-shiam in #6729
- update version to 0.6.2 by @ekzhu in #6734
- Update agentchat documentation with latest changes by @ekzhu in #6735
New Contributors
- @roharon made their first contribution in #6464
- @tejas-dharani made their first contribution in #6648
- @roysha1 made their first contribution in #6656
- @ZenWayne made their first contribution in #6677
- @alpha-xone made their first contribution in #6510
- @jeongsu-an made their first contribution in #6725
- @DavidSchmidt00 made their first contribution in #6692
- @mohiuddin-khan-shiam made their first contribution in #6729
- @copilot-swe-agent made their first contribution in #6697
Full Changelog: python-v0.6.1...python-v0.6.2
python-v0.6.1
python-v0.6.0
What's New
Change to BaseGroupChatManager.select_speaker and support for concurrent agents in GraphFlow
We made a type hint change to the select_speaker method of BaseGroupChatManager to allow a list of agent names as a return value. This makes it possible to support concurrent agents in GraphFlow, such as in a fan-out-fan-in pattern.
```python
# Original signature:
async def select_speaker(self, thread: Sequence[BaseAgentEvent | BaseChatMessage]) -> str:
    ...

# New signature:
async def select_speaker(self, thread: Sequence[BaseAgentEvent | BaseChatMessage]) -> List[str] | str:
    ...
```
Now you can run GraphFlow with concurrent agents as follows:
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import DiGraphBuilder, GraphFlow
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    # Initialize agents with OpenAI model clients.
    model_client = OpenAIChatCompletionClient(model="gpt-4.1-nano")
    agent_a = AssistantAgent("A", model_client=model_client, system_message="You are a helpful assistant.")
    agent_b = AssistantAgent("B", model_client=model_client, system_message="Translate input to Chinese.")
    agent_c = AssistantAgent("C", model_client=model_client, system_message="Translate input to Japanese.")

    # Create a directed graph with fan-out flow A -> (B, C).
    builder = DiGraphBuilder()
    builder.add_node(agent_a).add_node(agent_b).add_node(agent_c)
    builder.add_edge(agent_a, agent_b).add_edge(agent_a, agent_c)
    graph = builder.build()

    # Create a GraphFlow team with the directed graph.
    team = GraphFlow(
        participants=[agent_a, agent_b, agent_c],
        graph=graph,
        termination_condition=MaxMessageTermination(5),
    )

    # Run the team and print the events.
    async for event in team.run_stream(task="Write a short story about a cat."):
        print(event)


asyncio.run(main())
```
Agents B and C will run concurrently in separate coroutines.
Callable conditions for GraphFlow edges
Now you can use lambda functions or other callables to specify edge conditions in GraphFlow. This addresses the issue that keyword substring-based conditions cannot cover all possibilities, which could lead to a "cannot find next agent" bug.
NOTE: Callable conditions are currently experimental and cannot be serialized with the graph.
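Any Python callable over the message content can serve as a condition, which is what substring matching cannot express. A minimal sketch (illustrative; the exact callable signature that DiGraphBuilder expects may differ):

```python
# An edge condition as arbitrary Python logic, not a keyword substring.
def needs_revision(message_text: str) -> bool:
    # Regexes, parsing, score thresholds, etc. would all work here.
    return "REVISE" in message_text.upper()

# A lambda works too, e.g. routing on message length.
is_long = lambda message_text: len(message_text) > 100

print(needs_revision("Please revise the draft."))  # True
print(is_long("short"))                            # False
```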
New Agent: OpenAIAgent
- Feature: Add OpenAIAgent backed by OpenAI Response API by @jay-thakur in #6418
MCP Improvement
- Support the Streamable HTTP transport for MCP by @withsmilo in #6615
AssistantAgent Improvement
- Add tool_call_summary_msg_format_fct and test by @ChrisBlaa in #6460
- Support multiple workbenches in assistant agent by @bassmang in #6529
Code Executors Improvement
- Add option to auto-delete temporary files in LocalCommandLineCodeExecutor by @holtvogt in #6556
- Include all output to error output in docker jupyter code executor by @ekzhu in #6572
OpenAIChatCompletionClient Improvement
- Default usage statistics for streaming responses by @peterychang in #6578
- Add Llama API OAI compatible endpoint support by @WuhanMonkey in #6442
OllamaChatCompletionClient Improvement
AnthropicBedrockChatCompletionClient Improvement
- Allow implicit AWS credential setting for AnthropicBedrockChatCompletionClient by @GeorgeEfstathiadis in #6561
MagenticOneGroupChat Improvement
Other Changes
- Update website 0.5.7 by @ekzhu in #6527
- feat: add qwen3 support by @mirpo in #6528
- Fix missing tools in logs by @afzalmushtaque in #6532
- Update to stable Microsoft.Extensions.AI release by @stephentoub in #6552
- fix: CodeExecutorAgent prompt misuse by @Dormiveglia-elf in #6559
- Update README.md by @CakeRepository in #6506
- fix:Prevent Async Event Loop from Running Indefinitely by @wfge in #6530
- Update state.ipynb, fix a grammar error by @realethanyang in #6448
- Add gemini 2.5 fash compatibility by @dmenig in #6574
- remove superfluous underline in the docs by @peterychang in #6573
- Add/fix windows install instructions by @peterychang in #6579
- Add created_at to BaseChatMessage and BaseAgentEvent by @withsmilo in #6557
- feat: Add missing Anthropic models (Claude Sonnet 4, Claude Opus 4) by @withsmilo in #6585
- Missing UserMessage import by @AlexeyKoltsov in #6583
- feat: [draft] update version of azureaiagent by @victordibia in #6581
- Add support for specifying the languages to parse from the CodeExecutorAgent response by @Ethan0456 in #6592
- feat: bump ags version, minor fixes by @victordibia in #6603
- note: note selector_func is not serializable by @bassmang in #6609
- Use structured output for m1 orchestrator by @ekzhu in #6540
- Parse backtick-enclosed json by @peterychang in #6607
- fix typo in the doc distributed-agent-runtime.ipynb by @bhakimiy in #6614
- Update version to 0.6.0 by @ekzhu in #6624
New Contributors
- @mirpo made their first contribution in #6528
- @ChrisBlaa made their first contribution in #6460
- @WuhanMonkey made their first contribution in #6442
- @afzalmushtaque made their first contribution in #6532
- @stephentoub made their first contribution in #6552
- @CakeRepository made their first contribution in #6506
- @wfge made their first contribution in #6530
- @realethanyang made their first contribution in #6448
- @GeorgeEfstathiadis made their first contribution in #6561
- @dmenig made their first contribution in #6574
- @holtvogt made their first contribution in #6556
- @AlexeyKoltsov made their first contribution in #6583
- @bhakimiy made their first contribution in #6614
Full Changelog: python-v0.5.7...python-v0.6.0
python-v0.5.7
What's New
AzureAISearchTool Improvements
The Azure AI Search Tool API now features unified methods:
- create_full_text_search() (supporting "simple", "full", and "semantic" query types)
- create_vector_search() and create_hybrid_search()
We also added support for client-side embeddings, defaulting to service-side embeddings when client embeddings aren't provided.
If you have been using create_keyword_search(), update your code to use create_full_text_search() with the "simple" query type.
- Simplify Azure Ai Search Tool by @jay-thakur in #6511
SelectorGroupChat Improvements
To support long contexts for the model-based selector in SelectorGroupChat, you can pass a model context object through the new model_context parameter to customize the messages sent to the model client when selecting the next speaker.
- Add model_context to SelectorGroupChat for enhanced speaker selection by @Ethan0456 in #6330
OTEL Tracing Improvements
We added new metadata and message content fields to the OTEL traces emitted by the SingleThreadedAgentRuntime.
- improve Otel tracing by @peterychang in #6499
Agent Runtime Improvements
- Add ability to register Agent instances by @peterychang in #6131
Other Python Related Changes
- Update website 0.5.6 by @ekzhu in #6454
- Sample for integrating Core API with chainlit by @DavidYu00 in #6422
- Fix Gitty prompt message by @emmanuel-ferdman in #6473
- Fix: Move the createTeam function by @xionnon in #6487
- Update docs.yml by @victordibia in #6493
- Add gpt 4o search by @victordibia in #6492
- Fix header icons focus and hover style for better accessibility by @AndreaTang123 in #6409
- improve Otel tracing by @peterychang in #6499
- Fix AnthropicBedrockChatCompletionClient import error by @victordibia in #6489
- fix/mcp_session_auto_close_when_Mcpworkbench_deleted by @SongChiYoung in #6497
- fixes the issues where exceptions from MCP server tools aren't serial… by @peterj in #6482
- Update version 0.5.7 by @ekzhu in #6518
- FIX/mistral could not recive name field by @SongChiYoung in #6503
New Contributors
- @emmanuel-ferdman made their first contribution in #6473
Full Changelog: python-v0.5.6...python-v0.5.7
python-v0.5.6
What's New
GraphFlow: customized workflows using directed graphs
Should I say finally? Yes, finally, we have workflows in AutoGen. GraphFlow is a new team class in the AgentChat API. One way to think of GraphFlow is as a version of SelectorGroupChat with a directed graph as the selector_func. However, it is actually more powerful, because the abstraction also supports concurrent agents.
Note: GraphFlow is still an experimental API. Watch out for changes in future releases.
For more details, see our newly added user guide on GraphFlow.
If you are in a hurry, here is an example of creating a fan-out-fan-in workflow:
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import DiGraphBuilder, GraphFlow
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # Create an OpenAI model client.
    client = OpenAIChatCompletionClient(model="gpt-4.1-nano")

    # Create the writer agent.
    writer = AssistantAgent(
        "writer",
        model_client=client,
        system_message="Draft a short paragraph on climate change.",
    )

    # Create two editor agents.
    editor1 = AssistantAgent("editor1", model_client=client, system_message="Edit the paragraph for grammar.")
    editor2 = AssistantAgent("editor2", model_client=client, system_message="Edit the paragraph for style.")

    # Create the final reviewer agent.
    final_reviewer = AssistantAgent(
        "final_reviewer",
        model_client=client,
        system_message="Consolidate the grammar and style edits into a final version.",
    )

    # Build the workflow graph.
    builder = DiGraphBuilder()
    builder.add_node(writer).add_node(editor1).add_node(editor2).add_node(final_reviewer)

    # Fan-out from writer to editor1 and editor2.
    builder.add_edge(writer, editor1)
    builder.add_edge(writer, editor2)

    # Fan-in both editors into the final reviewer.
    builder.add_edge(editor1, final_reviewer)
    builder.add_edge(editor2, final_reviewer)

    # Build and validate the graph.
    graph = builder.build()

    # Create the flow.
    flow = GraphFlow(
        participants=builder.get_participants(),
        graph=graph,
    )

    # Run the workflow.
    await Console(flow.run_stream(task="Write a short biography of Steve Jobs."))


asyncio.run(main())
```
Major thanks to @abhinav-aegis for the initial design and implementation of this amazing feature!
- Added Graph Based Execution functionality to Autogen by @abhinav-aegis in #6333
- Aegis graph docs by @abhinav-aegis in #6417
Azure AI Agent Improvement
- Add support for Bing grounding citation URLs by @abdomohamed in #6370
New Sample
Bug Fixes:
- [FIX] DockerCommandLineCodeExecutor multi event loop aware by @SongChiYoung in #6402
- FIX: GraphFlow serialize/deserialize and adding test by @SongChiYoung in #6434
- FIX: MultiModalMessage in gemini with openai sdk error occured by @SongChiYoung in #6440
- FIX/McpWorkbench_errors_properties_and_grace_shutdown by @SongChiYoung in #6444
- FIX: resolving_workbench_and_tools_conflict_at_desirialize_assistant_agent by @SongChiYoung in #6407
Dev Improvement
- Speed up Docker executor unit tests: 161.66s -> 108.07 by @SongChiYoung in #6429
Other Python Related Changes
- Update website for v0.5.5 by @ekzhu in #6401
- Add more mcp workbench examples to MCP API doc by @ekzhu in #6403
- Adding bedrock chat completion for anthropic models by @HariniNarasimhan in #6170
- Add missing dependency to tracing docs by @victordibia in #6421
- docs: Clarify missing dependencies in documentation (fix #6076) by @MarsWangyang in #6406
- Bing grounding citations by @abdomohamed in #6370
- Fix: Icons are not aligned vertically. by @xionnon in #6369
- Fix: Reduce multiple H1s to H2s in Distributed Agent Runtime page by @LuluZhuu in #6412
- update autogen version 0.5.6 by @ekzhu in #6433
- fix: ensure streaming chunks are immediately flushed to console by @Dormiveglia-elf in #6424
New Contributors
- @HariniNarasimhan made their first contribution in #6170
- @MarsWangyang made their first contribution in #6406
- @xionnon made their first contribution in #6369
- @LuluZhuu made their first contribution in #6412
- @mehrsa made their first contribution in #6443
- @Dormiveglia-elf made their first contribution in #6424
Full Changelog: python-v0.5.5...python-v0.5.6
python-v0.5.5
What's New
Introduce Workbench
A workbench is a collection of tools that share state and resources. For example, you can now use an MCP server through McpWorkbench rather than through tool adapters. This makes it possible to use MCP servers that require a shared session among the tools (e.g., a login session).
Here is an example of using AssistantAgent with the GitHub MCP Server.
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.mcp import McpWorkbench, StdioServerParams


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4.1-nano")
    server_params = StdioServerParams(
        command="docker",
        args=[
            "run",
            "-i",
            "--rm",
            "-e",
            "GITHUB_PERSONAL_ACCESS_TOKEN",
            "ghcr.io/github/github-mcp-server",
        ],
        env={
            "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
        },
    )
    async with McpWorkbench(server_params) as mcp:
        agent = AssistantAgent(
            "github_assistant",
            model_client=model_client,
            workbench=mcp,
            reflect_on_tool_use=True,
            model_client_stream=True,
        )
        await Console(agent.run_stream(task="Is there a repository named Autogen"))


asyncio.run(main())
```
Here is another example showing a web browsing agent using the Playwright MCP Server, AssistantAgent, and RoundRobinGroupChat.
```python
# First run `npm install -g @playwright/mcp@latest` to install the MCP server.
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.mcp import McpWorkbench, StdioServerParams


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4.1-nano")
    server_params = StdioServerParams(
        command="npx",
        args=[
            "@playwright/mcp@latest",
            "--headless",
        ],
    )
    async with McpWorkbench(server_params) as mcp:
        agent = AssistantAgent(
            "web_browsing_assistant",
            model_client=model_client,
            workbench=mcp,
            model_client_stream=True,
        )
        team = RoundRobinGroupChat(
            [agent],
            termination_condition=TextMessageTermination(source="web_browsing_assistant"),
        )
        await Console(team.run_stream(task="Find out how many contributors for the microsoft/autogen repository"))


asyncio.run(main())
```
Read more:
- MCP Workbench API Doc
- Creating a web browsing agent using workbench, in AutoGen Core User Guide
New Sample: AutoGen and FastAPI with Streaming
- Add example using autogen-core and FastAPI for handoff multi-agent design pattern with streaming and UI by @amith-ajith in #6391
New Termination Condition: FunctionalTermination
Other Python Related Changes
- update website version by @ekzhu in #6364
- TEST/change gpt4, gpt4o serise to gpt4.1nano by @SongChiYoung in #6375
- Remove name field from OpenAI Assistant Message by @ekzhu in #6388
- Add guide for workbench and mcp & bug fixes for create_mcp_server_session by @ekzhu in #6392
- TEST: skip when macos+uv and adding uv venv tests by @SongChiYoung in #6387
- AssistantAgent to support Workbench by @ekzhu in #6393
- Update agent documentation by @ekzhu in #6394
- Update version to 0.5.5 by @ekzhu in #6397
- Update: implement return_value_as_string for McpToolAdapter by @perfogic in #6380
- [doc] Clarify selector prompt for SelectorGroupChat by @ekzhu in #6399
- Document custom message types in teams API docs by @ekzhu in #6400
New Contributors
- @amith-ajith made their first contribution in #6391
Full Changelog: python-v0.5.4...python-v0.5.5
python-v0.5.4
What's New
Agent and Team as Tools
You can use AgentTool and TeamTool to wrap agents and teams into tools that can be used by other agents.
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.tools import AgentTool
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4")
    writer = AssistantAgent(
        name="writer",
        description="A writer agent for generating text.",
        model_client=model_client,
        system_message="Write well.",
    )
    writer_tool = AgentTool(agent=writer)
    assistant = AssistantAgent(
        name="assistant",
        model_client=model_client,
        tools=[writer_tool],
        system_message="You are a helpful assistant.",
    )
    await Console(assistant.run_stream(task="Write a poem about the sea."))


asyncio.run(main())
```
See AgentChat Tools API for more information.
Azure AI Agent
Introducing an adapter for Azure AI Agent, with support for file search, code interpreter, and more. See our Azure AI Agent Extension API.
- Add azure ai agent by @abdomohamed in #6191
Docker Jupyter Code Executor
Thinking about sandboxing your local Jupyter execution environment? We just added a new code executor to our family of code executors. See Docker Jupyter Code Executor Extension API.
- Make Docker Jupyter support to the Version 0.4 as Version 0.2 by @masquerlin in #6231
Canvas Memory
Shared "whiteboard" memory can be useful for agents collaborating on a common artifact such as code, a document, or an illustration. Canvas Memory is an experimental extension for sharing memory and exposing tools for agents to operate on the shared memory.
- Agentchat canvas by @lspinheiro in #6215
New Community Extensions
Updated links to new community extensions. Notably, autogen-contextplus provides advanced model context implementations with the ability to automatically summarize and truncate the model context used by agents.
- Add extentions: autogen-oaiapi and autogen-contextplus by @SongChiYoung in #6338
SelectorGroupChat Update
SelectorGroupChat now works with models that only support streaming mode (e.g., QwQ). It can also optionally emit the inner reasoning of the model used in the selector. Set emit_team_events=True and model_client_streaming=True when creating SelectorGroupChat.
- FEAT: SelectorGroupChat could using stream inner select_prompt by @SongChiYoung in #6286
- FEAT: SelectorGroupChat could using stream inner select_prompt by @SongChiYoung in #6286
CodeExecutorAgent Update
CodeExecutorAgent just got another refresh: it now supports a max_retries_on_error parameter. You can specify how many times it can retry and self-debug in case there is an error in the code execution.
- Add self-debugging loop to CodeExecutorAgent by @Ethan0456 in #6306
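The retry-and-self-debug behavior can be sketched in plain Python (illustrative only; the helper names are hypothetical, not the actual CodeExecutorAgent internals):

```python
def run_with_retries(execute, fix, code: str, max_retries_on_error: int = 3) -> str:
    """Run `code`; on failure, ask for a fix and retry, up to the limit.

    `execute` returns (ok, output); `fix` proposes corrected code from the error
    output. Both are stand-ins for the executor and model calls the agent makes.
    """
    attempts = 0
    while True:
        ok, output = execute(code)
        if ok or attempts >= max_retries_on_error:
            return output
        code = fix(code, output)  # self-debug: revise the code using the error output
        attempts += 1

# Toy demo: "bad" code fails once, and the fix makes it succeed.
def execute(code):
    return ("bad" not in code, "ok" if "bad" not in code else "SyntaxError")

def fix(code, error):
    return code.replace("bad", "good")

print(run_with_retries(execute, fix, "bad code"))  # ok
```

With max_retries_on_error=0 the first error output is returned as-is, which mirrors the old single-shot behavior.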
ModelInfo Update
- Adding multiple_system_message on model_info by @SongChiYoung in #6327
New Sample: AutoGen Core + FastAPI with Streaming
AGBench Update
Bug Fixes
- Bugfix: Azure AI Search Tool - fix query type by @jay-thakur in #6331
- fix: ensure serialized messages are passed to LLMStreamStartEvent by @peterj in #6344
- fix: ollama fails when tools use optional args by @peterj in #6343
- Avoid re-registering a message type already registered by @jorge-wonolo in #6354
- Fix: deserialize model_context in AssistantAgent and SocietyOfMindAgent and CodeExecutorAgent by @SongChiYoung in #6337
What's Changed
- Update website 0.5.3 by @ekzhu in #6320
- Update version 0.5.4 by @ekzhu in #6334
- Generalize Continuous SystemMessage merging via model_info["multiple_system_messages"] instead of startswith("gemini-") by @SongChiYoung in #6345
- Add experimental notice to canvas by @ekzhu in #6349
- Added support for exposing GPUs to docker code executor by @millerh1 in #6339
New Contributors
- @ZHANG-EH made their first contribution in #6311
- @ToryPan made their first contribution in #6335
- @millerh1 made their first contribution in #6339
- @jorge-wonolo made their first contribution in #6354
- @abdomohamed made their first contribution in #6191
Full Changelog: python-v0.5.3...python-v0.5.4
python-v0.5.3
What's New
CodeExecutorAgent Update
Now the CodeExecutorAgent can generate and execute code in the same invocation. See the API doc for examples.
- Add code generation support to CodeExecutorAgent by @Ethan0456 in #6098
AssistantAgent Improvement
Now AssistantAgent can be serialized when output_content_type is set, thanks to @abhinav-aegis's new built-in utility module autogen_core.utils for working with JSON schema.
- Aegis structure message by @abhinav-aegis in #6289
Team Improvement
Added an optional parameter emit_team_events to configure whether team events like SelectorSpeakerEvent are emitted through run_stream.
- [FEATURE] Option to emit group chat manager messages in AgentChat by @SongChiYoung in #6303
MCP Improvement
Now the mcp_server_tools factory can reuse a shared session. See the example of AssistantAgent using the Playwright MCP server in the API doc.
Console Improvement
Bug Fixes
- Fix: Azure AI Search Tool Client Lifetime Management by @jay-thakur in #6316
- Make sure thought content is included in handoff context by @ekzhu in #6319
Python Related Changes
- Update website for 0.5.2 by @ekzhu in #6299
- Bump up json-schema-to-pydantic from v0.2.3 to v0.2.4 by @withsmilo in #6300
- minor grammatical fix in docs by @codeblech in #6263
- Pin opentelemetry-proto version by @cheng-tan in #6305
- Update version to 0.5.3 by @ekzhu in #6310
- Add GPT4.1, o4-mini and o3 by @ekzhu in #6314
New Contributors
- @codeblech made their first contribution in #6263
- @amoghmc made their first contribution in #6283
- @abhinav-aegis made their first contribution in #6289
Full Changelog: python-v0.5.2...python-v0.5.3
python-v0.5.2
Python Related Changes
- Update website verison by @ekzhu in #6196
- Clean examples. by @zhanluxianshen in #6203
- Improve SocietyOfMindAgent message handling by @SongChiYoung in #6142
- redundancy code clean for agentchat by @zhanluxianshen in #6190
- added: gemini 2.5 pro preview by @ardentillumina in #6226
- chore: Add powershell path check for code executor by @lspinheiro in #6212
- Fix/transformer aware any modelfamily by @SongChiYoung in #6213
- clean codes notes for autogen-core. by @zhanluxianshen in #6218
- Docker Code Exec delete temp files by @husseinmozannar in #6211
- Fix terminations conditions. by @zhanluxianshen in #6229
- Update json_schema_to_pydantic version and make relaxed requirement on arry item. by @ekzhu in #6209
- Fix sha256_hash docstring by @scovetta in #6236
- fix: typo in usage.md by @apokusin in #6245
- Expose more Task-Centric Memory parameters by @rickyloynd-microsoft in #6246
- Bugfix/azure ai search embedding by @jay-thakur in #6248
- Add note on ModelInfo for Gemini Models by @victordibia in #6259
- [Bugfix] Fix for Issue #6241 - ChromaDB removed IncludeEnum by @mpegram3rd in #6260
- Fix ValueError: Dataclass has a union type error by @ShyamSathish005 in #6266
- Fix publish_message-method() notes by @zhanluxianshen in #6250
- Expose TCM TypedDict classes for apps to use by @rickyloynd-microsoft in #6269
- Update discover.md with adding email agent package by @masquerlin in #6274
- Update multi-agent-debate.ipynb by @larrytin in #6288
- update version 0.5.2 by @ekzhu in #6296
New Contributors
- @ardentillumina made their first contribution in #6226
- @scovetta made their first contribution in #6236
- @apokusin made their first contribution in #6245
- @mpegram3rd made their first contribution in #6260
- @ShyamSathish005 made their first contribution in #6266
- @masquerlin made their first contribution in #6274
- @larrytin made their first contribution in #6288
Full Changelog: python-v0.5.1...python-v0.5.2