diff --git a/content/zh/docs/eino/core_modules/eino_adk/agent_abstract.md b/content/zh/docs/eino/core_modules/eino_adk/agent_abstract.md deleted file mode 100644 index 5fbcaf24e77..00000000000 --- a/content/zh/docs/eino/core_modules/eino_adk/agent_abstract.md +++ /dev/null @@ -1,605 +0,0 @@ ---- -Description: "" -date: "2025-08-06" -lastmod: "" -tags: [] -title: 'Eino ADK: Agent 抽象' -weight: 2 ---- - -todo:更新 eino-examples 代码的链接引用 - -# Agent 定义 - -Eino 定义了 Agent 的基础接口,实现此接口的 Struct 可被视为一个 Agent: - -```go -// github.com/cloudwego/eino/adk/interface.go - -type Agent interface { - Name(ctx context.Context) string - Description(ctx context.Context) string - Run(ctx context.Context, input *AgentInput, opts ...AgentRunOption) *AsyncIterator[*AgentEvent] -} -``` - -
| Method | 说明 |
| --- | --- |
| Name | Agent 的名称,作为 Agent 的标识 |
| Description | Agent 的职能描述信息,主要用于让其他的 Agent 了解和判断该 Agent 的职责或功能 |
| Run | Agent 的核心执行方法,返回一个迭代器,调用者可以通过这个迭代器持续接收 Agent 产生的事件 |
-
-Agent 的实际输出类型会在 AgentEvent 所包含的 AgentOutput 中标明。
-
-## AgentRunOption
-
-AgentRunOption 由 Agent 实现定义,可以在请求维度修改 Agent 配置或者控制 Agent 行为。Eino ADK 提供了 `WrapImplSpecificOptFn` 和 `GetImplSpecificOptions` 两个方法供 Agent 定义、读取 AgentRunOption。例如可以定义 WithModelName,在请求维度修改调用的模型:
-
-```go
-// github.com/cloudwego/eino/adk/call_option.go
-// func WrapImplSpecificOptFn[T any](optFn func(*T)) AgentRunOption
-// func GetImplSpecificOptions[T any](base *T, opts ...AgentRunOption) *T
-
-import "github.com/cloudwego/eino/adk"
-
-type options struct {
- modelName string
-}
-
-func WithModelName(name string) adk.AgentRunOption {
- return adk.WrapImplSpecificOptFn(func(t *options) {
- t.modelName = name
- })
-}
-
-func (m *MyAgent) Run(ctx context.Context, input *adk.AgentInput, opts ...adk.AgentRunOption) *adk.AsyncIterator[*adk.AgentEvent] {
- o := &options{}
- o = adk.GetImplSpecificOptions(o, opts...)
- // run code...
-}
-```
-
-使用 `GetImplSpecificOptions` 方法读取 AgentRunOptions 时,与所需类型(如例子中的 options)不符的 AgentRunOption 会被忽略。
-
-AgentRunOption 具有一个 `DesignateAgent` 方法,调用该方法可以在调用多 Agent 系统时指定 Option 生效的 Agent。
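-
-例如,结合上文的 WithModelName,让 Option 只对名为 WeatherAgent 的 Agent 生效(示意;假设 `DesignateAgent` 接收目标 Agent 名称并返回新的 AgentRunOption,具体签名以 adk 包定义为准):
-
-```go
-// 仅对 WeatherAgent 生效的请求级 Option(示意)
-opt := WithModelName("gpt-4o").DesignateAgent("WeatherAgent")
-
-// 运行多 Agent 系统的入口 Agent 时传入该 Option
-iter := myAgent.Run(ctx, input, opt)
-```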
-
-## AsyncIterator
-
-Agent.Run 返回了一个迭代器 AsyncIterator[*AgentEvent]:
-
-```go
-// github.com/cloudwego/eino/adk/utils.go
-
-type AsyncIterator[T any] struct {
- ...
-}
-
-func (ai *AsyncIterator[T]) Next() (T, bool) {
- ...
-}
-```
-
-它代表一个异步迭代器(异步指生产与消费之间没有同步控制),允许调用者以一种有序、阻塞的方式消费 Agent 在运行过程中产生的一系列事件。
-
-- AsyncIterator 是一个泛型结构体,可以用于迭代任何类型的数据。但在 Agent 接口中,Run 方法返回的迭代器类型被固定为 AsyncIterator[*AgentEvent]。这意味着,你从这个迭代器中获取的每一个元素,都将是一个指向 AgentEvent 对象的指针。AgentEvent 会在后续章节中详细说明。
-- 迭代器的主要交互方式是通过调用其 Next() 方法。这个方法的行为是阻塞式的,每次调用 Next(),程序会暂停执行,直到以下两种情况之一发生:
- - Agent 产生了一个新的 AgentEvent:Next() 方法会返回这个事件,调用者可以立即对其进行处理。
- - Agent 主动关闭了迭代器:当 Agent 不会再产生任何新的事件时(通常是 Agent 运行结束),它会关闭这个迭代器。此时,Next() 调用会结束阻塞并在第二个返回值返回 false,告知调用者迭代已经结束。
-
-AsyncIterator 常在 for 循环中处理:
-
-```go
-iter := myAgent.Run(ctx, input) // get AsyncIterator from Agent.Run
-
-for {
- event, ok := iter.Next()
- if !ok {
- break
- }
- // handle event
-}
-```
-
-AsyncIterator 可以由 `NewAsyncIteratorPair` 创建,该函数返回的另一个参数 AsyncGenerator 用来生产数据:
-
-```go
-// github.com/cloudwego/eino/adk/utils.go
-
-func NewAsyncIteratorPair[T any]() (*AsyncIterator[T], *AsyncGenerator[T])
-```
-
-Agent.Run 返回 AsyncIterator 旨在让调用者实时地接收到 Agent 产生的一系列 AgentEvent,因此 Agent.Run 通常会在 Goroutine 中运行 Agent 从而立刻返回 AsyncIterator 供调用者监听:
-
-```go
-import "github.com/cloudwego/eino/adk"
-
-func (m *MyAgent) Run(ctx context.Context, input *adk.AgentInput, opts ...adk.AgentRunOption) *adk.AsyncIterator[*adk.AgentEvent] {
- // handle input
- iter, gen := adk.NewAsyncIteratorPair[*adk.AgentEvent]()
- go func() {
- defer func() {
- // recover code
- gen.Close()
- }()
- // agent run code
- // gen.Send(event)
- }()
- return iter
-}
-```
-
-## AgentWithOptions
-
-使用 AgentWithOptions 方法可以对 Eino ADK Agent 做一些通用配置:
-
-```go
-// github.com/cloudwego/eino/adk/flow.go
-func AgentWithOptions(ctx context.Context, agent Agent, opts ...AgentOption) Agent
-```
-
-比如 WithDisallowTransferToParent、WithHistoryRewriter 等,具体功能将在相关的章节中详细说明。
-
-# AgentEvent
-
-AgentEvent 是 Agent 在其运行过程中产生的核心事件数据结构。其中包含了 Agent 的元信息、输出、行为和报错:
-
-```go
-// github.com/cloudwego/eino/adk/interface.go
-
-type AgentEvent struct {
- AgentName string
-
- RunPath []string
-
- Output *AgentOutput
-
- Action *AgentAction
-
- Err error
-}
-```
-
-## AgentName & RunPath
-
-AgentEvent 包含的 AgentName 和 RunPath 字段是由框架自动填充的,它们提供了关于事件来源的重要上下文信息,尤其是在复杂的、由多个 Agent 构成的系统中。
-
-- AgentName 标明了是哪一个 Agent 实例产生了当前的 AgentEvent 。
-- RunPath 记录了到达当前 Agent 的完整调用链路。RunPath 是一个字符串切片,它按顺序记录了从最初的入口 Agent 到当前产生事件的 Agent 的所有 AgentName。
-
-## Output
-
-AgentOutput 封装了 Agent 产生的输出。Message 输出被设置在 MessageOutput 字段中,其他类型的输出被设置在 CustomizedOutput 字段中:
-
-```go
-// github.com/cloudwego/eino/adk/interface.go
-
-type AgentOutput struct {
- MessageOutput *MessageVariant
-
- CustomizedOutput any
-}
-
-type MessageVariant struct {
- IsStreaming bool
-
- Message Message
- MessageStream MessageStream
- // message role: Assistant or Tool
- Role schema.RoleType
- // only used when Role is Tool
- ToolName string
-}
-```
-
-MessageOutput 字段的类型 MessageVariant 是一个核心数据结构,以下是其主要功能的分解说明:
-
-1. 统一处理流式与非流式消息
-
-Agent 的输出可能是两种形式:
-
-- 非流式:一次性返回一个完整的消息(Message)。
-- 流式:随着时间的推移,逐步返回一系列消息片段,最终构成一个完整的消息(MessageStream)。
-
-IsStreaming 是一个标志位。它的值为 true 表示当前 MessageVariant 包含的是一个流式消息(应从 MessageStream 读取),为 false 则表示包含的是一个非流式消息(应从 Message 读取)。
-
-2. 提供便捷的元数据访问
-
-Message 结构体内部包含了一些重要的元信息,如消息的 Role(Assistant 或 Tool),为了方便快速地识别消息类型和来源, MessageVariant 将这些常用的元数据提升到了顶层:
-
-- Role:消息的角色。
-- ToolName:如果消息角色是 Tool ,这个字段会直接提供工具的名称。
-
-这样做的好处是,代码在需要根据消息类型进行路由或决策时,无需深入解析 Message 对象的具体内容,可以直接从 MessageVariant 的顶层字段获取所需信息,从而简化了逻辑,提高了代码的可读性和效率。
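-
-例如,消费 AgentEvent 时可以根据 IsStreaming 分别处理两种形式(示意;假设 MessageStream 为 *schema.StreamReader[*schema.Message],通过 Recv 读取、以 io.EOF 结束):
-
-```go
-mv := event.Output.MessageOutput
-if mv.IsStreaming {
-    // 流式:逐个读取消息片段
-    for {
-        chunk, err := mv.MessageStream.Recv()
-        if errors.Is(err, io.EOF) {
-            break
-        }
-        if err != nil {
-            log.Fatal(err)
-        }
-        fmt.Print(chunk.Content)
-    }
-} else {
-    // 非流式:直接读取完整消息
-    fmt.Println(mv.Message.Content)
-}
-```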
-
-## AgentAction
-
-Agent 产生包含 AgentAction 的 Event 可以控制多 Agent 协作,比如立刻退出、中断、跳转等:
-
-```go
-// github.com/cloudwego/eino/adk/interface.go
-
-type AgentAction struct {
- Exit bool
-
- Interrupted *InterruptInfo
-
- TransferToAgent *TransferToAgentAction
-
- CustomizedAction any
-}
-```
-
-比如:当 Agent 产生 Exit Action 时,Multi-Agent 会立刻退出;当 Agent 产生 Transfer Action 时,会跳转到目标 Agent 运行。Action 的具体用法会在相应功能介绍中说明。
-
-# 多 Agent 协作
-
-Eino ADK 提供了多 Agent 协作能力,包括由 Agent 在运行时动态决定将任务移交给其他 Agent,或者预先决定好 Agent 运行顺序。
-
-## 上下文传递
-
-在构建多 Agent 系统时,让不同 Agent 之间高效、准确地共享信息至关重要。Eino ADK 提供了两种核心的上下文传递机制,以满足不同的协作需求: History 和 SessionValues。
-
-### History
-
-多 Agent 系统中每一个 Agent 产生的 AgentEvent 都会被保存到 History 中,在调用一个新 Agent 时(Workflow/ Transfer),History 中的 AgentEvent 会被转换并拼接到 AgentInput 中。
-
-默认情况下,其他 Agent 的 Assistant 或 Tool Message,被转换为 User Message。这相当于在告诉当前的 LLM:“刚才, Agent_A 调用了 some_tool ,返回了 some_result 。现在,轮到你来决策了。”
-
-通过这种方式,其他 Agent 的行为被当作了提供给当前 Agent 的“外部信息”或“事实陈述”,而不是它自己的行为,从而避免了 LLM 的上下文混乱。
-
-
-
-在 Eino ADK 中,当为一个 Agent 构建 AgentInput 时,会对 History 中的 Event 进行过滤,确保 Agent 只会接收到当前 Agent 的直接或间接父 Agent 产生的 Event。换句话说,只有当 Event 的 RunPath “属于”当前 Agent 的 RunPath 时,该 Event 才会参与构建 Agent 的 Input。
-
-> 💡
-> RunPathA “属于” RunPathB 定义为 RunPathA 与 RunPathB 相同或者 RunPathA 是 RunPathB 的前缀
-
-#### WithHistoryRewriter
-
-通过 AgentWithOptions 可以自定义 Agent 从 History 中生成 AgentInput 的方式:
-
-```go
-// github.com/cloudwego/eino/adk/flow.go
-
-type HistoryRewriter func(ctx context.Context, entries []*HistoryEntry) ([]Message, error)
-
-func WithHistoryRewriter(h HistoryRewriter) AgentOption
-```
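-
-下面是一个结合 AgentWithOptions 使用 WithHistoryRewriter 的示意(myAgent 为假设已创建好的 Agent;该 HistoryRewriter 忽略 History,始终以一条全新的任务描述作为 AgentInput):
-
-```go
-rewritten := adk.AgentWithOptions(ctx, myAgent, adk.WithHistoryRewriter(
-    func(ctx context.Context, entries []*adk.HistoryEntry) ([]adk.Message, error) {
-        // 不使用历史事件,只返回一条新的 User Message
-        return []adk.Message{schema.UserMessage("忽略之前的对话,请重新总结当前任务")}, nil
-    },
-))
-// rewritten 即包装后的 Agent,可继续通过 Runner 运行
-```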
-
-### SessionValues
-
-SessionValues 是在一次运行中持续存在的全局临时 KV 存储,一次运行中的任何 Agent 可以在任何时间读写 SessionValues。Eino ADK 提供了三种方法访问 SessionValues:
-
-```go
-// github.com/cloudwego/eino/adk/runctx.go
-// 获取全部 SessionValues
-func GetSessionValues(ctx context.Context) map[string]any
-// 设置 SessionValues
-func SetSessionValue(ctx context.Context, key string, value any)
-// 指定 key 获取 SessionValues 中的一个值,key 不存在时第二个返回值为 false,否则为 true
-func GetSessionValue(ctx context.Context, key string) (any, bool)
-```
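-
-一个读写 SessionValues 的示意(两段代码可以位于同一次运行中的不同 Agent 或 Tool 内):
-
-```go
-// 写入:任意 Agent / Tool 中
-adk.SetSessionValue(ctx, "UserCity", "Beijing")
-
-// 读取:同一次运行中的其他位置
-if city, ok := adk.GetSessionValue(ctx, "UserCity"); ok {
-    fmt.Println("user city:", city)
-}
-```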
-
-## Transfer SubAgents
-
-Agent 运行时产生包含 TransferAction 的 AgentEvent 后,Eino ADK 会调用 Action 指定的 Agent,被调用的 Agent 被称为子 Agent(SubAgent)。TransferAction 可以使用 `NewTransferToAgentAction` 快速创建:
-
-```go
-import "github.com/cloudwego/eino/adk"
-
-action := adk.NewTransferToAgentAction("dest agent name")
-```
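-
-在自定义 Agent 中,通常会把该 Action 包装进 AgentEvent 并通过 AsyncGenerator 发送给调用方(示意;假设 NewTransferToAgentAction 返回 *AgentAction,gen 为 NewAsyncIteratorPair 创建的生成器):
-
-```go
-gen.Send(&adk.AgentEvent{
-    AgentName: "RouterAgent",
-    Action:    adk.NewTransferToAgentAction("WeatherAgent"), // 跳转到 WeatherAgent
-})
-```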
-
-为了让 Eino ADK 在接收到 TransferAction 后可以找到子 Agent 实例并运行,在运行前需要先调用 `SetSubAgents` 将可能的子 Agent 注册到 Eino ADK 中:
-
-```go
-// github.com/cloudwego/eino/adk/flow.go
-func SetSubAgents(ctx context.Context, agent Agent, subAgents []Agent) (Agent, error)
-```
-
-> 💡
-> Transfer 的含义是将任务**移交**给子 Agent,而不是委托或者分配,因此:
->
-> 1. 区别于 ToolCall,通过 Transfer 调用子 Agent,子 Agent 运行结束后,不会再调用父 Agent 总结内容或进行下一步操作。
-> 2. 调用子 Agent 时,子 Agent 的输入仍然是原始输入,父 Agent 的输出会作为上下文供子 Agent 参考。
-
-以上描述中,产生 TransferAction 的 Agent 天然清楚自己的子 Agent 有哪些;另外一些 Agent 则需要根据不同场景配置不同的子 Agent,比如 Eino ADK 提供的 ChatModelAgent,它是一个通用 Agent 模板,需要根据业务实际场景配置子 Agent。这样的 Agent 需要能动态地注册父子 Agent,Eino 定义了 `OnSubAgents` 接口来支持此功能:
-
-```go
-// github.com/cloudwego/eino/adk/interface.go
-type OnSubAgents interface {
- OnSetSubAgents(ctx context.Context, subAgents []Agent) error
- OnSetAsSubAgent(ctx context.Context, parent Agent) error
-
- OnDisallowTransferToParent(ctx context.Context) error
-}
-```
-
-如果 Agent 实现了 `OnSubAgents` 接口,`SetSubAgents` 中会调用相应的方法向 Agent 注册。
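-
-一个只实现 OnSubAgents 部分的自定义 Agent 示意(MyRouter 为假设的类型,省略了 Name / Description / Run 的实现):
-
-```go
-type MyRouter struct {
-    subAgents                []adk.Agent
-    parent                   adk.Agent
-    disallowTransferToParent bool
-}
-
-func (r *MyRouter) OnSetSubAgents(ctx context.Context, subAgents []adk.Agent) error {
-    r.subAgents = subAgents // 记录可 Transfer 到的子 Agent
-    return nil
-}
-
-func (r *MyRouter) OnSetAsSubAgent(ctx context.Context, parent adk.Agent) error {
-    r.parent = parent // 记录父 Agent
-    return nil
-}
-
-func (r *MyRouter) OnDisallowTransferToParent(ctx context.Context) error {
-    r.disallowTransferToParent = true // 之后不再 Transfer 回父 Agent
-    return nil
-}
-```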
-
-接下来以一个多功能对话 Agent 演示 Transfer 能力,目标是搭建一个可以查询天气或者与用户对话的 Agent,Agent 结构如下:
-
-
-
-三个 Agent 均使用 ChatModelAgent 实现:
-
-```go
-import (
- "context"
- "fmt"
- "log"
- "os"
-
- "github.com/cloudwego/eino-ext/components/model/openai"
- "github.com/cloudwego/eino/adk"
- "github.com/cloudwego/eino/components/model"
- "github.com/cloudwego/eino/components/tool"
- "github.com/cloudwego/eino/components/tool/utils"
- "github.com/cloudwego/eino/compose"
-)
-
-func newChatModel() model.ToolCallingChatModel {
- cm, err := openai.NewChatModel(context.Background(), &openai.ChatModelConfig{
- APIKey: os.Getenv("OPENAI_API_KEY"),
- Model: os.Getenv("OPENAI_MODEL"),
- })
- if err != nil {
- log.Fatal(err)
- }
- return cm
-}
-
-type GetWeatherInput struct {
- City string `json:"city"`
-}
-
-func NewWeatherAgent() adk.Agent {
- weatherTool, err := utils.InferTool(
- "get_weather",
- "Gets the current weather for a specific city.", // English description
- func(ctx context.Context, input *GetWeatherInput) (string, error) {
- return fmt.Sprintf(`the temperature in %s is 25°C`, input.City), nil
- },
- )
- if err != nil {
- log.Fatal(err)
- }
-
- a, err := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
- Name: "WeatherAgent",
- Description: "This agent can get the current weather for a given city.",
- Instruction: "Your sole purpose is to get the current weather for a given city by using the 'get_weather' tool. After calling the tool, report the result directly to the user.",
- Model: newChatModel(),
- ToolsConfig: adk.ToolsConfig{
- ToolsNodeConfig: compose.ToolsNodeConfig{
- Tools: []tool.BaseTool{weatherTool},
- },
- },
- })
- if err != nil {
- log.Fatal(err)
- }
- return a
-}
-
-func NewChatAgent() adk.Agent {
- a, err := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
- Name: "ChatAgent",
- Description: "A general-purpose agent for handling conversational chat.", // English description
- Instruction: "You are a friendly conversational assistant. Your role is to handle general chit-chat and answer questions that are not related to any specific tool-based tasks.",
- Model: newChatModel(),
- })
- if err != nil {
- log.Fatal(err)
- }
- return a
-}
-
-func NewRouterAgent() adk.Agent {
- a, err := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
- Name: "RouterAgent",
- Description: "A manual router that transfers tasks to other expert agents.",
- Instruction: `You are an intelligent task router. Your responsibility is to analyze the user's request and delegate it to the most appropriate expert agent.If no Agent can handle the task, simply inform the user it cannot be processed.`,
- Model: newChatModel(),
- })
- if err != nil {
- log.Fatal(err)
- }
- return a
-}
-```
-
-之后使用 Eino ADK 的 Transfer 能力搭建 Multi-Agent 并运行。ChatModelAgent 实现了 OnSubAgents 接口,adk.SetSubAgents 方法中会使用此接口向 ChatModelAgent 注册父/子 Agent,无需用户自行处理 TransferAction 的生成:
-
-```go
-import (
- "context"
- "fmt"
- "log"
- "os"
-
- "github.com/cloudwego/eino/adk"
-
- "github.com/cloudwego/eino-examples/adk/intro/transfer/internal"
-)
-
-func main() {
- weatherAgent := internal.NewWeatherAgent()
- chatAgent := internal.NewChatAgent()
- routerAgent := internal.NewRouterAgent()
-
- ctx := context.Background()
- a, err := adk.SetSubAgents(ctx, routerAgent, []adk.Agent{chatAgent, weatherAgent})
- if err != nil {
- log.Fatal(err)
- }
-
- runner := adk.NewRunner(ctx, adk.RunnerConfig{
- Agent: a,
- })
-
- // query weather
- println("\n\n>>>>>>>>>query weather<<<<<<<<<")
- iter := runner.Query(ctx, "What's the weather in Beijing?")
- for {
- event, ok := iter.Next()
- if !ok {
- break
- }
- if event.Err != nil {
- log.Fatal(event.Err)
- }
- if event.Action != nil {
- fmt.Printf("\nAgent[%s]: transfer to %+v\n\n======\n", event.AgentName, event.Action.TransferToAgent.DestAgentName)
- } else {
- fmt.Printf("\nAgent[%s]:\n%+v\n\n======\n", event.AgentName, event.Output.MessageOutput.Message)
- }
- }
-
- // failed to route
- println("\n\n>>>>>>>>>failed to route<<<<<<<<<")
- iter = runner.Query(ctx, "Book me a flight from New York to London tomorrow.")
- for {
- event, ok := iter.Next()
- if !ok {
- break
- }
- if event.Err != nil {
- log.Fatal(event.Err)
- }
- if event.Action != nil {
- fmt.Printf("\nAgent[%s]: transfer to %+v\n\n======\n", event.AgentName, event.Action.TransferToAgent.DestAgentName)
- } else {
- fmt.Printf("\nAgent[%s]:\n%+v\n\n======\n", event.AgentName, event.Output.MessageOutput.Message)
- }
- }
-}
-```
-
-得到结果:
-
-```yaml
->>>>>>>>>query weather<<<<<<<<<
-Agent[RouterAgent]:
-assistant:
-tool_calls:
-{Index:
-```

| 协助方式 | 描述 |
| --- | --- |
| Transfer | 直接将任务转让给另外一个 Agent,本 Agent 则执行结束后退出,不关心转让 Agent 的任务执行状态 |
| ToolCall(AgentAsTool) | 将 Agent 当成 ToolCall 调用,等待 Agent 的响应,并可获取被调用 Agent 的输出结果,进行下一轮处理 |

| 上下文策略 | 描述 |
| --- | --- |
| 上游 Agent 全对话 | 获取本 Agent 的上游 Agent 的完整对话记录 |
| 全新任务描述 | 忽略掉上游 Agent 的完整对话记录,给出一个全新的任务总结,作为子 Agent 的 AgentInput 输入 |

| 决策自主性 | 描述 |
| --- | --- |
| 自主决策 | 在 Agent 内部,基于其可选的下游 Agent,如需协助时,自主选择下游 Agent 进行协助。一般来说,Agent 内部是基于 LLM 进行决策,不过即使是基于预设逻辑进行选择,从 Agent 外部看依然视为自主决策 |
| 预设决策 | 事先预设好一个 Agent 执行任务后的下一个 Agent。Agent 的执行顺序是事先确定、可预测的 |
+
+在 Eino ADK 中,当为一个 Agent 构建 AgentInput 时,会对 History 中的 Event 进行过滤,确保 Agent 只会接收到当前 Agent 的直接或间接父 Agent 产生的 Event。换句话说,只有当 Event 的 RunPath “属于”当前 Agent 的 RunPath 时,该 Event 才会参与构建 Agent 的 Input。
+
+> 💡
+> RunPathA “属于” RunPathB 定义为 RunPathA 与 RunPathB 相同或者 RunPathA 是 RunPathB 的前缀。例如 RouterAgent 产生的 Event(RunPath 为 [RouterAgent])属于 WeatherAgent 的 RunPath([RouterAgent, WeatherAgent]),因此会参与构建 WeatherAgent 的 AgentInput。
+
+下面表格中给出各种编排模式下,Agent 执行时的具体 RunPath:
+
+
+
+三个 Agent 均使用 ChatModelAgent 实现:
+
+```go
+import (
+ "context"
+ "fmt"
+ "log"
+ "os"
+
+ "github.com/cloudwego/eino-ext/components/model/openai"
+ "github.com/cloudwego/eino/adk"
+ "github.com/cloudwego/eino/components/model"
+ "github.com/cloudwego/eino/components/tool"
+ "github.com/cloudwego/eino/components/tool/utils"
+ "github.com/cloudwego/eino/compose"
+)
+
+func newChatModel() model.ToolCallingChatModel {
+ cm, err := openai.NewChatModel(context.Background(), &openai.ChatModelConfig{
+ APIKey: os.Getenv("OPENAI_API_KEY"),
+ Model: os.Getenv("OPENAI_MODEL"),
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ return cm
+}
+
+type GetWeatherInput struct {
+ City string `json:"city"`
+}
+
+func NewWeatherAgent() adk.Agent {
+ weatherTool, err := utils.InferTool(
+ "get_weather",
+ "Gets the current weather for a specific city.", // English description
+ func(ctx context.Context, input *GetWeatherInput) (string, error) {
+ return fmt.Sprintf(`the temperature in %s is 25°C`, input.City), nil
+ },
+ )
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ a, err := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
+ Name: "WeatherAgent",
+ Description: "This agent can get the current weather for a given city.",
+ Instruction: "Your sole purpose is to get the current weather for a given city by using the 'get_weather' tool. After calling the tool, report the result directly to the user.",
+ Model: newChatModel(),
+ ToolsConfig: adk.ToolsConfig{
+ ToolsNodeConfig: compose.ToolsNodeConfig{
+ Tools: []tool.BaseTool{weatherTool},
+ },
+ },
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ return a
+}
+
+func NewChatAgent() adk.Agent {
+ a, err := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
+ Name: "ChatAgent",
+ Description: "A general-purpose agent for handling conversational chat.", // English description
+ Instruction: "You are a friendly conversational assistant. Your role is to handle general chit-chat and answer questions that are not related to any specific tool-based tasks.",
+ Model: newChatModel(),
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ return a
+}
+
+func NewRouterAgent() adk.Agent {
+ a, err := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
+ Name: "RouterAgent",
+ Description: "A manual router that transfers tasks to other expert agents.",
+ Instruction: `You are an intelligent task router. Your responsibility is to analyze the user's request and delegate it to the most appropriate expert agent.If no Agent can handle the task, simply inform the user it cannot be processed.`,
+ Model: newChatModel(),
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ return a
+}
+```
+
+之后使用 Eino ADK 的 Transfer 能力搭建 Multi-Agent 并运行。ChatModelAgent 实现了 OnSubAgents 接口,adk.SetSubAgents 方法中会使用此接口向 ChatModelAgent 注册父/子 Agent,无需用户自行处理 TransferAction 的生成:
+
+```go
+import (
+ "context"
+ "fmt"
+ "log"
+ "os"
+
+ "github.com/cloudwego/eino/adk"
+)
+
+func main() {
+ weatherAgent := NewWeatherAgent()
+ chatAgent := NewChatAgent()
+ routerAgent := NewRouterAgent()
+
+ ctx := context.Background()
+ a, err := adk.SetSubAgents(ctx, routerAgent, []adk.Agent{chatAgent, weatherAgent})
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ runner := adk.NewRunner(ctx, adk.RunnerConfig{
+ Agent: a,
+ })
+
+ // query weather
+ println("\n\n>>>>>>>>>query weather<<<<<<<<<")
+ iter := runner.Query(ctx, "What's the weather in Beijing?")
+ for {
+ event, ok := iter.Next()
+ if !ok {
+ break
+ }
+ if event.Err != nil {
+ log.Fatal(event.Err)
+ }
+ if event.Action != nil {
+ fmt.Printf("\nAgent[%s]: transfer to %+v\n\n======\n", event.AgentName, event.Action.TransferToAgent.DestAgentName)
+ } else {
+ fmt.Printf("\nAgent[%s]:\n%+v\n\n======\n", event.AgentName, event.Output.MessageOutput.Message)
+ }
+ }
+
+ // failed to route
+ println("\n\n>>>>>>>>>failed to route<<<<<<<<<")
+ iter = runner.Query(ctx, "Book me a flight from New York to London tomorrow.")
+ for {
+ event, ok := iter.Next()
+ if !ok {
+ break
+ }
+ if event.Err != nil {
+ log.Fatal(event.Err)
+ }
+ if event.Action != nil {
+ fmt.Printf("\nAgent[%s]: transfer to %+v\n\n======\n", event.AgentName, event.Action.TransferToAgent.DestAgentName)
+ } else {
+ fmt.Printf("\nAgent[%s]:\n%+v\n\n======\n", event.AgentName, event.Output.MessageOutput.Message)
+ }
+ }
+}
+```
+
+运行结果:
+
+```yaml
+>>>>>>>>>query weather<<<<<<<<<
+Agent[RouterAgent]:
+assistant:
+tool_calls:
+{Index:
+```
+
+```go
+// github.com/cloudwego/eino/adk/prebuilt/supervisor.go
+
+type SupervisorConfig struct {
+ Supervisor adk.Agent
+ SubAgents []adk.Agent
+}
+
+func NewSupervisor(ctx context.Context, conf *SupervisorConfig) (adk.Agent, error) {
+ subAgents := make([]adk.Agent, 0, len(conf.SubAgents))
+ supervisorName := conf.Supervisor.Name(ctx)
+ for _, subAgent := range conf.SubAgents {
+ subAgents = append(subAgents, adk.AgentWithDeterministicTransferTo(ctx, &adk.DeterministicTransferConfig{
+ Agent: subAgent,
+ ToAgentNames: []string{supervisorName},
+ }))
+ }
+
+ return adk.SetSubAgents(ctx, conf.Supervisor, subAgents)
+}
+```
+
+## Workflow Agents
+
+WorkflowAgent 支持以代码中预设好的流程运行 Agents。Eino ADK 提供了三种基础 Workflow Agent:Sequential、Parallel、Loop,它们之间可以互相嵌套以完成更复杂的任务。
+
+默认情况下,Workflow 中每个 Agent 的输入由 History 章节中介绍的方式生成,可以通过 WithHistoryRewriter 自定义 AgentInput 的生成方式。
+
+当 Agent 产生 ExitAction Event 后,Workflow Agent 会立刻退出,无论之后有没有其他需要运行的 Agent。
+
+详解与用例参考请见:[Eino ADK: Workflow Agents](/zh/docs/eino/core_modules/eino_adk/agent_implementation/workflow)
+
+### SequentialAgent
+
+SequentialAgent 会按照你提供的顺序,依次执行一系列 Agent:
+
+
+
+```go
+type SequentialAgentConfig struct {
+ Name string
+ Description string
+ SubAgents []Agent
+}
+
+func NewSequentialAgent(ctx context.Context, config *SequentialAgentConfig) (Agent, error)
+```
+
+### LoopAgent
+
+LoopAgent 基于 SequentialAgent 实现,在 SequentialAgent 运行完成后,再次从头运行:
+
+
+
+```go
+type LoopAgentConfig struct {
+ Name string
+ Description string
+ SubAgents []Agent
+
+ MaxIterations int // 最大循环次数
+}
+
+func NewLoopAgent(ctx context.Context, config *LoopAgentConfig) (Agent, error)
+```
+
+### ParallelAgent
+
+ParallelAgent 会并发运行若干 Agent:
+
+
+
+```go
+type ParallelAgentConfig struct {
+ Name string
+ Description string
+ SubAgents []Agent
+}
+
+func NewParallelAgent(ctx context.Context, config *ParallelAgentConfig) (Agent, error)
+```
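+
+三种 Workflow Agent 可以互相嵌套,下面是一个组合示意(searchAgent、weatherAgent、writeAgent、reviewAgent 均为假设已创建好的 Agent):
+
+```go
+func newPipeline(ctx context.Context, searchAgent, weatherAgent, writeAgent, reviewAgent adk.Agent) (adk.Agent, error) {
+    // 并行收集信息
+    gather, err := adk.NewParallelAgent(ctx, &adk.ParallelAgentConfig{
+        Name:      "GatherInfo",
+        SubAgents: []adk.Agent{searchAgent, weatherAgent},
+    })
+    if err != nil {
+        return nil, err
+    }
+
+    // 循环执行 写作 -> 评审,最多 3 轮
+    loop, err := adk.NewLoopAgent(ctx, &adk.LoopAgentConfig{
+        Name:          "WriteReviewLoop",
+        SubAgents:     []adk.Agent{writeAgent, reviewAgent},
+        MaxIterations: 3,
+    })
+    if err != nil {
+        return nil, err
+    }
+
+    // 顺序执行:先收集信息,再进入写作评审循环
+    return adk.NewSequentialAgent(ctx, &adk.SequentialAgentConfig{
+        Name:      "ArticlePipeline",
+        SubAgents: []adk.Agent{gather, loop},
+    })
+}
+```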
+
+## AgentAsTool
+
+当 Agent 运行仅需要明确清晰的指令,而非完整运行上下文(History)时,该 Agent 可以转换为 Tool 进行调用:
+
+```go
+func NewAgentTool(_ context.Context, agent Agent, options ...AgentToolOption) tool.BaseTool
+```
+
+转换为 Tool 后,Agent 可以被支持 function calling 的 ChatModel 调用,也可以被所有基于 LLM 驱动的 Agent 调用,调用方式取决于 Agent 实现。
diff --git a/content/zh/docs/eino/core_modules/eino_adk/agent_extension.md b/content/zh/docs/eino/core_modules/eino_adk/agent_extension.md
index e0d908abfb4..35ba8da44c4 100644
--- a/content/zh/docs/eino/core_modules/eino_adk/agent_extension.md
+++ b/content/zh/docs/eino/core_modules/eino_adk/agent_extension.md
@@ -1,23 +1,35 @@
---
Description: ""
-date: "2025-08-06"
+date: "2025-09-30"
lastmod: ""
tags: []
-title: 'Eino ADK: Agent 扩展'
-weight: 3
+title: 'Eino ADK: Agent Runner 与扩展'
+weight: 6
---
-# Agent Runner
+## Agent Runner
-Runner 是 Eino ADK 中负责执行 Agent 的核心引擎。它的主要作用是管理和控制 Agent 的整个生命周期,如处理多 Agent 协作,保存传递上下文等,interrupt、callback 等切面能力也均依赖 Runner 实现。任何 Agent 都应通过 Runner 来运行。
+### 定义
-# Interrupt & Resume
+Runner 是 Eino ADK 中负责执行 Agent 的核心引擎。它的主要作用是管理和控制 Agent 的整个生命周期,如处理多 Agent 协作,保存传递上下文等,interrupt、callback 等切面能力也均依赖 Runner 实现。
-该功能允许一个正在运行的 Agent 主动中断其执行,保存当前状态,并在稍后从中断点恢复执行。这对于处理需要外部输入、长时间等待或可暂停的任务流非常有用。
+任何 Agent 都应通过 Runner 来运行。
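+
+一个最小的运行示意(myAgent 为假设已创建好的 Agent):
+
+```go
+runner := adk.NewRunner(ctx, adk.RunnerConfig{Agent: myAgent})
+
+iter := runner.Query(ctx, "What's the weather in Beijing?")
+for {
+    event, ok := iter.Next()
+    if !ok {
+        break
+    }
+    // handle event
+}
+```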
-## Interrupted Action
+### Interrupt & Resume
-在 Agent 的执行过程中,可以通过产生包含 Interrupted Action 的 AgentEvent 来主动中断 Runner 的运行:
+Agent Runner 提供运行时中断与恢复的功能,该功能允许一个正在运行的 Agent 主动中断其执行并保存当前状态,支持从中断点恢复执行。该功能常用于 Agent 处理流程中需要外部输入、长时间等待或可暂停等场景。
+
+下面将对一次中断到恢复过程中的三个关键点进行介绍:
+
+1. Interrupted Action:由 Agent 抛出中断事件,Agent Runner 拦截
+2. Checkpoint:Agent Runner 拦截事件后保存当前运行状态
+3. Resume:运行条件重新 ready 后,由 Agent Runner 从断点恢复运行
+
+### Interrupted Action
+
+在 Agent 的执行过程中,可以通过产生包含 Interrupted Action 的 AgentEvent 来主动中断 Runner 的运行。
+
+当 Event 中的 Interrupted 不为空时,Agent Runner 便会认为发生中断:
```go
// github.com/cloudwego/eino/adk/interface.go
@@ -38,7 +50,16 @@ type InterruptInfo struct {
1. 会被传递给调用者,可以通过该信息向调用者说明中断原因等
2. 如果后续需要恢复 Agent 运行,InterruptInfo 会在恢复时重新传递给中断的 Agent,Agent 可以依据该信息恢复运行
-## 状态持久化 (Checkpoint)
+```go
+// 例如 ChatModelAgent 中断时,会发送如下的 AgentEvent:
+h.Send(&AgentEvent{AgentName: h.agentName, Action: &AgentAction{
+ Interrupted: &InterruptInfo{
+ Data: &ChatModelAgentInterruptInfo{Data: data, Info: info},
+ },
+}})
+```
+
+### 状态持久化 (Checkpoint)
当 Runner 捕获到这个带有 Interrupted Action 的 Event 时,会立即终止当前的执行流程。 如果:
@@ -70,7 +91,7 @@ Runner 在终止运行后会将当前运行状态(原始输入、对话历史
> 💡
> 为了保存 interface 中数据的原本类型,Eino ADK 使用 gob([https://pkg.go.dev/encoding/gob](https://pkg.go.dev/encoding/gob))序列化运行状态。因此在使用自定义类型时需要提前使用 gob.Register 或 gob.RegisterName 注册类型(更推荐后者,前者使用路径加类型名作为默认名字,因此类型的位置和名字均不能发生变更)。Eino 会自动注册框架内置的类型。
-## Resume
+### Resume
运行中断,调用 Runner 的 Resume 接口传入中断时的 CheckPointID 可以恢复运行:
@@ -97,7 +118,3 @@ type ResumeInfo struct {
```
Resume 如果向 Agent 传入新信息,可以定义 AgentRunOption,在调用 Runner.Resume 时传入。
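+
+下面给出一次中断到恢复的调用示意(myAgent、myStore 为假设的 Agent 与 CheckPointStore 实现;假设 Resume 的签名为 Resume(ctx, checkPointID, opts ...AgentRunOption) 并返回事件迭代器与 error,具体以 adk 包定义为准):
+
+```go
+runner := adk.NewRunner(ctx, adk.RunnerConfig{
+    Agent:           myAgent,
+    CheckPointStore: myStore, // 中断时的运行状态会保存到这里
+})
+
+// 首次运行时指定 CheckPointID,供之后恢复使用
+iter := runner.Query(ctx, "recommend a book to me", adk.WithCheckPointID("1"))
+// ... 消费事件,收到 Interrupted Action 后本次运行结束
+
+// 条件就绪后,携带同一个 CheckPointID 恢复运行,可同时传入自定义 AgentRunOption 携带新信息
+resumed, err := runner.Resume(ctx, "1")
+if err != nil {
+    log.Fatal(err)
+}
+// 继续消费 resumed 中的事件,方式与 Query 返回的迭代器相同
+```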
-
-# Callback
-
-TODO
diff --git a/content/zh/docs/eino/core_modules/eino_adk/agent_implementation/_index.md b/content/zh/docs/eino/core_modules/eino_adk/agent_implementation/_index.md
index d6a93850ad9..261f75ee708 100644
--- a/content/zh/docs/eino/core_modules/eino_adk/agent_implementation/_index.md
+++ b/content/zh/docs/eino/core_modules/eino_adk/agent_implementation/_index.md
@@ -1,12 +1,12 @@
---
Description: ""
-date: "2025-08-06"
+date: "2025-09-30"
lastmod: ""
tags: []
title: 'Eino ADK: Agent 实现'
-weight: 4
+weight: 5
---
用户可以通过实现 Agent 接口自定义 Agent。自定义 Agent 建议严格遵守上述规则,在应用、迭代、合作中可以带来便利。
-简单自定义 Agent 可以参考: github.com/cloudwego/eino-examples/adk/intro/custom/myagent.go
+简单自定义 Agent 可以参考: [https://github.com/cloudwego/eino-examples/blob/main/adk/intro/custom/myagent.go](https://github.com/cloudwego/eino-examples/blob/main/adk/intro/custom/myagent.go)
diff --git a/content/zh/docs/eino/core_modules/eino_adk/agent_implementation/chat_model.md b/content/zh/docs/eino/core_modules/eino_adk/agent_implementation/chat_model.md
new file mode 100644
index 00000000000..5bab1dee30b
--- /dev/null
+++ b/content/zh/docs/eino/core_modules/eino_adk/agent_implementation/chat_model.md
@@ -0,0 +1,601 @@
+---
+Description: ""
+date: "2025-09-30"
+lastmod: ""
+tags: []
+title: 'Eino ADK: ChatModelAgent'
+weight: 1
+---
+
+## ChatModelAgent 概述
+
+### Import Path
+
+```
+import "github.com/cloudwego/eino/adk"
+```
+
+### 什么是 ChatModelAgent
+
+`ChatModelAgent` 是 Eino ADK 中的一个核心预构建的 Agent,它封装了与大语言模型(LLM)进行交互、并支持使用工具来完成任务的复杂逻辑。
+
+### ChatModelAgent ReAct 模式
+
+`ChatModelAgent` 内使用了 [ReAct](https://react-lm.github.io/) 模式,该模式旨在通过让 ChatModel 进行显式的、一步一步的“思考”来解决复杂问题。为 `ChatModelAgent` 配置了工具后,它在内部的执行流程就遵循了 ReAct 模式:
+
+- 调用 ChatModel(Reason)
+- LLM 返回工具调用请求(Action)
+- ChatModelAgent 执行工具(Act)
+- 将工具结果返回给 ChatModel(Observation),然后开始新的循环,直到 ChatModel 判断不再需要调用 Tool 为止。
+
+当没有配置工具时,`ChatModelAgent` 退化为一次 ChatModel 调用。
+
+
+
+可以通过 ToolsConfig 为 ChatModelAgent 配置 Tool:
+
+```go
+// github.com/cloudwego/eino/adk/chatmodel.go
+
+type ToolsConfig struct {
+ compose.ToolsNodeConfig
+
+ // Names of the tools that will make agent return directly when the tool is called.
+ // When multiple tools are called and more than one tool is in the return directly list, only the first one will be returned.
+ ReturnDirectly map[string]bool
+}
+```
+
+ToolsConfig 复用了 Eino Graph ToolsNodeConfig,详细参考:[Eino: ToolsNode&Tool 使用说明](/zh/docs/eino/core_modules/components/tools_node_guide)。额外提供了 ReturnDirectly 配置,ChatModelAgent 调用配置在 ReturnDirectly 中的 Tool 后会直接退出。
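+
+例如,为名为 get_weather 的工具(假设已通过 InferTool 等方式创建为 weatherTool)配置 ReturnDirectly:
+
+```go
+toolsConfig := adk.ToolsConfig{
+    ToolsNodeConfig: compose.ToolsNodeConfig{
+        Tools: []tool.BaseTool{weatherTool},
+    },
+    // get_weather 执行完成后,Agent 携带工具结果直接返回,不再回到 ChatModel
+    ReturnDirectly: map[string]bool{"get_weather": true},
+}
+```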
+
+### ChatModelAgent 配置字段
+
+```go
+type ChatModelAgentConfig struct {
+ // Name of the agent. Better be unique across all agents.
+ Name string
+ // Description of the agent's capabilities.
+ // Helps other agents determine whether to transfer tasks to this agent.
+ Description string
+ // Instruction used as the system prompt for this agent.
+ // Optional. If empty, no system prompt will be used.
+ // Supports f-string placeholders for session values in default GenModelInput, for example:
+ // "You are a helpful assistant. The current time is {Time}. The current user is {User}."
+ // These placeholders will be replaced with session values for "Time" and "User".
+ Instruction string
+
+ Model model.ToolCallingChatModel
+
+ ToolsConfig ToolsConfig
+
+ // GenModelInput transforms instructions and input messages into the model's input format.
+ // Optional. Defaults to defaultGenModelInput which combines instruction and messages.
+ GenModelInput GenModelInput
+
+ // Exit defines the tool used to terminate the agent process.
+ // Optional. If nil, no Exit Action will be generated.
+ // You can use the provided 'ExitTool' implementation directly.
+ Exit tool.BaseTool
+
+ // OutputKey stores the agent's response in the session.
+ // Optional. When set, stores output via AddSessionValue(ctx, outputKey, msg.Content).
+ OutputKey string
+
+ // MaxIterations defines the upper limit of ChatModel generation cycles.
+ // The agent will terminate with an error if this limit is exceeded.
+ // Optional. Defaults to 20.
+ MaxIterations int
+}
+
+type ToolsConfig struct {
+ compose.ToolsNodeConfig
+
+ // Names of the tools that will make agent return directly when the tool is called.
+ // When multiple tools are called and more than one tool is in the return directly list, only the first one will be returned.
+ ReturnDirectly map[string]bool
+}
+
+type GenModelInput func(ctx context.Context, instruction string, input *AgentInput) ([]Message, error)
+```
+
+- `Name`:Agent 名称
+- `Description`:Agent 描述
+- `Instruction`:调用 ChatModel 时的 System Prompt,支持 f-string 渲染
+- `Model`:运行所使用的 ChatModel,要求支持工具调用
+- `ToolsConfig`:工具配置
+ - ToolsConfig 复用了 Eino Graph ToolsNodeConfig,详细参考:[Eino: ToolsNode&Tool 使用说明](/zh/docs/eino/core_modules/components/tools_node_guide)。
+ - ReturnDirectly:当 ChatModelAgent 调用配置在 ReturnDirectly 中的 Tool 后,将携带结果立刻退出,不会按照 react 模式返回 ChatModel。如果命中了多个 Tool,只有首个 Tool 会返回。Map key 为 Tool 名称。
+- `GenModelInput`:Agent 被调用时会使用该方法将 `Instruction` 和 `AgentInput` 转换为调用 ChatModel 的 Messages。Agent 提供了默认的 GenModelInput 方法:
+ 1. 将 `Instruction` 作为 `System Message` 加到 `AgentInput.Messages` 前
+ 2. 以 `SessionValues` 为 variables 渲染步骤 1 得到的 message list
+- `OutputKey`:配置后,ChatModelAgent 运行产生的最后一条 Message 将会以 `OutputKey` 为 key 设置到 `SessionValues` 中
+- `MaxIterations`:react 模式下 ChatModel 最大生成次数,超过时 Agent 会报错退出,默认值为 20
+- `Exit`:Exit 是一个特殊的 Tool,当模型调用这个工具并执行后,ChatModelAgent 将直接退出,效果与 `ToolsConfig.ReturnDirectly` 类似。ADK 提供了一个默认 ExitTool 实现供用户使用:
+
+```go
+type ExitTool struct{}
+
+func (et ExitTool) Info(_ context.Context) (*schema.ToolInfo, error) {
+ return ToolInfoExit, nil
+}
+
+func (et ExitTool) InvokableRun(ctx context.Context, argumentsInJSON string, _ ...tool.Option) (string, error) {
+ type exitParams struct {
+ FinalResult string `json:"final_result"`
+ }
+
+ params := &exitParams{}
+ err := sonic.UnmarshalString(argumentsInJSON, params)
+ if err != nil {
+ return "", err
+ }
+
+ err = SendToolGenAction(ctx, "exit", NewExitAction())
+ if err != nil {
+ return "", err
+ }
+
+ return params.FinalResult, nil
+}
+```
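+
+一个使用上述配置项的示意(cm 为假设已创建好的 ToolCallingChatModel;Instruction 中的 {User} 占位符由默认 GenModelInput 以 SessionValues 渲染):
+
+```go
+a, err := adk.NewChatModelAgent(ctx, &adk.ChatModelAgentConfig{
+    Name:        "Assistant",
+    Description: "A conversational assistant that can decide when to finish.",
+    Instruction: "You are a helpful assistant. The current user is {User}.",
+    Model:       cm,
+    Exit:        adk.ExitTool{},     // 模型调用 exit 工具并执行后,Agent 直接退出
+    OutputKey:   "assistant_answer", // 最后一条消息内容写入 SessionValues
+})
+if err != nil {
+    log.Fatal(err)
+}
+```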
+
+### ChatModelAgent Transfer
+
+`ChatModelAgent` 支持将其他 Agent 的元信息转为自身的 Tool ,经由 ChatModel 判断实现动态 Transfer:
+
+- `ChatModelAgent` 实现了 `OnSubAgents` 接口,使用 `SetSubAgents` 为 `ChatModelAgent` 设置子 Agents 后,`ChatModelAgent` 会增加一个 `Transfer Tool`,并且在 prompt 中指示 ChatModel 在需要 transfer 时调用这个 Tool 并以 transfer 目标 AgentName 作为 Tool 输入。
+
+```go
+const (
+ TransferToAgentInstruction = `Available other agents: %s
+
+Decision rule:
+- If you're best suited for the question according to your description: ANSWER
+- If another agent is better according its description: CALL '%s' function with their agent name
+
+When transferring: OUTPUT ONLY THE FUNCTION CALL`
+)
+
+func genTransferToAgentInstruction(ctx context.Context, agents []Agent) string {
+ var sb strings.Builder
+ for _, agent := range agents {
+ sb.WriteString(fmt.Sprintf("\n- Agent name: %s\n Agent description: %s",
+ agent.Name(ctx), agent.Description(ctx)))
+ }
+
+ return fmt.Sprintf(TransferToAgentInstruction, sb.String(), TransferToAgentToolName)
+}
+```
+
+- `Transfer Tool` 运行时会产生 Transfer Event,指定跳转到目标 Agent,完成后 ChatModelAgent 退出。
+- Agent Runner 接收到 Transfer Event 后,跳转到目标 Agent 上执行,完成 Transfer 操作
+
+### ChatModelAgent AgentAsTool
+
+当需要被调用的 Agent 不需要完整的运行上下文,仅需要明确清晰的入参即可正确运行时,该 Agent 可以转换为 Tool 交由 `ChatModelAgent` 判断调用:
+
+- ADK 中提供了工具方法,可以方便地将 Eino ADK Agent 转化为 Tool 供 ChatModelAgent 调用:
+
+```go
+// github.com/cloudwego/eino/adk/agent_tool.go
+
+func NewAgentTool(_ context.Context, agent Agent, options ...AgentToolOption) tool.BaseTool
+```
+
+- 被转换为 Tool 后的 Agent 可以通过 `ToolsConfig` 直接注册在 ChatModelAgent 中
+
+```go
+bookRecommender := NewBookRecommendAgent()
+bookRecommenderTool := adk.NewAgentTool(ctx, bookRecommender)
+
+a, err := adk.NewChatModelAgent(ctx, &adk.ChatModelAgentConfig{
+ // ...
+ ToolsConfig: adk.ToolsConfig{
+ ToolsNodeConfig: compose.ToolsNodeConfig{
+ Tools: []tool.BaseTool{bookRecommenderTool},
+ },
+ },
+})
+```
+
+## ChatModelAgent 使用示例
+
+### 场景说明
+
+创建一个图书推荐 Agent,Agent 将能够根据用户的输入推荐相关图书。
+
+
+### 步骤 1: 定义工具
+
+图书推荐 Agent 需要一个能够根据用户要求(题材、评分等)检索图书的工具 `search_book`。
+
+利用 Eino 提供的工具方法可以方便地创建(可参考[如何创建一个 tool ?](/zh/docs/eino/core_modules/components/tools_node_guide/how_to_create_a_tool)):
+
+```go
+import (
+ "context"
+ "log"
+
+ "github.com/cloudwego/eino/components/tool"
+ "github.com/cloudwego/eino/components/tool/utils"
+)
+
+type BookSearchInput struct {
+ Genre string `json:"genre" jsonschema:"description=Preferred book genre,enum=fiction,enum=sci-fi,enum=mystery,enum=biography,enum=business"`
+ MaxPages int `json:"max_pages" jsonschema:"description=Maximum page length (0 for no limit)"`
+ MinRating int `json:"min_rating" jsonschema:"description=Minimum user rating (0-5 scale)"`
+}
+
+type BookSearchOutput struct {
+ Books []string
+}
+
+func NewBookRecommender() tool.InvokableTool {
+ bookSearchTool, err := utils.InferTool("search_book", "Search books based on user preferences", func(ctx context.Context, input *BookSearchInput) (output *BookSearchOutput, err error) {
+ // search code
+ // ...
+ return &BookSearchOutput{Books: []string{"God's blessing on this wonderful world!"}}, nil
+ })
+ if err != nil {
+ log.Fatalf("failed to create search book tool: %v", err)
+ }
+ return bookSearchTool
+}
+```
+
+### 步骤 2: 创建 ChatModel
+
+Eino 提供了多种 ChatModel 封装(如 openai、gemini、doubao 等,详见 [Eino: ChatModel 使用说明](/zh/docs/eino/core_modules/components/chat_model_guide)),这里以 openai ChatModel 为例:
+
+```go
+import (
+ "context"
+ "fmt"
+ "log"
+ "os"
+
+ "github.com/cloudwego/eino-ext/components/model/openai"
+ "github.com/cloudwego/eino/components/model"
+)
+
+func NewChatModel() model.ToolCallingChatModel {
+ ctx := context.Background()
+ apiKey := os.Getenv("OPENAI_API_KEY")
+ openaiModel := os.Getenv("OPENAI_MODEL")
+
+ cm, err := openai.NewChatModel(ctx, &openai.ChatModelConfig{
+ APIKey: apiKey,
+ Model: openaiModel,
+ })
+ if err != nil {
+ log.Fatal(fmt.Errorf("failed to create chatmodel: %w", err))
+ }
+ return cm
+}
+```
+
+### 步骤 3: 创建 ChatModelAgent
+
+除了配置 ChatModel 和工具外,还需要配置描述 Agent 功能用途的 Name 和 Description,以及指示 ChatModel 的 Instruction,Instruction 最终会作为 system message 被传递给 ChatModel。
+
+```go
+import (
+ "context"
+ "fmt"
+ "log"
+
+ "github.com/cloudwego/eino/adk"
+ "github.com/cloudwego/eino/components/tool"
+ "github.com/cloudwego/eino/compose"
+)
+
+func NewBookRecommendAgent() adk.Agent {
+ ctx := context.Background()
+
+ a, err := adk.NewChatModelAgent(ctx, &adk.ChatModelAgentConfig{
+ Name: "BookRecommender",
+ Description: "An agent that can recommend books",
+ Instruction: `You are an expert book recommender. Based on the user's request, use the "search_book" tool to find relevant books. Finally, present the results to the user.`,
+ Model: NewChatModel(),
+ ToolsConfig: adk.ToolsConfig{
+ ToolsNodeConfig: compose.ToolsNodeConfig{
+ Tools: []tool.BaseTool{NewBookRecommender()},
+ },
+ },
+ })
+ if err != nil {
+ log.Fatal(fmt.Errorf("failed to create chatmodel: %w", err))
+ }
+
+ return a
+}
+```
+
+
+### 步骤 4: 通过 Runner 运行
+
+```go
+import (
+ "context"
+ "fmt"
+ "log"
+ "os"
+
+ "github.com/cloudwego/eino/adk"
+
+ "github.com/cloudwego/eino-examples/adk/intro/chatmodel/internal"
+)
+
+func main() {
+ ctx := context.Background()
+ a := internal.NewBookRecommendAgent()
+ runner := adk.NewRunner(ctx, adk.RunnerConfig{
+ Agent: a,
+ })
+ iter := runner.Query(ctx, "recommend a fiction book to me")
+ for {
+ event, ok := iter.Next()
+ if !ok {
+ break
+ }
+ if event.Err != nil {
+ log.Fatal(event.Err)
+ }
+ msg, err := event.Output.MessageOutput.GetMessage()
+ if err != nil {
+ log.Fatal(err)
+ }
+ fmt.Printf("\nmessage:\n%v\n======", msg)
+ }
+}
+```
+
+### 运行结果
+
+```yaml
+message:
+assistant:
+tool_calls:
+{Index:
+```
-
-可以通过 ToolsConfig 为 ChatModelAgent 配置 Tool:
-
-```go
-// github.com/cloudwego/eino/adk/chatmodel.go
-
-type ToolsConfig struct {
- compose.ToolsNodeConfig
-
- // Names of the tools that will make agent return directly when the tool is called.
- // When multiple tools are called and more than one tool is in the return directly list, only the first one will be returned.
- ReturnDirectly map[string]bool
-}
-```
-
-ToolsConfig 复用了 Eino Graph ToolsNodeConfig,详细参考:[Eino: ToolsNode&Tool 使用说明](/zh/docs/eino/core_modules/components/tools_node_guide)。额外提供了 ReturnDirectly 配置,ChatModelAgent 调用配置在 ReturnDirectly 中的 Tool 后会直接退出。
-
-当没有配置工具时,ChatModelAgent 退化为一次 ChatModel 调用。
-
-# GenModelInput
-
-ChatModelAgent 创建时可以配置 GenModelInput,Agent 被调用时会使用该方法生成 ChatModel 的初始输入:
-
-```
-type GenModelInput func(ctx context.Context, instruction string, input *AgentInput) ([]Message, error)
-```
-
-Agent 提供了默认的 GenModelInput 方法:
-
-1. 将 Instruction 作为 system message 加到 AgentInput.Messages 前
-2. 以 SessionValues 为 variables 渲染 1 中得到的 message list
-
-# OutputKey
-
-ChatModelAgent 创建时可以配置 OutputKey,配置后 Agent 产生的最后一个 message 会被以设置的 OutputKey 为 key 添加到 SessionValues 中。
-
-# Exit
-
-Exit 字段支持配置一个 Tool,当 LLM 调用这个工具后并执行后,ChatModelAgent 将直接退出,效果类似 ToolReturnDirectly。Eino ADK 提供了一个 ExitTool,用户可以直接使用:
-
-```go
-// github.com/cloudwego/eino/adk/chatmodel.go
-
-type ExitTool struct{}
-
-func (et ExitTool) Info(_ context.Context) (*schema.ToolInfo, error) {
- return ToolInfoExit, nil
-}
-
-func (et ExitTool) InvokableRun(ctx context.Context, argumentsInJSON string, _ ...tool.Option) (string, error) {
- type exitParams struct {
- FinalResult string `json:"final_result"`
- }
-
- params := &exitParams{}
- err := sonic.UnmarshalString(argumentsInJSON, params)
- if err != nil {
- return "", err
- }
-
- err = SendToolGenAction(ctx, "exit", NewExitAction())
- if err != nil {
- return "", err
- }
-
- return params.FinalResult, nil
-}
-```
-
-# Transfer
-
-ChatModelAgent 实现了 OnSubAgents 接口,使用 SetSubAgents 为 ChatModelAgent 设置父或子 Agent 后,ChatModelAgent 会增加一个 Transfer Tool,并且在 prompt 中指示 ChatModel 在需要 transfer 时调用这个 Tool 并以 transfer 目标 AgentName 作为 Tool 输入。在此工具被调用后,Agent 会产生 TransferAction 并退出。
-
-# AgentTool
-
-ChatModelAgent 提供了工具方法,可以方便地将 Eino ADK Agent 转化为 Tool 供 ChatModelAgent 调用:
-
-```go
-// github.com/cloudwego/eino/adk/agent_tool.go
-
-func NewAgentTool(_ context.Context, agent Agent, options ...AgentToolOption) tool.BaseTool
-```
-
-比如之前创建的 `BookRecommendAgent` 可以使用 NewAgentTool 方法转换为 Tool,并被其他 Agent 调用:
-
-```go
-bookRecommender := NewBookRecommendAgent()
-bookRecommenderTool := adk.NewAgentTool(ctx, bookRecommender)
-
-// other agent
-a, err := adk.NewChatModelAgent(ctx, &adk.ChatModelAgentConfig{
- // xxx
- ToolsConfig: adk.ToolsConfig{
- ToolsNodeConfig: compose.ToolsNodeConfig{
- Tools: []tool.BaseTool{bookRecommenderTool},
- },
- },
-})
-```
-
-# Interrupt&Resume
-
-ChatModelAgent 支持 Interrupt&Resume,我们给 BookRecommendAgent 增加一个工具 `ask_for_clarification`,当用户提供的信息不足以支持推荐时,Agent 将调用这个工具向用户询问更多信息,`ask_for_clarification` 使用了 Interrupt&Resume 能力来实现向用户“询问”。
-
-ChatModelAgent 使用了 Eino Graph 实现,在 agent 中可以复用 Eino Graph 的 Interrupt&Resume 能力,工具返回特殊错误使 Graph 触发中断并向外抛出自定义信息,在恢复时 Graph 会重新运行此工具:
-
-```go
-// github.com/cloudwego/eino/adk/interrupt.go
-
-func NewInterruptAndRerunErr(extra any) error
-```
-
-另外定义 ToolOption 来在恢复时传递新输入:
-
-```go
-import (
- "github.com/cloudwego/eino/components/tool"
-)
-
-type askForClarificationOptions struct {
- NewInput *string
-}
-
-func WithNewInput(input string) tool.Option {
- return tool.WrapImplSpecificOptFn(func(t *askForClarificationOptions) {
- t.NewInput = &input
- })
-}
-```
-
-> 💡
-> 定义 tool option 不是必须的,实践时可以根据 context、闭包等其他方式传递新输入
-
-完整的 Tool 实现如下:
-
-```go
-import (
- "context"
- "log"
-
- "github.com/cloudwego/eino/components/tool"
- "github.com/cloudwego/eino/components/tool/utils"
- "github.com/cloudwego/eino/compose"
-)
-
-type askForClarificationOptions struct {
- NewInput *string
-}
-
-func WithNewInput(input string) tool.Option {
- return tool.WrapImplSpecificOptFn(func(t *askForClarificationOptions) {
- t.NewInput = &input
- })
-}
-
-type AskForClarificationInput struct {
- Question string `json:"question" jsonschema:"description=The specific question you want to ask the user to get the missing information"`
-}
-
-func NewAskForClarificationTool() tool.InvokableTool {
- t, err := utils.InferOptionableTool(
- "ask_for_clarification",
- "Call this tool when the user's request is ambiguous or lacks the necessary information to proceed. Use it to ask a follow-up question to get the details you need, such as the book's genre, before you can use other tools effectively.",
- func(ctx context.Context, input *AskForClarificationInput, opts ...tool.Option) (output string, err error) {
- o := tool.GetImplSpecificOptions[askForClarificationOptions](nil, opts...)
- if o.NewInput == nil {
- return "", compose.NewInterruptAndRerunErr(input.Question)
- }
- return *o.NewInput, nil
- })
- if err != nil {
- log.Fatal(err)
- }
- return t
-}
-```
-
-将 `ask_for_clarification` 添加到之前的 Agent 中:
-
-```go
-func NewBookRecommendAgent() adk.Agent {
- // xxx
- a, err := adk.NewChatModelAgent(ctx, &adk.ChatModelAgentConfig{
- // xxx
- ToolsConfig: adk.ToolsConfig{
- ToolsNodeConfig: compose.ToolsNodeConfig{
- Tools: []tool.BaseTool{NewBookRecommender(), NewAskForClarificationTool()},
- },
- },
- })
- // xxx
-}
-```
-
-之后在 Runner 中配置 CheckPointStore(例子中使用最简单的 InMemoryStore),并在调用 Agent 时传入 CheckPointID,用来在恢复时使用。eino Graph 在中断时,会把 Graph 的 InterruptInfo 放入 Interrupted.Data 中:
-
-```go
-func main() {
- ctx := context.Background()
- a := internal.NewBookRecommendAgent()
- runner := adk.NewRunner(ctx, adk.RunnerConfig{
- Agent: a,
- CheckPointStore: newInMemoryStore(),
- })
- iter := runner.Query(ctx, "recommend a book to me", adk.WithCheckPointID("1"))
- for {
- event, ok := iter.Next()
- if !ok {
- break
- }
- if event.Err != nil {
- log.Fatal(event.Err)
- }
- if event.Action != nil && event.Action.Interrupted != nil {
- fmt.Printf("\ninterrupt happened, info: %+v\n", event.Action.Interrupted.Data.(*adk.ChatModelAgentInterruptInfo).Info.RerunNodesExtra["ToolNode"])
- continue
- }
- msg, err := event.Output.MessageOutput.GetMessage()
- if err != nil {
- log.Fatal(err)
- }
- fmt.Printf("\nmessage:\n%v\n======\n\n", msg)
- }
-
- // xxxxxx
-}
-```
-
-可以在中断时看到输出:
-
-> message:
-> assistant:
-> tool_calls:
-> {Index:
+
+Plan-Execute Agent 适用于需要多步骤推理、工具集成或动态调整策略的场景(如研究分析、复杂问题解决、自动化工作流等),其核心优势在于:
+
+- **结构化规划**:将复杂任务拆解为清晰、可执行的步骤序列
+- **迭代执行**:基于工具调用完成单步任务,积累执行结果
+- **动态调整**:根据执行进度实时评估是否需要调整计划或终止任务
+- **模型与工具无关**:兼容任意支持工具调用的模型,可灵活集成外部工具
+
+### Plan-Execute Agent 结构
+
+Plan-Execute Agent 由三个核心智能体与一个协调器构成,各组件职责如下:
+
+#### 1. Planner
+
+- **核心功能**:根据用户目标生成初始任务计划(结构化步骤序列)
+- **实现方式**:
+ - 基于工具调用模型(如 GPT-4),通过 `PlanTool` 生成符合 JSON Schema 的步骤列表
+ - 或使用支持结构化输出的模型,直接生成 `Plan` 格式结果
+- **输出**:`Plan` 对象(包含有序步骤列表),存储于 Session 中供后续流程使用
+
+```go
+// PlannerConfig provides configuration options for creating a planner agent.
+// There are two ways to configure the planner to generate structured Plan output:
+// 1. Use ChatModelWithFormattedOutput: A model already configured to output in the Plan format
+// 2. Use ToolCallingChatModel + ToolInfo: A model that will be configured to use tool calling
+// to generate the Plan structure
+type PlannerConfig struct {
+ // ChatModelWithFormattedOutput is a model pre-configured to output in the Plan format.
+ // This can be created by configuring a model to output structured data directly.
+ // Can refer to https://github.com/cloudwego/eino-ext/blob/main/components/model/openai/examples/structured/structured.go.
+ ChatModelWithFormattedOutput model.BaseChatModel
+
+ // ToolCallingChatModel is a model that supports tool calling capabilities.
+ // When provided along with ToolInfo, the model will be configured to use tool calling
+ // to generate the Plan structure.
+ ToolCallingChatModel model.ToolCallingChatModel
+ // ToolInfo defines the schema for the Plan structure when using tool calling.
+ // If not provided, PlanToolInfo will be used as the default.
+ ToolInfo *schema.ToolInfo
+
+ // GenInputFn is a function that generates the input messages for the planner.
+ // If not provided, defaultGenPlannerInputFn will be used as the default.
+ GenInputFn GenPlannerInputFn
+
+ // NewPlan creates a new Plan instance for JSON.
+ // The returned Plan will be used to unmarshal the model-generated JSON output.
+ // If not provided, defaultNewPlan will be used as the default.
+ NewPlan NewPlan
+}
+```
+
+#### 2. Executor
+
+- **核心功能**:执行计划中的首个步骤,调用外部工具完成具体任务
+- **实现方式**:基于 `ChatModelAgent` 实现,配置工具集(如搜索、计算、数据库访问等)
+- **工作流**:
+ - 从 Session 中获取当前 `Plan` 和已执行步骤
+ - 提取计划中的第一个未执行步骤作为目标
+ - 调用工具执行该步骤,将结果存储于 Session
+- **关键能力**:支持多轮工具调用(通过 `MaxIterations` 控制),确保单步任务完成
+
+```go
+// ExecutorConfig provides configuration options for creating a executor agent.
+type ExecutorConfig struct {
+ // Model is the chat model used by the executor.
+ Model model.ToolCallingChatModel
+
+ // ToolsConfig is the tools configuration used by the executor.
+ ToolsConfig adk.ToolsConfig
+
+ // MaxIterations defines the upper limit of ChatModel generation cycles.
+ // The agent will terminate with an error if this limit is exceeded.
+ // Optional. Defaults to 20.
+ MaxIterations int
+
+ // GenInputFn is the function that generates the input messages for the Executor.
+ // Optional. If not provided, defaultGenExecutorInputFn will be used.
+ GenInputFn GenPlanExecuteInputFn
+}
+```
+
+#### 3. Replanner
+
+- **核心功能**:评估执行进度,决定继续执行(生成新计划)或终止任务(返回结果)
+- **实现方式**:基于工具调用模型,通过 `PlanTool`(生成新计划)或 `RespondTool`(返回结果)输出决策
+- **决策逻辑**:
+ - **继续执行**:若目标未达成,生成包含剩余步骤的新计划,更新 Session 中的 `Plan`
+ - **终止任务**:若目标已达成,调用 `RespondTool` 生成最终用户响应
+
+```go
+type ReplannerConfig struct {
+
+ // ChatModel is the model that supports tool calling capabilities.
+ // It will be configured with PlanTool and RespondTool to generate updated plans or responses.
+ ChatModel model.ToolCallingChatModel
+
+ // PlanTool defines the schema for the Plan tool that can be used with ToolCallingChatModel.
+ // If not provided, the default PlanToolInfo will be used.
+ PlanTool *schema.ToolInfo
+
+ // RespondTool defines the schema for the response tool that can be used with ToolCallingChatModel.
+ // If not provided, the default RespondToolInfo will be used.
+ RespondTool *schema.ToolInfo
+
+ // GenInputFn is the function that generates the input messages for the Replanner.
+ // if not provided, buildDefaultReplannerInputFn will be used.
+ GenInputFn GenPlanExecuteInputFn
+
+ // NewPlan creates a new Plan instance.
+ // The returned Plan will be used to unmarshal the model-generated JSON output from PlanTool.
+ // If not provided, defaultNewPlan will be used as the default.
+ NewPlan NewPlan
+}
+```
+
+#### 4. PlanExecuteAgent
+
+- **核心功能**:组合上述三个智能体,形成「规划 → 执行 → 重规划」的循环工作流
+- **实现方式**:通过 `SequentialAgent` 和 `LoopAgent` 组合:
+ - 外层 `SequentialAgent`:先执行 `Planner` 生成初始计划,再进入执行-重规划循环
+ - 内层 `LoopAgent`:循环执行 `Executor` 和 `Replanner`,直至任务完成或达到最大迭代次数
+
+```go
+// NewPlanExecuteAgent creates a new plan execute agent with the given configuration.
+func NewPlanExecuteAgent(ctx context.Context, cfg *PlanExecuteConfig) (adk.Agent, error)
+
+// PlanExecuteConfig provides configuration options for creating a plan execute agent.
+type PlanExecuteConfig struct {
+ Planner adk.Agent
+ Executor adk.Agent
+ Replanner adk.Agent
+ MaxIterations int
+}
+```
+
+### Plan-Execute Agent 运行流程
+
+Plan-Execute Agent 的完整工作流程如下:
+
+1. **初始化**:用户输入目标任务,启动 `PlanExecuteAgent`
+2. **规划阶段**:
+ - `Planner` 接收用户目标,生成初始 `Plan`(步骤列表)
+ - `Plan` 存储于 Session(`PlanSessionKey`)
+3. **执行-重规划循环**(由 `LoopAgent` 控制):
+ - **执行步骤**:`Executor` 从 `Plan` 中提取首个步骤,调用工具执行,结果存入 Session(`ExecutedStepsSessionKey`)
+ - **反思步骤**:`Replanner` 评估已执行步骤与结果:
+ - 若目标达成:调用 `RespondTool` 生成最终响应,退出循环
+ - 若需继续:生成新 `Plan` 并更新 Session,进入下一轮循环
+4. **终止条件**:任务完成(`Replanner` 返回结果)或达到最大迭代次数(`MaxIterations`)
+
+## Plan-Execute Agent 使用示例
+
+### 场景说明
+
+1. **Planner**:为【生成旅游计划】这个目标规划详细步骤
+2. **Executor**:使用多种工具(天气查询、航班酒店搜索、目的地景点查询)执行计划。允许在用户请求不明确或缺乏执行所需信息时,要求用户补充澄清信息(Human in the loop 场景)
+3. **Replanner**:评估执行结果,若信息不足则调整计划,否则生成最终总结
+
+### 代码实现
+
+#### 1. 初始化模型与工具
+
+```go
+// 初始化支持工具调用的 OpenAI 模型
+func newToolCallingModel(ctx context.Context) model.ToolCallingChatModel {
+ cm, err := openai.NewChatModel(ctx, &openai.ChatModelConfig{
+ APIKey: os.Getenv("OPENAI_API_KEY"),
+ Model: "gpt-4o", // 需支持工具调用
+ })
+ if err != nil {
+ log.Fatalf("初始化模型失败: %v", err)
+ }
+ return cm
+}
+
+// 初始化搜索工具(用于 Executor 调用)
+func newSearchTool(ctx context.Context) tool.BaseTool {
+ config := &duckduckgo.Config{
+ MaxResults: 20, // Limit to return 20 results
+ Region: duckduckgo.RegionWT,
+ Timeout: 10 * time.Second,
+ }
+ searchTool, err := duckduckgo.NewTextSearchTool(ctx, config)
+ if err != nil {
+ log.Fatalf("初始化搜索工具失败: %v", err)
+ }
+ return searchTool
+}
+```
+
+#### 2. 创建 Planner(规划器)
+
+```go
+func newPlanner(ctx context.Context, model model.ToolCallingChatModel) adk.Agent {
+ planner, err := planexecute.NewPlanner(ctx, &planexecute.PlannerConfig{
+ ToolCallingChatModel: model, // 使用工具调用模型生成计划
+ ToolInfo: &planexecute.PlanToolInfo, // 默认 Plan 工具 schema
+ })
+ if err != nil {
+ log.Fatalf("创建 Planner 失败: %v", err)
+ }
+ return planner
+}
+```
+
+#### 3. 创建 Executor(执行器)
+
+```go
+func newExecutor(ctx context.Context, model model.ToolCallingChatModel) adk.Agent {
+ // 配置 Executor 工具集(仅包含搜索工具)
+ toolsConfig := adk.ToolsConfig{
+ ToolsNodeConfig: compose.ToolsNodeConfig{
+ Tools: []tool.BaseTool{newSearchTool(ctx)},
+ },
+ }
+ executor, err := planexecute.NewExecutor(ctx, &planexecute.ExecutorConfig{
+ Model: model,
+ ToolsConfig: toolsConfig,
+ MaxIterations: 5, // ChatModel 最多运行 5 次
+ })
+ if err != nil {
+ log.Fatalf("创建 Executor 失败: %v", err)
+ }
+ return executor
+}
+```
+
+#### 4. 创建 Replanner(重规划器)
+
+```go
+func newReplanner(ctx context.Context, model model.ToolCallingChatModel) adk.Agent {
+ replanner, err := planexecute.NewReplanner(ctx, &planexecute.ReplannerConfig{
+ ChatModel: model, // 使用工具调用模型评估进度
+ })
+ if err != nil {
+ log.Fatalf("创建 Replanner 失败: %v", err)
+ }
+ return replanner
+}
+```
+
+#### 5. 组合为 PlanExecuteAgent
+
+```go
+func newPlanExecuteAgent(ctx context.Context) adk.Agent {
+ model := newToolCallingModel(ctx)
+
+ // 实例化三大核心智能体
+ planner := newPlanner(ctx, model)
+ executor := newExecutor(ctx, model)
+ replanner := newReplanner(ctx, model)
+
+ // 组合为 PlanExecuteAgent(固定 execute - replan 最大迭代 10 次)
+ planExecuteAgent, err := planexecute.NewPlanExecuteAgent(ctx, &planexecute.PlanExecuteConfig{
+ Planner: planner,
+ Executor: executor,
+ Replanner: replanner,
+ MaxIterations: 10,
+ })
+ if err != nil {
+ log.Fatalf("组合 PlanExecuteAgent 失败: %v", err)
+ }
+ return planExecuteAgent
+}
+```
+
+#### 6. 运行与输出
+
+```go
+import (
+ "context"
+ "log"
+ "os"
+ "time"
+
+ "github.com/cloudwego/eino-ext/components/model/openai"
+ "github.com/cloudwego/eino-ext/components/tool/duckduckgo/v2"
+ "github.com/cloudwego/eino/adk"
+ "github.com/cloudwego/eino/adk/prebuilt/planexecute"
+ "github.com/cloudwego/eino/components/model"
+ "github.com/cloudwego/eino/components/tool"
+ "github.com/cloudwego/eino/compose"
+ "github.com/cloudwego/eino/schema"
+)
+
+func main() {
+ ctx := context.Background()
+ agent := newPlanExecuteAgent(ctx)
+
+ // 创建 Runner 执行智能体
+ runner := adk.NewRunner(ctx, adk.RunnerConfig{Agent: agent, EnableStreaming: true})
+
+ // 用户输入目标任务
+ userInput := []adk.Message{
+ schema.UserMessage("Research and summarize the latest developments in AI for healthcare in 2024, including key technologies, applications, and industry trends."),
+ }
+
+ // 执行并打印结果
+ events := runner.Run(ctx, userInput)
+ for {
+ event, ok := events.Next()
+ if !ok {
+ break
+ }
+ if event.Err != nil {
+ log.Printf("执行错误: %v", event.Err)
+ break
+ }
+ // 打印智能体输出(计划、执行结果、最终响应等)
+ if msg, err := event.Output.MessageOutput.GetMessage(); err == nil && msg.Content != "" {
+ log.Printf("\n=== Agent Output ===\n%s\n", msg.Content)
+ }
+ }
+}
+```
+
+### 运行结果
+
+```markdown
+2025/09/08 11:47:42
+=== Agent:Planner Output ===
+{"steps":["Identify the most recent and credible sources for AI developments in healthcare in 2024, such as scientific journals, industry reports, news articles, and expert analyses.","Extract and compile the key technologies emerging or advancing in AI for healthcare in 2024, including machine learning models, diagnostic tools, robotic surgery, personalized medicine, and data management solutions.","Analyze the main applications of AI in healthcare during 2024, focusing on areas such as diagnostics, patient care, drug discovery, medical imaging, and healthcare administration.","Investigate current industry trends related to AI in healthcare for 2024, including adoption rates, regulatory changes, ethical considerations, funding landscape, and market forecasts.","Synthesize the gathered information into a comprehensive summary covering the latest developments in AI for healthcare in 2024, highlighting key technologies, applications, and industry trends with examples and implications."]}
+2025/09/08 11:47:47
+=== Agent:Executor Output ===
+{"message":"Found 10 results successfully.","results":[{"title":"Artificial Intelligence in Healthcare: 2024 Year in Review","url":"https://www.researchgate.net/publication/389402322_Artificial_Intelligence_in_Healthcare_2024_Year_in_Review","summary":"The adoption of LLMs and text data types amongst various healthcare specialties, especially for education and administrative tasks, is unlocking new potential for AI applications in..."},{"title":"AI in Healthcare - Nature","url":"https://www.nature.com/collections/hacjaaeafj","summary":"\"AI in Healthcare\" encompasses the use of AI technologies to enhance various aspects of healthcare delivery, from diagnostics to treatment personalization, ultimately aiming to improve..."},{"title":"Evolution of artificial intelligence in healthcare: a 30-year ...","url":"https://www.frontiersin.org/journals/medicine/articles/10.3389/fmed.2024.1505692/full","summary":"Conclusion: This study reveals a sustained explosive growth trend in AI technologies within the healthcare sector in recent years, with increasingly profound applications in medicine. Additionally, medical artificial intelligence research is dynamically evolving with the advent of new technologies."},{"title":"The Impact of Artificial Intelligence on Healthcare: A Comprehensive ...","url":"https://onlinelibrary.wiley.com/doi/full/10.1002/hsr2.70312","summary":"This review analyzes the impact of AI on healthcare using data from the Web of Science (2014-2024), focusing on keywords like AI, ML, and healthcare applications."},{"title":"Artificial intelligence in healthcare (Review) - PubMed","url":"https://pubmed.ncbi.nlm.nih.gov/39583770/","summary":"Furthermore, the barriers and constraints that may impede the use of AI in healthcare are outlined, and the potential future directions of AI-augmented healthcare systems are discussed."},{"title":"Full article: Towards new frontiers of healthcare systems research ...","url":"https://www.tandfonline.com/doi/full/10.1080/20476965.2024.2402128","summary":"In this editorial, we begin by taking a quick look at the recent past of AI and its use in health. We then present the current landscape of AI research in health. We further discuss promising avenues for novel innovations in health systems research."},{"title":"AI in healthcare: New research shows promise and limitations of ...","url":"https://www.sciencedaily.com/releases/2024/10/241028164534.htm","summary":"Researchers have studied how well doctors used GPT-4 -- an artificial intelligence (AI) large language model system -- for diagnosing patients."},{"title":"Artificial Intelligence in Healthcare: 2024 Year in Review","url":"https://www.medrxiv.org/content/10.1101/2025.02.26.25322978v2","summary":"The adoption of LLMs and text data types amongst various healthcare specialties, especially for education and administrative tasks, is unlocking new potential for AI applications in healthcare."},{"title":"Investigating the Key Trends in Applying Artificial Intelligence to ...","url":"https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0322197","summary":"The findings of this review are useful for healthcare professionals to acquire deeper knowledge on the use of medical AI from design to implementation stage. 
However, a thorough assessment is essential to gather more insights into whether AI benefits outweigh its risks."},{"title":"Revolutionizing healthcare and medicine: The impact of modern ...","url":"https://pubmed.ncbi.nlm.nih.gov/39479277/","summary":"Wearable technology, the Internet of Medical Things, and sensor technologies have empowered individuals to take an active role in tracking and managing their health. These devices facilitate real-time data collection, enabling preventive and personalized care."}]}
+2025/09/08 11:47:52
+=== Agent:Executor Output ===
+{"message":"Found 10 results successfully.","results":[{"title":"Generative AI in healthcare: Current trends and future outlook","url":"https://www.mckinsey.com/industries/healthcare/our-insights/generative-ai-in-healthcare-current-trends-and-future-outlook","summary":"The latest survey, conducted in the fourth quarter of 2024, found that 85 percent of respondents—healthcare leaders from payers, health systems, and healthcare services and technology (HST) groups—were exploring or had already adopted gen AI capabilities."},{"title":"AI in healthcare - statistics & facts | Statista","url":"https://www.statista.com/topics/10011/ai-in-healthcare/","summary":"Distribution of confidence in using a new technology and AI in healthcare among health professionals in Denmark, France, Germany, and the United Kingdom as of 2024"},{"title":"Medscape and HIMSS Release 2024 Report on AI Adoption in Healthcare","url":"https://www.prnewswire.com/news-releases/medscape-and-himss-release-2024-report-on-ai-adoption-in-healthcare-302324936.html","summary":"The full \"AI Adoption in Healthcare Report 2024\" is now available on both Medscape and HIMSS websites offering detailed analysis and insights into the current state of AI adoption in..."},{"title":"AI in Healthcare Market Size, Share | Growth Report [2025-2032]","url":"https://www.fortunebusinessinsights.com/industry-reports/artificial-intelligence-in-healthcare-market-100534","summary":"The global AI in healthcare market research report delivers an in-depth market analysis, highlighting essential elements such as an overview of advanced technologies, the regulatory landscape in key countries, and the challenges encountered in adopting and implementing AI-based solutions."},{"title":"Artificial Intelligence in Healthcare Market Size to Hit USD 613.81 Bn ...","url":"https://www.precedenceresearch.com/artificial-intelligence-in-healthcare-market","summary":"The global artificial intelligence (AI) in healthcare market size reached USD 26.69 billion in 2024 and is projected to hit around USD 613.81 billion by 2034, at a CAGR of 36.83%."},{"title":"AI In Healthcare Market Size, Share | Industry Report, 2033","url":"https://www.globalmarketstatistics.com/market-reports/artificial-intelligence-in-healthcare-market-12394","summary":"Market Size and Growth: The Artificial Intelligence in Healthcare Market Market size was USD 5011.24 Million in 2024, is projected to grow to USD 5762.41 Million by 2025 and exceed USD 8966.05 Million by 2033, with a CAGR of 21.4% from 2025-2033."},{"title":"AI in healthcare statistics: 62 findings from 18 research reports - Keragon","url":"https://www.keragon.com/blog/ai-in-healthcare-statistics","summary":"Bringing together the data — 12 data-driven insights from 6 different research reports — we revealed a range of concerns surrounding AI in healthcare. The key obstacles the data unpacked are misdiagnoses, transparency, data accuracy, and human oversight."},{"title":"AI in Healthcare Statistics By Market Share And Technology","url":"https://www.sci-tech-today.com/stats/ai-in-healthcare-statistics/","summary":"According to AI in Healthcare Statistics, the US will lead the global AI healthcare market in 2024, which is projected to reach USD 24.7 billion. 
In the same year, US healthcare AI..."},{"title":"Artificial Intelligence in Healthcare: 2024 Year in Review","url":"https://www.researchgate.net/publication/389402322_Artificial_Intelligence_in_Healthcare_2024_Year_in_Review","summary":"The adoption of LLMs and text data types amongst various healthcare specialties, especially for education and administrative tasks, is unlocking new potential for AI applications in..."},{"title":"19+ AI in Healthcare Statistics for 2024: Insights & Projections","url":"https://www.allaboutai.com/resources/ai-statistics/healthcare/","summary":"Discover 19+ AI in healthcare statistics for 2024, covering public perception, market trends, and revenue projections with expert insights."}]}
+2025/09/08 11:47:58
+=== Agent:Executor Output ===
+{"message":"Found 10 results successfully.","results":[{"title":"Artificial Intelligence in Healthcare: 2024 Year in Review","url":"https://www.researchgate.net/publication/389402322_Artificial_Intelligence_in_Healthcare_2024_Year_in_Review","summary":"The adoption of LLMs and text data types amongst various healthcare specialties, especially for education and administrative tasks, is unlocking new potential for AI applications in..."},{"title":"Trustworthy AI in Healthcare Insights from IQVIA 2024 Report","url":"https://aipressroom.com/trustworthy-ai-healthcare-insights-iqvia-2024/","summary":"Discover how AI is advancing healthcare with trusted frameworks, real-world impact, and strategies for ethical, scalable adoption."},{"title":"The Impact of Artificial Intelligence on Healthcare: A Comprehensive ...","url":"https://onlinelibrary.wiley.com/doi/full/10.1002/hsr2.70312","summary":"This review analyzes the impact of AI on healthcare using data from the Web of Science (2014-2024), focusing on keywords like AI, ML, and healthcare applications."},{"title":"Generative AI in Healthcare: 2024's Breakthroughs and What's Next for ...","url":"https://www.signifyresearch.net/insights/generative-ai-news-round-up-december-2024/","summary":"As 2024 draws to a close, generative AI in healthcare has achieved remarkable milestones. This year has been defined by both groundbreaking innovation and insightful exploration, with AI transforming workflows in medical imaging and elevating patient care across digital health solutions."},{"title":"Generative AI in healthcare: Current trends and future outlook","url":"https://www.mckinsey.com/industries/healthcare/our-insights/generative-ai-in-healthcare-current-trends-and-future-outlook","summary":"The latest survey, conducted in the fourth quarter of 2024, found that 85 percent of respondents—healthcare leaders from payers, health systems, and healthcare services and technology (HST) groups—were exploring or had already adopted gen AI capabilities."},{"title":"Artificial Intelligence in Healthcare: 2024 Year in Review","url":"https://www.medrxiv.org/content/10.1101/2025.02.26.25322978v2","summary":"The adoption of LLMs and text data types amongst various healthcare specialties, especially for education and administrative tasks, is unlocking new potential for AI applications in healthcare."},{"title":"How AI is improving diagnostics and health outcomes","url":"https://www.weforum.org/stories/2024/09/ai-diagnostics-health-outcomes/","summary":"By leveraging the power of AI for diagnostics, we can improve health outcomes and contribute to a future where healthcare is more accessible and effective for everyone, particularly in the communities that need it the most."},{"title":"Artificial Intelligence in Healthcare: 2024 Developments and Lega","url":"https://natlawreview.com/article/healthy-ai-2024-year-review","summary":"This publication provides an overview of important developments at the intersection of AI, healthcare and the law in 2024."},{"title":"What's next in AI and healthcare? | McKinsey & Company","url":"https://www.mckinsey.com/featured-insights/themes/whats-next-in-ai-and-healthcare","summary":"In healthcare—with patient well-being and lives at stake—the advancement of AI seems particularly momentous. 
In an industry battling staffing shortages and increasing costs, health system leaders need to consider all possible solutions, including AI technologies."},{"title":"AI in Healthcare: An Expert Analysis on Driving Transformational ...","url":"https://www.historytools.org/ai/healthcare-ai","summary":"Artificial intelligence (AI) has emerged as a disruptive force across industries, but few sectors are seeing more dramatic change than healthcare. Fueled by vast data growth, urgent cost pressures and new technological capabilities, AI adoption in health is accelerating rapidly."}]}
+2025/09/08 11:48:01
+=== Agent:Executor Output ===
+{"message":"Found 10 results successfully.","results":[{"title":"Deep Dive: AI 2024 | pharmaphorum","url":"https://pharmaphorum.com/digital/deep-dive-ai-2024","summary":"In this issue, we delve into the transformative impact of AI on healthcare and pharma, featuring insights on key AI trends from the floor of Frontiers Health, the ongoing battle against..."},{"title":"Artificial Intelligence - Healthcare IT News","url":"https://www.healthcareitnews.com/topics/artificial-intelligence","summary":"Dr. Ethan Goh, executive director of Stanford ARISE, the AI Research and Science Evaluation Network, describes a new study to explore models' diagnostic and management reasoning capabilities - and what that could mean for clinicians and patients."},{"title":"7 ways AI is transforming healthcare | World Economic Forum","url":"https://www.weforum.org/stories/2025/08/ai-transforming-global-health/","summary":"While healthcare lags in AI adoption, these game-changing innovations - from spotting broken bones to assessing ambulance needs - show what's possible."},{"title":"Artificial Intelligence (AI) in Health Care | NEJM Catalyst","url":"https://catalyst.nejm.org/browse/catalyst-topic/ai-in-healthcare","summary":"As AI technology rapidly evolves, health care professionals grapple with the ethical implications of data ownership, privacy concerns, and the actionable insights derived from AI."},{"title":"From Robots to Healthcare: The Real Story Behind 2024's AI Investments","url":"https://www.algorithm-research.com/post/from-robots-to-healthcare-the-real-story-behind-2024-s-ai-investments","summary":"AI continues to reshape industries across the globe, with capital flowing into areas that promise the highest long-term impact. According to the 2025 AI Index Report by Stanford University, global AI investments in 2024 reached new highs, but they were far from evenly distributed."},{"title":"2024 Medical Breakthroughs Revolutionizing Healthcare","url":"https://medicalnewscorner.com/2024-medical-breakthroughs-revolutionizing-healthcare/","summary":"The medical field is set for transformative advancements in 2024, with breakthroughs in gene editing, cancer treatment, artificial intelligence, telemedicine, mental health, and wearable technology, promising to enhance patient care and outcomes globally."},{"title":"Artificial Intelligence - JAMA Network","url":"https://jamanetwork.com/collections/44024/artificial-intelligence","summary":"Explore the latest in AI in medicine, including studies of how chatbots, large language models (LLMs), natural language processing, and machine learning are transforming medicine and health care."},{"title":"Ai医疗技术:2024年及以后的发展趋势-家医大健康","url":"https://www.familydoctor.cn/news/ai-yiliao-jishu-yihou-fazhanqushi-192483.html","summary":"本文深入探讨了新一代健康AI技术在2024年的发展前景,包括先进诊断工具和个性化治疗计划等创新应用。 文章指出,通过机器学习和深度学习技术的突破,AI将在疾病早期检测、患者数据实时分析和医疗资源优化分配方面发挥关键作用。"},{"title":"AI in Healthcare | Artificial intelligence in healthcare news","url":"https://aiin.healthcare/","summary":"AI in Healthcare is the leading source of information on the latest developments in the use of artificial intelligence in healthcare. 
We provide coverage of AI-powered medical devices, software, and algorithms, as well as the ethical and regulatory challenges surrounding the use of AI in healthcare."},{"title":"19+ AI in Healthcare Statistics for 2024: Insights & Projections","url":"https://www.allaboutai.com/resources/ai-statistics/healthcare/","summary":"Discover 19+ AI in healthcare statistics for 2024, covering public perception, market trends, and revenue projections with expert insights."}]}
+2025/09/08 11:48:08
+=== Agent:Executor Output ===
+Here are some of the most recent and credible sources identified for AI developments in healthcare in 2024:
+
+Scientific Journals:
+- "Artificial Intelligence in Healthcare: 2024 Year in Review" (ResearchGate)
+- "AI in Healthcare - Nature" (nature.com collection)
+- "Evolution of artificial intelligence in healthcare: a 30-year study" (frontiersin.org)
+- "The Impact of Artificial Intelligence on Healthcare: A Comprehensive Review" (Wiley online library)
+- "Artificial intelligence in healthcare (Review)" (PubMed)
+- "Artificial Intelligence - JAMA Network" (jamanetwork.com collection)
+
+Industry Reports:
+- "Generative AI in healthcare: Current trends and future outlook" (McKinsey report, Q4 2024)
+- "Medscape and HIMSS Release 2024 Report on AI Adoption in Healthcare"
+- "AI in Healthcare Market Size, Share | Growth Report [2025-2032]" (Fortune Business Insights)
+- "Artificial Intelligence in Healthcare Market Size to Hit USD 613.81 Bn" (Precedence Research)
+
+News Articles:
+- "AI in healthcare: New research shows promise and limitations of GPT-4" (ScienceDaily, Oct 2024)
+- "Deep Dive: AI 2024" (pharmaphorum.com)
+- "Artificial Intelligence - Healthcare IT News"
+- "7 ways AI is transforming healthcare" (World Economic Forum, 2024)
+- "2024 Medical Breakthroughs Revolutionizing Healthcare" (medicalnewscorner.com)
+
+Expert Analyses:
+- "Trustworthy AI in Healthcare Insights from IQVIA 2024 Report"
+- "Generative AI in Healthcare: 2024's Breakthroughs and What's Next" (SignifyResearch)
+- "AI in Healthcare: An Expert Analysis on Driving Transformational Change"
+
+These sources cover a broad spectrum including peer-reviewed journals, authoritative market research reports, reputable news publications, and expert thought leadership on the latest AI innovations, applications, and trends in healthcare for 2024. Shall I proceed to extract and compile key technologies emerging or advancing in AI for healthcare in 2024 from these sources?
+2025/09/08 11:48:15
+=== Agent:Replanner Output ===
+{"steps":["Extract and compile the key technologies emerging or advancing in AI for healthcare in 2024, focusing on machine learning models, diagnostic tools, robotic surgery, personalized medicine, and data management solutions.","Analyze the main applications of AI in healthcare during 2024, concentrating on diagnostics, patient care, drug discovery, medical imaging, and healthcare administration.","Investigate current industry trends related to AI in healthcare for 2024, including adoption rates, regulatory changes, ethical considerations, funding landscape, and market forecasts.","Synthesize the gathered information into a comprehensive summary covering the latest developments in AI for healthcare in 2024, highlighting key technologies, applications, and industry trends with examples and implications."]}
+2025/09/08 11:48:20
+=== Agent:Executor Output ===
+{"message":"Found 10 results successfully.","results":[{"title":"Five Machine Learning Innovations Shaping Healthcare in 2024","url":"https://healthmanagement.org/c/artificial-intelligence/News/five-machine-learning-innovations-shaping-healthcare-in-2024","summary":"Discover 5 AI & ML trends transforming UK healthcare, from explainable AI to edge AI, enhancing patient care and operational efficiency."},{"title":"How AI is improving diagnostics and health outcomes","url":"https://www.weforum.org/stories/2024/09/ai-diagnostics-health-outcomes/","summary":"Effective and ethical AI solutions in diagnostics require collaboration. Artificial intelligence (AI) is transforming healthcare by improving diagnostic accuracy, enabling earlier disease detection and enhancing patient outcomes."},{"title":"The Impact of Artificial Intelligence on Healthcare: A Comprehensive ...","url":"https://onlinelibrary.wiley.com/doi/full/10.1002/hsr2.70312","summary":"The study aims to describe AI in healthcare, including important technologies like robotics, machine learning (ML), deep learning (DL), and natural language processing (NLP), and to investigate how these technologies are used in patient interaction, predictive analytics, and remote monitoring."},{"title":"Unveiling the potential of artificial intelligence in revolutionizing ...","url":"https://eurjmedres.biomedcentral.com/articles/10.1186/s40001-025-02680-7","summary":"The rapid advancement of Machine Learning (ML) and Deep Learning (DL) technologies has revolutionized healthcare, particularly in the domains of disease prediction and diagnosis."},{"title":"Trends in AI for Disease and Diagnostic Prediction: A Healthcare ...","url":"https://link.springer.com/chapter/10.1007/978-3-031-84404-1_5","summary":"This chapter explores the transformative impact of artificial intelligence (AI) on the healthcare system, particularly in enhancing the accuracy, efficiency, and speed of disease diagnostics. A key advantage of AI integration in healthcare lies in its capacity to..."},{"title":"Unleashing the potential of AI in modern healthcare: Machine learning ...","url":"https://www.researchgate.net/publication/385135063_Unleashing_the_potential_of_AI_in_modern_healthcare_Machine_learning_algorithms_and_intelligent_medical_robots","summary":"Overall, AI, through machine learning algorithms and intelligent medical robots, is revolutionizing healthcare by offering promising improvements in diagnostics, surgical precision,..."},{"title":"The impact of artificial intelligence on remote healthcare: Enhancing ...","url":"https://www.sciencedirect.com/science/article/pii/S2949866X24001230","summary":"Key applications of AI, such as AI-enabled diagnostic systems, predictive analytics, and teleconsultation platforms, are reviewed for their strengths in overcoming the limitations of the traditional models of remote healthcare."},{"title":"15 AI Tools Transforming Healthcare in 2025 | The Future of Medicine","url":"https://healthcarereaders.com/insights/top-ai-tools-in-healthcare","summary":"From predictive analytics to generative AI (GenAI), these technologies are improving patient outcomes while reducing costs. 
AI is not new to healthcare—predictive analytics and machine learning have been used for years in diagnostics, treatment planning, and patient monitoring."},{"title":"How AI Is Transforming Healthcare: Diagnosis, Treatment, and Beyond","url":"https://www.sciencenewstoday.org/how-ai-is-transforming-healthcare-diagnosis-treatment-and-beyond","summary":"In this sweeping journey into the AI-healthcare nexus, we will explore how artificial intelligence is reshaping diagnosis, revolutionizing treatment, enhancing patient care, and opening doors to possibilities once thought impossible."},{"title":"Top 10 AI in Healthcare Applications 2025 | Diagnosis to Patient Care","url":"https://www.guvi.in/blog/ai-in-healthcare-applications/","summary":"What is Artificial intelligence in Healthcare? AI in healthcare refers to the use of advanced algorithms and machine learning models that analyze complex medical data and support clinical decisions. It helps doctors identify patterns in imaging results, predict disease risks through patient records, and recommend treatment options tailored to individual needs. Top 10 Applications of AI in ..."}]}
+2025/09/08 11:48:28
+=== Agent:Executor Output ===
+Key technologies emerging or advancing in AI for healthcare in 2024, based on recent findings, include:
+
+1. Machine Learning Models:
+- Advanced machine learning (ML) and deep learning (DL) technologies are revolutionizing disease prediction, diagnostics accuracy, and patient outcome improvements.
+- Innovations include explainable AI models that enhance transparency and trust in clinical decision-making.
+- Edge AI is gaining traction, enabling real-time data processing at healthcare facilities for faster diagnostics and response.
+
+2. Diagnostic Tools:
+- AI-enabled diagnostic systems are enhancing speed, accuracy, and early detection of diseases.
+- Integration of AI in medical imaging allows for better identification of abnormalities and tailored treatment plans.
+- Use of natural language processing (NLP) to analyze patient records and literature supports predictive analytics and diagnostics.
+
+3. Robotic Surgery:
+- Intelligent medical robots equipped with AI algorithms are improving surgical precision and reducing invasiveness.
+- AI facilitates real-time guidance and adaptive control during surgeries, increasing safety and effectiveness.
+
+4. Personalized Medicine:
+- AI models analyze individual patient data to recommend customized treatment plans.
+- Predictive analytics support identification of patient-specific risk factors and therapeutic responses.
+- AI-driven genomics and biomarker analysis accelerate personalized drug development.
+
+5. Data Management Solutions:
+- AI-powered data management platforms enable integration and analysis of large-scale heterogeneous healthcare data.
+- Predictive analytics and remote monitoring systems optimize patient care and hospital operations.
+- Secure and compliant AI solutions address privacy and ethical concerns in managing healthcare information.
+
+These technologies collectively contribute to enhancing diagnostics, treatment precision, patient care, and operational efficiency in healthcare settings in 2024. Would you like me to proceed with analyzing the main applications of AI in healthcare during 2024 next?
+2025/09/08 11:48:33
+=== Agent:Replanner Output ===
+{"steps":["Analyze the main applications of AI in healthcare during 2024, concentrating on diagnostics, patient care, drug discovery, medical imaging, and healthcare administration.","Investigate current industry trends related to AI in healthcare for 2024, including adoption rates, regulatory changes, ethical considerations, funding landscape, and market forecasts.","Synthesize the gathered information into a comprehensive summary covering the latest developments in AI for healthcare in 2024, highlighting key technologies, applications, and industry trends with examples and implications."]}
+2025/09/08 11:48:39
+=== Agent:Executor Output ===
+{"message":"Found 10 results successfully.","results":[{"title":"How AI is improving diagnostics and health outcomes","url":"https://www.weforum.org/stories/2024/09/ai-diagnostics-health-outcomes/","summary":"By leveraging the power of AI for diagnostics, we can improve health outcomes and contribute to a future where healthcare is more accessible and effective for everyone, particularly in the communities that need it the most."},{"title":"14 Top Use Cases for AI in Healthcare in 2024","url":"https://www.cake.ai/blog/top-ai-healthcare-use-cases","summary":"We will explore the 14 top use cases for AI in healthcare, demonstrating how these technologies are improving patient outcomes and streamlining operations from the front desk to the operating room."},{"title":"Artificial Intelligence (AI) Applications in Drug Discovery and Drug ...","url":"https://pubmed.ncbi.nlm.nih.gov/39458657/","summary":"In this review article, we will present a comprehensive overview of AI's applications in the pharmaceutical industry, covering areas such as drug discovery, target optimization, personalized medicine, drug safety, and more."},{"title":"Top 10 AI in Healthcare Applications 2025 | Diagnosis to Patient Care","url":"https://www.guvi.in/blog/ai-in-healthcare-applications/","summary":"Unravel the top 10 AI in healthcare applications transforming 2025, from diagnosis accuracy to patient care, drug discovery, monitoring, and cost reduction."},{"title":"AI in Healthcare: Enhancing Patient Care and Diagnosis","url":"https://www.park.edu/blog/ai-in-healthcare-enhancing-patient-care-and-diagnosis/","summary":"Below, we delve into the various applications of AI in healthcare and examine how it enhances patient care and diagnosis — along with the challenges and opportunities that lie ahead."},{"title":"Generative Artificial Intelligence in Healthcare: Applications ...","url":"https://www.mdpi.com/2673-7426/5/3/37","summary":"These generative AI models have shown widespread applications in clinical practice and research. Such applications range from medical documentation and diagnostics to patient communication and drug discovery."},{"title":"The Impact of Artificial Intelligence on Healthcare: A Comprehensive ...","url":"https://onlinelibrary.wiley.com/doi/full/10.1002/hsr2.70312","summary":"Core applications like remote monitoring and predictive analytics improve operational effectiveness and patient involvement. However, there are major obstacles to the mainstream implementation of AI in healthcare, including issues with data security and budget constraints."},{"title":"Reviewing Artificial Intelligence Applications in Healthcare ...","url":"https://worldscientificnews.com/reviewing-artificial-intelligence-applications-in-healthcare-diagnostics-benefits-challenges-and-future-directions/","summary":"Artificial intelligence (AI) transforms healthcare diagnostics by enabling faster, more accurate, cost-effective disease detection and treatment planning. 
This paper reviews AI's applications, benefits, challenges, and future directions in diagnostics."},{"title":"Artificial Intelligence in Healthcare: Transforming Diagnostics, Drug ...","url":"https://medtechnews.uk/research-reports/artificial-intelligence-in-healthcare-transforming-diagnostics-drug-discovery-and-ethical-considerations/","summary":"This report embarks on a detailed exploration of the diverse and expanding applications of AI in healthcare, with particular emphasis on its transformative contributions to disease diagnostics and the intricate, multi-stage process of drug discovery."},{"title":"AI in Healthcare Applications - admedica.com","url":"https://admedica.com/2024/12/23/real-world-applications-of-ai-in-healthcare/","summary":"From streamlining diagnostics to enhancing patient care, AI is making medical services faster, more accurate, and more personalized. Here's a closer look at how AI is being applied in healthcare, with real-world examples that highlight its impact."}]}
+2025/09/08 11:48:46
+=== Agent:Executor Output ===
+In 2024, AI applications in healthcare are notably advancing across several main areas:
+
+1. Diagnostics:
+- AI-powered diagnostic tools are enabling faster, more accurate, and cost-effective disease detection and treatment planning.
+- Techniques such as machine learning and generative AI improve early detection of conditions and enhance health outcomes.
+- AI supports analysis of medical imaging and patient records for precise diagnostics.
+- Real-world implementations demonstrate improved accessibility and effectiveness, particularly benefiting underserved communities.
+
+2. Patient Care:
+- AI enhances personalized patient care through predictive analytics, remote monitoring, and data integration.
+- Generative AI assists in medical documentation and patient communication, improving care coordination.
+- AI-driven tools help in chronic disease management, patient monitoring, and decision support.
+- These technologies facilitate more tailored, responsive healthcare experiences.
+
+3. Drug Discovery:
+- AI expedites drug discovery processes including target optimization, safety assessment, and personalized medicine development.
+- Generative AI and machine learning models aid in identifying new drug candidates and predicting therapeutic responses.
+- AI contributes to reducing costs and timelines in pharmaceutical research.
+
+4. Medical Imaging:
+- AI integration in medical imaging allows better identification of abnormalities and supports customized treatment planning.
+- Enhanced imaging diagnostics rely on deep learning models interpreting complex data with a higher degree of precision.
+- These advances enable earlier disease detection and more effective interventions.
+
+5. Healthcare Administration:
+- AI applications streamline healthcare operations such as scheduling, resource allocation, and billing.
+- Predictive analytics optimize hospital workflows and patient flow management.
+- AI solutions improve operational efficiency and reduce administrative burdens on healthcare providers.
+
+Overall, these AI applications in 2024 are driving transformative changes in healthcare delivery by improving diagnostic accuracy, enhancing patient care personalization, accelerating drug development, refining medical imaging analysis, and optimizing administrative processes. Would you like me to proceed with investigating the current industry trends related to AI in healthcare for 2024 next?
+2025/09/08 11:48:49
+=== Agent:Replanner Output ===
+{"steps":["Investigate current industry trends related to AI in healthcare for 2024, including adoption rates, regulatory changes, ethical considerations, funding landscape, and market forecasts.","Synthesize the gathered information into a comprehensive summary covering the latest developments in AI for healthcare in 2024, highlighting key technologies, applications, and industry trends with examples and implications."]}
+2025/09/08 11:48:55
+=== Agent:Executor Output ===
+{"message":"Found 10 results successfully.","results":[{"title":"Ethical and legal considerations in healthcare AI: innovation and ...","url":"https://royalsocietypublishing.org/doi/10.1098/rsos.241873","summary":"Artificial intelligence (AI) is transforming healthcare by enhancing diagnostics, personalizing medicine and improving surgical precision. However, its integration into healthcare systems raises significant ethical and legal challenges."},{"title":"Ethical Considerations in AI-Enabled Healthcare","url":"https://link.springer.com/chapter/10.1007/978-3-031-80813-5_18","summary":"Integrating Artificial Intelligence (AI) in healthcare has revolutionized patient care and operational workflows, yet it introduces significant ethical considerations. This chapter explores the impact of AI on key ethical principles—beneficence, nonmaleficence, autonomy, and justice."},{"title":"Ethical implications of AI-driven clinical decision support systems on ...","url":"https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-024-01151-8","summary":"Artificial intelligence-driven Clinical Decision Support Systems (AI-CDSS) are increasingly being integrated into healthcare for various purposes, including resource allocation. While these systems promise improved efficiency and decision-making, they also raise significant ethical concerns."},{"title":"Ethical debates amidst flawed healthcare artificial intelligence ...","url":"https://www.nature.com/articles/s41746-024-01242-1","summary":"Healthcare AI faces an ethical dilemma between selective and equitable deployment, exacerbated by flawed performance metrics. These metrics inadequately capture real-world complexities and..."},{"title":"Ethical Implications in AI-Based Health Care Decision Making: A ...","url":"https://liebertpub.com/doi/abs/10.1089/aipo.2024.0007","summary":"This critical analysis explores the ethical implications of AI-based health care decision making, examining the existing literature, methodological approaches, and ethical frameworks."},{"title":"AI ethics in medical research: the 2024 Declaration of Helsinki","url":"https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(24)02376-6/fulltext","summary":"The recent update to the World Medical Association's Declaration of Helsinki,1 adopted at the 75th World Medical Association General Assembly in October, 2024, signals yet another milestone in the ongoing effort to safeguard ethical standards in medical research involving human participants."},{"title":"Navigating ethical considerations in the use of artificial intelligence ...","url":"https://pubmed.ncbi.nlm.nih.gov/39545614/","summary":"Results: The review highlighted critical ethical challenges, such as data privacy and security, accountability for AI-driven decisions, transparency in AI decision-making, and maintaining the human touch in care."},{"title":"The 5 Biggest Ethical Issues with AI in Healthcare","url":"https://www.keragon.com/blog/ethical-issues-with-ai-in-healthcare","summary":"What are the ethical issues with AI in healthcare? 
Dive into complex debates and considerations surrounding the ethical use of healthcare AI."},{"title":"Frontiers | Ethical-legal implications of AI-powered healthcare in ...","url":"https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1619463/full","summary":"It argues that by prioritizing ethical considerations in the development and deployment of AI, medical professionals can enhance health outcomes and cultivate patient trust, thereby bridging the gap between technological advancements and nuanced healthcare realities (Collins et al., 2024)."},{"title":"(PDF) Ethical framework for artificial intelligence in healthcare ...","url":"https://www.researchgate.net/publication/381669447_Ethical_framework_for_artificial_intelligence_in_healthcare_research_A_path_to_integrity","summary":"This article sets out to introduce a detailed framework designed to steer governance and offer a systematic method for assuring that AI applications in healthcare research are developed and..."}]}
+2025/09/08 11:49:01
+=== Agent:Executor Output ===
+{"message":"Found 10 results successfully.","results":[{"title":"Medscape and HIMSS Release 2024 Report on AI Adoption in Healthcare","url":"https://www.prnewswire.com/news-releases/medscape-and-himss-release-2024-report-on-ai-adoption-in-healthcare-302324936.html","summary":"The full \"AI Adoption in Healthcare Report 2024\" is now available on both Medscape and HIMSS websites offering detailed analysis and insights into the current state of AI adoption..."},{"title":"AI in Healthcare Statistics 2025: Overview of Trends","url":"https://docus.ai/blog/ai-healthcare-statistics","summary":"As we step into 2025, let's see how AI in healthcare statistics from 2024 are shaping trends in patient care, diagnostics, and innovation."},{"title":"AI in healthcare - statistics & facts | Statista","url":"https://www.statista.com/topics/10011/ai-in-healthcare/","summary":"Distribution of confidence in using a new technology and AI in healthcare among health professionals in Denmark, France, Germany, and the United Kingdom as of 2024"},{"title":"AI In Healthcare Stats 2025: Adoption, Accuracy & Market","url":"https://www.demandsage.com/ai-in-healthcare-stats/","summary":"Get insights into AI in healthcare stats, including adoption rate, performance accuracy, and the rapidly growing market valuation."},{"title":"HIMSS and Medscape Unveil Groundbreaking Report on AI Adoption at ...","url":"https://gkc.himss.org/news/himss-and-medscape-unveil-groundbreaking-report-ai-adoption-health-systems","summary":"The findings, highlighted in the Medscape & HIMSS AI Adoption by Health Systems Report 2024, reveal that 86% of respondents already leverage AI in their medical organizations, with 60% recognizing its ability to uncover health patterns and diagnoses beyond human detection."},{"title":"Adoption of artificial intelligence in healthcare: survey of health ...","url":"https://academic.oup.com/jamia/article/32/7/1093/8125015","summary":"To evaluate the current state of AI adoption in US healthcare systems, assess successes and barriers to implementation during the early generative AI era. This cross-sectional survey was conducted in Fall 2024, and included 67 health systems members of the Scottsdale Institute, a collaborative of US non-profit healthcare organizations."},{"title":"19+ AI in Healthcare Statistics for 2024: Insights & Projections","url":"https://www.allaboutai.com/resources/ai-statistics/healthcare/","summary":"Discover 19+ AI in healthcare statistics for 2024, covering public perception, market trends, and revenue projections with expert insights."},{"title":"AI in Healthcare Statistics By Market Share And Technology","url":"https://www.sci-tech-today.com/stats/ai-in-healthcare-statistics/","summary":"In the second quarter of 2024, the US held a dominant position with a 58% revenue share, reflecting its strong focus on AI development and deployment. Similarly, the rest of the world followed..."},{"title":"AI in healthcare statistics: 62 findings from 18 research reports - Keragon","url":"https://www.keragon.com/blog/ai-in-healthcare-statistics","summary":"⚪️ Consumer adoption of gen AI for health reasons has remained flat, with just 37% of consumers using it in 2024 versus 40% in 2023. 
Source: Deloitte Center for Health Solutions's' 2024 Health Care Consumer Survey"},{"title":"New AMA report highlights physician optimism about AI in health care","url":"https://www.medicaleconomics.com/view/new-ama-report-highlights-physician-optimism-about-ai-in-health-care","summary":"The adoption of artificial intelligence (AI) in health care nearly doubled in 2024 compared to 2023 — a reflection of growing enthusiasm and decreasing apprehension toward the technology, despite some lingering concerns, according to a new report from the American Medical Association (AMA)."}]}
+2025/09/08 11:49:04
+=== Agent:Executor Output ===
+{"message":"Found 10 results successfully.","results":[{"title":"AI in Healthcare: Funding Resurgence for Biotech Startups in 2024","url":"https://techainews.digital/2024/12/12/ai-in-healthcare-funding-resurgence-for-biotech-startups-in-2024/","summary":"In summary, the funding landscape for AI-driven biotech and healthcare startups in 2024 is showing a marked revival after a challenging previous year. With an influx of capital reflecting strong investor interest, companies harnessing AI to revolutionize drug discovery and enhance healthcare processes are at the forefront of this resurgence."},{"title":"AI Healthcare Startups: Investment & Funding Trends","url":"https://www.delveinsight.com/blog/ai-healthcare-startups-funding-trends","summary":"Discover how AI healthcare startups are attracting billions in funding and reshaping the future of healthcare and pharma."},{"title":"How healthcare AI led a 'paradigm shift' in a $23B year for startups","url":"https://carta.com/data/industry-spotlight-healthcare-2024/","summary":"The rate of all new healthcare investments in which valuations were lower than that of the previous round declined slightly over the course of 2024, settling at 19% in the final quarter of the year. Still, down rounds remain a persistent aspect of the healthcare fundraising landscape."},{"title":"AI-Healthcare Startups Surge with Record Funding: A Look at 2025's ...","url":"https://opentools.ai/news/ai-healthcare-startups-surge-with-record-funding-a-look-at-2025s-promising-landscape","summary":"Notably, the landscape of AI-healthcare startup funding has demonstrated robust growth, amounting to $7.5 billion worldwide in 2024, with an additional $1.68 billion earmarked for early 2025."},{"title":"The State of the Funding Market for AI Companies: A 2024 - 2025 Outlook","url":"https://www.mintz.com/insights-center/viewpoints/2166/2025-03-10-state-funding-market-ai-companies-2024-2025-outlook","summary":"In 2024, these AI-driven companies captured a substantial share of venture capital funding. Overall, venture capital investment in healthcare rose to $23 billion, up from $20 billion in 2023, with nearly 30% of the 2024 funding directed toward AI-focused startups."},{"title":"2024 year-end market overview: Davids and Goliaths - Rock Health","url":"https://rockhealth.com/insights/2024-year-end-market-overview-davids-and-goliaths/","summary":"These dual trends—early-stage startup activity amidst big moves by large healthcare players—have created a David and Goliath dynamic in the healthcare innovation landscape. We see a future where David and Goliath can coexist, and even thrive together, to drive impactful change in healthcare."},{"title":"AI and TechBio Funding Lead the Charge: 2024 Digital Health Funding ...","url":"https://www.galengrowth.com/ai-and-techbio-funding-lead-the-charge-2024-digital-health-funding-resurgence/","summary":"Global digital health funding rebounded to $25.1 billion in 2024, marking a 5.5% year-on-year increase. Europe saw the highest regional growth, with funding surging by 27% from 2023, while the US retained its dominance, contributing $17.2 billion in funding."},{"title":"Healthcare venture capital investment amplified by AI in 2024: report","url":"https://www.healthcaredive.com/news/healthcare-venture-captial-funding-ai-boost-2024-silicon-valley-bank/736870/","summary":"Venture capital investment in healthcare increased in 2024, spurred by enthusiasm for artificial intelligence startups, according to a report by Silicon Valley Bank. 
Funding for U.S...."},{"title":"These 10 healthcare AI startups have pulled in the most cash","url":"https://pitchbook.com/news/articles/healthcare-ai-startups-vc-funding","summary":"As of Q3 2024, funding had surpassed 2023's total with $10.5 billion generated across 511 deals. These 10 healthcare AI startups have raised the most capital from VC investors, leading the space to new heights while capitalizing on the AI frenzy."},{"title":"AI Finance for Healthcare 2024 - byteplus.com","url":"https://www.byteplus.com/en/topic/394250","summary":"Explore AI finance trends in healthcare for 2024, including investment insights, market growth, and emerging technologies."}]}
+2025/09/08 11:49:05
+=== Agent:Executor Output ===
+{"message":"Found 10 results successfully.","results":[{"title":"Artificial Intelligence in Healthcare Market Size to Hit USD 613.81 Bn ...","url":"https://www.precedenceresearch.com/artificial-intelligence-in-healthcare-market","summary":"The global artificial intelligence (AI) in healthcare market size reached USD 26.69 billion in 2024 and is projected to hit around USD 613.81 billion by 2034, at a CAGR of 36.83%."},{"title":"AI in Healthcare Market Size, Share | Growth Report [2025-2032]","url":"https://www.fortunebusinessinsights.com/industry-reports/artificial-intelligence-in-healthcare-market-100534","summary":"The global AI in healthcare market size was valued at $29.01 billion in 2024 & is projected to grow from $39.25 billion in 2025 to $504.17 billion by 2032"},{"title":"AI in Healthcare Statistics 2025: Overview of Trends","url":"https://docus.ai/blog/ai-healthcare-statistics","summary":"As we step into 2025, let's see how AI in healthcare statistics from 2024 are shaping trends in patient care, diagnostics, and innovation."},{"title":"Artificial Intelligence (AI) in Healthcare Market Size to","url":"https://www.globenewswire.com/news-release/2025/04/02/3054390/0/en/Artificial-Intelligence-AI-in-Healthcare-Market-Size-to-Hit-USD-613-81-Bn-by-2034.html","summary":"Ottawa, April 02, 2025 (GLOBE NEWSWIRE) -- According to Precedence Research, the artificial intelligence (AI) in healthcare market size was valued at USD 26.69 billion in 2024, calculated..."},{"title":"19+ AI in Healthcare Statistics for 2024: Insights & Projections","url":"https://www.allaboutai.com/resources/ai-statistics/healthcare/","summary":"Discover 19+ AI in healthcare statistics for 2024, covering public perception, market trends, and revenue projections with expert insights."},{"title":"AI in Healthcare Market Leads 37.66% Healthy CAGR by 2034","url":"https://www.towardshealthcare.com/insights/ai-in-healthcare-market","summary":"According to market projections, the AI in healthcare sector is expected to grow from USD 27.59 billion in 2024 to USD 674.19 billion by 2034, reflecting a CAGR of 37.66%."},{"title":"AI In Healthcare Market Size, Share | Industry Report, 2033","url":"https://www.globalmarketstatistics.com/market-reports/artificial-intelligence-in-healthcare-market-12394","summary":"Market Size and Growth: The Artificial Intelligence in Healthcare Market Market size was USD 5011.24 Million in 2024, is projected to grow to USD 5762.41 Million by 2025 and exceed USD 8966.05 Million by 2033, with a CAGR of 21.4% from 2025-2033."},{"title":"AI in Healthcare Market Outlook 2024-2033: Trends ... 
- LinkedIn","url":"https://www.linkedin.com/pulse/ai-healthcare-market-outlook-20242033-trends-nil8c","summary":"AI in Healthcare Market size was valued at USD 11.6 Billion in 2024 and is forecasted to grow at a CAGR of 24% from 2026 to 2033, reaching USD 64.5 Billion by 2033."},{"title":"AI In Healthcare Market Size to Reach $187.7 Billion by 2030 at CAGR 38 ...","url":"https://www.prnewswire.com/news-releases/ai-in-healthcare-market-size-to-reach-187-7-billion-by-2030-at-cagr-38-5---grand-view-research-inc-302439558.html","summary":"SAN FRANCISCO, April 28, 2025 /PRNewswire/ -- The global AI in healthcare market size is expected to reach USD 187.7 billion by 2030, registering a CAGR of 38.5% from 2024 to 2030,..."},{"title":"AI in Healthcare Market to Soar to USD 629 B by 2032 with 51.87% CAGR","url":"https://www.medboundtimes.com/medbound-blog/ai-healthcare-market-set-to-soar-to-usd629b","summary":"The global artificial intelligence in healthcare market was valued at USD 22.23 billion in 2024 and is projected to skyrocket to USD 629.09 billion by 2032, exhibiting a compound annual growth rate (CAGR) of 51.87% from 2025 to 2032, according to Data Bridge Market Research."}]}
+```
+
+## Summary
+
+The Plan-Execute Agent uses a closed-loop "plan-execute-reflect" workflow to break complex tasks into executable steps, combining tool calls with dynamic adjustment to improve both the reliability and the efficiency of task completion. Its core strengths are:
+
+- **Structured task decomposition**: lowers the cognitive load of complex problems
+- **Tool integration**: connects seamlessly to external tools (search, computation, databases, etc.)
+- **Dynamic adaptability**: adjusts the strategy in real time based on execution feedback, coping with uncertainty
+
+With the `PlanExecuteAgent` provided by Eino ADK, developers can quickly build agent systems capable of handling complex tasks, suitable for scenarios such as research and analysis, office automation, and intelligent customer service.
diff --git a/content/zh/docs/eino/core_modules/eino_adk/agent_implementation/supervisor.md b/content/zh/docs/eino/core_modules/eino_adk/agent_implementation/supervisor.md
new file mode 100644
index 00000000000..b0009eba555
--- /dev/null
+++ b/content/zh/docs/eino/core_modules/eino_adk/agent_implementation/supervisor.md
@@ -0,0 +1,501 @@
+---
+Description: ""
+date: "2025-09-30"
+lastmod: ""
+tags: []
+title: 'Eino ADK MultiAgent: Supervisor Agent'
+weight: 4
+---
+
+## Supervisor Agent Overview
+
+### Import Path
+
+```go
+import "github.com/cloudwego/eino/adk/prebuilt/supervisor"
+```
+
+### What is a Supervisor Agent?
+
+The Supervisor Agent is a centralized multi-agent collaboration pattern consisting of a single supervisor (the Supervisor Agent) and multiple sub-agents (SubAgents). The Supervisor is responsible for assigning tasks, monitoring the sub-agents as they execute, and, once a sub-agent has finished, aggregating the results and deciding the next step; each sub-agent focuses on carrying out its own task and, when it completes, automatically hands control back to the Supervisor via WithDeterministicTransferTo.
+
+
+
+This pattern suits scenarios where multiple specialized agents must be dynamically coordinated to complete a complex task, for example:
+
+- Research project management (the Supervisor assigns research, experiment, and report-writing tasks to different sub-agents).
+- Customer service workflows (the Supervisor routes a request to technical support, after-sales, or sales sub-agents based on the type of question).
+
+### Supervisor Agent Structure
+
+The core structure of the Supervisor pattern is as follows (see the sketch after this list):
+
+- **Supervisor Agent**: the coordination hub, holding the task-assignment logic (rule-based or LLM-driven); sub-agents are brought under its management via `SetSubAgents`.
+- **SubAgents**: each sub-agent is wrapped with WithDeterministicTransferTo, with `ToAgentNames` preset to the Supervisor's name, so that control is automatically transferred back to the Supervisor once the sub-agent finishes.
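+
+As a rough sketch of what this wrapping amounts to, the snippet below decorates a sub-agent so that it always transfers control back to a supervisor once it finishes. It assumes an adk helper named `AgentWithDeterministicTransferTo` taking a `DeterministicTransferConfig` with the `ToAgentNames` field referenced above; check the adk package for the exact name and signature. In practice you rarely call this yourself, because `supervisor.New` (Step 3 below) performs the wrapping for every sub-agent automatically.
+
+```go
+import (
+	"context"
+
+	"github.com/cloudwego/eino/adk"
+)
+
+// wrapSubAgent decorates a sub-agent so that, after it finishes (and was not
+// interrupted), control is deterministically transferred back to the supervisor.
+// NOTE: sketch only — the helper name and signature are assumed here;
+// supervisor.New applies the equivalent decoration internally.
+func wrapSubAgent(ctx context.Context, sub adk.Agent, supervisorName string) adk.Agent {
+	return adk.AgentWithDeterministicTransferTo(ctx, &adk.DeterministicTransferConfig{
+		Agent:        sub,                      // the sub-agent being wrapped
+		ToAgentNames: []string{supervisorName}, // always hand control back to the supervisor
+	})
+}
+```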
+
+### Supervisor Agent Characteristics
+
+1. **Deterministic hand-back**: once a sub-agent finishes (without being interrupted), WithDeterministicTransferTo automatically emits a Transfer event that returns control to the Supervisor, so the collaboration flow does not stall.
+2. **Centralized control**: the Supervisor manages all sub-agents and can adjust task assignment dynamically based on their results (e.g., delegate to another sub-agent or produce the final result directly).
+3. **Loosely coupled extension**: sub-agents can be developed, tested, and replaced independently; as long as they implement the Agent interface and are bound to the Supervisor, they can join the collaboration flow.
+4. **Interrupt and resume support**: if a sub-agent or the Supervisor implements the `ResumableAgent` interface, the collaboration flow can be resumed after an interruption while preserving task context.
+
+### Supervisor Agent Execution Flow
+
+A typical collaboration flow in the Supervisor pattern looks like this (a small event-tracing sketch follows the list):
+
+1. **Task start**: the Runner triggers the Supervisor with the initial task (e.g., "produce a report on the history of LLM development").
+2. **Task assignment**: based on the task, the Supervisor hands it to a designated sub-agent (e.g., the "research agent") via a Transfer event.
+3. **Sub-agent execution**: the sub-agent performs its specific task (e.g., researching key LLM milestones) and emits result events.
+4. **Automatic hand-back**: once the sub-agent finishes, WithDeterministicTransferTo emits a Transfer event that hands the task back to the Supervisor.
+5. **Result handling**: the Supervisor receives the sub-agent's result and decides the next step (e.g., delegate to the "writer agent" for further processing, or output the final result directly).
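+
+The handoffs in this flow can be observed directly from the event stream returned by `Run`. Below is a minimal tracing sketch; it only uses the iterator and event fields that also appear in the complete example later on this page (`AgentName`, `Output.MessageOutput`), and `traceHandoffs` is a hypothetical helper name:
+
+```go
+import "github.com/cloudwego/eino/adk"
+
+// traceHandoffs consumes the event stream and logs every time control moves
+// from one agent to another, e.g. Supervisor -> sub-agent -> Supervisor.
+func traceHandoffs(iter *adk.AsyncIterator[*adk.AgentEvent]) {
+	current := ""
+	for {
+		event, ok := iter.Next()
+		if !ok {
+			break // the run finished and the iterator was closed
+		}
+		if event.AgentName != current {
+			println("control transferred to: " + event.AgentName)
+			current = event.AgentName
+		}
+		if event.Output != nil && event.Output.MessageOutput != nil {
+			msg, _ := event.Output.MessageOutput.GetMessage()
+			println(event.AgentName + ": " + msg.Content)
+		}
+	}
+}
+```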
+
+## Supervisor Agent Usage Example
+
+### Scenario
+
+Build a research-report generation system:
+
+- **Supervisor**: based on the research topic provided by the user, assigns tasks to the "research agent" and the "writer agent", and consolidates the final report.
+- **Research agent**: responsible for producing a research plan (e.g., the key stages of LLM development).
+- **Writer agent**: responsible for writing the full report based on the research plan.
+
+### Code Implementation
+
+#### Step 1: Implement the Sub-Agents
+
+First, create the two sub-agents responsible for research and writing respectively:
+
+```go
+// Research agent: generates a research plan
+func NewResearchAgent(model model.ToolCallingChatModel) adk.Agent {
+ agent, _ := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
+ Name: "ResearchAgent",
+ Description: "Generates a detailed research plan for a given topic.",
+ Instruction: `
+You are a research planner. Given a topic, output a step-by-step research plan with key stages and milestones.
+Output ONLY the plan, no extra text.`,
+ Model: model,
+ })
+ return agent
+}
+
+// Writer agent: writes a report based on the research plan
+func NewWriterAgent(model model.ToolCallingChatModel) adk.Agent {
+ agent, _ := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
+ Name: "WriterAgent",
+ Description: "Writes a report based on a research plan.",
+ Instruction: `
+You are an academic writer. Given a research plan, expand it into a structured report with details and analysis.
+Output ONLY the report, no extra text.`,
+ Model: model,
+ })
+ return agent
+}
+```
+
+#### Step 2: Implement the Supervisor Agent
+
+Create the Supervisor Agent and define its task-assignment logic (simplified here to a fixed rule: first delegate to the research agent, then to the writer agent):
+
+```go
+// Supervisor agent: coordinates the research and writing tasks
+func NewReportSupervisor(model model.ToolCallingChatModel) adk.Agent {
+ agent, _ := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
+ Name: "ReportSupervisor",
+ Description: "Coordinates research and writing to generate a report.",
+ Instruction: `
+You are a project supervisor. Your task is to coordinate two sub-agents:
+- ResearchAgent: generates a research plan.
+- WriterAgent: writes a report based on the plan.
+
+Workflow:
+1. When receiving a topic, first transfer the task to ResearchAgent.
+2. After ResearchAgent finishes, transfer the task to WriterAgent with the plan as input.
+3. After WriterAgent finishes, output the final report.`,
+ Model: model,
+ })
+ return agent
+}
+```
+
+#### Step 3: Combine the Supervisor with the Sub-Agents
+
+Use `supervisor.New` to combine the Supervisor and the sub-agents:
+
+```go
+import (
+ "context"
+
+ "github.com/cloudwego/eino-ext/components/model/openai"
+ "github.com/cloudwego/eino/adk"
+ "github.com/cloudwego/eino/adk/prebuilt/supervisor"
+ "github.com/cloudwego/eino/components/model"
+ "github.com/cloudwego/eino/schema"
+)
+
+func main() {
+ ctx := context.Background()
+
+ // 1. Create the LLM (e.g., GPT-4o)
+ model, _ := openai.NewChatModel(ctx, &openai.ChatModelConfig{
+ APIKey: "YOUR_API_KEY",
+ Model: "gpt-4o",
+ })
+
+ // 2. Create the sub-agents and the Supervisor
+ researchAgent := NewResearchAgent(model)
+ writerAgent := NewWriterAgent(model)
+ reportSupervisor := NewReportSupervisor(model)
+
+ // 3. Combine the Supervisor with the sub-agents
+ supervisorAgent, _ := supervisor.New(ctx, &supervisor.Config{
+ Supervisor: reportSupervisor,
+ SubAgents: []adk.Agent{researchAgent, writerAgent},
+ })
+
+ // 4. Run the Supervisor pattern
+ iter := supervisorAgent.Run(ctx, &adk.AgentInput{
+ Messages: []adk.Message{
+ schema.UserMessage("Write a report on the history of Large Language Models."),
+ },
+ EnableStreaming: true,
+ })
+
+ // 5. Consume the event stream (print the results)
+ for {
+ event, ok := iter.Next()
+ if !ok {
+ break
+ }
+ if event.Output != nil && event.Output.MessageOutput != nil {
+ msg, _ := event.Output.MessageOutput.GetMessage()
+ println("Agent[" + event.AgentName + "]:\n" + msg.Content + "\n===========")
+ }
+ }
+}
+```
+
+### Run Result
+
+```markdown
+Agent[ReportSupervisor]:
+
+===========
+Agent[ReportSupervisor]:
+successfully transferred to agent [ResearchAgent]
+===========
+Agent[ResearchAgent]:
+1. **Scope Definition & Background Research**
+ - Task: Define "Large Language Model" (LLM) for the report (e.g., size thresholds, key characteristics: transformer-based, large-scale pretraining, general-purpose).
+ - Task: Identify foundational NLP/AI concepts pre-LLMs (statistical models, early neural networks, word embeddings) to contextualize origins.
+ - Milestone: 3-day literature review of academic definitions, industry reports, and AI historiographies to finalize scope.
+
+2. **Chronological Periodization**
+ - Task: Divide LLM history into distinct eras (e.g., Pre-2017: Pre-transformer foundations; 2017-2020: Transformer revolution & early LLMs; 2020-Present: Scaling & mainstream adoption).
+ - Task: Map key events, models, and breakthroughs per era (e.g., 2017: "Attention Is All You Need"; 2018: GPT-1/BERT; 2020: GPT-3; 2022: ChatGPT; 2023: Llama 2).
+ - Milestone: 10-day timeline draft with annotated model releases, research papers, and technological shifts.
+
+3. **Key Technical Milestones**
+ - Task: Deep-dive into critical innovations (transformer architecture, pretraining-fine-tuning paradigm, scaling laws, in-context learning).
+ - Task: Extract details from seminal papers (authors, institutions, methodologies, performance benchmarks).
+ - Milestone: 1-week analysis of 5-7 foundational papers (e.g., Vaswani et al. 2017; Radford et al. 2018; Devlin et al. 2018) with technical summaries.
+
+4. **Stakeholder Mapping**
+ - Task: Identify key organizations (OpenAI, Google DeepMind, Meta AI, Microsoft Research) and academic labs (Stanford, Berkeley) driving LLM development.
+ - Task: Document institutional contributions (e.g., OpenAI’s GPT series, Google’s BERT/PaLM, Meta’s Llama) and research priorities (open vs. closed models).
+ - Milestone: 5-day stakeholder profile draft with org-specific timelines and model lineages.
+
+5. **Technical Evolution & Innovation Trajectory**
+ - Task: Analyze shifts in architecture (from RNNs/LSTMs to transformers), training paradigms (pretraining + fine-tuning → instruction tuning → RLHF), and compute scaling (parameters, data size, GPU usage over time).
+ - Task: Link technical changes to performance improvements (e.g., GPT-1 (124M params) vs. GPT-4 (100B+ params): task generalization, emergent abilities).
+ - Milestone: 1-week technical trajectory report with data visualizations (param scaling, benchmark scores over time).
+
+6. **Impact & Societal Context**
+ - Task: Research LLM impact on NLP tasks (translation, summarization, QA) and beyond (education, content creation, policy).
+ - Task: Document cultural/industry shifts (rise of prompt engineering, "AI-native" products, public perception post-ChatGPT).
+ - Milestone: 5-day impact analysis integrating case studies (e.g., GitHub Copilot, healthcare LLMs) and media/scholarly discourse.
+
+7. **Challenges & Critiques (Historical Perspective)**
+ - Task: Track historical limitations (pre-2020: data sparsity, task specificity; post-2020: bias, misinformation, energy use) and responses (e.g., 2019: BERT bias audits; 2023: EU AI Act).
+ - Task: Cite key critiques (e.g., "On the Dangers of Stochastic Parrots," 2021) and industry/academic reactions.
+ - Milestone: 5-day challenge timeline linking issues to their emergence and mitigation efforts.
+
+8. **Synthesis & Narrative Drafting**
+ - Task: Integrate chronological, technical, and societal data into a coherent narrative (origins → revolution → scaling → mainstream impact).
+ - Task: Outline report structure (Abstract, Introduction, Era-by-Era Analysis, Key Innovations, Stakeholders, Impact, Challenges, Conclusion).
+ - Milestone: 1-week first draft of full report (8,000–10,000 words).
+
+9. **Validation & Fact-Checking**
+ - Task: Verify model release dates, paper citations, parameter counts, and stakeholder claims via primary sources (original papers, official press releases, archived GitHub repos).
+ - Task: Cross-check with secondary sources (AI history books, expert interviews, peer-reviewed historiographies).
+ - Milestone: 3-day validation report flagging/correcting inaccuracies.
+
+10. **Finalization & Revision**
+ - Task: Edit for clarity, narrative flow, and consistency; refine visuals (timelines, param scaling charts).
+ - Task: Format references (APA/MLA) and appendices (model comparison table, key paper list).
+ - Milestone: 2-day final report submission.
+===========
+Agent[ResearchAgent]:
+
+===========
+Agent[ResearchAgent]:
+successfully transferred to agent [ReportSupervisor]
+===========
+Agent[ReportSupervisor]:
+
+===========
+Agent[ReportSupervisor]:
+successfully transferred to agent [WriterAgent]
+===========
+Agent[WriterAgent]:
+# The History of Large Language Models: From Foundations to Mainstream Revolution
+
+
+## Abstract
+Large Language Models (LLMs) represent one of the most transformative technological innovations of the 21st century, enabling machines to understand, generate, and manipulate human language with unprecedented fluency. This report traces the historical trajectory of LLMs, from their conceptual roots in early natural language processing (NLP) to their current status as mainstream tools. It examines key technical milestones—including the invention of the transformer architecture, the rise of pretraining-fine-tuning paradigms, and the scaling of model parameters—and contextualizes these within the contributions of academic labs and tech giants. The report also analyzes societal impacts, from revolutionizing NLP tasks to sparking debates over bias, misinformation, and AI regulation. By synthesizing chronological, technical, and cultural data, this history reveals how LLMs evolved from niche research experiments to agents of global change.
+
+
+## 1. Introduction: Defining Large Language Models
+A **Large Language Model (LLM)** is a type of machine learning model designed to process and generate human language by learning patterns from massive text datasets. Key characteristics include: (1) a transformer-based architecture, enabling parallel processing of text sequences; (2) large-scale pretraining on diverse corpora (e.g., books, websites, articles); (3) general-purpose functionality, allowing adaptation to tasks like translation, summarization, or dialogue without task-specific engineering; and (4) scale, typically defined by billions (or tens of billions) of parameters (adjustable weights that capture linguistic patterns).
+
+LLMs emerged from decades of NLP research, building on foundational concepts like statistical models (e.g., n-grams), early neural networks (e.g., recurrent neural networks [RNNs]), and word embeddings (e.g., Word2Vec, GloVe). By the 2010s, these predecessors had laid groundwork for "language understanding," but were limited by task specificity (e.g., a model trained for translation could not summarize text) and data sparsity. LLMs addressed these gaps by prioritizing scale, generality, and architectural innovation—ultimately redefining the boundaries of machine language capability.
+
+
+## 2. Era-by-Era Analysis: The Evolution of LLMs
+
+### 2.1 Pre-2017: Pre-Transformer Foundations (1950s–2016)
+The roots of LLMs lie in mid-20th-century NLP, when researchers first sought to automate language tasks. Early efforts relied on rule-based systems (e.g., 1950s machine translation using syntax rules) and statistical methods (e.g., 1990s n-gram models for speech recognition). By the 2010s, neural networks gained traction: RNNs and long short-term memory (LSTM) models (Hochreiter & Schmidhuber, 1997) enabled sequence modeling, while word embeddings (Mikolov et al., 2013) represented words as dense vectors, capturing semantic relationships.
+
+Despite progress, pre-2017 models faced critical limitations: RNNs/LSTMs processed text sequentially, making them slow to train and unable to handle long-range dependencies (e.g., linking "it" in a sentence to a noun paragraphs earlier). Data was also constrained: models like Word2Vec trained on millions, not billions, of tokens. These bottlenecks set the stage for a paradigm shift.
+
+
+### 2.2 2017–2020: The Transformer Revolution and Early LLMs
+The year 2017 marked the dawn of the LLM era with the publication of *"Attention Is All You Need"* (Vaswani et al.), which introduced the **transformer architecture**. Unlike RNNs, transformers use "self-attention" mechanisms to weigh the importance of different words in a sequence simultaneously, enabling parallel computation and capturing long-range dependencies. This breakthrough reduced training time and improved performance on language tasks.
+
+#### Key Models and Breakthroughs:
+- **2018**: OpenAI released **GPT-1** (Radford et al.), the first transformer-based LLM. With 124 million parameters, it introduced the "pretraining-fine-tuning" paradigm: pretraining on a large unlabeled corpus (BooksCorpus) to learn general language patterns, then fine-tuning on task-specific labeled data (e.g., sentiment analysis).
+- **2018**: Google published **BERT** (Devlin et al.), a bidirectional transformer that processed text from left-to-right *and* right-to-left, outperforming GPT-1 on context-dependent tasks like question answering. BERT’s success popularized "contextual embeddings," where word meaning depends on surrounding text (e.g., "bank" as a financial institution vs. a riverbank).
+- **2019**: OpenAI scaled up with **GPT-2** (1.5 billion parameters), demonstrating improved text generation but sparking early concerns about misuse (OpenAI initially delayed full release over fears of disinformation).
+- **2020**: Google’s **T5** (Text-to-Text Transfer Transformer) unified NLP tasks under a single "text-to-text" framework (e.g., translating "translate English to French: Hello" to "Bonjour"), simplifying model adaptation.
+
+
+### 2.3 2020–Present: Scaling, Emergence, and Mainstream Adoption
+The 2020s saw LLMs transition from research curiosities to global phenomena, driven by exponential scaling of parameters, data, and compute.
+
+#### Key Developments:
+- **2020**: OpenAI’s **GPT-3** (175 billion parameters) marked a turning point. Trained on 45 terabytes of text, it exhibited "few-shot" and "zero-shot" learning—adapting to tasks with minimal examples (e.g., "Write a poem about AI" with no prior poetry training). GPT-3’s release via API (OpenAI Playground) introduced LLMs to developers, enabling early applications like chatbots and code generation.
+- **2022**: **ChatGPT** (based on GPT-3.5) brought LLMs to the public. Launched in November, its user-friendly interface and conversational ability sparked a viral explosion (100 million users by January 2023). ChatGPT refined training with **Reinforcement Learning from Human Feedback (RLHF)**, aligning outputs with human preferences (e.g., helpfulness, safety).
+- **2023**: Meta released **Llama 2** (7B–70B parameters), an open-source LLM that lowered barriers to entry, allowing researchers and startups to fine-tune models without proprietary access. Meanwhile, OpenAI’s **GPT-4** (100B+ parameters) expanded multimodality (text + images) and improved reasoning (e.g., solving math problems, coding).
+- **2023–2024**: The "race to scale" continued with models like Google’s **PaLM 2** (540B parameters), Anthropic’s **Claude 2** (200B+ parameters), and open-source alternatives (e.g., Mistral, Falcon). Compute usage skyrocketed: training GPT-3 required ~3.14e23 floating-point operations (FLOPs), equivalent to 355 years of a single GPU’s work.
+
+
+## 3. Key Technical Milestones
+### 3.1 The Transformer Architecture (2017)
+Vaswani et al.’s *"Attention Is All You Need"* (Google, University of Toronto) replaced RNNs with self-attention, a mechanism that computes "attention scores" between every pair of words in a sequence. For example, in "The cat sat on the mat; it purred," self-attention links "it" to "cat." This parallel processing reduced training time from weeks (for RNNs) to days, enabling larger models.
+
+### 3.2 Pretraining-Fine-Tuning Paradigm (2018)
+GPT-1 and BERT established the now-standard workflow: (1) Pretrain on a large, unlabeled corpus (e.g., Common Crawl, a web scrape of 1.1 trillion tokens) to learn syntax, semantics, and world knowledge; (2) Fine-tune on task-specific data (e.g., GLUE, a benchmark of 10 NLP tasks). This decoupled language learning from task engineering, enabling generalization.
+
+### 3.3 Scaling Laws and Emergent Abilities (2020s)
+In 2020, OpenAI researchers articulated **scaling laws**: model performance improves predictably with increased parameters, data, and compute. By 2022, this led to "emergent abilities"—skills not present in smaller models, such as GPT-3’s in-context learning or GPT-4’s multi-step reasoning.
+
+### 3.4 Instruction Tuning and RLHF (2022)
+Post-2020, training shifted from task-specific fine-tuning to **instruction tuning** (training on natural language instructions like "Summarize this article") and **RLHF** (rewarding models for human-preferred outputs). These methods made LLMs more usable: ChatGPT, for instance, follows prompts like "Explain quantum physics like I’m 5" without explicit fine-tuning.
+
+
+## 4. Stakeholders: The Ecosystem of LLM Development
+LLM evolution has been driven by a mix of tech giants, academic labs, and startups, each with distinct priorities:
+
+### 4.1 Tech Giants: Closed vs. Open Models
+- **OpenAI** (founded 2015, backed by Microsoft): Pioneered the GPT series, prioritizing commercialization via closed APIs (e.g., ChatGPT Plus, GPT-4 API). Focus: user-friendliness and safety (via RLHF).
+- **Google DeepMind**: Developed BERT, T5, and PaLM, integrating LLMs into products like Google Search (via BERT) and Bard. Balances closed (PaLM) and open (T5) models.
+- **Meta AI**: Advocated for open science with Llama 1/2 (2023), releasing weights for research and commercial use. Meta’s "open" approach aims to democratize LLM access and accelerate safety research.
+- **Microsoft**: Partnered with OpenAI (2019–present), providing Azure compute and integrating GPT into Bing (search), Office (Copilot), and GitHub (Copilot X for coding).
+
+### 4.2 Academic Labs
+- **Stanford NLP**: Contributed to BERT and T5 research; developed HELM (Holistic Evaluation of Language Models), a benchmark for LLM safety and fairness.
+- **UC Berkeley**: Studied LLM bias (e.g., 2021 paper "On the Dangers of Stochastic Parrots," critiquing LLMs as "statistical mimics" lacking true understanding).
+
+
+## 5. Impact & Societal Context
+### 5.1 Transforming NLP and Beyond
+LLMs have redefined NLP performance: By 2023, GPT-4 outperformed humans on the MMLU benchmark (a test of 57 subjects, including math, law, and biology), scoring 86.4% vs. 86.5% for humans. Beyond NLP, they have revolutionized:
+- **Content Creation**: Tools like Jasper and Copy.ai automate marketing copy; artists use DALL-E (paired with LLMs) for text-to-image generation.
+- **Education**: Khan Academy’s Khanmigo tutors students; Coursera uses LLMs for personalized feedback.
+- **Coding**: GitHub Copilot (2021) generates code from comments, boosting developer productivity by 55% (Microsoft, 2023).
+
+### 5.2 Cultural Shifts
+- **Prompt Engineering**: The rise of "prompt engineers"—professionals skilled in crafting text inputs to elicit desired LLM outputs—became a new career path.
+- **AI-Native Products**: Startups like Character.AI (chatbots with distinct personalities) and Perplexity (AI-powered search) emerged as "LLM-first" services.
+- **Public Perception**: Post-ChatGPT, LLMs shifted from "AI hype" to tangible utility, though skepticism persists (e.g., 62% of U.S. adults worry about job displacement, Pew Research, 2023).
+
+
+## 6. Challenges & Critiques: A Historical Perspective
+### 6.1 Technical Limitations
+- **Pre-2020**: Data sparsity (small corpora limited generalization); task specificity (models like BERT required retraining for new tasks).
+- **Post-2020**: **Hallucinations** (fabricating facts, e.g., GPT-3 citing fake research papers); **energy use** (training GPT-3 emitted ~500 tons of CO₂, equivalent to 125 round-trip flights from NYC to London); **computational inequality** (only tech giants can afford 100B+ parameter models).
+
+### 6.2 Societal Risks
+- **Bias**: Early LLMs mirrored training data biases (e.g., BERT associated "doctor" with "male" in 2019 audits). Responses included bias mitigation datasets (e.g., WinoBias) and audits (e.g., Stanford’s Gender Shades).
+- **Misinformation**: GPT-2’s realistic text generation prompted calls for regulation; by 2023, deepfakes (e.g., AI-generated political speeches) became a policy focus.
+- **Regulation**: The EU AI Act (2024) classified LLMs as "high-risk," requiring transparency (e.g., disclosing AI-generated content) and safety testing.
+
+
+## 7. Conclusion: A Revolution in Five Years
+The history of LLMs is a story of exponential progress: from the transformer’s 2017 invention to ChatGPT’s 2022 viral explosion, a mere five years. What began as an academic breakthrough—parallelizing text processing with self-attention—evolved into a technology that writes code, tutors students, and shapes global policy.
+
+Yet challenges persist: scaling has outpaced our understanding of how LLMs "think," and debates over bias, energy use, and access (closed vs. open models) intensify. As we look to the future, this history reminds us that LLMs are not just technical achievements, but mirrors of society—reflecting both our ingenuity and our flaws. Their next chapter will depend on balancing innovation with responsibility, ensuring these models serve as tools for collective progress.
+
+
+## References
+- Devlin, J., et al. (2018). *BERT: Pre-training of deep bidirectional transformers for language understanding*. NAACL.
+- Hochreiter, S., & Schmidhuber, J. (1997). *Long short-term memory*. Neural Computation.
+- Mikolov, T., et al. (2013). *Efficient estimation of word representations in vector space*. ICLR.
+- Radford, A., et al. (2018). *Improving language understanding by generative pre-training*. OpenAI.
+- Vaswani, A., et al. (2017). *Attention is all you need*. NeurIPS.
+- Weidinger, L., et al. (2021). *On the dangers of stochastic parrots: Can language models be too big?*. ACM FAccT.
+===========
+Agent[WriterAgent]:
+
+===========
+Agent[WriterAgent]:
+successfully transferred to agent [ReportSupervisor]
+===========
+```
+
+## WithDeterministicTransferTo
+
+### 什么是 WithDeterministicTransferTo?
+
+`WithDeterministicTransferTo` 是 Eino ADK 提供的 Agent 增强工具,用于为 Agent 注入任务转让(Transfer)能力。它允许开发者为目标 Agent 预设固定的任务转让路径,当该 Agent 完成任务(未被中断)时,会自动生成 Transfer 事件,将任务流转到预设的目标 Agent。
+
+这一能力是构建 Supervisor Agent 协作模式的基础,确保子 Agent 在执行完毕后能可靠地将任务控制权交回监督者(Supervisor),形成“分配-执行-反馈”的闭环协作流程。
+
+### WithDeterministicTransferTo 核心实现
+
+#### 配置结构
+
+通过 `DeterministicTransferConfig` 定义任务转让的核心参数:
+
+```go
+// 包装方法
+func AgentWithDeterministicTransferTo(_ context.Context, config *DeterministicTransferConfig) Agent
+
+// 配置详情
+type DeterministicTransferConfig struct {
+ Agent Agent // 被增强的目标 Agent
+ ToAgentNames []string // 任务完成后转让的目标 Agent 名称列表
+}
+```
+
+- `Agent`:需要添加转让能力的原始 Agent。
+- `ToAgentNames`:当 `Agent` 完成任务且未中断时,自动转让任务的目标 Agent 名称列表(按顺序转让)。
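+
+上述两个字段的典型用法如下,给出一个最小示意(其中 `worker`、`supervisorName` 为本文假设的示例变量,非固定写法):
+
+```go
+import (
+	"context"
+
+	"github.com/cloudwego/eino/adk"
+)
+
+// wrapWithReturnToSupervisor 为示意函数:将一个子 Agent 包装为
+// "任务完成(未中断)后固定转让回 Supervisor"的 Agent。
+// 入参 worker、supervisorName 由调用方提供,仅作示例。
+func wrapWithReturnToSupervisor(ctx context.Context, worker adk.Agent, supervisorName string) adk.Agent {
+	return adk.AgentWithDeterministicTransferTo(ctx, &adk.DeterministicTransferConfig{
+		Agent:        worker,                   // 被增强的原始子 Agent
+		ToAgentNames: []string{supervisorName}, // 任务完成后按顺序转让的目标 Agent
+	})
+}
+```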
+
+#### Agent 包装
+
+WithDeterministicTransferTo 会对原始 Agent 进行包装,根据其是否实现 `ResumableAgent` 接口(支持中断与恢复),分别返回 `agentWithDeterministicTransferTo` 或 `resumableAgentWithDeterministicTransferTo` 实例,确保增强能力与 Agent 原有功能(如 `Resume` 方法)兼容。
+
+包装后的 Agent 会覆盖 `Run` 方法(对 `ResumableAgent` 还会覆盖 `Resume` 方法),在原始 Agent 的事件流基础上追加 Transfer 事件:
+
+```go
+// 对普通 Agent 的包装
+type agentWithDeterministicTransferTo struct {
+ agent Agent // 原始 Agent
+ toAgentNames []string // 目标 Agent 名称列表
+}
+
+// Run 方法:执行原始 Agent 任务,并在任务完成后追加 Transfer 事件
+func (a *agentWithDeterministicTransferTo) Run(ctx context.Context, input *AgentInput, options ...AgentRunOption) *AsyncIterator[*AgentEvent] {
+ aIter := a.agent.Run(ctx, input, options...)
+
+ iterator, generator := NewAsyncIteratorPair[*AgentEvent]()
+
+ // 异步处理原始事件流,并追加 Transfer 事件
+ go appendTransferAction(ctx, aIter, generator, a.toAgentNames)
+
+ return iterator
+}
+```
+
+对于 `ResumableAgent`,额外实现 `Resume` 方法,确保恢复执行后仍能触发确定性转让:
+
+```go
+type resumableAgentWithDeterministicTransferTo struct {
+ agent ResumableAgent // 支持恢复的原始 Agent
+ toAgentNames []string // 目标 Agent 名称列表
+}
+
+// Resume 方法:恢复执行原始 Agent 任务,并在完成后追加 Transfer 事件
+func (a *resumableAgentWithDeterministicTransferTo) Resume(ctx context.Context, info *ResumeInfo, opts ...AgentRunOption) *AsyncIterator[*AgentEvent] {
+ aIter := a.agent.Resume(ctx, info, opts...)
+ iterator, generator := NewAsyncIteratorPair[*AgentEvent]()
+ go appendTransferAction(ctx, aIter, generator, a.toAgentNames)
+ return iterator
+}
+```
+
+#### 事件流追加 Transfer 事件
+
+`appendTransferAction` 是实现确定性转让的核心逻辑,它会消费原始 Agent 的事件流,在 Agent 任务正常结束(未中断)后,自动生成并发送 Transfer 事件到目标 Agent:
+
+```go
+func appendTransferAction(ctx context.Context, aIter *AsyncIterator[*AgentEvent], generator *AsyncGenerator[*AgentEvent], toAgentNames []string) {
+ defer func() {
+ // 异常处理:捕获 panic 并通过事件传递错误
+ if panicErr := recover(); panicErr != nil {
+ generator.Send(&AgentEvent{Err: safe.NewPanicErr(panicErr, debug.Stack())})
+ }
+ generator.Close() // 事件流结束,关闭生成器
+ }()
+
+ interrupted := false
+
+ // 1. 转发原始 Agent 的所有事件
+ for {
+ event, ok := aIter.Next()
+ if !ok { // 原始事件流结束
+ break
+ }
+ generator.Send(event) // 转发事件给调用方
+
+ // 检查是否发生中断(如 InterruptAction)
+ if event.Action != nil && event.Action.Interrupted != nil {
+ interrupted = true
+ } else {
+ interrupted = false
+ }
+ }
+
+ // 2. 若未中断且存在目标 Agent,生成 Transfer 事件
+ if !interrupted && len(toAgentNames) > 0 {
+ for _, toAgentName := range toAgentNames {
+ // 生成转让消息(系统提示 + Transfer 动作)
+ aMsg, tMsg := GenTransferMessages(ctx, toAgentName)
+ // 发送系统提示事件(告知用户任务转让)
+ aEvent := EventFromMessage(aMsg, nil, schema.Assistant, "")
+ generator.Send(aEvent)
+ // 发送 Transfer 动作事件(触发任务转让)
+ tEvent := EventFromMessage(tMsg, nil, schema.Tool, tMsg.ToolName)
+ tEvent.Action = &AgentAction{
+ TransferToAgent: &TransferToAgentAction{
+ DestAgentName: toAgentName, // 目标 Agent 名称
+ },
+ }
+ generator.Send(tEvent)
+ }
+ }
+}
+```
+
+**关键逻辑**:
+
+- **事件转发**:原始 Agent 产生的所有事件(如思考、工具调用、输出结果)会被完整转发,确保业务逻辑不受影响。
+- **中断检查**:若 Agent 执行过程中被中断(如 `InterruptAction`),则不触发 Transfer(中断视为任务未正常完成)。
+- **Transfer 事件生成**:任务正常结束后,为每个 `ToAgentNames` 生成两条事件:
+ 1. 系统提示事件(`schema.Assistant` 角色):告知用户任务将转让给目标 Agent。
+ 2. Transfer 动作事件(`schema.Tool` 角色):携带 `TransferToAgentAction`,触发 ADK 运行时将任务转让给 `DestAgentName` 对应的 Agent。
+
+## 总结
+
+WithDeterministicTransferTo 为 Agent 提供了可靠的任务转让能力,是构建 Supervisor 模式的核心基石;而 Supervisor 模式通过中心化协调与确定性回调,实现了多 Agent 之间的高效协作,显著降低了复杂任务的开发与维护成本。结合两者,开发者可快速搭建灵活、可扩展的多 Agent 系统。
diff --git a/content/zh/docs/eino/core_modules/eino_adk/agent_implementation/workflow.md b/content/zh/docs/eino/core_modules/eino_adk/agent_implementation/workflow.md
new file mode 100644
index 00000000000..20b07524d1f
--- /dev/null
+++ b/content/zh/docs/eino/core_modules/eino_adk/agent_implementation/workflow.md
@@ -0,0 +1,1125 @@
+---
+Description: ""
+date: "2025-09-30"
+lastmod: ""
+tags: []
+title: 'Eino ADK: Workflow Agents'
+weight: 2
+---
+
+## Workflow Agents 概述
+
+### 导入路径
+
+```go
+import "github.com/cloudwego/eino/adk"
+```
+
+### 什么是 Workflow Agents
+
+Workflow Agents 是 eino ADK 中的一种特殊 Agent 类型,它允许开发者以预设的流程来组织和执行多个子 Agent。
+
+与基于 LLM 自主决策的 Transfer 模式不同,Workflow Agents 采用**预设决策**的方式,按照代码中定义好的执行流程来运行子 Agent,提供了更可预测和可控的多 Agent 协作方式。
+
+Eino ADK 提供了三种基础的 Workflow Agent 类型:
+
+- **SequentialAgent**:按顺序依次执行子 Agent
+- **LoopAgent**:循环执行子 Agent 序列
+- **ParallelAgent**:并发执行多个子 Agent
+
+这些 Workflow Agent 可以相互嵌套,构建更复杂的执行流程,满足各种业务场景需求。
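+
+例如,可以把一个 SequentialAgent 作为 LoopAgent 的子 Agent,实现"按固定步骤执行、整体循环多轮"的流程。下面是一个最小示意(其中 ctx、draftAgent、reviewAgent 为假设的已有变量,事件消费方式与后文示例一致):
+
+```go
+// 将"起草 → 评审"组织为一个顺序流程(draftAgent、reviewAgent 为假设的已有 Agent)
+seq, err := adk.NewSequentialAgent(ctx, &adk.SequentialAgentConfig{
+	Name:        "DraftAndReview",
+	Description: "先起草、再评审的顺序流程",
+	SubAgents:   []adk.Agent{draftAgent, reviewAgent},
+})
+if err != nil {
+	log.Fatal(err)
+}
+
+// 将该顺序流程整体嵌套进 LoopAgent,最多循环 3 轮
+loop, err := adk.NewLoopAgent(ctx, &adk.LoopAgentConfig{
+	Name:          "IterativeWriting",
+	Description:   "循环执行起草与评审,直到产生 ExitAction 或达到最大迭代次数",
+	SubAgents:     []adk.Agent{seq},
+	MaxIterations: 3,
+})
+if err != nil {
+	log.Fatal(err)
+}
+
+// 嵌套后的 loop 与普通 Agent 一样,可直接交给 Runner 执行
+runner := adk.NewRunner(ctx, adk.RunnerConfig{Agent: loop})
+iter := runner.Query(ctx, "请围绕指定主题完成一篇短文")
+_ = iter // 事件消费方式见后文示例
+```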
+
+## SequentialAgent
+
+### 功能
+
+SequentialAgent 是最基础的 Workflow Agent,它按照配置中提供的顺序,依次执行一系列子 Agent。每个子 Agent 执行完成后,其输出会通过 History 机制传递给下一个子 Agent,形成一个线性的执行链。
+
+
+
+```go
+type SequentialAgentConfig struct {
+ Name string // Agent 名称
+ Description string // Agent 描述
+ SubAgents []Agent // 子 Agent 列表,按执行顺序排列
+}
+
+func NewSequentialAgent(ctx context.Context, config *SequentialAgentConfig) (Agent, error)
+```
+
+SequentialAgent 的执行遵循以下设定:
+
+1. **线性执行**:严格按照 SubAgents 数组的顺序执行
+2. **History 传递**:每个 Agent 的执行结果都会被添加到 History 中,后续 Agent 可以访问前面 Agent 的执行历史
+3. **提前退出**:如果任何一个子 Agent 产生 ExitAction / Interrupt,整个 Sequential 流程会立即终止
+
+SequentialAgent 适用于以下场景:
+
+- **多步骤处理流程**:如数据预处理 -> 分析 -> 生成报告
+- **管道式处理**:每个步骤的输出作为下个步骤的输入
+- **有依赖关系的任务序列**:后续任务依赖前面任务的结果
+
+### 示例
+
+示例展示了如何使用 SequentialAgent 创建一个三步骤的文档处理流水线:
+
+1. **DocumentAnalyzer**:分析文档内容
+2. **ContentSummarizer**:总结分析结果
+3. **ReportGenerator**:生成最终报告
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+	"log"
+	"os"
+
+	"github.com/cloudwego/eino-ext/components/model/openai"
+	"github.com/cloudwego/eino/adk"
+	"github.com/cloudwego/eino/components/model"
+)
+
+// 创建 ChatModel 实例
+func newChatModel() model.ToolCallingChatModel {
+ cm, err := openai.NewChatModel(context.Background(), &openai.ChatModelConfig{
+ APIKey: os.Getenv("OPENAI_API_KEY"),
+ Model: os.Getenv("OPENAI_MODEL"),
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ return cm
+}
+
+// 文档分析 Agent
+func NewDocumentAnalyzerAgent() adk.Agent {
+ a, err := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
+ Name: "DocumentAnalyzer",
+ Description: "分析文档内容并提取关键信息",
+ Instruction: "你是一个文档分析专家。请仔细分析用户提供的文档内容,提取其中的关键信息、主要观点和重要数据。",
+ Model: newChatModel(),
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ return a
+}
+
+// 内容总结 Agent
+func NewContentSummarizerAgent() adk.Agent {
+ a, err := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
+ Name: "ContentSummarizer",
+ Description: "对分析结果进行总结",
+ Instruction: "基于前面的文档分析结果,生成一个简洁明了的总结,突出最重要的发现和结论。",
+ Model: newChatModel(),
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ return a
+}
+
+// 报告生成 Agent
+func NewReportGeneratorAgent() adk.Agent {
+ a, err := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
+ Name: "ReportGenerator",
+ Description: "生成最终的分析报告",
+ Instruction: "基于前面的分析和总结,生成一份结构化的分析报告,包含执行摘要、详细分析和建议。",
+ Model: newChatModel(),
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ return a
+}
+
+func main() {
+ ctx := context.Background()
+
+ // 创建三个处理步骤的 Agent
+ analyzer := NewDocumentAnalyzerAgent()
+ summarizer := NewContentSummarizerAgent()
+ generator := NewReportGeneratorAgent()
+
+ // 创建 SequentialAgent
+ sequentialAgent, err := adk.NewSequentialAgent(ctx, &adk.SequentialAgentConfig{
+ Name: "DocumentProcessingPipeline",
+ Description: "文档处理流水线:分析 → 总结 → 报告生成",
+ SubAgents: []adk.Agent{analyzer, summarizer, generator},
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // 创建 Runner
+ runner := adk.NewRunner(ctx, adk.RunnerConfig{
+ Agent: sequentialAgent,
+ })
+
+ // 执行文档处理流程
+ input := "请分析以下市场报告:2024年第三季度,公司营收增长15%,主要得益于新产品线的成功推出。但运营成本也上升了8%,需要优化效率。"
+
+ fmt.Println("开始执行文档处理流水线...")
+ iter := runner.Query(ctx, input)
+
+ stepCount := 1
+ for {
+ event, ok := iter.Next()
+ if !ok {
+ break
+ }
+
+ if event.Err != nil {
+ log.Fatal(event.Err)
+ }
+
+ if event.Output != nil && event.Output.MessageOutput != nil {
+ fmt.Printf("\n=== 步骤 %d: %s ===\n", stepCount, event.AgentName)
+ fmt.Printf("%s\n", event.Output.MessageOutput.Message.Content)
+ stepCount++
+ }
+ }
+
+ fmt.Println("\n文档处理流水线执行完成!")
+}
+```
+
+运行结果为:
+
+```markdown
+开始执行文档处理流水线...
+
+=== 步骤 1: DocumentAnalyzer ===
+市场报告关键信息分析:
+
+1. 营收增长情况:
+ - 2024年第三季度,公司营收同比增长15%。
+ - 营收增长的主要驱动力是新产品线的成功推出。
+
+2. 成本情况:
+ - 运营成本上涨了8%。
+ - 成本上升提醒公司需要进行效率优化。
+
+主要观点总结:
+- 新产品线推出显著推动了营收增长,显示公司在产品创新方面取得良好成果。
+- 虽然营收提升,但运营成本的增加在一定程度上影响了盈利能力,指出了提升运营效率的重要性。
+
+重要数据:
+- 营收增长率:15%
+- 运营成本增长率:8%
+
+=== 步骤 2: ContentSummarizer ===
+总结:2024年第三季度,公司实现了15%的营收增长,主要归功于新产品线的成功推出,体现了公司产品创新能力的显著提升。然而,运营成本同时上涨了8%,对盈利能力构成一定压力,强调了优化运营效率的迫切需求。整体来看,公司在增长与成本控制之间需寻求更好的平衡以保障持续健康发展。
+
+=== 步骤 3: ReportGenerator ===
+分析报告
+
+一、执行摘要
+2024年第三季度,公司实现营收同比增长15%,主要得益于新产品线的成功推出,展现了强劲的产品创新能力。然而,运营成本也同比提升了8%,对利润空间形成一定压力。为确保持续的盈利增长,需重点关注运营效率的优化,推动成本控制与收入增长的平衡发展。
+
+二、详细分析
+1. 营收增长分析
+- 公司营收增长15%,反映出新产品线市场接受度良好,有效拓展了收入来源。
+- 新产品线的推出体现了公司研发及市场响应能力的提升,为未来持续增长奠定基础。
+
+2. 运营成本情况
+- 运营成本上升8%,可能来自原材料价格上涨、生产效率下降或销售推广费用增加等多个方面。
+- 该成本提升在一定程度上抵消了收入增长带来的利润增益,影响整体盈利能力。
+
+3. 盈利能力及效率考量
+- 营收与成本增长的不匹配显示出当前运营效率存在改进空间。
+- 优化供应链管理、提升生产自动化及加强成本控制将成为关键措施。
+
+三、建议
+1. 加强新产品线后续支持,包括市场推广和客户反馈机制,持续推动营收增长。
+2. 深入分析运营成本构成,识别主要成本驱动因素,制定针对性降低成本的策略。
+3. 推动内部流程优化与技术升级,提升生产及运营效率,缓解成本压力。
+4. 建立动态的财务监控体系,实现对营收与成本的实时跟踪与调整,确保公司财务健康。
+
+四、结论
+公司在2024年第三季度展现出了良好的增长动力,但同时面临成本上升带来的挑战。通过持续的产品创新结合有效的成本管理,未来有望实现盈利能力和市场竞争力的双重提升,推动公司稳健发展。
+
+文档处理流水线执行完成!
+```
+
+## LoopAgent
+
+### 功能
+
+LoopAgent 基于 SequentialAgent 实现,它会重复执行配置的子 Agent 序列,直到达到最大迭代次数或某个子 Agent 产生 ExitAction。LoopAgent 特别适用于需要迭代优化、反复处理或持续监控的场景。
+
+
+
+```go
+type LoopAgentConfig struct {
+ Name string // Agent 名称
+ Description string // Agent 描述
+ SubAgents []Agent // 子 Agent 列表
+ MaxIterations int // 最大迭代次数,0 表示无限循环
+}
+
+func NewLoopAgent(ctx context.Context, config *LoopAgentConfig) (Agent, error)
+```
+
+LoopAgent 的执行遵循以下设定:
+
+1. **循环执行**:重复执行 SubAgents 序列,每次循环都是一个完整的 Sequential 执行过程
+2. **History 累积**:每次迭代的结果都会累积到 History 中,后续迭代可以访问所有历史信息
+3. **条件退出**:支持通过 ExitAction 或达到最大迭代次数来终止循环,配置 `MaxIterations=0` 时表示无限循环
+
+LoopAgent 适用于以下场景:
+
+- **迭代优化**:如代码优化、参数调优等需要反复改进的任务
+- **持续监控**:定期检查状态并执行相应操作
+- **反复处理**:需要多轮处理才能达到满意结果的任务
+- **自我改进**:Agent 根据前面的执行结果不断改进自己的输出
+
+### 示例
+
+示例展示了如何使用 LoopAgent 创建一个代码优化循环:
+
+1. **CodeAnalyzer**:分析代码问题
+2. **CodeOptimizer**:根据分析结果优化代码
+3. **ExitController**:判断是否需要退出循环
+
+循环会持续执行直到代码质量达到标准或达到最大迭代次数。
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+	"log"
+	"os"
+
+	"github.com/cloudwego/eino-ext/components/model/openai"
+	"github.com/cloudwego/eino/adk"
+	"github.com/cloudwego/eino/components/model"
+)
+
+func newChatModel() model.ToolCallingChatModel {
+ cm, err := openai.NewChatModel(context.Background(), &openai.ChatModelConfig{
+ APIKey: os.Getenv("OPENAI_API_KEY"),
+ Model: os.Getenv("OPENAI_MODEL"),
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ return cm
+}
+
+// 代码分析 Agent
+func NewCodeAnalyzerAgent() adk.Agent {
+ a, err := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
+ Name: "CodeAnalyzer",
+ Description: "分析代码质量和性能问题",
+ Instruction: `你是一个代码分析专家。请分析提供的代码,识别以下问题:
+1. 性能瓶颈
+2. 代码重复
+3. 可读性问题
+4. 潜在的 bug
+5. 不符合最佳实践的地方
+
+如果代码已经足够优秀,请输出 "EXIT: 代码质量已达到标准" 来结束优化流程。`,
+ Model: newChatModel(),
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ return a
+}
+
+// 代码优化 Agent
+func NewCodeOptimizerAgent() adk.Agent {
+ a, err := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
+ Name: "CodeOptimizer",
+ Description: "根据分析结果优化代码",
+ Instruction: `基于前面的代码分析结果,对代码进行优化改进:
+1. 修复识别出的性能问题
+2. 消除代码重复
+3. 提高代码可读性
+4. 修复潜在 bug
+5. 应用最佳实践
+
+请提供优化后的完整代码。`,
+ Model: newChatModel(),
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ return a
+}
+
+// 创建一个特殊的 Agent 来处理退出逻辑
+func NewExitControllerAgent() adk.Agent {
+ a, err := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
+ Name: "ExitController",
+ Description: "控制优化循环的退出",
+ Instruction: `检查前面的分析结果,如果代码分析师认为代码质量已达到标准(包含"EXIT"关键词),
+则输出 "TERMINATE" 并生成退出动作来结束循环。否则继续下一轮优化。`,
+ Model: newChatModel(),
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ return a
+}
+
+func main() {
+ ctx := context.Background()
+
+ // 创建优化流程的 Agent
+ analyzer := NewCodeAnalyzerAgent()
+ optimizer := NewCodeOptimizerAgent()
+ controller := NewExitControllerAgent()
+
+ // 创建 LoopAgent,最多执行 5 轮优化
+ loopAgent, err := adk.NewLoopAgent(ctx, &adk.LoopAgentConfig{
+ Name: "CodeOptimizationLoop",
+ Description: "代码优化循环:分析 → 优化 → 检查退出条件",
+ SubAgents: []adk.Agent{analyzer, optimizer, controller},
+ MaxIterations: 5, // 最多 5 轮优化
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // 创建 Runner
+ runner := adk.NewRunner(ctx, adk.RunnerConfig{
+ Agent: loopAgent,
+ })
+
+ // 待优化的代码示例
+ codeToOptimize := `
+func processData(data []int) []int {
+ result := []int{}
+ for i := 0; i < len(data); i++ {
+ for j := 0; j < len(data); j++ {
+ if data[i] > data[j] {
+ result = append(result, data[i])
+ break
+ }
+ }
+ }
+ return result
+}
+`
+
+ fmt.Println("开始代码优化循环...")
+ iter := runner.Query(ctx, "请优化以下 Go 代码:\n"+codeToOptimize)
+
+ iteration := 1
+ for {
+ event, ok := iter.Next()
+ if !ok {
+ break
+ }
+
+ if event.Err != nil {
+ log.Fatal(event.Err)
+ }
+
+ if event.Output != nil && event.Output.MessageOutput != nil {
+ fmt.Printf("\n=== 第 %d 轮 - %s ===\n", iteration, event.AgentName)
+ fmt.Printf("%s\n", event.Output.MessageOutput.Message.Content)
+
+ // 检查是否需要退出
+ if event.AgentName == "ExitController" {
+ if event.Action != nil && event.Action.Exit {
+ fmt.Println("\n优化循环提前结束!")
+ break
+ }
+ iteration++
+ }
+ }
+ }
+
+ fmt.Println("\n代码优化循环执行完成!")
+}
+```
+
+运行结果为:
+
+```markdown
+开始代码优化循环...
+
+=== 第 1 轮 - CodeAnalyzer ===
+分析提供的代码:
+
+func processData(data []int) []int {
+ result := []int{}
+ for i := 0; i < len(data); i++ {
+ for j := 0; j < len(data); j++ {
+ if data[i] > data[j] {
+ result = append(result, data[i])
+ break
+ }
+ }
+ }
+ return result
+}
+
+
+### 1. 性能瓶颈
+
+- 双层循环,时间复杂度为 O(n²),对于较大的数据量,性能不佳。
+- 内层循环当条件满足时立即 break,减少了部分不必要的比较,但整体仍然是二次复杂度。
+
+### 2. 代码重复
+
+- 代码没有明显重复,但逻辑可以简化。
+
+### 3. 可读性问题
+
+- 代码的意图不太明确。根据代码逻辑,函数筛选出那些在数组里至少有一个元素比它小的元素。换句话说,剔除了数组里所有等于或最小的数字。
+- 可以通过注释或重命名来增加可读性。
+
+### 4. 潜在的 bug
+
+- 目前没有明显的 bug,但输入为空数组时,返回空数组符合预期。
+
+### 5. 不符合最佳实践的地方
+
+- 过程较为低效,未使用 Go 语言的特性优化性能。
+- 缺少注释或文档说明。
+
+---
+
+## 优化建议
+
+如果目的确实是要返回所有比数组中某些元素大的元素(排除数组中的最小元素),可以改为:
+
+- 找出数组中的最小元素 minVal。
+- 遍历数组,将大于 minVal 的元素加入结果。
+- 这样时间复杂度降为 O(n)。
+
+优化代码:
+
+func processData(data []int) []int {
+ if len(data) == 0 {
+ return nil
+ }
+
+ minVal := data[0]
+ for _, v := range data[1:] {
+ if v < minVal {
+ minVal = v
+ }
+ }
+
+ result := make([]int, 0, len(data))
+ for _, v := range data {
+ if v > minVal {
+ result = append(result, v)
+ }
+ }
+
+ return result
+}
+
+### 优化点说明
+
+- 使用一次遍历确定最小值,O(n)。
+- 再次遍历构造结果,O(n)。
+- 适当使用切片容量初始化,减少内存分配。
+- 代码意图明确且性能提升明显。
+
+---
+
+如果你期望的是其他逻辑,请告知我进一步调整。
+
+=== 第 1 轮 - CodeOptimizer ===
+
+// processData 返回输入切片中所有大于最小元素的值。
+// 如果输入为空,则返回 nil。
+func processData(data []int) []int {
+ if len(data) == 0 {
+ return nil
+ }
+
+ // 找出切片中的最小值
+ minVal := data[0]
+ for _, v := range data[1:] {
+ if v < minVal {
+ minVal = v
+ }
+ }
+
+ // 收集所有大于最小值的元素
+ result := make([]int, 0, len(data))
+ for _, v := range data {
+ if v > minVal {
+ result = append(result, v)
+ }
+ }
+
+ return result
+}
+
+
+=== 第 1 轮 - ExitController ===
+TERMINATE
+
+=== 第 2 轮 - CodeAnalyzer ===
+EXIT: 代码质量已达到标准
+
+...中间轮次省略...
+
+=== 第 5 轮 - CodeAnalyzer ===
+EXIT: 代码质量已达到标准
+
+=== 第 5 轮 - CodeOptimizer ===
+
+
+// processData 返回输入切片中所有大于最小元素的值。
+// 如果输入为空,则返回 nil。
+func processData(data []int) []int {
+ if len(data) == 0 {
+ return nil
+ }
+
+ minVal := data[0]
+ for _, v := range data[1:] {
+ if v < minVal {
+ minVal = v
+ }
+ }
+
+ result := make([]int, 0, len(data))
+ for _, v := range data {
+ if v > minVal {
+ result = append(result, v)
+ }
+ }
+
+ return result
+}
+
+
+=== 第 5 轮 - ExitController ===
+TERMINATE
+
+代码优化循环执行完成!
+
+```
+
+
+
+## ParallelAgent
+
+### 功能
+
+ParallelAgent 允许多个子 Agent 基于相同的输入上下文并发执行,所有子 Agent 同时开始执行,并等待全部完成后结束。这种模式特别适用于可以独立并行处理的任务,能够显著提高执行效率。
+
+```go
+type ParallelAgentConfig struct {
+ Name string // Agent 名称
+ Description string // Agent 描述
+ SubAgents []Agent // 并发执行的子 Agent 列表
+}
+
+func NewParallelAgent(ctx context.Context, config *ParallelAgentConfig) (Agent, error)
+```
+
+ParallelAgent 的执行遵循以下设定:
+
+1. **并发执行**:所有子 Agent 同时启动,在独立的 goroutine 中并行执行
+2. **共享输入**:所有子 Agent 接收相同的初始输入和上下文
+3. **等待与结果聚合**:内部使用 sync.WaitGroup 等待所有子 Agent 执行完成,收集所有子 Agent 的执行结果并按接收顺序输出
+
+另外 Parallel 内部默认包含异常处理机制:
+
+- **Panic 恢复**:每个 goroutine 都有独立的 panic 恢复机制
+- **错误隔离**:单个子 Agent 的错误不会影响其他子 Agent 的执行
+- **中断处理**:支持子 Agent 的中断和恢复机制
+
+ParallelAgent 适用于以下场景:
+
+- **独立任务并行处理**:多个不相关的任务可以同时执行
+- **多角度分析**:从不同角度同时分析同一个问题
+- **性能优化**:通过并行执行减少总体执行时间
+- **多专家咨询**:同时咨询多个专业领域的 Agent
+
+### 示例
+
+示例展示了如何使用 ParallelAgent 同时从四个不同角度分析产品方案:
+
+1. **TechnicalAnalyst**:技术可行性分析
+2. **BusinessAnalyst**:商业价值分析
+3. **UXAnalyst**:用户体验分析
+4. **SecurityAnalyst**:安全风险分析
+
+```go
+package main
+
+import (
+ "context"
+ "fmt"
+ "log"
+ "os"
+ "sync"
+
+ "github.com/cloudwego/eino-ext/components/model/openai"
+ "github.com/cloudwego/eino/adk"
+ "github.com/cloudwego/eino/components/model"
+)
+
+func newChatModel() model.ToolCallingChatModel {
+ cm, err := openai.NewChatModel(context.Background(), &openai.ChatModelConfig{
+ APIKey: os.Getenv("OPENAI_API_KEY"),
+ Model: os.Getenv("OPENAI_MODEL"),
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ return cm
+}
+
+// 技术分析 Agent
+func NewTechnicalAnalystAgent() adk.Agent {
+ a, err := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
+ Name: "TechnicalAnalyst",
+ Description: "从技术角度分析内容",
+ Instruction: `你是一个技术专家。请从技术实现、架构设计、性能优化等技术角度分析提供的内容。
+重点关注:
+1. 技术可行性
+2. 架构合理性
+3. 性能考量
+4. 技术风险
+5. 实现复杂度`,
+ Model: newChatModel(),
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ return a
+}
+
+// 商业分析 Agent
+func NewBusinessAnalystAgent() adk.Agent {
+ a, err := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
+ Name: "BusinessAnalyst",
+ Description: "从商业角度分析内容",
+ Instruction: `你是一个商业分析专家。请从商业价值、市场前景、成本效益等商业角度分析提供的内容。
+重点关注:
+1. 商业价值
+2. 市场需求
+3. 竞争优势
+4. 成本分析
+5. 盈利模式`,
+ Model: newChatModel(),
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ return a
+}
+
+// 用户体验分析 Agent
+func NewUXAnalystAgent() adk.Agent {
+ a, err := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
+ Name: "UXAnalyst",
+ Description: "从用户体验角度分析内容",
+ Instruction: `你是一个用户体验专家。请从用户体验、易用性、用户满意度等角度分析提供的内容。
+重点关注:
+1. 用户友好性
+2. 操作便利性
+3. 学习成本
+4. 用户满意度
+5. 可访问性`,
+ Model: newChatModel(),
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ return a
+}
+
+// 安全分析 Agent
+func NewSecurityAnalystAgent() adk.Agent {
+ a, err := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
+ Name: "SecurityAnalyst",
+ Description: "从安全角度分析内容",
+ Instruction: `你是一个安全专家。请从信息安全、数据保护、隐私合规等安全角度分析提供的内容。
+重点关注:
+1. 数据安全
+2. 隐私保护
+3. 访问控制
+4. 安全漏洞
+5. 合规要求`,
+ Model: newChatModel(),
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+ return a
+}
+
+func main() {
+ ctx := context.Background()
+
+ // 创建四个不同角度的分析 Agent
+ techAnalyst := NewTechnicalAnalystAgent()
+ bizAnalyst := NewBusinessAnalystAgent()
+ uxAnalyst := NewUXAnalystAgent()
+ secAnalyst := NewSecurityAnalystAgent()
+
+ // 创建 ParallelAgent,同时进行多角度分析
+ parallelAgent, err := adk.NewParallelAgent(ctx, &adk.ParallelAgentConfig{
+ Name: "MultiPerspectiveAnalyzer",
+ Description: "多角度并行分析:技术 + 商业 + 用户体验 + 安全",
+ SubAgents: []adk.Agent{techAnalyst, bizAnalyst, uxAnalyst, secAnalyst},
+ })
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // 创建 Runner
+ runner := adk.NewRunner(ctx, adk.RunnerConfig{
+ Agent: parallelAgent,
+ })
+
+ // 要分析的产品方案
+ productProposal := `
+产品方案:智能客服系统
+
+概述:开发一个基于大语言模型的智能客服系统,能够自动回答用户问题,处理常见业务咨询,并在必要时转接人工客服。
+
+主要功能:
+1. 自然语言理解和回复
+2. 多轮对话管理
+3. 知识库集成
+4. 情感分析
+5. 人工客服转接
+6. 对话历史记录
+7. 多渠道接入(网页、微信、APP)
+
+技术架构:
+- 前端:React + TypeScript
+- 后端:Go + Gin 框架
+- 数据库:PostgreSQL + Redis
+- AI模型:GPT-4 API
+- 部署:Docker + Kubernetes
+`
+
+ fmt.Println("开始多角度并行分析...")
+ iter := runner.Query(ctx, "请分析以下产品方案:\n"+productProposal)
+
+ // 使用 map 来收集不同分析师的结果
+ results := make(map[string]string)
+ var mu sync.Mutex
+
+ for {
+ event, ok := iter.Next()
+ if !ok {
+ break
+ }
+
+ if event.Err != nil {
+ log.Printf("分析过程中出现错误: %v", event.Err)
+ continue
+ }
+
+ if event.Output != nil && event.Output.MessageOutput != nil {
+ mu.Lock()
+ results[event.AgentName] = event.Output.MessageOutput.Message.Content
+ mu.Unlock()
+
+ fmt.Printf("\n=== %s 分析完成 ===\n", event.AgentName)
+ }
+ }
+
+ // 输出所有分析结果
+ fmt.Println("\n" + "============================================================")
+ fmt.Println("多角度分析结果汇总")
+ fmt.Println("============================================================")
+
+ analysisOrder := []string{"TechnicalAnalyst", "BusinessAnalyst", "UXAnalyst", "SecurityAnalyst"}
+ analysisNames := map[string]string{
+ "TechnicalAnalyst": "技术分析",
+ "BusinessAnalyst": "商业分析",
+ "UXAnalyst": "用户体验分析",
+ "SecurityAnalyst": "安全分析",
+ }
+
+ for _, agentName := range analysisOrder {
+ if result, exists := results[agentName]; exists {
+ fmt.Printf("\n【%s】\n", analysisNames[agentName])
+ fmt.Printf("%s\n", result)
+ fmt.Println("----------------------------------------")
+ }
+ }
+
+ fmt.Println("\n多角度并行分析完成!")
+ fmt.Printf("共收到 %d 个分析结果\n", len(results))
+}
+```
+
+运行结果为:
+
+```markdown
+开始多角度并行分析...
+
+=== BusinessAnalyst 分析完成 ===
+
+=== UXAnalyst 分析完成 ===
+
+=== SecurityAnalyst 分析完成 ===
+
+=== TechnicalAnalyst 分析完成 ===
+
+============================================================
+多角度分析结果汇总
+============================================================
+
+【技术分析】
+针对该智能客服系统方案,下面从技术实现、架构设计及性能优化等角度进行详细分析:
+
+---
+
+### 一、技术可行性
+
+1. **自然语言理解和回复**
+ - 利用 GPT-4 API 实现自然语言理解和自动回复是当前成熟且可行的方案。GPT-4具备强大的语言理解和生成能力,适合处理复杂、多样的问题。
+
+2. **多轮对话管理**
+ - 依赖后端维护上下文状态,结合GPT-4模型能够较好处理多轮交互。需要设计合理的上下文管理机制(例如对话历史维护、关键槽位抽取等),确保上下文信息完整性。
+
+3. **知识库集成**
+ - 可通过向GPT-4 API添加特定的知识库检索结果(检索增强生成),或者通过本地检索接口集成知识库。技术上可行,但对于实时性和准确性有较高要求。
+
+4. **情感分析**
+ - 情感分析功能可以用独立的轻量模型实现(例如基于BERT微调),也可尝试利用GPT-4输出,但成本较高。情感分析能力帮助智能客服更好地理解用户情绪,提升用户体验。
+
+5. **人工客服转接**
+ - 技术上通过建立事件触发规则(如轮次数、情绪阈值、关键词检测)实现自动转人工。系统需支持工单或会话传递机制,并保障会话无缝切换。
+
+6. **多渠道接入**
+ - 网页、微信、App等多渠道接入均可通过统一API网关实现,技术成熟,同时需要处理渠道差异性(消息格式、认证、推送机制等)。
+
+---
+
+### 二、架构合理性
+
+- **前端 React + TypeScript**
+ 非常适合搭建响应式客服界面,生态成熟,方便多渠道共享组件。
+
+- **后端 Go + Gin**
+ Go语言性能优异,Gin框架轻量且性能高,适合高并发场景。后端承担对接 GPT-4 API、管理状态、多渠道消息转发等职责,选择合理。
+
+- **数据库 PostgreSQL + Redis**
+ - PostgreSQL 负责存储结构化数据,如用户信息、对话历史、知识库元数据。
+ - Redis 负责缓存会话状态、热点知识库、限流等,提升访问性能。
+ 架构设计符合常见大型互联网产品模式,组件分工明确。
+
+- **AI模型 GPT-4 API**
+ 使用成熟API降低开发难度和模型维护成本;缺点是对网络和API调用依赖度高。
+
+- **部署 Docker + Kubernetes**
+ 容器化和K8s编排能保证系统弹性伸缩、高可用和灰度发布,适合生产环境,符合现代微服务架构趋势。
+
+---
+
+### 三、性能考量
+
+1. **响应时间**
+ - GPT-4 API调用本身有一定延迟(通常几百毫秒到1秒不等),对响应时间影响较大。需要做好接口异步处理与前端体验设计(如加载动画、部分渐进响应)。
+
+2. **并发处理能力**
+ - 后端Go具有高并发处理优势,配合Redis缓存热点数据,能大幅提升整体吞吐能力。
+ - 但GPT-4 API调用受限于OpenAI服务的QPS限制与调用成本,需合理设计调用频率与降级策略。
+
+3. **缓存策略**
+ - 对用户对话上下文和常见问题答案进行缓存,减少重复API调用。
+ - 如关键问题先做本地匹配,失败后才调用GPT-4,提升效率。
+
+4. **多渠道负载均衡**
+ - 需要设计统一消息总线和可靠的异步队列,防止某渠道流量突增影响整体系统稳定。
+
+---
+
+### 四、技术风险
+
+1. **GPT-4 API依赖**
+ - 高度依赖第三方API,风险包括服务中断、接口变更及成本波动。
+ - 建议设计本地缓存和有限的替代回答逻辑以应对API异常。
+
+2. **多轮对话上下文管理难度**
+ - 上下文过长或复杂会导致回答质量降低,需要设计限制上下文长度、选择性保留重要信息机制。
+
+3. **知识库集成复杂度**
+ - 如何做到知识库与
+----------------------------------------
+
+【商业分析】
+以下是对智能客服系统产品方案的商业角度分析:
+
+1. 商业价值
+- 提升客户服务效率:自动解答用户问题和常见咨询,减少人工客服压力,降低用人成本。
+- 提升用户体验:多轮对话和情感分析使交互更自然,增强客户满意度和粘性。
+- 数据驱动决策支持:对话历史与知识库集成为企业提供宝贵的用户反馈和行为数据,优化产品和服务。
+- 支持业务扩展:多渠道接入(网页、微信、APP)满足不同客户接入习惯,提升覆盖率。
+
+2. 市场需求
+- 市场对智能客服的需求持续增长,特别是在电商、金融、医疗、教育等行业,客户服务自动化是企业数字化转型的重要方向。
+- 随着AI技术的成熟,企业期望借助大语言模型提升客服智能化水平。
+- 用户对即时响应、全天候服务的需求增加,推动智能客服系统的广泛采用。
+
+3. 竞争优势
+- 采用先进的GPT-4大语言模型,拥有较强的自然语言理解与生成能力,提升问答准确率和对话自然度。
+- 情感分析功能有助于精准识别用户情绪,动态调整回复策略,提高客户满意度。
+- 多渠道接入设计满足企业多元化客户触达需求,增强产品适用性。
+- 技术架构采用微服务、容器化部署,便于弹性扩展和维护,提升系统稳定性和扩展能力。
+
+4. 成本分析
+- AI模型调用成本较高,依赖GPT-4 API,需根据调用量和响应速度调整预算。
+- 技术研发投入较大,涉及前后端、多渠道融合、AI和知识库管理。
+- 运维和服务器成本需考虑多渠道并发访问。
+- 长期来看,人工客服人数可显著减少,节省人力成本。
+- 可通过云服务降低硬件初期投入,但云资源使用需精细管理以控制费用。
+
+5. 盈利模式
+- SaaS订阅服务:按月/年向企业客户收取服务费,基于接入渠道数、并发量和功能级别分层定价。
+- 按调用次数或对话数收费,适合业务波动较大的客户。
+- 增值服务:数据分析报告定制、行业知识库集成、人工客服协同工具等收费。
+- 中大型客户可提供定制开发和技术支持,收取项目费用。
+- 通过持续优化模型和服务,增加客户留存和续费率。
+
+综上,该智能客服系统基于成熟技术与AI优势,具备良好的商业价值和市场潜力。其多渠道接入和情感分析等功能增强竞争力,但需合理控制AI调用成本和运营费用。建议重点推进SaaS订阅和增值服务,结合市场推广,快速占领客户资源,提升盈利能力。
+----------------------------------------
+
+【用户体验分析】
+针对该智能客服系统方案,我将从用户体验、易用性、用户满意度及可访问性等角度进行分析:
+
+1. 用户友好性
+- 自然语言理解和回复能力提升了用户与系统的沟通体验,使用户能够用自然话语表达需求,降低交流障碍。
+- 多轮对话管理允许系统理解上下文,减少重复解释,增强对话连贯性,进一步提升用户体验。
+- 情感分析功能有助于系统识别用户情绪,做出更贴心的回应,提高互动的个性化和人性化。
+- 多渠道接入覆盖用户常用的访问途径,方便用户随时随地获取服务,提升友好度。
+
+2. 操作便利性
+- 自动回答常见业务咨询能够减轻用户等待时间和操作负担,提高响应速度。
+- 人工客服转接机制确保复杂问题可被及时处理,保障服务连续性和操作的无缝衔接。
+- 对话历史记录方便用户回顾咨询内容,避免重复查询,提升操作便利。
+- 使用现代技术栈(React、TypeScript)为前端交互提供良好性能和响应速度,间接增强操作流畅性。
+
+3. 学习成本
+- 基于自然语言处理,用户无需学习特殊指令,降低使用门槛。
+- 多轮对话自然衔接,让用户更易理解系统响应逻辑,减少迷惑和挫败感。
+- 不同渠道的一致性界面(如在网页和微信中保持类似体验)有助于用户迅速上手。
+- 通过情感分析提供的更精准反馈,减少用户因误解而频繁尝试的时间成本。
+
+4. 用户满意度
+- 快速准确的自动回复和多轮对话减少用户等待和重复输入,提升满意度。
+- 情感分析让系统更懂用户情绪,带来更温暖的交互体验,增加用户粘性。
+- 人工客服介入保障复杂问题得到妥善处理,提高服务质量感知。
+- 多渠道覆盖满足不同用户的使用场景,增强整体满意度。
+
+5. 可访问性
+- 多渠道接入覆盖网页、微信、APP,适应不同用户的设备和环境,提升可访问性。
+- 方案未明确提及无障碍设计(如屏幕阅读器兼容、高对比度模式等),这可能是未来需要补充的部分。
+- 前端采用React和TypeScript,有利于实现响应式设计和无障碍功能,但需确保开发规范落地。
+- 后端架构和部署方案保证系统的稳定性和扩展性,间接提升用户持续可访问性。
+
+总结:
+该智能客服系统方案在用户体验和易用性方面考虑较为充分,利用大语言模型实现自然多轮对话、情感分析和知识库集成,满足用户多样化需求。同时,多渠道接入增强了系统的覆盖能力。建议在具体落地时,强化无障碍设计,实现更全面的可访问性保障,同时继续优化对话策略以提升用户满意度。
+----------------------------------------
+
+【安全分析】
+针对该智能客服系统方案,结合信息安全、数据保护及隐私合规等方面,展开如下分析:
+
+一、数据安全
+
+1. 数据传输安全
+- 建议系统所有客户端与服务器间通信均采用TLS/SSL加密,保障数据在传输过程中的机密性与完整性。
+- 由于支持多渠道接入(网页、微信、APP),需确保每个入口均严格实施加密传输。
+
+2. 数据存储安全
+- PostgreSQL存储对话历史、用户资料等敏感信息,需启用数据库加密(如透明数据加密TDE或字段级加密),防止数据泄露。
+- Redis作为缓存,可能存储临时会话数据,也需开启访问认证与加密传输。
+- 对用户敏感数据实行最小存储原则,避免无关数据超范围保存。
+- 数据备份过程中需加密保存,且备份访问同样受控。
+
+3. API调用安全
+- GPT-4 API调用产生大量用户数据交互,应评估其数据处理及存储政策,确保符合数据安全要求。
+- 增加调用权限管理,限制API密钥访问范围和权限,避免被滥用。
+
+4. 日志安全
+- 系统日志中避免存储明文敏感信息,尤其是个人身份信息、对话内容。日志访问需严格控制。
+
+二、隐私保护
+
+1. 个人数据处理
+- 采集和存储用户个人数据(姓名、联系方式、账务信息等)必须明确告知用户,并征得用户同意。
+- 实施数据匿名化/去标识化技术,尤其是对话历史中的身份信息处理。
+
+2. 用户隐私权利
+- 满足相关法律法规(例如《个人信息保护法》、《GDPR》)中用户的访问、更正、删除数据的权利。
+- 提供隐私政策明确披露数据收集、使用和共享情况。
+
+3. 交互隐私
+- 多轮对话和情感分析等功能应考虑避免过度侵犯用户隐私,例如敏感情绪数据的使用透明告知和限制。
+
+4. 第三方合规
+- GPT-4 API由第三方提供,需确保其服务符合相关隐私合规要求及数据保护标准。
+
+三、访问控制
+
+1. 用户身份验证
+- 系统中涉及用户身份信息查询和管理时,需建立可靠的身份认证机制。
+- 支持多因素认证增强安全性。
+
+2. 权限管理
+- 后端管理接口及人工客服转接模块需采用基于角色的访问控制(RBAC),确保操作权限最小化。
+- 对访问敏感数据的操作需有详细审计和监控。
+
+3. 会话管理
+- 对多渠道的会话要有有效的会话管理机制,防止会话劫持。
+- 对话历史访问权限应限制仅允许相关用户或授权人员访问。
+
+四、安全漏洞
+
+1. 应用安全
+- 前端React+TypeScript应防止XSS、CSRF攻击,合理使用Content Security Policy(CSP)。
+- 后端Go应用需防止SQL注入、请求伪造和权限缺失。Gin框架提供中间件支持,建议充分利用安全模块。
+
+2. AI模型风险
+- GPT-4 API本身输入输出可能存在敏感信息泄露或模型误用风险,需限制输入内容、过滤敏感信息。
+- 防止生成恶意回答或信息泄露,建立内容审核机制。
+
+3. 容器和部署安全
+- Docker容器须采用安全镜像,及时打补丁。Kubernetes集群网络策略和访问控制需完善。
+- 容器运行权限最小化,避免容器逃逸风险。
+
+五、合规要求
+
+1. 数据保护法规
+- 根据运营地域,需符合《个人信息保护法》(PIPL)、《欧盟通用数据保护条例》(GDPR)或其他相关法律要求。
+- 明确用户数据的采集、处理、传输和存储流程符合法规。
+
+2. 用户隐私告知及同意
+- 应提供清晰的隐私政策和使用条款,说明数据用途及处理方式。
+- 实现用户同意管理(Consent Management)机制。
+
+3. 数据跨境传输合规
+- 若系统涉及跨境数据流,需评估合规风险和采取相应技术
+----------------------------------------
+
+多角度并行分析完成!
+共收到 4 个分析结果
+```
+
+## 总结
+
+Workflow Agents 为 Eino ADK 提供了强大的多 Agent 协作能力,通过合理选择和组合这些 Workflow Agent,开发者可以构建出高效、可靠的多 Agent 协作系统,满足各种复杂的业务需求。
\ No newline at end of file
diff --git a/content/zh/docs/eino/core_modules/eino_adk/agent_interface.md b/content/zh/docs/eino/core_modules/eino_adk/agent_interface.md
new file mode 100644
index 00000000000..1569cb777f9
--- /dev/null
+++ b/content/zh/docs/eino/core_modules/eino_adk/agent_interface.md
@@ -0,0 +1,331 @@
+---
+Description: ""
+date: "2025-09-30"
+lastmod: ""
+tags: []
+title: 'Eino ADK: Agent 抽象'
+weight: 3
+---
+
+## Agent 定义
+
+Eino 定义了 Agent 的基础接口,实现此接口的 Struct 可被视为一个 Agent:
+
+```go
+// github.com/cloudwego/eino/adk/interface.go
+
+type Agent interface {
+ Name(ctx context.Context) string
+ Description(ctx context.Context) string
+ Run(ctx context.Context, input *AgentInput, opts ...AgentRunOption) *AsyncIterator[*AgentEvent]
+}
+```
+
+| Method | 说明 |
+| --- | --- |
+| Name | Agent 的名称,作为 Agent 的标识 |
+| Description | Agent 的职能描述信息,主要用于让其他的 Agent 了解和判断该 Agent 的职责或功能 |
+| Run | Agent 的核心执行方法,返回一个迭代器,调用者可以通过这个迭代器持续接收 Agent 产生的事件 |
+
+### AgentRunOption
+
+`AgentRunOption` 由 Agent 实现定义,可以在请求维度修改 Agent 配置或者控制 Agent 行为。
+
+Eino ADK 提供了一些通用定义的 Option,供用户使用:
+
+- `WithSessionValues`:设置跨 Agent 读写数据
+- `WithSkipTransferMessages`:配置后,当 Event 为 Transfer SubAgent 时,Event 中的消息不会追加到 History 中
+
+Eino ADK 提供了 `WrapImplSpecificOptFn` 和 `GetImplSpecificOptions` 两个方法,供 Agent 包装与读取自定义的 `AgentRunOption`。
+
+当使用 `GetImplSpecificOptions` 方法读取 `AgentRunOption` 时,与所需类型(如下方示例中的 options 结构体)不符的 AgentRunOption 会被忽略。
+
+例如可以定义 `WithModelName`,在请求维度要求 Agent 修改调用的模型:
+
+```go
+// github.com/cloudwego/eino/adk/call_option.go
+// func WrapImplSpecificOptFn[T any](optFn func(*T)) AgentRunOption
+// func GetImplSpecificOptions[T any](base *T, opts ...AgentRunOption) *T
+
+import "github.com/cloudwego/eino/adk"
+
+type options struct {
+ modelName string
+}
+
+func WithModelName(name string) adk.AgentRunOption {
+ return adk.WrapImplSpecificOptFn(func(t *options) {
+ t.modelName = name
+ })
+}
+
+func (m *MyAgent) Run(ctx context.Context, input *adk.AgentInput, opts ...adk.AgentRunOption) *adk.AsyncIterator[*adk.AgentEvent] {
+ o := &options{}
+ o = adk.GetImplSpecificOptions(o, opts...)
+ // run code...
+}
+```
+
+除此之外,AgentRunOption 具有一个 `DesignateAgent` 方法,调用该方法可以在调用多 Agent 系统时指定 Option 生效的 Agent:
+
+```go
+func genOpt() {
+ // 指定 option 仅对 agent_1 和 agent_2 生效
+ opt := adk.WithSessionValues(map[string]any{}).DesignateAgent("agent_1", "agent_2")
+}
+```
+
+### AsyncIterator
+
+`Agent.Run` 返回了一个迭代器 `AsyncIterator[*AgentEvent]`:
+
+```go
+// github.com/cloudwego/eino/adk/utils.go
+
+type AsyncIterator[T any] struct {
+ ...
+}
+
+func (ai *AsyncIterator[T]) Next() (T, bool) {
+ ...
+}
+```
+
+它代表一个异步迭代器(异步指生产与消费之间没有同步控制),允许调用者以一种有序、阻塞的方式消费 Agent 在运行过程中产生的一系列事件。
+
+- `AsyncIterator` 是一个泛型结构体,可以用于迭代任何类型的数据。当前在 Agent 接口中,Run 方法返回的迭代器类型被固定为 `AsyncIterator[*AgentEvent]`。这意味着,你从这个迭代器中获取的每一个元素,都将是一个指向 `AgentEvent` 对象的指针。`AgentEvent` 会在后续章节中详细说明。
+- 迭代器的主要交互方式是通过调用其 `Next()` 方法。这个方法的行为是**阻塞式**的,每次调用 `Next()`,程序会暂停执行,直到以下两种情况之一发生:
+  - Agent 产生了一个新的 `AgentEvent`:`Next()` 方法会返回这个事件,调用者可以立即对其进行处理。
+  - Agent 主动关闭了迭代器:当 Agent 不会再产生任何新的事件时(通常是 Agent 运行结束),它会关闭这个迭代器。此时 `Next()` 调用会结束阻塞并在第二个返回值返回 false,告知调用者迭代已经结束。
+
+通常情况下,你需要使用 for 循环处理 `AsyncIterator`:
+
+```go
+iter := myAgent.Run(xxx) // get AsyncIterator from Agent.Run
+
+for {
+ event, ok := iter.Next()
+ if !ok {
+ break
+ }
+ // handle event
+}
+```
+
+`AsyncIterator` 可以由 `NewAsyncIteratorPair` 创建,该函数返回的另一个参数 `AsyncGenerator` 用来生产数据:
+
+```go
+// github.com/cloudwego/eino/adk/utils.go
+
+func NewAsyncIteratorPair[T any]() (*AsyncIterator[T], *AsyncGenerator[T])
+```
+
+Agent.Run 返回 AsyncIterator 旨在让调用者实时地接收到 Agent 产生的一系列 AgentEvent,因此 Agent.Run 通常会在 Goroutine 中运行 Agent 从而立刻返回 AsyncIterator 供调用者监听:
+
+```go
+import "github.com/cloudwego/eino/adk"
+
+func (m *MyAgent) Run(ctx context.Context, input *adk.AgentInput, opts ...adk.AgentRunOption) *adk.AsyncIterator[*adk.AgentEvent] {
+ // handle input
+ iter, gen := adk.NewAsyncIteratorPair[*adk.AgentEvent]()
+ go func() {
+ defer func() {
+ // recover code
+ gen.Close()
+ }()
+ // agent run code
+ // gen.Send(event)
+ }()
+ return iter
+}
+```
+
+### AgentWithOptions
+
+使用 `AgentWithOptions` 方法可以在 Eino ADK Agent 中进行一些通用配置。
+
+与 `AgentRunOption` 不同的是,`AgentWithOptions` 在运行前生效,并且不支持自定义 option。
+
+```go
+// github.com/cloudwego/eino/adk/flow.go
+func AgentWithOptions(ctx context.Context, agent Agent, opts ...AgentOption) Agent
+```
+
+Eino ADK 当前内置支持的配置有:
+
+- `WithDisallowTransferToParent`:配置该 SubAgent 不允许 Transfer 到 ParentAgent,会触发该 SubAgent 的 `OnDisallowTransferToParent` 回调方法
+- `WithHistoryRewriter`:配置后该 Agent 在执行前会通过该方法重写接收到的上下文信息
+
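+下面给出一个使用 `AgentWithOptions` 的示意(假设 `WithDisallowTransferToParent` 为无参 Option,`subAgent` 为已有的 Agent 实例;Option 的具体签名请以 adk 包源码为准):
+
+```go
+// 在运行前为 subAgent 附加通用配置:禁止其 Transfer 回父 Agent
+configured := adk.AgentWithOptions(ctx, subAgent,
+	adk.WithDisallowTransferToParent(), // 假设为无参 Option,仅作示意
+)
+_ = configured
+```
+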
+## AgentEvent
+
+AgentEvent 是 Agent 在其运行过程中产生的核心事件数据结构。其中包含了 Agent 的元信息、输出、行为和报错:
+
+```go
+// github.com/cloudwego/eino/adk/interface.go
+
+type AgentEvent struct {
+ AgentName string
+
+ RunPath []RunStep
+
+ Output *AgentOutput
+
+ Action *AgentAction
+
+ Err error
+}
+
+// EventFromMessage 构建普通 event
+func EventFromMessage(msg Message, msgStream MessageStream, role schema.RoleType, toolName string) *AgentEvent
+```
+
+### AgentName & RunPath
+
+`AgentName` 和 `RunPath` 字段是由框架自动进行填充,它们提供了关于事件来源的重要上下文信息,在复杂的、由多个 Agent 构成的系统中至关重要。
+
+```go
+type RunStep struct {
+ agentName string
+}
+```
+
+- `AgentName` 标明了是哪一个 Agent 实例产生了当前的 AgentEvent 。
+- `RunPath` 记录了到达当前 Agent 的完整调用链路。`RunPath` 是一个 `RunStep` 切片,它按顺序记录了从最初的入口 Agent 到当前产生事件的 Agent 的所有 `AgentName`。
+
+### AgentOutput
+
+`AgentOutput` 封装了 Agent 产生的输出。
+
+Message 输出设置在 MessageOutput 字段中,其他类型的自定义输出设置在 CustomizedOutput 字段中:
+
+```go
+// github.com/cloudwego/eino/adk/interface.go
+
+type AgentOutput struct {
+ MessageOutput *MessageVariant
+
+ CustomizedOutput any
+}
+
+type MessageVariant struct {
+ IsStreaming bool
+
+ Message Message
+ MessageStream MessageStream
+ // message role: Assistant or Tool
+ Role schema.RoleType
+ // only used when Role is Tool
+ ToolName string
+}
+```
+
+`MessageOutput` 字段的类型 `MessageVariant` 是一个核心数据结构,主要功能为:
+
+1. 统一处理流式与非流式消息:`IsStreaming` 是一个标志位。值为 true 表示当前 `MessageVariant` 包含的是一个流式消息(从 MessageStream 读取),为 false 则表示包含的是一个非流式消息(从 Message 读取):
+
+ - 流式 : 随着时间的推移,逐步返回一系列消息片段,最终构成一个完整的消息(MessageStream)。
+ - 非流式 : 一次性返回一个完整的消息(Message)。
+2. 提供便捷的元数据访问:Message 结构体内部包含了一些重要的元信息,如消息的 Role(Assistant 或 Tool),为了方便快速地识别消息类型和来源, MessageVariant 将这些常用的元数据提升到了顶层:
+
+ - `Role`:消息的角色,Assistant / Tool
+ - `ToolName`:如果消息角色是 Tool ,这个字段会直接提供工具的名称。
+
+这样做的好处是,代码在需要根据消息类型进行路由或决策时,无需深入解析 Message 对象的具体内容,可以直接从 MessageVariant 的顶层字段获取所需信息,从而简化了逻辑,提高了代码的可读性和效率。
+
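+结合上面的字段说明,下面给出一段消费 `AgentEvent` 时区分流式与非流式输出的示意代码(假设 `MessageStream` 遵循 Eino 的流式读取约定、提供 `Recv` 方法并以 io.EOF 结束;`iter` 为 `Agent.Run` 返回的迭代器):
+
+```go
+for {
+	event, ok := iter.Next()
+	if !ok {
+		break
+	}
+	if event.Err != nil || event.Output == nil || event.Output.MessageOutput == nil {
+		continue
+	}
+
+	mv := event.Output.MessageOutput
+	if mv.IsStreaming {
+		// 流式消息:从 MessageStream 中逐帧读取内容
+		for {
+			chunk, err := mv.MessageStream.Recv()
+			if err != nil { // 读到 io.EOF(流结束)或其他错误时退出
+				break
+			}
+			fmt.Print(chunk.Content)
+		}
+	} else {
+		// 非流式消息:直接读取完整的 Message 内容
+		fmt.Println(mv.Message.Content)
+	}
+}
+```
+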
+### AgentAction
+
+Agent 产生包含 AgentAction 的 Event 可以控制多 Agent 协作,比如立刻退出、中断、跳转等:
+
+```go
+// github.com/cloudwego/eino/adk/interface.go
+
+type AgentAction struct {
+ Exit bool
+
+ Interrupted *InterruptInfo
+
+ TransferToAgent *TransferToAgentAction
+
+ CustomizedAction any
+}
+
+type InterruptInfo struct {
+ Data any
+}
+
+type TransferToAgentAction struct {
+ DestAgentName string
+}
+```
+
+Eino ADK 当前预设 Action 有三种:
+
+1. 退出:当 Agent 产生 Exit Action 时,Multi-Agent 会立刻退出
+
+```go
+func NewExitAction() *AgentAction {
+ return &AgentAction{Exit: true}
+}
+```
+
+2. 跳转:当 Agent 产生 Transfer Action 时,会跳转到目标 Agent 运行
+
+```go
+func NewTransferToAgentAction(destAgentName string) *AgentAction {
+ return &AgentAction{TransferToAgent: &TransferToAgentAction{DestAgentName: destAgentName}}
+}
+```
+
+3. 中断:当 Agent 产生 Interrupt Action 时,会中断 Runner 的运行。由于中断可能发生在任何位置,同时中断时需要向外传递独特的信息,Action 中提供了 `Interrupted` 字段供 Agent 设置自定义数据,Runner 接收到 Interrupted 不为空的 Action 时则认为产生了中断。Interrupt & Resume 内部机制较为复杂,在 【Eino ADK: Agent Runner】-【Eino ADK: Interrupt & Resume】章节会展开详述。
+
+```go
+// 例如 ChatModelAgent 中断时,会发送如下的 AgentEvent:
+h.Send(&AgentEvent{AgentName: h.agentName, Action: &AgentAction{
+ Interrupted: &InterruptInfo{
+ Data: &ChatModelAgentInterruptInfo{Data: data, Info: info},
+ },
+}})
+```
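+
+作为补充,下面给出一个在自定义 Agent 中发送 Exit Action 事件的示意(其中 msg 为 Agent 生成的最终消息、gen 为 `NewAsyncIteratorPair` 返回的 Generator,均为本文假设的示例变量):
+
+```go
+// 先基于最终消息构建普通事件,再附加 Exit Action,令 Multi-Agent 立刻退出
+event := adk.EventFromMessage(msg, nil, schema.Assistant, "")
+event.Action = adk.NewExitAction()
+gen.Send(event)
+```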
diff --git a/content/zh/docs/eino/core_modules/eino_adk/agent_preview.md b/content/zh/docs/eino/core_modules/eino_adk/agent_preview.md
new file mode 100644
index 00000000000..900d455c078
--- /dev/null
+++ b/content/zh/docs/eino/core_modules/eino_adk/agent_preview.md
@@ -0,0 +1,162 @@
+---
+Description: ""
+date: "2025-09-30"
+lastmod: ""
+tags: []
+title: 'Eino ADK: 概述'
+weight: 2
+---
+
+## 什么是 Eino ADK?
+
+Eino ADK 参考 [Google-ADK](https://google.github.io/adk-docs/agents/) 的设计,提供了 Go 语言的 Agent、Multi-Agent 开发的灵活组合框架。Eino ADK 为多 Agent 交互沉淀了通用的上下文传递、事件流分发和转换、任务控制权转让、中断与恢复、通用切面等能力;其适用场景广泛、模型无关、部署无关,让 Agent、Multi-Agent 开发更加简单、便利,并提供完善的生产级应用治理能力。
+
+Eino ADK 旨在帮助开发者开发、管理 Agent 应用。提供灵活且鲁棒的开发环境,助力开发者搭建 对话智能体、非对话智能体、复杂任务、工作流等多种多样的 Agent 应用。
+
+## ADK 框架
+
+Eino ADK 的整体模块构成,如下图所示:
+
+
+
+### Agent Interface
+
+Eino ADK 的核心是 Agent 抽象(Agent Interface),ADK 的所有功能设计均围绕 Agent 抽象展开。详解请见 [Eino ADK: Agent 抽象](/zh/docs/eino/core_modules/eino_adk/agent_interface)
+
+```go
+type Agent interface {
+ Name(ctx context.Context) string
+ Description(ctx context.Context) string
+
+ // Run runs the agent.
+ // The returned AgentEvent within the AsyncIterator must be safe to modify.
+ // If the returned AgentEvent within the AsyncIterator contains MessageStream,
+ // the MessageStream MUST be exclusive and safe to be received directly.
+ // NOTE: it's recommended to use SetAutomaticClose() on the MessageStream of AgentEvents emitted by AsyncIterator,
+ // so that even the events are not processed, the MessageStream can still be closed.
+ Run(ctx context.Context, input *AgentInput, options ...AgentRunOption) *AsyncIterator[*AgentEvent]
+}
+```
+
+`Agent.Run` 的定义为:
+
+1. 从入参 AgentInput、AgentRunOption 和可选的 Context Session 中获取任务详情及相关数据
+2. 执行任务,并将执行过程、执行结果写入到 AgentEvent Iterator
+
+`Agent.Run` 要求 Agent 的实现以 Future 模式异步执行,核心分成三步,具体可参考 ChatModelAgent 中 Run 方法的实现:
+
+1. 创建一对 Iterator、Generator
+2. 启动 Agent 的异步任务,并传入 Generator,处理 AgentInput。Agent 在这个异步任务执行核心逻辑(例如 ChatModelAgent 调用 LLM),并在产生新的事件时写入到 Generator 中,供 Agent 调用方在 Iterator 中消费
+3. 启动 2 中的任务后立即返回 Iterator
+
+### 多 Agent 协作
+
+围绕 Agent 抽象,Eino ADK 提供多种简单易用、场景丰富的组合原语,可支撑开发丰富多样的 Multi-Agent 协同策略,比如 Supervisor、Plan-Execute、Group-Chat 等 Multi-Agent 场景。从而实现不同的 Agent 分工合作模式,处理更复杂的任务。详解请见 [Eino ADK: Agent 协作](/zh/docs/eino/core_modules/eino_adk/agent_collaboration)
+
+Eino ADK 定义的 Agent 协作过程中的协作原语如下:
+
+- Agent 间协作方式
+
+| 协助方式 | 描述 |
+| --- | --- |
+| Transfer | 直接将任务转让给另外一个 Agent,本 Agent 则执行结束后退出,不关心转让 Agent 的任务执行状态 |
+| ToolCall(AgentAsTool) | 将 Agent 当成 ToolCall 调用,等待 Agent 的响应,并可获取被调用 Agent 的输出结果,进行下一轮处理 |
+
+- Agent 间上下文策略
+
+| 上下文策略 | 描述 |
+| --- | --- |
+| 上游 Agent 全对话 | 获取本 Agent 的上游 Agent 的完整对话记录 |
+| 全新任务描述 | 忽略掉上游 Agent 的完整对话记录,给出一个全新的任务总结,作为子 Agent 的 AgentInput 输入 |
+
+- Agent 决策自主性
+
+| 决策自主性 | 描述 |
+| --- | --- |
+| 自主决策 | 在 Agent 内部,基于其可选的下游 Agent,如需协助时,自主选择下游 Agent 进行协助。一般来说,Agent 内部是基于 LLM 进行决策,不过即使是基于预设逻辑进行选择,从 Agent 外部看依然视为自主决策 |
+| 预设决策 | 事先预设好一个 Agent 执行任务后的下一个 Agent。Agent 的执行顺序是事先确定、可预测的 |
+
+不同 Agent 实现类型的对比如下:
+
+| 类别 | ChatModel Agent | Workflow Agents | Custom Logic | EinoBuiltInAgent(supervisor, plan-execute) |
+| --- | --- | --- | --- | --- |
+| 功能 | 思考,生成,工具调用 | 控制 Agent 之间的执行流程 | 运行自定义逻辑 | 开箱即用的 Multi-agent 模式封装 |
+| 核心 | LLM | 预确定的执行流程(顺序,并发,循环) | 自定义代码 | 基于 Eino 实践积累的经验,对前三者的高度封装 |
+| 用途 | 生成,动态决策 | 结构化处理,编排 | 定制需求 | 特定场景内的开箱即用 |
+
+## ADK Examples
+
+[Eino-examples](https://github.com/cloudwego/eino-examples/tree/main/adk) 项目中提供了多种 ADK 的实施样例,您可以参考样例代码与简介,对 adk 能力构建初步的认知:
+
+| 样例 | 简介 |
+| --- | --- |
+| 顺序工作流案例 | 该示例代码展示了基于 eino adk 的 Workflow 模式构建的一个顺序执行的多智能体工作流。 |
+| 循环工作流案例 | 该示例代码基于 eino adk 的 Workflow 模式中的 LoopAgent,构建了一个反思迭代型智能体框架。 |
+| 并行工作流案例 | 该示例代码基于 eino adk 的 Workflow 模式中的 ParallelAgent,构建了一个并发信息搜集框架。 |
+| supervisor | 该用例采用单层 Supervisor 管理两个功能较为综合的子 Agent:Research Agent 负责检索任务,Math Agent 负责多种数学运算(加、乘、除),但所有数学运算均由同一个 Math Agent 内部统一处理,而非拆分为多个子 Agent。此设计简化了代理层级,适合任务较为集中且不需要过度拆解的场景,便于快速部署和维护。 |
+| layered-supervisor | 该用例实现了多层级智能体监督体系,顶层 Supervisor 管理 Research Agent 和 Math Agent,Math Agent 又进一步细分为 Subtract、Multiply、Divide 三个子 Agent。顶层 Supervisor 负责将研究任务和数学任务分配给下级 Agent,Math Agent 作为中层监督者再将具体数学运算任务分派给其子 Agent。 |
+| plan-execute 案例 | 本示例基于 eino adk 实现 plan-execute-replan 模式的多 Agent 旅行规划系统,核心功能是处理用户复杂旅行请求(如 "3 天北京游,需从纽约出发的航班、酒店推荐、必去景点"),通过"计划 - 执行 - 重新计划"循环完成任务;从结构上看,plan-execute-replan 分为两层。 |
+| 书籍推荐 agent(运行中断与恢复) | 该代码展示了基于 eino adk 框架构建的一个书籍推荐聊天智能体实现,体现了 Agent 运行中断与恢复功能。 |
diff --git a/content/zh/docs/eino/core_modules/eino_adk/multi_agent/_index.md b/content/zh/docs/eino/core_modules/eino_adk/multi_agent/_index.md
deleted file mode 100644
index ac01a4af5a9..00000000000
--- a/content/zh/docs/eino/core_modules/eino_adk/multi_agent/_index.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-Description: ""
-date: "2025-08-06"
-lastmod: ""
-tags: []
-title: 'Eino ADK: Multi-Agent '
-weight: 5
----
-
-
diff --git a/content/zh/docs/eino/core_modules/eino_adk/multi_agent/plan_executor.md b/content/zh/docs/eino/core_modules/eino_adk/multi_agent/plan_executor.md
deleted file mode 100644
index 904d98cf417..00000000000
--- a/content/zh/docs/eino/core_modules/eino_adk/multi_agent/plan_executor.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-Description: ""
-date: "2025-08-06"
-lastmod: ""
-tags: []
-title: 'Eino ADK: Plan-Executor'
-weight: 3
----
-
-Plan-Executor 是基于 Eino ADK 提供的开箱即用的 multi-agent 封装,TODO
diff --git a/content/zh/docs/eino/core_modules/eino_adk/multi_agent/reflection_agents.md b/content/zh/docs/eino/core_modules/eino_adk/multi_agent/reflection_agents.md
deleted file mode 100644
index f640471b18c..00000000000
--- a/content/zh/docs/eino/core_modules/eino_adk/multi_agent/reflection_agents.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-Description: ""
-date: "2025-08-06"
-lastmod: ""
-tags: []
-title: 'Eino ADK: Reflection Agents'
-weight: 4
----
-
-Reflection Agent 是基于 Eino ADK 提供的示例型的 multi-agent 封装,TODO
diff --git a/content/zh/docs/eino/core_modules/eino_adk/multi_agent/supervisor.md b/content/zh/docs/eino/core_modules/eino_adk/multi_agent/supervisor.md
deleted file mode 100644
index 95df0551c91..00000000000
--- a/content/zh/docs/eino/core_modules/eino_adk/multi_agent/supervisor.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-Description: ""
-date: "2025-08-06"
-lastmod: ""
-tags: []
-title: 'Eino ADK: Supervisor'
-weight: 2
----
-
-Supervisor 是基于 Eino ADK 提供的开箱即用的 multi-agent 封装,TODO
diff --git a/content/zh/docs/eino/core_modules/eino_adk/multi_agent/workflow_agent.md b/content/zh/docs/eino/core_modules/eino_adk/multi_agent/workflow_agent.md
deleted file mode 100644
index 47582a49f6e..00000000000
--- a/content/zh/docs/eino/core_modules/eino_adk/multi_agent/workflow_agent.md
+++ /dev/null
@@ -1,204 +0,0 @@
----
-Description: ""
-date: "2025-08-06"
-lastmod: ""
-tags: []
-title: 'Eino ADK: Workflow Agent'
-weight: 1
----
-
-WorkflowAgent 支持以静态的模式运行多个 Agent。所谓“静态”,是指 Agent 之间的协作流程(如顺序、并行)是在代码中预先定义好的,而不是在运行时由 Agent 动态决定的。Eino ADK 提供了三种基础 Workflow Agent:Sequential、Parallel、Loop,他们之间可以互相嵌套以完成更复杂的任务。
-
-默认情况下,Workflow 中每个 Agent 的输入由 History 章节中介绍的方式生成,可以通过 WithHistoryRewriter 自定 AgentInput 生成方式。
-
-当 Agent 产生 ExitAction Event 后,Workflow Agent 会立刻退出,无论之后有没有其他需要运行的 Agent。
-
-# SequentialAgent
-
-SequentialAgent 会按照你提供的顺序,依次执行一系列 Agent:
-
-
-
-我们通过一个包含两个子 Agent 的 Research Agent 来介绍 SequentialAgent 的用法,其中第一个 Plan Agent 会接收一个研究主题,并生成研究计划;第二个 Write Agent 会接收研究主题与 Plan 产生研究计划(Write Agent 的输入依据 History 章节中介绍的默认方式生成,也可以通过 WithHistoryRewriter 自定义),并撰写报告。
-
-首先创建两个子 Agent ,我们将两个 Agent 简化为仅包含 ChatModel,实践中可以通过为 Agent 增加 Tool 来增强 Agent 的 plan 和 write 能力:
-
-```go
-import (
- "context"
- "log"
- "os"
-
- "github.com/cloudwego/eino-ext/components/model/openai"
- "github.com/cloudwego/eino/adk"
- "github.com/cloudwego/eino/components/model"
-)
-
-func newChatModel() model.ToolCallingChatModel {
- cm, err := openai.NewChatModel(context.Background(), &openai.ChatModelConfig{
- APIKey: os.Getenv("OPENAI_API_KEY"),
- Model: os.Getenv("OPENAI_MODEL"),
- })
- if err != nil {
- log.Fatal(err)
- }
- return cm
-}
-
-func NewPlanAgent() adk.Agent {
- a, err := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
- Name: "PlannerAgent",
- Description: "Generates a research plan based on a topic.",
- Instruction: `
-You are an expert research planner.
-Your goal is to create a comprehensive, step-by-step research plan for a given topic.
-The plan should be logical, clear, and easy to follow.
-The user will provide the research topic. Your output must ONLY be the research plan itself, without any conversational text, introductions, or summaries.`,
- Model: newChatModel(),
- })
- if err != nil {
- log.Fatal(err)
- }
- return a
-}
-
-func NewWriterAgent() adk.Agent {
- a, err := adk.NewChatModelAgent(context.Background(), &adk.ChatModelAgentConfig{
- Name: "WriterAgent",
- Description: "Writes a report based on a research plan.",
- Instruction: `
-You are an expert academic writer.
-You will be provided with a detailed research plan.
-Your task is to expand on this plan to write a comprehensive, well-structured, and in-depth report.
-The user will provide the research plan. Your output should be the complete final report.`,
- Model: newChatModel(),
- })
- if err != nil {
- log.Fatal(err)
- }
- return a
-}
-```
-
-之后使用 Sequential Agent 编排两个子 Agent:
-
-```go
-import (
- "context"
- "fmt"
- "log"
- "os"
-
- "github.com/cloudwego/eino/adk"
-
- "github.com/cloudwego/eino-examples/adk/intro/workflow/sequential/internal"
-)
-
-func main() {
- ctx := context.Background()
-
- a, err := adk.NewSequentialAgent(ctx, &adk.SequentialAgentConfig{
- Name: "ResearchAgent",
- Description: "A sequential workflow for planning and writing a research report.",
- SubAgents: []adk.Agent{internal.NewPlanAgent(), internal.NewWriterAgent()},
- })
- if err != nil {
- log.Fatal(err)
- }
-
- runner := adk.NewRunner(ctx, adk.RunnerConfig{
- Agent: a,
- })
-
- iter := runner.Query(ctx, "The history of Large Language Models")
- for {
- event, ok := iter.Next()
- if !ok {
- break
- }
- if event.Err != nil {
- fmt.Printf("Error: %v\n", event.Err)
- break
- }
- msg, err := event.Output.MessageOutput.GetMessage()
- if err != nil {
- log.Fatal(err)
- }
- fmt.Printf("Agent[%s]:\n %+v\n\n===========\n\n", event.AgentName, msg)
- }
-}
-```
-
-运行结果如下:
-
-```
-Agent[PlannerAgent]:
- assistant: Step 1: Define the Research Scope
-- Determine the time frame for your historical analysis, starting from the early development of large language models (LLMs) to the present.
-
-......
-
-Step 10: Update and Revise
-- Plan for periodic updates to incorporate new developments in large language models as they arise.
-- Keep abreast of publications and ongoing research in the field to maintain the relevance and accuracy of your research.
-finish_reason: stop
-usage: &{86 675 761}
-
-===========
-
-Agent[WriterAgent]:
- assistant: # The History of Large Language Models
-
-## Introduction
-
-The development of Large Language Models (LLMs) marks a significant milestone in the field of artificial intelligence (AI) and natural language processing (NLP). These models, capable of understanding and generating human-like text, have evolved rapidly over the past few decades, showcasing profound improvements in language comprehension and generation. This report explores the history of LLMs, tracing their evolution from early linguistic theories to the sophisticated models we see today.
-
-## Early Foundations in Linguistics and Computation
-
-......
-
-## Conclusion
-
-The history of Large Language Models is a testament to the rapid evolution of artificial intelligence. From early linguistic theories and basic neural networks to sophisticated models capable of human-like language generation, each milestone has contributed to our current understanding and capabilities. As LLMs continue to advance, their potential to transform industries, improve communication, and enable new technologies remains vast. However, it is crucial that ethical considerations keep pace with technological advances to ensure these models benefit society at large.
-
----
-
-This comprehensive overview of the history of Large Language Models outlines their origins, evolution, and impact, providing a foundation for further exploration and research in this dynamic field.
-finish_reason: stop
-usage: &{74 1066 1140}
-
-===========
-```
-
-# LoopAgent
-
-LoopAgent is built on top of SequentialAgent: after the SequentialAgent finishes one pass, it runs again from the beginning:
-
-
-
-The LoopAgent exits when an agent produces an ExitAction event; you can also set MaxIterations to cap the number of iterations. Create one with adk.NewLoopAgent:
-
-```go
-adk.NewLoopAgent(ctx, &adk.LoopAgentConfig{
-    Name:          "name",
-    Description:   "description",
-    SubAgents:     []adk.Agent{a1, a2},
-    MaxIterations: 3,
-})
-```
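-
-As a minimal, hedged sketch of early termination: the reviewer below is a ChatModelAgent configured with the built-in ExitTool mentioned in the ChatModelAgent section, so the loop can stop before MaxIterations once the model calls that tool and an ExitAction event is produced. The ReviewerAgent prompt, the writer agent, and the newChatModel helper are illustrative assumptions, not part of the ADK API.
-
-```go
-// Sketch only: a sub-agent that can end the loop via the built-in exit tool.
-reviewer, err := adk.NewChatModelAgent(ctx, &adk.ChatModelAgentConfig{
-    Name:        "ReviewerAgent",
-    Description: "Reviews the draft and exits the loop once it is good enough.",
-    Instruction: "Review the draft. If it meets the bar, call the exit tool; otherwise give concrete feedback.",
-    Model:       newChatModel(),  // assumed helper, as in the earlier examples
-    Exit:        &adk.ExitTool{}, // built-in exit implementation referenced by ChatModelAgentConfig.Exit (name assumed)
-})
-if err != nil {
-    log.Fatal(err)
-}
-
-loop, err := adk.NewLoopAgent(ctx, &adk.LoopAgentConfig{
-    Name:          "WriteReviewLoop",
-    Description:   "Alternates writing and reviewing until the reviewer exits.",
-    SubAgents:     []adk.Agent{writer, reviewer}, // writer: a separately constructed Agent
-    MaxIterations: 3,
-})
-```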
-
-# ParallelAgent
-
-ParallelAgent runs several agents concurrently:
-
-
-
-Create one with adk.NewParallelAgent:
-
-```go
-adk.NewParallelAgent(ctx, &adk.ParallelAgentConfig{
-    Name:        "name",
-    Description: "desc",
-    SubAgents:   []adk.Agent{a1, a2},
-})
-```
diff --git a/content/zh/docs/eino/core_modules/eino_adk/outline.md b/content/zh/docs/eino/core_modules/eino_adk/outline.md
deleted file mode 100644
index 79510a408b0..00000000000
--- a/content/zh/docs/eino/core_modules/eino_adk/outline.md
+++ /dev/null
@@ -1,328 +0,0 @@
----
-Description: ""
-date: "2025-08-06"
-lastmod: ""
-tags: []
-title: 'Eino ADK: Overview'
-weight: 1
----
-
-# What is Eino ADK?
-
-Eino ADK, inspired by the design of [Google-ADK](https://google.github.io/adk-docs/agents/), is a flexible composition framework for building Agents in Go, i.e. a framework for Agent and Multi-Agent development. For multi-Agent interaction, Eino ADK provides reusable capabilities such as context passing, event-stream dispatch and transformation, transfer of task control, interrupt & resume, and common aspects. It is broadly applicable, model-agnostic, and deployment-agnostic, making Agent and Multi-Agent development simpler and more convenient while providing solid governance for production-grade applications.
-
-Eino ADK aims to help developers build and manage Agent applications. It offers a flexible and robust development environment for building all kinds of Agent applications, such as conversational agents, non-conversational agents, complex tasks, and workflows.
-
-# ADK Framework Structure
-
-The overall module structure of Eino ADK is shown in the figure below:
-
-
-
-## Agent Interface
-
-The core of Eino ADK is the Agent abstraction (the Agent Interface); every feature of the ADK is designed around it.
-
-The core behavior of an Agent can be roughly described as:
-
-> Obtain the task details and related data from the AgentInput, the AgentRunOptions, and the optional Session in the Context
->
-> Execute the task, and emit the execution process and results to the AgentEvent Iterator
->
-> While executing the task, data can be temporarily stored via the Session in the Context
-
-```go
-type Agent interface {
- Name(ctx context.Context) string
- Description(ctx context.Context) string
- Run(ctx context.Context, input *AgentInput, opts ...AgentRunOption) *AsyncIterator[*AgentEvent]
-}
-```
-
-An Agent's Run implementation is typically asynchronous in the Future style and roughly follows three steps (see the Run method of adk.ChatModelAgent for a concrete example):
-
-1. Create an Iterator/Generator pair.
-2. Start the Agent's asynchronous task, passing in the Generator, and process the AgentInput. In this asynchronous task, new events are written to the Generator so the Agent's caller can consume them from the Iterator.
-3. After starting the task, return the Iterator.
-
-The extended behavior of an Agent can be roughly described as:
-
-> An Agent can add sub-agents, and it can also be added under a parent Agent, i.e. act as a sub-agent of that parent
->
-> While executing a task, an Agent can transfer the task to its parent Agent or to a sub-agent as needed
-
-```go
-type OnSubAgents interface {
- OnSetSubAgents(ctx context.Context, subAgents []Agent) error
- OnSetAsSubAgent(ctx context.Context, parent Agent) error
-
- OnDisallowTransferToParent(ctx context.Context) error
-}
-```
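-
-As a hedged sketch (MyAgent and its fields are illustrative, not part of the ADK), implementing OnSubAgents is how a composed parent registers its children with this Agent, so that at run time the Agent knows which agents it may transfer to and whether it may hand control back to its parent:
-
-```go
-type MyAgent struct {
-    subAgents             []adk.Agent // agents this agent may transfer to
-    parent                adk.Agent   // parent agent, if this agent was added as a sub-agent
-    allowTransferToParent bool
-}
-
-func (m *MyAgent) OnSetSubAgents(ctx context.Context, subAgents []adk.Agent) error {
-    m.subAgents = subAgents // remember the transfer candidates
-    return nil
-}
-
-func (m *MyAgent) OnSetAsSubAgent(ctx context.Context, parent adk.Agent) error {
-    m.parent = parent // remember who this agent can hand control back to
-    m.allowTransferToParent = true
-    return nil
-}
-
-func (m *MyAgent) OnDisallowTransferToParent(ctx context.Context) error {
-    m.allowTransferToParent = false // the parent forbids transferring back up
-    return nil
-}
-```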
-
-Centralized run-state management:
-
-While an ADK composition is running, every Agent can read and write data through the `Session map[string]any` in the context; the Session is visible to all Agents. Its lifetime matches one run of the ADK composition (Interrupt & Resume are treated as the same lifetime).
-
-```go
-func SetSessionValue(ctx context.Context, key string, value any) {
- // omit code
-}
-
-func GetSessionValue(ctx context.Context, key string) (any, bool) {
- // omit code
-}
-```
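-
-As a minimal sketch of how two agents might share data through the Session (the "plan" key and the surrounding agent code are illustrative, not part of the API):
-
-```go
-// Inside the first agent's Run: stash an intermediate result for later agents.
-adk.SetSessionValue(ctx, "plan", "1. collect sources 2. draft 3. review")
-
-// Inside a downstream agent's Run: read it back if it exists.
-if v, ok := adk.GetSessionValue(ctx, "plan"); ok {
-    plan := v.(string) // the writer knows the stored type
-    _ = plan           // use the plan when building the model input
-}
-```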
-
-## Agent Compose
-
-Around the Agent abstraction, ADK provides several easy-to-use composition primitives covering a wide range of scenarios. They support rich Multi-Agent collaboration strategies such as Supervisor, Plan-Execute, and Group-Chat, enabling different division-of-labor patterns among Agents to handle more complex tasks.
-
-Collaboration primitives that may appear when Agents cooperate:
-
-- How Agents collaborate
-
-| Collaboration mode | Description |
-| --- | --- |
-| Transfer | Hand the task over to another Agent directly; this Agent finishes and exits without caring about the execution state of the transferred task |
-| ToolCall (AgentAsTool) | Invoke an Agent as a ToolCall, wait for its response, and use the called Agent's output for the next round of processing |
-
-- Context strategy for AgentInput
-
-| Context strategy | Description |
-| --- | --- |
-| Full upstream conversation | The Agent receives the complete conversation history of its upstream Agents |
-| Fresh task description | The upstream Agents' conversation is ignored; a brand-new task summary is provided as the sub-agent's AgentInput |
-
-- Decision autonomy
-
-| Decision autonomy | Description |
-| --- | --- |
-| Autonomous decision | When the Agent needs help, it chooses a downstream Agent from its available candidates by itself. The decision is usually made by an LLM inside the Agent, but even a preset rule inside the Agent still counts as autonomous when viewed from the outside |
-| Preset decision | The next Agent to run after each Agent is fixed in advance; the execution order is predetermined and predictable |
-
-### Workflow
-
-Three workflow modes are provided: sequential, parallel, and loop. Users can combine them flexibly into different workflow graphs.
-
-In a Workflow Agent, every Agent receives the same AgentInput and runs in the order expressed by the predefined topology.
-
-#### Sequential
-
-> - Collaboration mode: Transfer
-> - AgentInput context strategy: full upstream conversation
-> - Decision autonomy: preset decision
-
-Combines the user-provided SubAgents list into a Sequential Agent that executes them in order; the Name and Description serve as the Sequential Agent's identifier and description.
-
-When a Sequential Agent runs, it executes the SubAgents list in order and finishes once every Agent has run exactly once.
-
-Note: because an Agent only receives the full conversation of its upstream Agents, Agents that run later do not see the AgentEvent output of Agents that ran earlier.
-
-```go
-type SequentialAgentConfig struct {
- Name string
- Description string
- SubAgents []Agent
-}
-
-func NewSequentialAgent(ctx context.Context, config *SequentialAgentConfig) (Agent, error) {
- // omit code
-}
-```
-
-
-
-#### Parallel
-
-> - Collaboration mode: Transfer
-> - AgentInput context strategy: full upstream conversation
-> - Decision autonomy: preset decision
-
-Combines the user-provided SubAgents list into a Parallel Agent whose sub-agents run concurrently on the same context; the Name and Description serve as the Parallel Agent's identifier and description.
-
-When a Parallel Agent runs, it executes the SubAgents list concurrently and finishes after all of them have completed.
-
-```go
-type ParallelAgentConfig struct {
- Name string
- Description string
- SubAgents []Agent
-}
-
-func NewParallelAgent(ctx context.Context, config *ParallelAgentConfig) (Agent, error) {
- // omit code
-}
-```
-
-
-
-#### Loop
-
-> - Collaboration mode: Transfer
-> - AgentInput context strategy: full upstream conversation
-> - Decision autonomy: preset decision
-
-Combines the user-provided SubAgents list into a Loop Agent that executes them in array order, over and over; the Name and Description serve as the Loop Agent's identifier and description.
-
-When a Loop Agent runs, it executes the SubAgents list in order repeatedly until the configured maximum number of iterations is reached.
-
-```go
-type LoopAgentConfig struct {
- Name string
- Description string
- SubAgents []Agent
-
- MaxIterations int
-}
-
-func NewLoopAgent(ctx context.Context, config *LoopAgentConfig) (Agent, error) {
- // omit code
-}
-```
-
-
-
-### AgentAsTool
-
-> - Collaboration mode: ToolCall
-> - AgentInput context strategy: fresh task description
-> - Decision autonomy: autonomous decision
-
-Converts an Agent into a Tool so that other Agents can use it like an ordinary Tool.
-
-Note: whether an Agent can call other Agents as Tools depends on its own implementation. The ChatModelAgent provided by the adk supports AgentAsTool.
-
-```go
-func NewAgentTool(_ context.Context, agent Agent, options ...AgentToolOption) tool.BaseTool {
- // omit code
-}
-```
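-
-As a minimal, hedged sketch: convert an existing Agent into a tool.BaseTool with NewAgentTool, then register it like any other tool on a calling ChatModelAgent. The searchAgent, the newChatModel helper, and the assumption that ToolsConfig embeds compose.ToolsNodeConfig with a Tools list (and the corresponding tool/compose imports) are illustrative assumptions, not confirmed by this document.
-
-```go
-// Sketch only: expose an existing agent as a tool for another agent to call.
-searchTool := adk.NewAgentTool(ctx, searchAgent) // searchAgent: a separately constructed adk.Agent
-
-caller, err := adk.NewChatModelAgent(ctx, &adk.ChatModelAgentConfig{
-    Name:        "SupervisorAgent",
-    Description: "Answers questions, delegating lookups to the search tool.",
-    Instruction: "Use the available tools when you need external information.",
-    Model:       newChatModel(), // assumed helper, as in the earlier examples
-    ToolsConfig: adk.ToolsConfig{
-        // Assumption: ToolsConfig carries the tool.BaseTool list made available to the model.
-        ToolsNodeConfig: compose.ToolsNodeConfig{Tools: []tool.BaseTool{searchTool}},
-    },
-})
-if err != nil {
-    log.Fatal(err)
-}
-```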
-
-The figure below shows Agent1 calling Agent2 and Agent3 as Tools, similar to a function stack call: while Agent1 is running, Agent2 and Agent3 are invoked as if they were tool functions.
-
-- AgentAsTool can serve as one way to implement a Supervisor Multi-Agent
-
-
-
-## Single Agent
-
-> Built-In Single Agent
-
-Eino ADK ships with several built-in Single Agent implementations, making it easy to find a suitable, ready-to-use Agent for a variety of business scenarios.
-
-### ChatModelAgent
-
-ChatModelAgent implements a ReAct-style Agent: it orchestrates the ReAct control flow with an Eino Graph, exports the events produced during the run via callbacks.Handler, and converts them into AgentEvents to return.
-
-To learn more about ChatModelAgent, see: [Eino ADK: ChatModelAgent](/zh/docs/eino/core_modules/eino_adk/agent_implementation/chat_model_agent)
-
-```go
-type ChatModelAgentConfig struct {
- Name string
- Description string
- Instruction string
-
- Model model.ToolCallingChatModel
-
- ToolsConfig ToolsConfig
-
- // optional
- GenModelInput GenModelInput
-
- // Exit tool. Optional; defaults to nil. When the model calls this tool, an Exit Action is generated.
- // The built-in implementation is 'ExitTool'.
- Exit tool.BaseTool
-
- // optional
- OutputKey string
-}
-
-func NewChatModelAgent(_ context.Context, config *ChatModelAgentConfig) (*ChatModelAgent, error) {
- // omit code
-}
-```
-
-### A2AAgent
-
-> In development
-
-### ResponseAgent
-
-> Planned
-
-# Running an Agent
-
-The Runner is the executor for Agents.
-
-For a Multi-Agent composed with the ADK framework, it is recommended to wrap execution with adk.NewRunner. The following ADK capabilities are only available when the agent is executed through the Runner:
-
-- Interrupt & Resume
-- Aspect mechanisms
-- Context preprocessing
-
-```go
-type RunnerConfig struct {
- EnableStreaming bool
-}
-
-func NewRunner(_ context.Context, conf RunnerConfig) *Runner {
- // omit code
-}
-```
-
-## Agent Run Example
-
-When running an Agent, wrap the execution with adk.NewRunner so you can use the various extension capabilities provided by the adk.
-
-```go
-func runWithRunner() {
- ctx := context.Background()
-
- // create the Runner
- runner := adk.NewRunner(ctx, adk.RunnerConfig{
- EnableStreaming: true,
- })
-
- // run the agent
- messages := []adk.Message{
- schema.UserMessage("What's the weather like today?"),
- }
-
- events := runner.Run(ctx, agent, messages)
- for {
- event, ok := events.Next()
- if !ok {
- break
- }
-
- // handle the event
- handleEvent(event)
- }
-}
-```
diff --git a/static/img/eino/AgV7wB9hohnlNwbRTMMcSSxsnnf.png b/static/img/eino/AgV7wB9hohnlNwbRTMMcSSxsnnf.png
new file mode 100644
index 00000000000..5e10e2abb64
Binary files /dev/null and b/static/img/eino/AgV7wB9hohnlNwbRTMMcSSxsnnf.png differ
diff --git a/static/img/eino/BONAwG4YGhsXp2b3BXWcLXnGnPh.png b/static/img/eino/BONAwG4YGhsXp2b3BXWcLXnGnPh.png
new file mode 100644
index 00000000000..99d71862ccf
Binary files /dev/null and b/static/img/eino/BONAwG4YGhsXp2b3BXWcLXnGnPh.png differ
diff --git a/static/img/eino/Bd5xwDLkrhR7vRbwVgpctWnHnkJ.png b/static/img/eino/Bd5xwDLkrhR7vRbwVgpctWnHnkJ.png
new file mode 100644
index 00000000000..e55f6b66e51
Binary files /dev/null and b/static/img/eino/Bd5xwDLkrhR7vRbwVgpctWnHnkJ.png differ
diff --git a/static/img/eino/BeADw7qRvhynofbHnBvcJJwWnrc.png b/static/img/eino/BeADw7qRvhynofbHnBvcJJwWnrc.png
new file mode 100644
index 00000000000..c0037634621
Binary files /dev/null and b/static/img/eino/BeADw7qRvhynofbHnBvcJJwWnrc.png differ
diff --git a/static/img/eino/BmQywuBIshwKKGbb8Lzc313RnUg.png b/static/img/eino/BmQywuBIshwKKGbb8Lzc313RnUg.png
new file mode 100644
index 00000000000..e46c2031c91
Binary files /dev/null and b/static/img/eino/BmQywuBIshwKKGbb8Lzc313RnUg.png differ
diff --git a/static/img/eino/BwQDwnPYoh3masbDO4pcOflDnId.png b/static/img/eino/BwQDwnPYoh3masbDO4pcOflDnId.png
new file mode 100644
index 00000000000..3209bafd0e5
Binary files /dev/null and b/static/img/eino/BwQDwnPYoh3masbDO4pcOflDnId.png differ
diff --git a/static/img/eino/CwpiwGzBSh7HV1bQtJBcQ8brnHf.png b/static/img/eino/CwpiwGzBSh7HV1bQtJBcQ8brnHf.png
new file mode 100644
index 00000000000..042240dc3c2
Binary files /dev/null and b/static/img/eino/CwpiwGzBSh7HV1bQtJBcQ8brnHf.png differ
diff --git a/static/img/eino/DIJjweyWRh25ynbJWE3crGJdnne.png b/static/img/eino/DIJjweyWRh25ynbJWE3crGJdnne.png
new file mode 100644
index 00000000000..3bb9b51236d
Binary files /dev/null and b/static/img/eino/DIJjweyWRh25ynbJWE3crGJdnne.png differ
diff --git a/static/img/eino/DIWobv6ZWokDYhxTmIbcpI7ynAc.png b/static/img/eino/DIWobv6ZWokDYhxTmIbcpI7ynAc.png
new file mode 100644
index 00000000000..57bc0c66a75
Binary files /dev/null and b/static/img/eino/DIWobv6ZWokDYhxTmIbcpI7ynAc.png differ
diff --git a/static/img/eino/DRSQw67dlhjtW9bOEUqcHirtn6e.png b/static/img/eino/DRSQw67dlhjtW9bOEUqcHirtn6e.png
new file mode 100644
index 00000000000..1ecc91272ce
Binary files /dev/null and b/static/img/eino/DRSQw67dlhjtW9bOEUqcHirtn6e.png differ
diff --git a/static/img/eino/EetbwO6wIh1YCnbylPOcQXPmnaf.png b/static/img/eino/EetbwO6wIh1YCnbylPOcQXPmnaf.png
new file mode 100644
index 00000000000..b2846e59866
Binary files /dev/null and b/static/img/eino/EetbwO6wIh1YCnbylPOcQXPmnaf.png differ
diff --git a/static/img/eino/FB2FwX1S5hFciJbIGDFcAWIdn1p.png b/static/img/eino/FB2FwX1S5hFciJbIGDFcAWIdn1p.png
new file mode 100644
index 00000000000..d4b4f93b456
Binary files /dev/null and b/static/img/eino/FB2FwX1S5hFciJbIGDFcAWIdn1p.png differ
diff --git a/static/img/eino/FNtXwQ05ahvjP4bfyK4cK61ynpe.png b/static/img/eino/FNtXwQ05ahvjP4bfyK4cK61ynpe.png
new file mode 100644
index 00000000000..b733e9ba381
Binary files /dev/null and b/static/img/eino/FNtXwQ05ahvjP4bfyK4cK61ynpe.png differ
diff --git a/static/img/eino/FUn8wE2HVhsVkWbb1szc8JFDntd.png b/static/img/eino/FUn8wE2HVhsVkWbb1szc8JFDntd.png
new file mode 100644
index 00000000000..61d49ad8595
Binary files /dev/null and b/static/img/eino/FUn8wE2HVhsVkWbb1szc8JFDntd.png differ
diff --git a/static/img/eino/FrwxwAnJGhUVnvb1n05cA7N8n2e.png b/static/img/eino/FrwxwAnJGhUVnvb1n05cA7N8n2e.png
new file mode 100644
index 00000000000..99d71862ccf
Binary files /dev/null and b/static/img/eino/FrwxwAnJGhUVnvb1n05cA7N8n2e.png differ
diff --git a/static/img/eino/H0hbwjsHmhkQKobDBwLck70Gnte.png b/static/img/eino/H0hbwjsHmhkQKobDBwLck70Gnte.png
new file mode 100644
index 00000000000..86c80eecb80
Binary files /dev/null and b/static/img/eino/H0hbwjsHmhkQKobDBwLck70Gnte.png differ
diff --git a/static/img/eino/IHBYwzKJahvPOdbRdX8cpTBSnde.png b/static/img/eino/IHBYwzKJahvPOdbRdX8cpTBSnde.png
new file mode 100644
index 00000000000..860b928c2d8
Binary files /dev/null and b/static/img/eino/IHBYwzKJahvPOdbRdX8cpTBSnde.png differ
diff --git a/static/img/eino/IXAZwjLtWhrtoBbHf7HcoveonIf.png b/static/img/eino/IXAZwjLtWhrtoBbHf7HcoveonIf.png
new file mode 100644
index 00000000000..bd619e7db1b
Binary files /dev/null and b/static/img/eino/IXAZwjLtWhrtoBbHf7HcoveonIf.png differ
diff --git a/static/img/eino/IwlPwET7lhEIznb88uGc3NxunAb.png b/static/img/eino/IwlPwET7lhEIznb88uGc3NxunAb.png
new file mode 100644
index 00000000000..59867569452
Binary files /dev/null and b/static/img/eino/IwlPwET7lhEIznb88uGc3NxunAb.png differ
diff --git a/static/img/eino/IyblwV7Y8hilJKbHYgHcdfxlnre.png b/static/img/eino/IyblwV7Y8hilJKbHYgHcdfxlnre.png
new file mode 100644
index 00000000000..e46c2031c91
Binary files /dev/null and b/static/img/eino/IyblwV7Y8hilJKbHYgHcdfxlnre.png differ
diff --git a/static/img/eino/JFl7wI6gAhAS1ibi0IucZIKXnzh.png b/static/img/eino/JFl7wI6gAhAS1ibi0IucZIKXnzh.png
new file mode 100644
index 00000000000..ac1e2f9e20d
Binary files /dev/null and b/static/img/eino/JFl7wI6gAhAS1ibi0IucZIKXnzh.png differ
diff --git a/static/img/eino/JMqswdPSah2dFcbX3qzcBxgLnQc.png b/static/img/eino/JMqswdPSah2dFcbX3qzcBxgLnQc.png
new file mode 100644
index 00000000000..8eb3256e4b5
Binary files /dev/null and b/static/img/eino/JMqswdPSah2dFcbX3qzcBxgLnQc.png differ
diff --git a/static/img/eino/JN89wZZo8h2LYybXZMTcXObgnUc.png b/static/img/eino/JN89wZZo8h2LYybXZMTcXObgnUc.png
new file mode 100644
index 00000000000..ee80daafd40
Binary files /dev/null and b/static/img/eino/JN89wZZo8h2LYybXZMTcXObgnUc.png differ
diff --git a/static/img/eino/JYoHwKhfQhRmYZb6jEDcy1ofnVe.png b/static/img/eino/JYoHwKhfQhRmYZb6jEDcy1ofnVe.png
new file mode 100644
index 00000000000..ff9458c4f31
Binary files /dev/null and b/static/img/eino/JYoHwKhfQhRmYZb6jEDcy1ofnVe.png differ
diff --git a/static/img/eino/KWOJwXt40hnDvEbjGFzcgA8BnIe.png b/static/img/eino/KWOJwXt40hnDvEbjGFzcgA8BnIe.png
new file mode 100644
index 00000000000..64915e887e2
Binary files /dev/null and b/static/img/eino/KWOJwXt40hnDvEbjGFzcgA8BnIe.png differ
diff --git a/static/img/eino/Kox4wVhSjhkBXEbDIqSciZRHnvb.png b/static/img/eino/Kox4wVhSjhkBXEbDIqSciZRHnvb.png
new file mode 100644
index 00000000000..36154fa1fad
Binary files /dev/null and b/static/img/eino/Kox4wVhSjhkBXEbDIqSciZRHnvb.png differ
diff --git a/static/img/eino/L1jTwKR8WhZyEUbqQpKcgaJBnbh.png b/static/img/eino/L1jTwKR8WhZyEUbqQpKcgaJBnbh.png
new file mode 100644
index 00000000000..b8e4e0ced2b
Binary files /dev/null and b/static/img/eino/L1jTwKR8WhZyEUbqQpKcgaJBnbh.png differ
diff --git a/static/img/eino/LyR1wSzuBhi4rXbbnPJc0GYIngf.png b/static/img/eino/LyR1wSzuBhi4rXbbnPJc0GYIngf.png
new file mode 100644
index 00000000000..ff9458c4f31
Binary files /dev/null and b/static/img/eino/LyR1wSzuBhi4rXbbnPJc0GYIngf.png differ
diff --git a/static/img/eino/NAdGw4BSUh2DOrbyq66cE1Zdnyg.png b/static/img/eino/NAdGw4BSUh2DOrbyq66cE1Zdnyg.png
new file mode 100644
index 00000000000..2e4be7e1bb7
Binary files /dev/null and b/static/img/eino/NAdGw4BSUh2DOrbyq66cE1Zdnyg.png differ
diff --git a/static/img/eino/NSX1w1ZJghC4f8bmyfeczj0lnGb.png b/static/img/eino/NSX1w1ZJghC4f8bmyfeczj0lnGb.png
new file mode 100644
index 00000000000..4480f271c5c
Binary files /dev/null and b/static/img/eino/NSX1w1ZJghC4f8bmyfeczj0lnGb.png differ
diff --git a/static/img/eino/NmeTwgv9Ph15mhbxi5KcGSxKnvL.png b/static/img/eino/NmeTwgv9Ph15mhbxi5KcGSxKnvL.png
new file mode 100644
index 00000000000..6e44d1197a9
Binary files /dev/null and b/static/img/eino/NmeTwgv9Ph15mhbxi5KcGSxKnvL.png differ
diff --git a/static/img/eino/NqU6wgF0ihxz3XbVCQdcz6m6nmh.png b/static/img/eino/NqU6wgF0ihxz3XbVCQdcz6m6nmh.png
new file mode 100644
index 00000000000..6f194f45ff7
Binary files /dev/null and b/static/img/eino/NqU6wgF0ihxz3XbVCQdcz6m6nmh.png differ
diff --git a/static/img/eino/O6ezw1UfVh4jUFbTAPTcvaCzn0g.png b/static/img/eino/O6ezw1UfVh4jUFbTAPTcvaCzn0g.png
new file mode 100644
index 00000000000..8d4fcf8bda2
Binary files /dev/null and b/static/img/eino/O6ezw1UfVh4jUFbTAPTcvaCzn0g.png differ
diff --git a/static/img/eino/P98FwB163hUCwebgu6gcyZLvnSe.png b/static/img/eino/P98FwB163hUCwebgu6gcyZLvnSe.png
new file mode 100644
index 00000000000..5f90a1cb074
Binary files /dev/null and b/static/img/eino/P98FwB163hUCwebgu6gcyZLvnSe.png differ
diff --git a/static/img/eino/PSFuwhsHJhYkGDb8S45cDdcxnxf.png b/static/img/eino/PSFuwhsHJhYkGDb8S45cDdcxnxf.png
new file mode 100644
index 00000000000..3193c0ec254
Binary files /dev/null and b/static/img/eino/PSFuwhsHJhYkGDb8S45cDdcxnxf.png differ
diff --git a/static/img/eino/PprGwUBK7hoPR4bZIDDcF8vwnQg.png b/static/img/eino/PprGwUBK7hoPR4bZIDDcF8vwnQg.png
new file mode 100644
index 00000000000..9e9d14dd810
Binary files /dev/null and b/static/img/eino/PprGwUBK7hoPR4bZIDDcF8vwnQg.png differ
diff --git a/static/img/eino/QvP1wWE9RhdZLDbPYSlcBY9InWf.png b/static/img/eino/QvP1wWE9RhdZLDbPYSlcBY9InWf.png
new file mode 100644
index 00000000000..be500ef8451
Binary files /dev/null and b/static/img/eino/QvP1wWE9RhdZLDbPYSlcBY9InWf.png differ
diff --git a/static/img/eino/QyggwF16hhFesobdbUAckHD0nae.png b/static/img/eino/QyggwF16hhFesobdbUAckHD0nae.png
new file mode 100644
index 00000000000..3a68146b946
Binary files /dev/null and b/static/img/eino/QyggwF16hhFesobdbUAckHD0nae.png differ
diff --git a/static/img/eino/S1yawPJPuhMO6Ib95ircVg88nHg.png b/static/img/eino/S1yawPJPuhMO6Ib95ircVg88nHg.png
new file mode 100644
index 00000000000..934ef4de58d
Binary files /dev/null and b/static/img/eino/S1yawPJPuhMO6Ib95ircVg88nHg.png differ
diff --git a/static/img/eino/Syzww9Z2khV7uvbFzBnc2zchnGd.png b/static/img/eino/Syzww9Z2khV7uvbFzBnc2zchnGd.png
new file mode 100644
index 00000000000..d75265eab0f
Binary files /dev/null and b/static/img/eino/Syzww9Z2khV7uvbFzBnc2zchnGd.png differ
diff --git a/static/img/eino/T0StwIywMhjI4HbwCOcc847jn3e.png b/static/img/eino/T0StwIywMhjI4HbwCOcc847jn3e.png
new file mode 100644
index 00000000000..37f77aa868d
Binary files /dev/null and b/static/img/eino/T0StwIywMhjI4HbwCOcc847jn3e.png differ
diff --git a/static/img/eino/TAsuwnewYheUVqbnWSKcmR6fnNd.png b/static/img/eino/TAsuwnewYheUVqbnWSKcmR6fnNd.png
new file mode 100644
index 00000000000..8022d0dc902
Binary files /dev/null and b/static/img/eino/TAsuwnewYheUVqbnWSKcmR6fnNd.png differ
diff --git a/static/img/eino/UaTLwzyfRhWMLjbXhxCc1oTbnKb.png b/static/img/eino/UaTLwzyfRhWMLjbXhxCc1oTbnKb.png
new file mode 100644
index 00000000000..dda2011b9b7
Binary files /dev/null and b/static/img/eino/UaTLwzyfRhWMLjbXhxCc1oTbnKb.png differ
diff --git a/static/img/eino/WJ0eweFuvhq07nblYnvckVx0nzc.png b/static/img/eino/WJ0eweFuvhq07nblYnvckVx0nzc.png
new file mode 100644
index 00000000000..abd89973657
Binary files /dev/null and b/static/img/eino/WJ0eweFuvhq07nblYnvckVx0nzc.png differ
diff --git a/static/img/eino/X8eJw5cbLhpAmDbOuSiccRApnwg.png b/static/img/eino/X8eJw5cbLhpAmDbOuSiccRApnwg.png
new file mode 100644
index 00000000000..8022d0dc902
Binary files /dev/null and b/static/img/eino/X8eJw5cbLhpAmDbOuSiccRApnwg.png differ
diff --git a/static/img/eino/XwnMwCmNph3U7ib9OFQcqYgdnjg.png b/static/img/eino/XwnMwCmNph3U7ib9OFQcqYgdnjg.png
new file mode 100644
index 00000000000..432abe89ee7
Binary files /dev/null and b/static/img/eino/XwnMwCmNph3U7ib9OFQcqYgdnjg.png differ
diff --git a/static/img/eino/Y3fHwjKOyhYpd5boU95cEXMrnlb.png b/static/img/eino/Y3fHwjKOyhYpd5boU95cEXMrnlb.png
new file mode 100644
index 00000000000..c4f1b9c4be1
Binary files /dev/null and b/static/img/eino/Y3fHwjKOyhYpd5boU95cEXMrnlb.png differ
diff --git a/static/img/eino/YT3hwDq4ahfn01bxjm8cbsCdnpq.png b/static/img/eino/YT3hwDq4ahfn01bxjm8cbsCdnpq.png
new file mode 100644
index 00000000000..3193c0ec254
Binary files /dev/null and b/static/img/eino/YT3hwDq4ahfn01bxjm8cbsCdnpq.png differ
diff --git a/static/img/eino/ZAlewk2iWhP5yxbieEkchYTVnWd.png b/static/img/eino/ZAlewk2iWhP5yxbieEkchYTVnWd.png
new file mode 100644
index 00000000000..934ef4de58d
Binary files /dev/null and b/static/img/eino/ZAlewk2iWhP5yxbieEkchYTVnWd.png differ
diff --git a/static/img/eino/ZFATwEepAhUSXmbdxdYc2EWxnwh.png b/static/img/eino/ZFATwEepAhUSXmbdxdYc2EWxnwh.png
new file mode 100644
index 00000000000..7d49f012391
Binary files /dev/null and b/static/img/eino/ZFATwEepAhUSXmbdxdYc2EWxnwh.png differ
diff --git a/static/img/eino/ZyUJwOrovhipoKbeBIQcrviJn9c.png b/static/img/eino/ZyUJwOrovhipoKbeBIQcrviJn9c.png
new file mode 100644
index 00000000000..b2846e59866
Binary files /dev/null and b/static/img/eino/ZyUJwOrovhipoKbeBIQcrviJn9c.png differ
diff --git a/static/img/eino/apm_plus_callback.gif b/static/img/eino/apm_plus_callback.gif
new file mode 100644
index 00000000000..b653a3c9e3e
Binary files /dev/null and b/static/img/eino/apm_plus_callback.gif differ
diff --git a/static/img/eino/eino_projects_and_structure.png b/static/img/eino/eino_projects_and_structure.png
new file mode 100644
index 00000000000..7948bd3242d
Binary files /dev/null and b/static/img/eino/eino_projects_and_structure.png differ