"""
This script builds an agent with LangGraph and streams LLM tokens.

pip install langchain==0.2.16
pip install langgraph==0.2.34
pip install langchain_openai==0.1.9
"""
import asyncio
from typing import Annotated

from typing_extensions import TypedDict

from langchain_core.messages import AIMessageChunk, AnyMessage, HumanMessage, SystemMessage
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode


class State(TypedDict):
    messages: Annotated[list, add_messages]


@tool
def search(query: str):
    """Call to surf the web."""
    return ["Cloudy with a chance of hail."]


tools = [search]

model = ChatOpenAI(
    temperature=0,
    # model="glm-4",
    model="GLM-4-Flash",
    openai_api_key="[Your key]",
    # openai_api_base="https://open.bigmodel.cn/api/paas/v4/",  # the official Zhipu endpoint streams as expected
    openai_api_base="Your URL served by glm_server.py",
    streaming=True,
)


class Agent:
    def __init__(self, model, tools, system=""):
        self.system = system
        workflow = StateGraph(State)
        workflow.add_node("agent", self.call_model)
        workflow.add_node("tools", ToolNode(tools))
        workflow.add_edge(START, "agent")
        workflow.add_conditional_edges(
            # First, we define the start node: `agent`.
            # This means these are the edges taken after the `agent` node is called.
            "agent",
            # Next, we pass in the function that will determine which node is called next.
            self.should_continue,
            # Finally, the path map - all the nodes this edge could go to.
            ["tools", END],
        )
        workflow.add_edge("tools", "agent")
        self.model = model.bind_tools(tools)
        self.app = workflow.compile()

    def should_continue(self, state: State):
        messages = state["messages"]
        last_message = messages[-1]
        # If there is no tool call, we finish; otherwise we continue to the tools node.
        if not last_message.tool_calls:
            return END
        else:
            return "tools"

    async def call_model(self, state: State, config: RunnableConfig):
        messages = state["messages"]
        if self.system:
            messages = [SystemMessage(content=self.system)] + messages
        response = await self.model.ainvoke(messages, config)
        # We return a list, because this will get added to the existing list.
        return {"messages": response}

    async def query(self, user_input: str):
        inputs = [HumanMessage(content=user_input)]
        first = True
        async for msg, metadata in self.app.astream({"messages": inputs}, stream_mode="messages"):
            if msg.content and not isinstance(msg, HumanMessage):
                # Here you can check whether tokens are streamed as expected.
                print(msg.content, end="|", flush=True)
            if isinstance(msg, AIMessageChunk):
                if first:
                    gathered = msg
                    first = False
                else:
                    gathered = gathered + msg
                if msg.tool_call_chunks:
                    print("tool_call_chunks...", gathered.tool_calls)


if __name__ == "__main__":
    question = "what is the weather in sf"
    prompt = """You are a smart research assistant. Use the search engine ..."""
    agent = Agent(model, tools, prompt)
    asyncio.run(agent.query(question))
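The agent/tools loop compiled above can be illustrated with a plain-Python sketch (no LangGraph installed; the node names and the fake tool result come from the script, everything else is a stand-in for illustration):

```python
END = "__end__"

def should_continue(messages):
    # Mirrors Agent.should_continue: stop when the last AI message
    # carries no tool calls, otherwise route to the "tools" node.
    return END if not messages[-1]["tool_calls"] else "tools"

def run(messages):
    node = "agent"
    while node != END:
        if node == "agent":
            # Stand-in for call_model: the first turn requests the tool,
            # the second turn answers using the tool result.
            wants_tool = not any(m["role"] == "tool" for m in messages)
            messages.append({
                "role": "ai",
                "content": "" if wants_tool else "Cloudy with a chance of hail.",
                "tool_calls": [{"name": "search"}] if wants_tool else [],
            })
            node = should_continue(messages)
        elif node == "tools":
            # Stand-in for ToolNode: execute the requested tool.
            messages.append({"role": "tool",
                             "content": "Cloudy with a chance of hail.",
                             "tool_calls": []})
            node = "agent"
    return messages

history = run([{"role": "human", "content": "what is the weather in sf",
                "tool_calls": []}])
print(history[-1]["content"])  # final answer after one tool round-trip
```

The key point is that `"tools"` always routes back to `"agent"`, so the loop only terminates when `should_continue` sees an AI message without tool calls.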
Fix: when glm_server.py streams output with tools, it does not stream as expected; see issue THUDM#618. Dependencies:
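The pinned versions, as listed in the script's docstring:

```shell
pip install langchain==0.2.16
pip install langgraph==0.2.34
pip install langchain_openai==0.1.9
```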
Use the code above.
Remember to replace the URL and your KEY (if you have one).
This code example links to a GitHub branch.
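The `gathered = gathered + msg` pattern in `query` works because `AIMessageChunk.__add__` merges partial tool-call deltas from the stream. A minimal stand-in (an illustrative `Chunk` class, not the real LangChain type) shows the idea:

```python
class Chunk:
    """Illustrative stand-in for AIMessageChunk: '+' concatenates
    content and merges tool-call argument fragments by index."""
    def __init__(self, content="", tool_call_chunks=None):
        self.content = content
        self.tool_call_chunks = tool_call_chunks or []

    def __add__(self, other):
        merged = {c["index"]: dict(c) for c in self.tool_call_chunks}
        for c in other.tool_call_chunks:
            if c["index"] in merged:
                # Same tool call: append the new argument fragment.
                merged[c["index"]]["args"] += c["args"]
            else:
                merged[c["index"]] = dict(c)
        return Chunk(self.content + other.content,
                     [merged[i] for i in sorted(merged)])

# Accumulate streamed fragments exactly like `gathered = gathered + msg`.
stream = [
    Chunk(tool_call_chunks=[{"index": 0, "name": "search", "args": '{"query"'}]),
    Chunk(tool_call_chunks=[{"index": 0, "name": None, "args": ': "sf weather"}'}]),
]
gathered = None
for msg in stream:
    gathered = msg if gathered is None else gathered + msg
print(gathered.tool_call_chunks[0]["args"])  # → {"query": "sf weather"}
```

This is why the script only inspects `gathered.tool_calls` (the accumulated view) rather than each `msg.tool_call_chunks` fragment on its own.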