From cf611814cc947bd6af138277655ec99e08a2d2c3 Mon Sep 17 00:00:00 2001
From: SrijanShovit225
Date: Fri, 7 Nov 2025 23:16:16 +0530
Subject: [PATCH] Update agentic-rag.mdx: replace the role/content dictionary
 with a LangChain HumanMessage in the rewrite_question node

In the official docs as they currently stand, the rewrite_question node
returns the model's rewritten query to the state's messages in dictionary
format:

    return {"messages": [{"role": "user", "content": response.content}]}

I understand this was done to change the message's role from ai/assistant
to user in the state. While not critical by itself, it causes an error
when printing the graph's streaming updates:

    for node, update in chunk.items():
        print("Update from node", node)
        update["messages"][-1].pretty_print()
        print("\n\n")

This was likely missed because the example run never triggered the rewrite
node. When the rewrite node does come into play, it pushes a dictionary-based
message into the state rather than a standard Message class instance.
pretty_print() is defined only on Message classes, not on dictionaries, so
this raises:

    AttributeError: 'dict' object has no attribute 'pretty_print'

Learners exploring the LangChain v1.0 docs may get blocked by this while
following the tutorial. Please merge this change to improve the quality and
trustworthiness of the LangChain docs.
---
 src/oss/langgraph/agentic-rag.mdx | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/oss/langgraph/agentic-rag.mdx b/src/oss/langgraph/agentic-rag.mdx
index 10e25abec4..ee59530595 100644
--- a/src/oss/langgraph/agentic-rag.mdx
+++ b/src/oss/langgraph/agentic-rag.mdx
@@ -523,6 +523,8 @@ Note that the components will operate on the [`MessagesState`](/oss/langgraph/gr
 :::python
 1. Build the `rewrite_question` node.
    The retriever tool can return potentially irrelevant documents, which indicates a need to improve the original user question. To do so, we will call the `rewrite_question` node:
    ```python
+   from langchain.messages import HumanMessage
+
    REWRITE_PROMPT = (
        "Look at the input and try to reason about the underlying semantic intent / meaning.\n"
        "Here is the initial question:"
@@ -539,7 +541,7 @@ Note that the components will operate on the [`MessagesState`](/oss/langgraph/gr
        question = messages[0].content
        prompt = REWRITE_PROMPT.format(question=question)
        response = response_model.invoke([{"role": "user", "content": prompt}])
-       return {"messages": [{"role": "user", "content": response.content}]}
+       return {"messages": [HumanMessage(content=response.content)]}
    ```
 2. Try it out:
    ```python
@@ -567,7 +569,7 @@ Note that the components will operate on the [`MessagesState`](/oss/langgraph/gr
    }
    response = rewrite_question(input)

-   print(response["messages"][-1]["content"])
+   print(response["messages"][-1].content)
    ```
    **Output:**
    ```
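
For reviewers, the failure mode this patch fixes can be sketched without LangChain installed. `HypotheticalMessage` below is a stand-in for LangChain's `HumanMessage` (the real class is imported from `langchain.messages` and provides a `pretty_print()` method); the helper name `print_last_message` is invented for illustration:

```python
class HypotheticalMessage:
    """Stand-in for a LangChain Message class with a pretty_print() method."""

    def __init__(self, content: str):
        self.content = content

    def pretty_print(self) -> None:
        print(f"=== Human Message ===\n{self.content}")


def print_last_message(update: dict) -> None:
    """Mimics the body of the tutorial's streaming loop."""
    update["messages"][-1].pretty_print()


# Dict-based update (the pattern this patch removes): the last message is a
# plain dict, which has no pretty_print attribute, so the loop body raises.
dict_update = {"messages": [{"role": "user", "content": "rewritten question"}]}
try:
    print_last_message(dict_update)
except AttributeError as exc:
    print(exc)  # 'dict' object has no attribute 'pretty_print'

# Message-class update (the pattern this patch introduces): pretty_print
# is defined on the class, so the same loop body succeeds.
msg_update = {"messages": [HypotheticalMessage("rewritten question")]}
print_last_message(msg_update)
```

The fix works because LangGraph's message reducer accepts Message instances directly, so returning a `HumanMessage` keeps the state compatible with the `pretty_print()` call used later in the tutorial.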