LangGraph-based procurement agent with a step-by-step workflow, tool integration, and a document-processing subagent.
LangGraph Workflow Agent
├── Workflow Steps (State Machine)
│ ├── greeting
│ ├── collect_info (step-by-step)
│ ├── search_suppliers
│ ├── price_discussion
│ └── finalize
├── Tools (called when needed)
│ ├── get_company_info
│ ├── get_product_info
│ └── get_supplier_info
└── MSP Node (Document Subagent)
└── Document processing when needed
- Greeting: Introduces the agent
- Collect Info: Asks 1-2 questions at a time (product → quantity → timeline)
- Search Suppliers: Finds matching suppliers
- Price Discussion: Discusses pricing only when user asks
- Finalize: Summarizes and provides next steps
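The collect_info step's one-or-two-questions-at-a-time behavior can be sketched as a small helper that picks the next missing field. This is an illustrative sketch, not the agent's actual code: the function name `next_question`, the field names, and the question wording are all assumptions.

```python
from typing import Optional

# Ordered fields the collect_info step gathers, with a question for each.
# Field names and question wording are illustrative assumptions.
FIELD_QUESTIONS = [
    ("product", "What do you need to procure?"),
    ("quantity", "How many units do you need?"),
    ("timeline", "When do you need them delivered?"),
]

def next_question(context: dict) -> Optional[str]:
    """Return the next question to ask, or None once all info is collected."""
    for field, question in FIELD_QUESTIONS:
        if field not in context:
            return question
    return None  # ready to move on to search_suppliers
```

Asking for the first missing field in a fixed order is what produces the product → quantity → timeline progression shown in the example conversation below.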
- Tools are called only when needed
- Agent decides when to use get_company_info, get_product_info, or get_supplier_info
- Tools execute in a dedicated node, then the workflow continues
- Separate node for document processing
- Activated when user uploads documents or needs document analysis
- Returns to main workflow after processing
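Routing to the tool and MSP nodes is driven by flags in the workflow state. A minimal sketch of the kind of routing function you would pass to LangGraph's conditional edges (the name `route_after_step` is an illustrative assumption):

```python
def route_after_step(state: dict) -> str:
    """Decide where the workflow goes after a step node runs.

    The tool node and the MSP (document) node take priority over
    normal step-to-step progression.
    """
    if state.get("needs_tool"):
        return "tools"  # execute requested tools, then resume
    if state.get("needs_msp"):
        return "msp"    # hand off to the document subagent
    return state["current_step"]  # otherwise continue the workflow
```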
Workflow state structure:

```python
{
    "messages": [...],           # Conversation history
    "current_step": "greeting",  # Current workflow step
    "context": {...},            # Company/product/supplier context
    "needs_tool": False,         # Tool execution flag
    "needs_msp": False           # MSP activation flag
}
```

Install dependencies:

```bash
pip install -r requirements.txt
```

Create a `.env` file:

```
OPENAI_API_KEY=your_api_key_here
```

Usage:

```python
from agent import create_my_agent
from langchain_core.messages import HumanMessage

# Create agent
agent = create_my_agent()

# Configure thread
config = {"configurable": {"thread_id": "user_123"}}

# Start conversation
initial_state = {
    "messages": [],
    "current_step": "greeting",
    "context": {},
    "needs_tool": False,
    "needs_msp": False
}
response = agent.invoke(initial_state, config=config)
print(response['messages'][-1].content)

# Continue conversation
current_state = agent.get_state(config)
new_messages = current_state.values["messages"] + [
    HumanMessage(content="I need to buy laptops")
]
response = agent.invoke(
    {**current_state.values, "messages": new_messages},
    config=config
)
print(response['messages'][-1].content)
```

Run the test script:

```bash
python test_integrated_agent.py
```

START
↓
greeting (if new conversation)
↓
collect_info ←→ tools (if needs_tool)
↓ ↘
↓ msp (if needs_msp)
↓ ↗
search_suppliers ←→ tools (if needs_tool)
↓
price_discussion ←→ tools (if needs_tool)
↓
finalize
↓
END
Agent: Hello! I'm your procurement assistant. What do you need to procure today?
User: I need to buy office computers
Agent: What type of computers do you need? Desktop or laptop?
User: Laptops for office work
Agent: How many laptops do you need?
User: 10 laptops
Agent: When do you need them delivered?
User: Next month
Agent: Thank you! Let me search for suitable suppliers who can provide
10 office laptops by next month...
Agent: I've found several suppliers. Would you like to discuss pricing?
User: Yes, how much will it cost?
Agent: Based on your requirements, I can provide quotes from our suppliers...
- Each step has specific responsibility
- Agent asks 1-2 questions at a time
- No rushing to pricing
- Agent decides when to call tools
- needs_tool flag triggers the tool node
- After tools execute, the workflow continues
- needs_msp flag triggers document processing
- A separate subagent handles document analysis
- Returns to main workflow after processing
- MemorySaver maintains conversation state
- Thread ID tracks individual conversations
- State includes messages, step, context, flags
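The state structure shown earlier can be expressed as a TypedDict. This is a sketch only: the actual ProcurementState definition in the agent code may differ, e.g. by annotating messages with a LangGraph message reducer.

```python
from typing import Any, TypedDict


class ProcurementState(TypedDict):
    """Sketch of the workflow state carried between nodes."""
    messages: list           # conversation history (message objects in practice)
    current_step: str        # which workflow step is active
    context: dict[str, Any]  # company/product/supplier context gathered so far
    needs_tool: bool         # set by a step to trigger the tool node
    needs_msp: bool          # set by a step to trigger the document subagent
```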
To add a new workflow step:

```python
def new_step_node(state: ProcurementState) -> ProcurementState:
    """Your new step"""
    system_prompt = "Your instructions here"
    messages = [SystemMessage(content=system_prompt)] + state["messages"]
    response = model.invoke(messages)
    return {
        **state,
        "messages": state["messages"] + [response],
        "current_step": "next_step"
    }

# Add to workflow
workflow.add_node("new_step", new_step_node)
workflow.add_edge("previous_step", "new_step")
```

To add a new tool:

```python
from langchain_core.tools import tool

@tool
def my_new_tool(query: str) -> str:
    """Tool description"""
    # Your logic here
    return result

# Add to tools list
tools = [get_company_info, get_product_info, get_supplier_info, my_new_tool]
```

Replace the msp_node function with your document processing logic:
```python
def msp_node(state: ProcurementState) -> ProcurementState:
    """Call document processing subagent"""
    # Your document processing logic here:
    # could call an external API, another LangGraph, etc.
    response = ...  # message produced by your document analysis
    return {
        **state,
        "messages": state["messages"] + [response],
        "needs_msp": False
    }
```

✅ Structured workflow - Always follows the same path
✅ Human-like interaction - Asks questions gradually
✅ Smart tool usage - Tools called only when needed
✅ Document processing - Dedicated MSP node
✅ State persistence - Maintains conversation context
✅ Flexible - Easy to add steps, tools, or modify flow
✅ No Deep Agent dependency - Pure LangGraph implementation
- langgraph: Workflow orchestration
- langchain-openai: LLM integration
- langchain-core: Core components
- python-dotenv: Environment variables
- pydantic: Data validation
MIT