LangGraph Compare

Package for comparing and monitoring performance of LangGraph multi-agent architectures.

Documentation

Documentation is available at: https://serafinski.github.io/LangGraph-Compare/

Purpose

This Python package facilitates the parsing of run logs generated by LangGraph. During execution, logs are stored in an SQLite database in an encoded format (using msgpack). These logs are then decoded and exported to JSON. Subsequently, the JSON files are transformed into CSV files for further analysis.

Once in CSV format, the data can be analyzed using methods from the pm4py library. These methods calculate specific statistics related to the multi-agent infrastructure's performance and enable visualizations of the process behavior and execution flow.
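
For illustration, here is a minimal sketch of exploring such a CSV with pm4py directly; the file path and column names below are hypothetical placeholders, not the package's actual output schema:

import pandas as pd
import pm4py

# Hypothetical path and column names - adjust to the actual CSV output.
df = pd.read_csv("experiments/main/csv_output/main.csv")
df = pm4py.format_dataframe(
    df,
    case_id="case_id",
    activity_key="activity",
    timestamp_key="timestamp",
)

# Discover and view the directly-follows graph of the execution flow.
dfg, start_activities, end_activities = pm4py.discover_dfg(df)
pm4py.view_dfg(dfg, start_activities, end_activities)

Note that viewing graphs with pm4py requires Graphviz, which is covered in the Prerequisites section below.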

This pipeline provides a streamlined approach for extracting, transforming, and analyzing logs, offering valuable insights into multi-agent systems.

Installation

This package requires Python 3.9 or higher. See below for more information on creating an environment.

If you would like to develop this package, use Poetry with Python 3.10 or higher, since 3.10 is the minimum required by Sphinx. Install the needed dependencies with:

poetry install --with dev,test,docs
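
With the development dependencies installed, you can run the test suite. This assumes the test group uses pytest, which is not confirmed by this README:

poetry run pytest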

Prerequisites

This package requires Graphviz to be installed on your system.

Windows

Download the Graphviz installer from the Graphviz website (https://graphviz.org/download/).
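
Alternatively, if you use the Chocolatey package manager, you can install Graphviz from the command line:

choco install graphviz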

macOS

Install Graphviz using Homebrew:

brew install graphviz

Linux

For Debian or Ubuntu, use the following command:

sudo apt-get install graphviz

For Fedora, Rocky Linux, RHEL, or CentOS, use the following command:

sudo dnf install graphviz
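
On any platform, you can verify that Graphviz is installed by printing its version:

dot -V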

Environment setup

To create a virtual environment (using conda), use the following commands:

conda create -n langgraph_compare python=3.9
conda activate langgraph_compare
pip install langgraph_compare
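
As a quick sanity check, you can confirm the package imports cleanly (the import name matches the one used in the example below):

python -c "import langgraph_compare"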

Basic Example

This example is based on the Building a Basic Chatbot tutorial from the LangGraph documentation.

It requires you to install the following packages (besides langgraph_compare):

pip install python-dotenv langchain-openai
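
Since the example calls load_dotenv(), it expects a .env file in the working directory containing your OpenAI API key (assuming you use ChatOpenAI as shown):

OPENAI_API_KEY=your-key-here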

Example:

from dotenv import load_dotenv
from typing import Annotated

from typing_extensions import TypedDict
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

from langgraph_compare import *

# Create an experiment; exp.memory is an SQLite-backed checkpointer
# that records the run logs analyzed later.
exp = create_experiment("main")
memory = exp.memory

# Load environment variables (e.g. OPENAI_API_KEY) from a .env file.
load_dotenv()

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

llm = ChatOpenAI(model="gpt-4o-mini")

def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}

graph_builder.add_node("chatbot_node", chatbot)

graph_builder.add_edge(START, "chatbot_node")
graph_builder.add_edge("chatbot_node", END)

# Compile with the experiment's checkpointer so every run is logged.
graph = graph_builder.compile(checkpointer=memory)

# Run the graph several times with the same starting message.
print()
run_multiple_iterations(graph, 1, 5, {"messages": [("user", "Tell me a joke")]})
print()

# Describe the graph's nodes so the logs can be mapped onto them.
graph_config = GraphConfig(
    nodes=["chatbot_node"]
)

# Decode the SQLite/msgpack logs and export them as JSON and CSV.
prepare_data(exp, graph_config)

# Load the CSV as an event log and print performance statistics.
print()
event_log = load_event_log(exp)
print_analysis(event_log)
print()

# Generate visualizations and reports for this architecture.
generate_artifacts(event_log, graph, exp)

When you have analyzed multiple architectures, you can use the following code to compare them (by default, it looks in the experiments directory):

from langgraph_compare import compare

infrastructures = ["main", "other1", "other2"]

compare(infrastructures)

This generates a file named main_vs_other1_vs_other2.html in the comparison_reports directory.
