{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openlayer-ai/openlayer-python/blob/main/examples/tracing/langchain/async_langchain_callback.ipynb)\n",
"\n",
"# <a id=\"top\">Openlayer Async LangChain Callback Handler</a>\n",
"\n",
"This notebook demonstrates how to use Openlayer's **AsyncOpenlayerHandler** to monitor async LLMs, chains, tools, and agents built with LangChain.\n",
"\n",
"The AsyncOpenlayerHandler provides:\n",
"- Full async/await support for non-blocking operations\n",
"- Proper trace management in async environments\n",
"- Support for concurrent LangChain operations\n",
"- Comprehensive monitoring of async chains, tools, and agents\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Installation\n",
"\n",
"Install the required packages:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install openlayer langchain langchain_openai langchain_community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Environment Setup\n",
"\n",
"Configure your API keys and Openlayer settings:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import asyncio\n",
"\n",
"# OpenAI API key\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\"\n",
"\n",
"# Openlayer configuration\n",
"os.environ[\"OPENLAYER_API_KEY\"] = \"\"\n",
"os.environ[\"OPENLAYER_INFERENCE_PIPELINE_ID\"] = \"\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Instantiate the AsyncOpenlayerHandler\n",
"\n",
"Create the async callback handler:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from openlayer.lib.integrations import langchain_callback\n",
"\n",
"# Create the async callback handler\n",
"async_openlayer_handler = langchain_callback.AsyncOpenlayerHandler(\n",
" # Optional: Add custom metadata that will be attached to all traces\n",
" user_id=\"demo_user\",\n",
" environment=\"development\",\n",
" session_id=\"async_langchain_demo\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. Basic Async Chat Example\n",
"\n",
"Let's start with a simple async chat completion:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import HumanMessage, SystemMessage\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"\n",
"async def basic_async_chat():\n",
" \"\"\"Demonstrate basic async chat with tracing.\"\"\"\n",
"\n",
" # Create async chat model with callback\n",
" chat = ChatOpenAI(model=\"gpt-3.5-turbo\", max_tokens=100, temperature=0.7, callbacks=[async_openlayer_handler])\n",
"\n",
" # Single async invocation\n",
" messages = [\n",
" SystemMessage(content=\"You are a helpful AI assistant.\"),\n",
" HumanMessage(content=\"What are the benefits of async programming in Python?\"),\n",
" ]\n",
"\n",
" response = await chat.ainvoke(messages)\n",
" \n",
" return response\n",
"\n",
"\n",
"# Run the basic example\n",
"response = await basic_async_chat()"
]
},
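{
"cell_type": "markdown",
"metadata": {},
"source": [
"The handler does not have to be attached when the model is constructed. LangChain runnables also accept callbacks per invocation through the standard `config` argument. A minimal sketch, reusing the handler created above (the model and variable names here are illustrative):\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: attach the handler per call via `config` instead of at construction.\n",
"# `config={\"callbacks\": [...]}` is standard LangChain Runnable behavior.\n",
"chat_without_callbacks = ChatOpenAI(model=\"gpt-3.5-turbo\", max_tokens=50)\n",
"\n",
"per_call_response = await chat_without_callbacks.ainvoke(\n",
"    [HumanMessage(content=\"In one sentence, what is a coroutine?\")],\n",
"    config={\"callbacks\": [async_openlayer_handler]},\n",
")\n",
"print(per_call_response.content)"
]
},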
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5. Concurrent Async Operations\n",
"\n",
"Demonstrate the power of async with concurrent operations:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"async def concurrent_chat_operations():\n",
" \"\"\"Demonstrate concurrent async chat operations with individual tracing.\"\"\"\n",
"\n",
" chat = ChatOpenAI(model=\"gpt-3.5-turbo\", max_tokens=75, temperature=0.5, callbacks=[async_openlayer_handler])\n",
"\n",
" # Define multiple questions to ask concurrently\n",
" questions = [\n",
" \"What is machine learning?\",\n",
" \"Explain quantum computing in simple terms.\",\n",
" \"What are the benefits of renewable energy?\",\n",
" \"How does blockchain technology work?\",\n",
" ]\n",
"\n",
"\n",
"\n",
" # Create concurrent tasks\n",
" tasks = []\n",
" for i, question in enumerate(questions):\n",
" messages = [\n",
" SystemMessage(content=f\"You are expert #{i + 1}. Give a concise answer.\"),\n",
" HumanMessage(content=question),\n",
" ]\n",
" task = chat.ainvoke(messages)\n",
" tasks.append((question, task))\n",
"\n",
" # Execute all tasks concurrently\n",
" import time\n",
"\n",
" start_time = time.time()\n",
"\n",
" results = await asyncio.gather(*[task for _, task in tasks])\n",
"\n",
" end_time = time.time()\n",
"\n",
" return results\n",
"\n",
"\n",
"# Run concurrent operations\n",
"concurrent_results = await concurrent_chat_operations()"
]
},
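{
"cell_type": "markdown",
"metadata": {},
"source": [
"When some concurrent calls can fail, `asyncio.gather(..., return_exceptions=True)` keeps a single failure from cancelling the whole batch. A sketch under the same setup as above (function and variable names are illustrative):\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"async def concurrent_with_error_handling():\n",
"    \"\"\"Sketch: gather results without letting one failure cancel the rest.\"\"\"\n",
"    chat = ChatOpenAI(model=\"gpt-3.5-turbo\", max_tokens=50, callbacks=[async_openlayer_handler])\n",
"    prompts = [\"Define latency in one sentence.\", \"Define throughput in one sentence.\"]\n",
"    coros = [chat.ainvoke([HumanMessage(content=p)]) for p in prompts]\n",
"\n",
"    # With return_exceptions=True, raised exceptions are returned in place of results\n",
"    outcomes = await asyncio.gather(*coros, return_exceptions=True)\n",
"    for prompt, outcome in zip(prompts, outcomes):\n",
"        if isinstance(outcome, Exception):\n",
"            print(f\"{prompt!r} failed: {outcome}\")\n",
"        else:\n",
"            print(f\"{prompt!r} -> {outcome.content}\")\n",
"\n",
"\n",
"await concurrent_with_error_handling()"
]
},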
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6. Async Streaming Example\n",
"\n",
"Demonstrate async streaming with token-by-token generation:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"async def async_streaming_example():\n",
" \"\"\"Demonstrate async streaming with tracing.\"\"\"\n",
"\n",
" # Create streaming chat model\n",
" streaming_chat = ChatOpenAI(\n",
" model=\"gpt-3.5-turbo\", max_tokens=200, temperature=0.7, streaming=True, callbacks=[async_openlayer_handler]\n",
" )\n",
"\n",
"\n",
"\n",
" messages = [\n",
" SystemMessage(content=\"You are a creative storyteller.\"),\n",
" HumanMessage(content=\"Tell me a short story about a robot learning to paint.\"),\n",
" ]\n",
"\n",
" # Stream the response\n",
" full_response = \"\"\n",
" async for chunk in streaming_chat.astream(messages):\n",
" if chunk.content:\n",
" full_response += chunk.content\n",
"\n",
" return full_response\n",
"\n",
"\n",
"# Run streaming example\n",
"streaming_result = await async_streaming_example()"
]
},
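{
"cell_type": "markdown",
"metadata": {},
"source": [
"Chunks yielded by `astream` are `AIMessageChunk` objects that support `+`, so the complete message (content plus metadata) can be rebuilt while streaming. A brief sketch (the model variable here is illustrative):\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: accumulate AIMessageChunk objects with `+` while streaming\n",
"stream_model = ChatOpenAI(model=\"gpt-3.5-turbo\", max_tokens=60, streaming=True, callbacks=[async_openlayer_handler])\n",
"\n",
"final_message = None\n",
"async for chunk in stream_model.astream(\"Write a haiku about concurrency.\"):\n",
"    final_message = chunk if final_message is None else final_message + chunk\n",
"\n",
"print(final_message.content)"
]
},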
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 7. Async Chain Example\n",
"\n",
"Create and run an async chain with proper tracing:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain_openai import OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"\n",
"\n",
"async def async_chain_example():\n",
" \"\"\"Demonstrate async LLM chain with tracing.\"\"\"\n",
"\n",
" # Create LLM with callback\n",
" llm = OpenAI(model=\"gpt-3.5-turbo-instruct\", max_tokens=150, temperature=0.8, callbacks=[async_openlayer_handler])\n",
"\n",
" # Create a prompt template\n",
" prompt = PromptTemplate(\n",
" input_variables=[\"topic\", \"audience\"],\n",
" template=\"\"\"\n",
" Write a brief explanation about {topic} for {audience}.\n",
" Make it engaging and easy to understand.\n",
" \n",
" Topic: {topic}\n",
" Audience: {audience}\n",
" \n",
" Explanation:\n",
" \"\"\",\n",
" )\n",
"\n",
" # Create the chain\n",
" chain = LLMChain(llm=llm, prompt=prompt, callbacks=[async_openlayer_handler])\n",
"\n",
"\n",
"\n",
" # Run the chain asynchronously\n",
" result = await chain.arun(topic=\"artificial intelligence\", audience=\"high school students\")\n",
"\n",
" return result\n",
"\n",
"\n",
"# Run the chain example\n",
"chain_result = await async_chain_example()"
]
},
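{
"cell_type": "markdown",
"metadata": {},
"source": [
"LCEL chains also expose `abatch`, which runs the same chain over several inputs concurrently. A short sketch reusing the prompt-and-LLM pattern from above (variable names are illustrative):\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: run one chain over several inputs concurrently with abatch\n",
"batch_llm = OpenAI(model=\"gpt-3.5-turbo-instruct\", max_tokens=100, callbacks=[async_openlayer_handler])\n",
"batch_prompt = PromptTemplate(\n",
"    input_variables=[\"topic\", \"audience\"],\n",
"    template=\"Explain {topic} to {audience} in two sentences.\",\n",
")\n",
"batch_chain = batch_prompt | batch_llm\n",
"\n",
"batch_results = await batch_chain.abatch(\n",
"    [\n",
"        {\"topic\": \"neural networks\", \"audience\": \"a curious child\"},\n",
"        {\"topic\": \"gradient descent\", \"audience\": \"a math student\"},\n",
"    ]\n",
")\n",
"for result in batch_results:\n",
"    print(result.strip())"
]
},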
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Summary\n",
"\n",
"🎉 **Congratulations!** You've successfully explored the **AsyncOpenlayerHandler** for LangChain.\n",
"\n",
"### What we covered:\n",
"\n",
"1. **Basic Setup** - Installing packages and configuring the AsyncOpenlayerHandler\n",
"2. **Simple Async Chat** - Basic async chat completions with tracing\n",
"3. **Concurrent Operations** - Running multiple async operations simultaneously\n",
"4. **Async Streaming** - Token-by-token generation with async streaming\n",
"5. **Async Chains** - Building and running async LangChain chains\n",
"\n",
"### Key Benefits of AsyncOpenlayerHandler:\n",
"\n",
"✅ **Non-blocking operations** - Your application stays responsive \n",
"✅ **Concurrent execution** - Run multiple LLM calls simultaneously \n",
"✅ **Proper trace management** - Each operation gets its own trace \n",
"✅ **Full async/await support** - Works seamlessly with async LangChain components \n",
"✅ **Custom metadata** - Attach custom information to traces \n",
"\n",
"### Next Steps:\n",
"\n",
"- Check your **Openlayer dashboard** to see all the traces generated\n",
"- Integrate AsyncOpenlayerHandler into your production async applications\n",
"- Experiment with different LangChain async components\n",
"\n",
"**Happy async tracing!** 🚀\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.16"
}
},
"nbformat": 4,
"nbformat_minor": 2
}