diff --git a/src/docs.json b/src/docs.json index ad9d8016eb..7d20404c6a 100644 --- a/src/docs.json +++ b/src/docs.json @@ -194,12 +194,17 @@ ] }, { - "group": "Use in production", + "group": "Agent development", + "pages": [ + "oss/python/langchain/studio", + "oss/python/langchain/test", + "oss/python/langchain/ui" + ] + }, + { + "group": "Deploy with LangSmith", "pages": [ - "oss/python/langchain/studio", - "oss/python/langchain/test", "oss/python/langchain/deploy", - "oss/python/langchain/ui", "oss/python/langchain/observability" ] } @@ -242,10 +247,10 @@ "group": "Production", "pages": [ "oss/python/langgraph/application-structure", - "oss/python/langgraph/studio", "oss/python/langgraph/test", - "oss/python/langgraph/deploy", + "oss/python/langgraph/studio", "oss/python/langgraph/ui", + "oss/python/langgraph/deploy", "oss/python/langgraph/observability" ] }, @@ -530,12 +535,17 @@ ] }, { - "group": "Use in production", + "group": "Agent development", + "pages": [ + "oss/javascript/langchain/studio", + "oss/javascript/langchain/test", + "oss/javascript/langchain/ui" + ] + }, + { + "group": "Deploy with LangSmith", "pages": [ - "oss/javascript/langchain/studio", - "oss/javascript/langchain/test", "oss/javascript/langchain/deploy", - "oss/javascript/langchain/ui", "oss/javascript/langchain/observability" ] } @@ -578,10 +588,10 @@ "group": "Production", "pages": [ "oss/javascript/langgraph/application-structure", - "oss/javascript/langgraph/studio", "oss/javascript/langgraph/test", - "oss/javascript/langgraph/deploy", + "oss/javascript/langgraph/studio", "oss/javascript/langgraph/ui", + "oss/javascript/langgraph/deploy", "oss/javascript/langgraph/observability" ] }, diff --git a/src/oss/langchain/deploy.mdx b/src/oss/langchain/deploy.mdx index 0a434ddf90..60eb1b89fc 100644 --- a/src/oss/langchain/deploy.mdx +++ b/src/oss/langchain/deploy.mdx @@ -1,17 +1,18 @@ --- -title: Deploy +title: LangSmith Deployment +sidebarTitle: Deployment --- import deploy from 
'/snippets/oss/deploy.mdx'; -LangSmith is the fastest way to turn agents into production systems. Traditional hosting platforms are built for stateless, short-lived web apps, while LangGraph is **purpose-built for stateful, long-running agents**, so you can go from repo to reliable cloud deployment in minutes. +When you're ready to deploy your LangChain agent to production, LangSmith provides a managed hosting platform designed for agent workloads. Traditional hosting platforms are built for stateless, short-lived web applications, while LangGraph is **purpose-built for stateful, long-running agents** that require persistent state and background execution. LangSmith handles the infrastructure, scaling, and operational concerns so you can deploy directly from your repository. ## Prerequisites Before you begin, ensure you have the following: -* A [GitHub account](https://github.com/) -* A [LangSmith account](https://smith.langchain.com/) (free to sign up) +- A [GitHub account](https://github.com/) +- A [LangSmith account](https://smith.langchain.com/) (free to sign up) ## Deploy your agent diff --git a/src/oss/langchain/observability.mdx b/src/oss/langchain/observability.mdx index b400074fa5..61d4778bf5 100644 --- a/src/oss/langchain/observability.mdx +++ b/src/oss/langchain/observability.mdx @@ -1,18 +1,22 @@ --- -title: Observability +title: LangSmith Observability +sidebarTitle: Observability --- import observability from '/snippets/oss/observability.mdx'; -Observability is crucial for understanding how your agents behave in production. With LangChain's @[`create_agent`], you get built-in observability through [LangSmith](https://smith.langchain.com/) - a powerful platform for tracing, debugging, evaluating, and monitoring your LLM applications. +As you build and run agents with LangChain, you need visibility into how they behave: which [tools](/oss/langchain/tools) they call, what prompts they generate, and how they make decisions. 
LangChain agents built with @[`create_agent`] automatically support tracing through [LangSmith](/langsmith/home), a platform for capturing, debugging, evaluating, and monitoring LLM application behavior. -Traces capture every step your agent takes, from the initial user input to the final response, including all tool calls, model interactions, and decision points. This enables you to debug your agents, evaluate performance, and monitor usage. +[_Traces_](/langsmith/observability-concepts#traces) record every step of your agent's execution, from the initial user input to the final response, including all tool calls, model interactions, and decision points. This execution data helps you debug issues, evaluate performance across different inputs, and monitor usage patterns in production. + +This guide shows you how to enable tracing for your LangChain agents and use LangSmith to analyze their execution. ## Prerequisites Before you begin, ensure you have the following: -* A [LangSmith account](https://smith.langchain.com/) (free to sign up) +- **A LangSmith account**: Sign up (for free) or log in at [smith.langchain.com](https://smith.langchain.com). +- **A LangSmith API key**: Follow the [Create an API key](/langsmith/create-account-api-key#create-an-api-key) guide. ## Enable tracing @@ -23,11 +27,7 @@ export LANGSMITH_TRACING=true export LANGSMITH_API_KEY= ``` - -You can get your API key from your [LangSmith settings](https://smith.langchain.com/settings). - - -## Quick start +## Quickstart No extra code is needed to log a trace to LangSmith. 
Just run your agent code as you normally would: diff --git a/src/oss/langchain/studio.mdx b/src/oss/langchain/studio.mdx index 041eb9bc15..b5f39ef0d7 100644 --- a/src/oss/langchain/studio.mdx +++ b/src/oss/langchain/studio.mdx @@ -1,5 +1,6 @@ --- -title: Studio +title: LangSmith Studio +sidebarTitle: LangSmith Studio --- import Studio from '/snippets/oss/studio.mdx'; diff --git a/src/oss/langgraph/application-structure.mdx b/src/oss/langgraph/application-structure.mdx index d54edccddf..2f6d1f2eba 100644 --- a/src/oss/langgraph/application-structure.mdx +++ b/src/oss/langgraph/application-structure.mdx @@ -2,13 +2,13 @@ title: Application structure --- - - -## Overview - A LangGraph application consists of one or more graphs, a configuration file (`langgraph.json`), a file that specifies dependencies, and an optional `.env` file that specifies environment variables. -This guide shows a typical structure of an application and shows how the required information to deploy an application using the LangSmith is specified. +This guide shows a typical structure of an application and shows you how to provide the required configuration to deploy an application with [LangSmith Deployment](/langsmith/deployments). + + +LangSmith Deployment is a managed hosting platform for deploying and scaling LangGraph agents. It handles the infrastructure, scaling, and operational concerns so you can deploy your stateful, long-running agents directly from your repository. Learn more in the [Deployment documentation](/langsmith/deployments). + ## Key Concepts diff --git a/src/oss/langgraph/deploy.mdx b/src/oss/langgraph/deploy.mdx index 79044e19ce..04d29a5c0b 100644 --- a/src/oss/langgraph/deploy.mdx +++ b/src/oss/langgraph/deploy.mdx @@ -1,10 +1,10 @@ --- -title: Deploy +title: LangSmith Deployment --- import deploy from '/snippets/oss/deploy.mdx'; -LangSmith is the fastest way to turn agents into production systems. 
Traditional hosting platforms are built for stateless, short-lived web apps, while LangGraph is **purpose-built for stateful, long-running agents**, so you can go from repo to reliable cloud deployment in minutes. +When you're ready to deploy your agent to production, LangSmith provides a managed hosting platform designed for agent workloads. Traditional hosting platforms are built for stateless, short-lived web applications, while LangGraph is **purpose-built for stateful, long-running agents** that require persistent state and background execution. LangSmith handles the infrastructure, scaling, and operational concerns so you can deploy directly from your repository. ## Prerequisites diff --git a/src/oss/langgraph/observability.mdx b/src/oss/langgraph/observability.mdx index 2cecebeae0..710b171b69 100644 --- a/src/oss/langgraph/observability.mdx +++ b/src/oss/langgraph/observability.mdx @@ -1,5 +1,5 @@ --- -title: Observability +title: LangSmith Observability --- import observability from '/snippets/oss/observability.mdx'; @@ -15,7 +15,8 @@ Traces are a series of steps that your application takes to go from input to out Before you begin, ensure you have the following: -* A [LangSmith account](https://smith.langchain.com/) (free to sign up) +- **A LangSmith account**: Sign up (for free) or log in at [smith.langchain.com](https://smith.langchain.com). +- **A LangSmith API key**: Follow the [Create an API key](/langsmith/create-account-api-key#create-an-api-key) guide. ## Enable tracing diff --git a/src/oss/langgraph/overview.mdx b/src/oss/langgraph/overview.mdx index e6566b9612..faa4fed5cd 100644 --- a/src/oss/langgraph/overview.mdx +++ b/src/oss/langgraph/overview.mdx @@ -117,9 +117,19 @@ LangGraph provides low-level supporting infrastructure for *any* long-running, s While LangGraph can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools for building agents. 
To improve your LLM application development, pair LangGraph with: -* [LangSmith](http://www.langchain.com/langsmith) — Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time. -* [LangGraph](/oss/langgraph/overview) — Deploy and scale agents effortlessly with a purpose-built deployment platform for long running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in [Studio](/langsmith/studio). -* [LangChain](/oss/langchain/overview) - Provides integrations and composable components to streamline LLM application development. Contains agent abstractions built on top of LangGraph. + + Trace requests, evaluate outputs, and monitor deployments in one place. Prototype locally with LangGraph, then move to production with integrated observability and evaluation to build more reliable agent systems. + + + + Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in Studio. + + + + Provides integrations and composable components to streamline LLM application development. Contains agent abstractions built on top of LangGraph.
+ + ## Acknowledgements diff --git a/src/oss/langgraph/studio.mdx b/src/oss/langgraph/studio.mdx index 5ce07bf5eb..42581cee8a 100644 --- a/src/oss/langgraph/studio.mdx +++ b/src/oss/langgraph/studio.mdx @@ -1,5 +1,5 @@ --- -title: Studio +title: LangSmith Studio --- import Studio from '/snippets/oss/studio.mdx'; diff --git a/src/snippets/oss/studio.mdx b/src/snippets/oss/studio.mdx index 03d86899cc..bc438b6b87 100644 @@ -1,27 +1,23 @@ -This guide will walk you through how to use **Studio** to visualize, interact, and debug your agent locally. +When building agents with LangChain locally, it's helpful to visualize what's happening inside your agent, interact with it in real time, and debug issues as they occur. **LangSmith Studio** is a free visual interface for developing and testing your LangChain agents from your local machine. -Studio is our free-to-use, powerful agent IDE that integrates with [LangSmith](/langsmith/home) to enable tracing, evaluation, and prompt engineering. See exactly how your agent thinks, trace every decision, and ship smarter, more reliable agents. +Studio connects to your locally running agent to show you each step your agent takes: the prompts sent to the model, tool calls and their results, and the final output. You can test different inputs, inspect intermediate states, and iterate on your agent's behavior without additional code or deployment. - - - +This page describes how to set up Studio with your local LangChain agent. ## Prerequisites Before you begin, ensure you have the following: -* An API key for [LangSmith](https://smith.langchain.com/settings) (free to sign up) -## Setup local Agent server +- **A LangSmith account**: Sign up (for free) or log in at [smith.langchain.com](https://smith.langchain.com). +- **A LangSmith API key**: Follow the [Create an API key](/langsmith/create-account-api-key#create-an-api-key) guide.
+- If you don't want data [traced](/langsmith/observability-concepts#traces) to LangSmith, set `LANGSMITH_TRACING=false` in your application's `.env` file. With tracing disabled, no data leaves your local server. + +## Set up local Agent server ### 1. Install the LangGraph CLI +The [LangGraph CLI](/langsmith/cli) provides a local development server (also called [Agent Server](/langsmith/agent-server)) that connects your agent to Studio. + :::python ```shell # Python >= 3.11 is required. @@ -37,7 +33,7 @@ npx @langchain/langgraph-cli ### 2. Prepare your agent -We'll use the following simple agent as an example: +If you already have a LangChain agent, you can use it directly. This example uses a simple email agent: ```python title="agent.py" from langchain.agents import create_agent @@ -62,10 +58,10 @@ agent = create_agent( ### 3. Environment variables -Create a `.env` file in the root of your project and fill in the necessary API keys. We'll need to set the `LANGSMITH_API_KEY` environment variable to the API key you get from [LangSmith](https://smith.langchain.com/settings). +Studio requires a LangSmith API key to connect your local agent. Create a `.env` file in the root of your project and add your API key from [LangSmith](https://smith.langchain.com/settings). - Be sure not to commit your `.env` to version control systems such as Git! + Ensure your `.env` file is not committed to version control, such as Git. ```bash .env @@ -74,7 +70,7 @@ LANGSMITH_API_KEY=lsv2... ### 4. Create a LangGraph config file -Inside your app's directory, create a configuration file `langgraph.json`: +The LangGraph CLI uses a configuration file to locate your agent and manage dependencies. 
Create a `langgraph.json` file in your app's directory: ```json title="langgraph.json" { @@ -86,13 +82,13 @@ Inside your app's directory, create a configuration file `langgraph.json`: } ``` -@[`create_agent`] automatically returns a compiled LangGraph graph that we can pass to the `graphs` key in our configuration file. +The @[`create_agent`] function automatically returns a compiled LangGraph graph, which is what the `graphs` key expects in the configuration file. -See the [LangGraph configuration file reference](/langsmith/cli#configuration-file) for detailed explanations of each key in the JSON object of the configuration file. +For detailed explanations of each key in the JSON object of the configuration file, refer to the [LangGraph configuration file reference](/langsmith/cli#configuration-file). -So far, our project structure looks like this: +At this point, the project structure will look like this: ```bash my-app/ @@ -105,7 +101,7 @@ my-app/ ### 5. Install dependencies :::python -In the root of your new LangGraph app, install the dependencies: +Install your project dependencies from the root directory: ```shell pip @@ -125,7 +121,7 @@ yarn install ### 6. View your agent in Studio -Start your Agent server: +Start the development server to connect your agent to Studio: :::python ```shell @@ -143,18 +139,34 @@ npx @langchain/langgraph-cli dev Safari blocks `localhost` connections to Studio. To work around this, run the above command with `--tunnel` to access Studio via a secure tunnel. -Your agent will be accessible via API (`http://127.0.0.1:2024`) and the Studio UI `https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024`: +Once the server is running, your agent is accessible both via API at `http://127.0.0.1:2024` and through the Studio UI at `https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024`: ![Agent view in the Studio UI](/oss/images/studio_create-agent.png) -Studio makes each step of your agent easily observable. 
Replay any input and inspect the exact prompt, tool arguments, return values, and token/latency metrics. If a tool throws an exception, Studio records it with surrounding state so you can spend less time debugging. +With Studio connected to your local agent, you can iterate quickly on your agent's behavior. Run a test input and inspect the full execution trace, including prompts, tool arguments, return values, and token/latency metrics. When something goes wrong, Studio captures exceptions with the surrounding state to help you understand what happened. -Keep your dev server running, edit prompts or tool signatures, and watch Studio hot-reload. Re-run the conversation thread from any step to verify behavior changes. See [Manage threads](/langsmith/use-studio#edit-thread-history) for more details. +The development server supports hot-reloading: make changes to prompts or tool signatures in your code, and Studio reflects them immediately. Re-run conversation threads from any step to test your changes without starting over. This workflow scales from simple single-tool agents to complex multi-node graphs. -As your agent grows, the same view scales from a single-tool demo to multi-node graphs, keeping decisions legible and reproducible. +For more information on how to run Studio, refer to the following guides in the [LangSmith docs](/langsmith/home): - -For an in-depth look at Studio, check out the [overview page](/langsmith/studio). - +- [Run application](/langsmith/use-studio#run-application) +- [Manage assistants](/langsmith/use-studio#manage-assistants) +- [Manage threads](/langsmith/use-studio#manage-threads) +- [Iterate on prompts](/langsmith/observability-studio) +- [Debug LangSmith traces](/langsmith/observability-studio#debug-langsmith-traces) +- [Add node to dataset](/langsmith/observability-studio#add-node-to-dataset) + +## Video guide + + + +
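As a side note to the `langgraph.json` changes documented in this diff: the `graphs` key uses a `./path/to/file.py:variable` reference form (e.g. `"./agent.py:agent"`). A minimal sketch of validating that form is below; `parse_graph_ref` and the inline config are illustrative assumptions, not part of the LangGraph CLI.

```python
import json

# Sketch: split a "./module.py:variable" graph reference, as used by the
# `graphs` key in langgraph.json. `parse_graph_ref` is a hypothetical helper
# for illustration; the real validation lives in the LangGraph CLI.
def parse_graph_ref(ref: str) -> tuple[str, str]:
    path, sep, variable = ref.rpartition(":")
    if not sep or not path or not variable:
        raise ValueError(f"expected '<path>:<variable>', got {ref!r}")
    return path, variable

# An example config shaped like the one in the studio guide above.
config = json.loads(
    '{"dependencies": ["."], "graphs": {"agent": "./agent.py:agent"}, "env": ".env"}'
)
for name, ref in config["graphs"].items():
    path, variable = parse_graph_ref(ref)
    print(f"{name}: file={path} variable={variable}")
```

For the authoritative schema, defer to the configuration file reference linked in the docs above.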