Merged
8 changes: 4 additions & 4 deletions .github/workflows/ci.yml
@@ -20,7 +20,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
python-version: ["3.10", "3.11", "3.12", "3.13"]
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
@@ -48,7 +48,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
python-version: ["3.10", "3.11", "3.12", "3.13"]
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
@@ -75,7 +75,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
python-version: ["3.10", "3.11", "3.12", "3.13"]
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
@@ -102,7 +102,7 @@ jobs:
runs-on: LargeBois
strategy:
matrix:
python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
python-version: ["3.10", "3.11", "3.12", "3.13"]
# TODO: fix errors so that we can run both `make dev` and `make full`
# dependencies: ['dev', 'full']
# dependencies: ["full"]
4 changes: 1 addition & 3 deletions .github/workflows/cli-compatibility.yml
@@ -10,12 +10,10 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
python-version: ["3.10", "3.11", "3.12", "3.13"]
typer-version: ["0.16.0", "0.17.0", "0.18.0", "0.19.2"]
click-version: ["8.1.0", "8.2.0"]
exclude:
- python-version: "3.9"
click-version: "8.2.0"
- typer-version: "0.16.0"
click-version: "8.2.0"
- typer-version: "0.16.0"
223 changes: 223 additions & 0 deletions docs/dist/examples/langchain_integration.ipynb
@@ -0,0 +1,223 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "0f4d10ab",
"metadata": {},
"source": [
"# LangChain\n",
"\n",
"## Overview\n",
"\n",
"This is a comprehensive guide on integrating Guardrails with [LangChain](https://github.com/langchain-ai/langchain), a framework for developing applications powered by large language models. By combining the validation capabilities of Guardrails with the flexible architecture of LangChain, you can create reliable and robust AI applications.\n",
"\n",
"### Key Features\n",
"\n",
"- **Easy Integration**: Guardrails can be seamlessly added to LangChain's LCEL syntax, allowing for quick implementation of validation checks.\n",
"- **Flexible Validation**: Guardrails provides various validators that can be used to enforce structural, type, and quality constraints on LLM outputs.\n",
"- **Corrective Actions**: When validation fails, Guardrails can take corrective measures, such as retrying LLM prompts or fixing outputs.\n",
"- **Compatibility**: Works with different LLMs and can be used in various LangChain components like chains, agents, and retrieval strategies.\n",
"\n",
"## Prerequisites\n",
"\n",
"1. Install the required LangChain packages along with Guardrails:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "382bd905",
"metadata": {},
"outputs": [],
"source": [
"! pip install guardrails-ai langchain langchain_openai"
]
},
{
"cell_type": "markdown",
"id": "3594ef6c",
"metadata": {},
"source": [
"2. Install the required validators from the Guardrails Hub:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "05635d8e",
"metadata": {},
"outputs": [],
"source": [
"! guardrails hub install hub://guardrails/competitor_check --quiet\n",
"! guardrails hub install hub://guardrails/toxic_language --quiet"
]
},
{
"cell_type": "markdown",
"id": "9ab6fdd9",
"metadata": {},
"source": [
"- `CompetitorCheck`: Identifies and optionally removes mentions of specified competitor names.\n",
"- `ToxicLanguage`: Detects and optionally removes toxic or inappropriate language from the output.\n"
]
},
{
"cell_type": "markdown",
"id": "325a69d3",
"metadata": {},
"source": [
"## Basic Integration\n",
"\n",
"Here's a basic example of how to integrate Guardrails with a LangChain LCEL chain:\n",
"\n",
"1. Add the required imports and initialize the OpenAI model:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "ae149cdb",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/calebcourier/Projects/support/langchain/.venv/lib/python3.12/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
}
],
"source": [
"from langchain_openai import ChatOpenAI\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4\")"
]
},
{
"cell_type": "markdown",
"id": "7701d9e4",
"metadata": {},
"source": [
"2. Create a Guard object with two validators: CompetitorCheck and ToxicLanguage."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "67f810db",
"metadata": {},
"outputs": [],
"source": [
"from guardrails import Guard\n",
"from guardrails.hub import CompetitorCheck, ToxicLanguage\n",
"\n",
"competitors_list = [\"delta\", \"american airlines\", \"united\"]\n",
"guard = Guard().use_many(\n",
" CompetitorCheck(competitors=competitors_list, on_fail=\"fix\"),\n",
" ToxicLanguage(on_fail=\"filter\"),\n",
")"
]
},
{
"cell_type": "markdown",
"id": "aff173e9",
"metadata": {},
"source": [
"3. Define the LCEL chain components and pipe together the prompt, model, Guard, and output parser.\n",
"The `guard.to_runnable()` method converts the Guardrails guard into a LangChain-compatible runnable object."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "7d6c4dc1",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_template(\"Answer this question {question}\")\n",
"output_parser = StrOutputParser()\n",
"\n",
"chain = prompt | model | guard.to_runnable() | output_parser"
]
},
{
"cell_type": "markdown",
"id": "f293a6f7",
"metadata": {},
"source": [
"4. Invoke the chain"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "7c037923",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/calebcourier/Projects/support/langchain/.venv/lib/python3.12/site-packages/guardrails/validator_service/__init__.py:84: UserWarning: Could not obtain an event loop. Falling back to synchronous validation.\n",
" warnings.warn(\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"1. Southwest Airlines\n",
"2. Delta Air Lines\n",
"3. American Airlines\n",
"4. United Airlines\n",
"5. JetBlue Airways\n"
]
}
],
"source": [
"result = chain.invoke(\n",
" {\"question\": \"What are the top five airlines for domestic travel in the US?\"}\n",
")\n",
"print(result)"
]
},
{
"cell_type": "markdown",
"id": "2cd4c67b",
"metadata": {},
"source": [
"Example output:\n",
" ```\n",
" 1. Southwest Airlines\n",
" 3. JetBlue Airways\n",
" ```\n",
"\n",
"In this example, the chain sends the question to the model and then applies Guardrails validators to the response. The CompetitorCheck validator specifically removes mentions of the specified competitors (Delta, American Airlines, United), resulting in a filtered list of non-competitor airlines."
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
120 changes: 120 additions & 0 deletions docs/dist/examples/langchain_integration.md
@@ -0,0 +1,120 @@
import CodeOutputBlock from '../../code-output-block.jsx';

# LangChain

## Overview

This is a comprehensive guide on integrating Guardrails with [LangChain](https://github.com/langchain-ai/langchain), a framework for developing applications powered by large language models. By combining the validation capabilities of Guardrails with the flexible architecture of LangChain, you can create reliable and robust AI applications.

### Key Features

- **Easy Integration**: Guardrails can be seamlessly added to LangChain's LCEL syntax, allowing for quick implementation of validation checks.
- **Flexible Validation**: Guardrails provides various validators that can be used to enforce structural, type, and quality constraints on LLM outputs.
- **Corrective Actions**: When validation fails, Guardrails can take corrective measures, such as retrying LLM prompts or fixing outputs.
- **Compatibility**: Works with different LLMs and can be used in various LangChain components like chains, agents, and retrieval strategies.

## Prerequisites

1. Install the required LangChain packages along with Guardrails:

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! Instead, edit the notebook w/the location & name as this file. -->


```bash
pip install guardrails-ai langchain langchain_openai
```

2. Install the required validators from the Guardrails Hub:


```bash
guardrails hub install hub://guardrails/competitor_check --quiet
guardrails hub install hub://guardrails/toxic_language --quiet
```

- `CompetitorCheck`: Identifies and optionally removes mentions of specified competitor names.
- `ToxicLanguage`: Detects and optionally removes toxic or inappropriate language from the output.
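
The two `on_fail` policies used later in this guide, `"fix"` and `"filter"`, can be summarized with a short stdlib-only sketch. Note that `apply_policy` is a hypothetical helper illustrating the semantics, not part of the Guardrails API:

```python
def apply_policy(text, is_valid, fixed, on_fail):
    """Toy model of on_fail handling: return text unchanged when valid,
    otherwise apply the requested policy."""
    if is_valid:
        return text
    if on_fail == "fix":
        # "fix": replace the output with a corrected version
        return fixed
    if on_fail == "filter":
        # "filter": drop the offending output entirely
        return None
    raise ValueError(f"unknown policy: {on_fail}")

print(apply_policy("mentions delta", False, "mentions [COMPETITOR]", "fix"))
print(apply_policy("toxic text", False, "", "filter"))
```

The real validators decide `is_valid` themselves (by detecting competitor names or toxic language) and compute the fixed value internally; this sketch only shows what each policy does with the result.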


## Basic Integration

Here's a basic example of how to integrate Guardrails with a LangChain LCEL chain:

1. Add the required imports and initialize the OpenAI model:


```python
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

model = ChatOpenAI(model="gpt-4")
```

<CodeOutputBlock lang="python">

```
/Users/calebcourier/Projects/support/langchain/.venv/lib/python3.12/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
```

</CodeOutputBlock>

2. Create a Guard object with two validators: CompetitorCheck and ToxicLanguage.


```python
from guardrails import Guard
from guardrails.hub import CompetitorCheck, ToxicLanguage

competitors_list = ["delta", "american airlines", "united"]
guard = Guard().use_many(
CompetitorCheck(competitors=competitors_list, on_fail="fix"),
ToxicLanguage(on_fail="filter"),
)
```

3. Define the LCEL chain components and pipe together the prompt, model, Guard, and output parser.
The `guard.to_runnable()` method converts the Guardrails guard into a LangChain-compatible runnable object.


```python
prompt = ChatPromptTemplate.from_template("Answer this question {question}")
output_parser = StrOutputParser()

chain = prompt | model | guard.to_runnable() | output_parser
```

4. Invoke the chain


```python
result = chain.invoke(
{"question": "What are the top five airlines for domestic travel in the US?"}
)
print(result)
```

<CodeOutputBlock lang="python">

```
/Users/calebcourier/Projects/support/langchain/.venv/lib/python3.12/site-packages/guardrails/validator_service/__init__.py:84: UserWarning: Could not obtain an event loop. Falling back to synchronous validation.
warnings.warn(


1. Southwest Airlines
2. Delta Air Lines
3. American Airlines
4. United Airlines
5. JetBlue Airways
```

</CodeOutputBlock>

Example output:
```
1. Southwest Airlines
3. JetBlue Airways
```

In this example, the chain sends the question to the model and then applies Guardrails validators to the response. The CompetitorCheck validator specifically removes mentions of the specified competitors (Delta, American Airlines, United), resulting in a filtered list of non-competitor airlines.
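
The line-level filtering behavior can be approximated with a stdlib-only sketch. Here `drop_competitor_lines` is a hypothetical helper for illustration only; the actual `CompetitorCheck` validator uses more sophisticated matching, and its exact output may differ:

```python
import re

def drop_competitor_lines(text: str, competitors: list[str]) -> str:
    """Remove any line that mentions a competitor, case-insensitively."""
    pattern = re.compile(
        "|".join(re.escape(c) for c in competitors), re.IGNORECASE
    )
    return "\n".join(
        line for line in text.splitlines() if not pattern.search(line)
    )

raw = """1. Southwest Airlines
2. Delta Air Lines
3. American Airlines
4. United Airlines
5. JetBlue Airways"""

print(drop_competitor_lines(raw, ["delta", "american airlines", "united"]))
```

Lines mentioning Delta, American Airlines, or United are dropped, leaving only the non-competitor entries, which mirrors the filtered list shown in the example output above.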