docs: lcel how to and cheatsheet (langchain-ai#21851)
baskaryan authored and jeromechoo committed May 20, 2024
1 parent 7b3969b commit 6979839
Showing 12 changed files with 1,433 additions and 39 deletions.
4 changes: 2 additions & 2 deletions docs/Makefile
@@ -69,9 +69,9 @@ md-sync:
generate-references:
$(PYTHON) scripts/generate_api_reference_links.py --docs_dir $(OUTPUT_NEW_DOCS_DIR)

build: install-py-deps generate-files copy-infra render md-sync generate-references
build: install-py-deps generate-files copy-infra render md-sync

vercel-build: install-vercel-deps build
vercel-build: install-vercel-deps build generate-references
rm -rf docs
mv $(OUTPUT_NEW_DOCS_DIR) docs
rm -rf build
2 changes: 1 addition & 1 deletion docs/docs/how_to/binding.ipynb
@@ -16,7 +16,7 @@
"id": "711752cb-4f15-42a3-9838-a0c67f397771",
"metadata": {},
"source": [
"# How to attach runtime arguments to a Runnable\n",
"# How to add default invocation args to a Runnable\n",
"\n",
":::info Prerequisites\n",
"\n",
200 changes: 200 additions & 0 deletions docs/docs/how_to/dynamic_chain.ipynb
@@ -0,0 +1,200 @@
{
"cells": [
{
"cell_type": "raw",
"id": "77bf57fb-e990-45f2-8b5f-c76388b05966",
"metadata": {},
"source": [
"---\n",
"keywords: [LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "50d57bf2-7104-4570-b3e5-90fd71e1bea1",
"metadata": {},
"source": [
"# How to create a dynamic (self-constructing) chain\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following:\n",
"- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n",
"- [How to turn any function into a runnable](/docs/how_to/functions)\n",
"\n",
":::\n",
"\n",
"Sometimes we want to construct parts of a chain at runtime, depending on the chain inputs ([routing](/docs/how_to/routing/) is the most common example of this). We can create dynamic chains like this using a very useful property of RunnableLambda's, which is that if a RunnableLambda returns a Runnable, that Runnable is itself invoked. Let's see an example.\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs\n",
" customVarName=\"llm\"\n",
"/>\n",
"```"
]
},
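{
"cell_type": "markdown",
"id": "a1f4c9e0-1111-4aaa-8aaa-000000000001",
"metadata": {},
"source": [
"First, a minimal sketch of the property itself, with no model involved (the runnable names here are just illustrative):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a1f4c9e0-1111-4aaa-8aaa-000000000002",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnableLambda\n",
"\n",
"uppercase = RunnableLambda(lambda s: s.upper())\n",
"reverse = RunnableLambda(lambda s: s[::-1])\n",
"\n",
"\n",
"def pick(s: str):\n",
"    # Returning a Runnable (rather than a plain value) means that\n",
"    # Runnable is itself invoked with the same input.\n",
"    return uppercase if s.islower() else reverse\n",
"\n",
"\n",
"picker = RunnableLambda(pick)\n",
"\n",
"print(picker.invoke(\"hello\"))  # HELLO\n",
"print(picker.invoke(\"HELLO\"))  # OLLEH"
]
},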
{
"cell_type": "code",
"execution_count": 4,
"id": "406bffc2-86d0-4cb9-9262-5c1e3442397a",
"metadata": {},
"outputs": [],
"source": [
"# | echo: false\n",
"\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "0ae6692b-983e-40b8-aa2a-6c078d945b9e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"According to the context provided, Egypt's population in 2024 is estimated to be about 111 million.\""
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import Runnable, RunnablePassthrough, chain\n",
"\n",
"contextualize_instructions = \"\"\"Convert the latest user question into a standalone question given the chat history. Don't answer the question, return the question and nothing else (no descriptive text).\"\"\"\n",
"contextualize_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", contextualize_instructions),\n",
" (\"placeholder\", \"{chat_history}\"),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"contextualize_question = contextualize_prompt | llm | StrOutputParser()\n",
"\n",
"qa_instructions = (\n",
" \"\"\"Answer the user question given the following context:\\n\\n{context}.\"\"\"\n",
")\n",
"qa_prompt = ChatPromptTemplate.from_messages(\n",
" [(\"system\", qa_instructions), (\"human\", \"{question}\")]\n",
")\n",
"\n",
"\n",
"@chain\n",
"def contextualize_if_needed(input_: dict) -> Runnable:\n",
" if input_.get(\"chat_history\"):\n",
" # NOTE: This is returning another Runnable, not an actual output.\n",
" return contextualize_question\n",
" else:\n",
" return RunnablePassthrough()\n",
"\n",
"\n",
"@chain\n",
"def fake_retriever(input_: dict) -> str:\n",
" return \"egypt's population in 2024 is about 111 million\"\n",
"\n",
"\n",
"full_chain = (\n",
" RunnablePassthrough.assign(question=contextualize_if_needed).assign(\n",
" context=fake_retriever\n",
" )\n",
" | qa_prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")\n",
"\n",
"full_chain.invoke(\n",
" {\n",
" \"question\": \"what about egypt\",\n",
" \"chat_history\": [\n",
" (\"human\", \"what's the population of indonesia\"),\n",
" (\"ai\", \"about 276 million\"),\n",
" ],\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "5076ddb4-4a99-47ad-b549-8ac27ca3e2c6",
"metadata": {},
"source": [
"The key here is that `contextualize_if_needed` returns another Runnable and not an actual output. This returned Runnable is itself run when the full chain is executed.\n",
"\n",
"Looking at the trace we can see that, since we passed in chat_history, we executed the contextualize_question chain as part of the full chain: https://smith.langchain.com/public/9e0ae34c-4082-4f3f-beed-34a2a2f4c991/r"
]
},
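{
"cell_type": "markdown",
"id": "b2e5d1c3-2222-4bbb-8bbb-000000000003",
"metadata": {},
"source": [
"If no chat history is passed in, `contextualize_if_needed` returns a `RunnablePassthrough`, so the input is passed through as-is instead of being rewritten. A quick sketch (the exact answer will depend on the model):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b2e5d1c3-2222-4bbb-8bbb-000000000004",
"metadata": {},
"outputs": [],
"source": [
"full_chain.invoke({\"question\": \"what's the population of egypt?\", \"chat_history\": []})"
]
},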
{
"cell_type": "markdown",
"id": "4fe6ca44-a643-4859-a290-be68403f51f0",
"metadata": {},
"source": [
"Note that the streaming, batching, etc. capabilities of the returned Runnable are all preserved"
]
},
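{
"cell_type": "markdown",
"id": "c3f6e2d4-3333-4ccc-8ccc-000000000005",
"metadata": {},
"source": [
"For example, here is a minimal batching sketch over one input that needs contextualization and one that doesn't (outputs will vary with the model):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c3f6e2d4-3333-4ccc-8ccc-000000000006",
"metadata": {},
"outputs": [],
"source": [
"contextualize_if_needed.batch(\n",
"    [\n",
"        {\n",
"            \"question\": \"what about egypt\",\n",
"            \"chat_history\": [\n",
"                (\"human\", \"what's the population of indonesia\"),\n",
"                (\"ai\", \"about 276 million\"),\n",
"            ],\n",
"        },\n",
"        {\"question\": \"what's the population of egypt?\", \"chat_history\": []},\n",
"    ]\n",
")"
]
},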
{
"cell_type": "code",
"execution_count": 11,
"id": "6def37fa-5105-4090-9b07-77cb488ecd9c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"What\n",
" is\n",
" the\n",
" population\n",
" of\n",
" Egypt\n",
"?\n"
]
}
],
"source": [
"for chunk in contextualize_if_needed.stream(\n",
" {\n",
" \"question\": \"what about egypt\",\n",
" \"chat_history\": [\n",
" (\"human\", \"what's the population of indonesia\"),\n",
" (\"ai\", \"about 276 million\"),\n",
" ],\n",
" }\n",
"):\n",
" print(chunk)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"language": "python",
"name": "poetry-venv-2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
14 changes: 12 additions & 2 deletions docs/docs/how_to/fallbacks.ipynb
@@ -1,11 +1,21 @@
{
"cells": [
{
"cell_type": "raw",
"id": "018f3868-e60d-4db6-a1c6-c6633c66b1f4",
"metadata": {},
"source": [
"---\n",
"keywords: [LCEL, fallbacks]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "19c9cbd6",
"metadata": {},
"source": [
"# Fallbacks\n",
"# How to add fallbacks to a runnable\n",
"\n",
"When working with language models, you may often encounter issues from the underlying APIs, whether these be rate limiting or downtime. Therefore, as you go to move your LLM applications into production it becomes more and more important to safeguard against these. That's why we've introduced the concept of fallbacks. \n",
"\n",
@@ -447,7 +457,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
"version": "3.9.1"
}
},
"nbformat": 4,
24 changes: 13 additions & 11 deletions docs/docs/how_to/index.mdx
@@ -19,27 +19,29 @@ For comprehensive descriptions of every class and function see the [API Referenc

This highlights functionality that is core to using LangChain.

- [How to: return structured data from an LLM](/docs/how_to/structured_output/)
- [How to: use a chat model to call tools](/docs/how_to/tool_calling/)
- [How to: return structured data from a model](/docs/how_to/structured_output/)
- [How to: use a model to call tools](/docs/how_to/tool_calling/)
- [How to: stream runnables](/docs/how_to/streaming)
- [How to: debug your LLM apps](/docs/how_to/debugging/)

## LangChain Expression Language (LCEL)

LangChain Expression Language is a way to create arbitrary custom chains. It is built on the [Runnable](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html) protocol.
[LangChain Expression Language](/docs/concepts/#langchain-expression-language-lcel) is a way to create arbitrary custom chains. It is built on the [Runnable](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html) protocol.

[**LCEL cheatsheet**](/docs/how_to/lcel_cheatsheet/): For a quick overview of how to use the main LCEL primitives.

- [How to: chain runnables](/docs/how_to/sequence)
- [How to: stream runnables](/docs/how_to/streaming)
- [How to: invoke runnables in parallel](/docs/how_to/parallel/)
- [How to: attach runtime arguments to a runnable](/docs/how_to/binding/)
- [How to: run custom functions](/docs/how_to/functions)
- [How to: pass through arguments from one step to the next](/docs/how_to/passthrough)
- [How to: add values to a chain's state](/docs/how_to/assign)
- [How to: configure a chain at runtime](/docs/how_to/configure)
- [How to: add message history](/docs/how_to/message_history)
- [How to: route execution within a chain](/docs/how_to/routing)
- [How to: add default invocation args to runnables](/docs/how_to/binding/)
- [How to: turn any function into a runnable](/docs/how_to/functions)
- [How to: pass through inputs from one chain step to the next](/docs/how_to/passthrough)
- [How to: configure runnable behavior at runtime](/docs/how_to/configure)
- [How to: add message history (memory) to a chain](/docs/how_to/message_history)
- [How to: route between sub-chains](/docs/how_to/routing)
- [How to: create a dynamic (self-constructing) chain](/docs/how_to/dynamic_chain/)
- [How to: inspect runnables](/docs/how_to/inspect)
- [How to: add fallbacks](/docs/how_to/fallbacks)
- [How to: add fallbacks to a runnable](/docs/how_to/fallbacks)

## Components
