docs: lcel how to and cheatsheet (langchain-ai#21851)

Showing 12 changed files with 1,433 additions and 39 deletions.
---
keywords: [LCEL]
---
# How to create a dynamic (self-constructing) chain

:::info Prerequisites

This guide assumes familiarity with the following:
- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)
- [How to turn any function into a runnable](/docs/how_to/functions)

:::

Sometimes we want to construct parts of a chain at runtime, depending on the chain inputs ([routing](/docs/how_to/routing/) is the most common example of this). We can create dynamic chains like this using a very useful property of `RunnableLambda`s: if a `RunnableLambda` returns a `Runnable`, that `Runnable` is itself invoked. Let's see an example.

```{=mdx}
import ChatModelTabs from "@theme/ChatModelTabs";

<ChatModelTabs
  customVarName="llm"
/>
```
```python
# | echo: false

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-sonnet-20240229")
```
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import Runnable, RunnablePassthrough, chain

contextualize_instructions = """Convert the latest user question into a standalone question given the chat history. Don't answer the question, return the question and nothing else (no descriptive text)."""
contextualize_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_instructions),
        ("placeholder", "{chat_history}"),
        ("human", "{question}"),
    ]
)
contextualize_question = contextualize_prompt | llm | StrOutputParser()

qa_instructions = (
    """Answer the user question given the following context:\n\n{context}."""
)
qa_prompt = ChatPromptTemplate.from_messages(
    [("system", qa_instructions), ("human", "{question}")]
)


@chain
def contextualize_if_needed(input_: dict) -> Runnable:
    if input_.get("chat_history"):
        # NOTE: This is returning another Runnable, not an actual output.
        return contextualize_question
    else:
        return RunnablePassthrough()


@chain
def fake_retriever(input_: dict) -> str:
    return "egypt's population in 2024 is about 111 million"


full_chain = (
    RunnablePassthrough.assign(question=contextualize_if_needed).assign(
        context=fake_retriever
    )
    | qa_prompt
    | llm
    | StrOutputParser()
)

full_chain.invoke(
    {
        "question": "what about egypt",
        "chat_history": [
            ("human", "what's the population of indonesia"),
            ("ai", "about 276 million"),
        ],
    }
)
```

```output
"According to the context provided, Egypt's population in 2024 is estimated to be about 111 million."
```
The key here is that `contextualize_if_needed` returns another `Runnable` and not an actual output. This returned `Runnable` is itself run when the full chain is executed.

Looking at the trace we can see that, since we passed in `chat_history`, we executed the `contextualize_question` chain as part of the full chain: https://smith.langchain.com/public/9e0ae34c-4082-4f3f-beed-34a2a2f4c991/r
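The mechanics of this property can be imitated in a few lines of plain Python (a toy `Runnable` class, not LangChain's): if the wrapped function returns another runnable, `invoke` calls it on the same input.

```python
# Toy analogue of a RunnableLambda returning another Runnable.
# Hypothetical class, for illustration only; not LangChain code.

class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, input_):
        result = self.fn(input_)
        # Key property: a returned Runnable is itself invoked on the input.
        if isinstance(result, Runnable):
            return result.invoke(input_)
        return result

upper = Runnable(lambda d: d["question"].upper())
passthrough = Runnable(lambda d: d["question"])

# The router's body runs at invocation time, so the branch is
# chosen dynamically from the input.
route = Runnable(lambda d: upper if d.get("chat_history") else passthrough)

route.invoke({"question": "hi", "chat_history": [("human", "x")]})  # "HI"
route.invoke({"question": "hi"})  # "hi"
```

Because the branch is picked inside `invoke`, nothing about the chain's shape has to be known before the input arrives.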
Note that the streaming, batching, etc. capabilities of the returned `Runnable` are all preserved:
```python
for chunk in contextualize_if_needed.stream(
    {
        "question": "what about egypt",
        "chat_history": [
            ("human", "what's the population of indonesia"),
            ("ai", "about 276 million"),
        ],
    }
):
    print(chunk)
```

```output
What
 is
 the
 population
 of
 Egypt
?
```
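The reason streaming survives can be sketched by extending the toy `Runnable` from before (hypothetical names, not LangChain internals): the dispatcher first resolves which runnable to use, then delegates `stream` to it, so chunks flow straight from whichever branch was chosen.

```python
# Toy sketch of why a dynamically returned runnable keeps its streaming.
# Hypothetical classes and names; not the LangChain implementation.

class Runnable:
    def __init__(self, fn, stream_fn=None):
        self.fn = fn
        self.stream_fn = stream_fn

    def invoke(self, input_):
        result = self.fn(input_)
        return result.invoke(input_) if isinstance(result, Runnable) else result

    def stream(self, input_):
        result = self.fn(input_)
        if isinstance(result, Runnable):
            # Delegate: the chosen branch streams on its own terms.
            yield from result.stream(input_)
        elif self.stream_fn is not None:
            yield from self.stream_fn(input_)
        else:
            yield result

word_streamer = Runnable(
    lambda d: " ".join(d["question"].split()),
    stream_fn=lambda d: iter(d["question"].split()),
)
echo = Runnable(lambda d: d["question"])

router = Runnable(lambda d: word_streamer if d.get("chat_history") else echo)

chunks = list(router.stream({"question": "what about egypt", "chat_history": [1]}))
# chunks == ["what", "about", "egypt"]
```

Since `router.stream` never buffers the branch's output, the chosen runnable's chunking is passed through unchanged, which mirrors what the real `contextualize_if_needed.stream` call shows above.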
], | ||
"metadata": { | ||
"kernelspec": { | ||
"display_name": "poetry-venv-2", | ||
"language": "python", | ||
"name": "poetry-venv-2" | ||
}, | ||
"language_info": { | ||
"codemirror_mode": { | ||
"name": "ipython", | ||
"version": 3 | ||
}, | ||
"file_extension": ".py", | ||
"mimetype": "text/x-python", | ||
"name": "python", | ||
"nbconvert_exporter": "python", | ||
"pygments_lexer": "ipython3", | ||
"version": "3.9.1" | ||
} | ||
}, | ||
"nbformat": 4, | ||
"nbformat_minor": 5 | ||
} |