diff --git a/docs/_sidebar.md b/docs/_sidebar.md index b3b29ff8..a89e85d4 100644 --- a/docs/_sidebar.md +++ b/docs/_sidebar.md @@ -1,14 +1,22 @@ -- [Get started](/) +- [Get started](README.md) - [Installation](/get_started/installation.md) - [Quickstart](/get_started/quickstart.md) - [Security](/get_started/security.md) - [LangChain Expression Language](/expression_language/expression_language.md) - [Get started](/expression_language/get_started.md) - - [Interface](/expression_language/interface.md) + - [Runnable interface](/expression_language/interface.md) + - [Primitives](/expression_language/primitives.md) + - [Sequence: Chaining runnables](/expression_language/primitives/sequence.md) + - [Map: Formatting inputs & concurrency](/expression_language/primitives/map.md) + - [Passthrough: Passing inputs through](/expression_language/primitives/passthrough.md) + - [Mapper: Mapping inputs](/expression_language/primitives/mapper.md) + - [Function: Run custom logic](/expression_language/primitives/function.md) + - [Binding: Configuring runnables](/expression_language/primitives/binding.md) + - [Router: Routing inputs](/expression_language/primitives/router.md) + - [Streaming](/expression_language/streaming.md) - Cookbook - [Prompt + LLM](/expression_language/cookbook/prompt_llm_parser.md) - [Multiple chains](/expression_language/cookbook/multiple_chains.md) - - [Route logic based on input](/expression_language/cookbook/routing.md) - [Adding memory](/expression_language/cookbook/adding_memory.md) - [Retrieval](/expression_language/cookbook/retrieval.md) - [Using Tools](/expression_language/cookbook/tools.md) diff --git a/docs/expression_language/expression_language.md b/docs/expression_language/expression_language.md index d6d9494d..d9b77a7c 100644 --- a/docs/expression_language/expression_language.md +++ b/docs/expression_language/expression_language.md @@ -2,7 +2,7 @@ LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains (we’ve seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL: -- **Streaming support:** When you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means eg. we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens. -- **Optimized parallel execution:** Whenever your LCEL chains have steps that can be executed in parallel (eg if you fetch documents from multiple retrievers) we automatically do it for the smallest possible latency. +- **First-class streaming support:** When you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means eg. we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens. +- **Optimized concurrent execution:** Whenever your LCEL chains have steps that can be executed concurrently (eg if you fetch documents from multiple retrievers) we automatically do it for the smallest possible latency. 
- **Retries and fallbacks:** Configure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. - **Access intermediate results:** For more complex chains it’s often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain. diff --git a/docs/expression_language/get_started.md b/docs/expression_language/get_started.md index f5e0cfa8..70c12b9a 100644 --- a/docs/expression_language/get_started.md +++ b/docs/expression_language/get_started.md @@ -96,15 +96,50 @@ To follow the steps along: 3. The `model` component takes the generated prompt, and passes into the OpenAI chat model for evaluation. The generated output from the model is a `ChatMessage` object (specifically an `AIChatMessage`). 4. Finally, the `outputParser` component takes in a `ChatMessage`, and transforms this into a `String`, which is returned from the invoke method. +![Pipeline](img/pipeline.png) + Note that if you’re curious about the output of any components, you can always test out a smaller version of the chain such as `promptTemplate` or `promptTemplate.pipe(model)` to see the intermediate results. +```dart +final input = {'topic': 'ice cream'}; + +final res1 = await promptTemplate.invoke(input); +print(res1.toChatMessages()); +// [HumanChatMessage{ +// content: ChatMessageContentText{ +// text: Tell me a joke about ice cream, +// }, +// }] + +final res2 = await promptTemplate.pipe(model).invoke(input); +print(res2); +// ChatResult{ +// id: chatcmpl-9J37Tnjm1dGUXqXBF98k7jfexATZW, +// output: AIChatMessage{ +// content: Why did the ice cream cone go to therapy? Because it had too many sprinkles of emotional issues!, +// }, +// finishReason: FinishReason.stop, +// metadata: { +// model: gpt-3.5-turbo-0125, +// created: 1714327251, +// system_fingerprint: fp_3b956da36b +// }, +// usage: LanguageModelUsage{ +// promptTokens: 14, +// promptBillableCharacters: null, +// responseTokens: 21, +// responseBillableCharacters: null, +// totalTokens: 35 +// }, +// streaming: false +// } +``` + ## RAG Search Example For our next example, we want to run a retrieval-augmented generation chain to add some context when responding to questions. ```dart -final openaiApiKey = Platform.environment['OPENAI_API_KEY']; - // 1. Create a vector store and add documents to it final vectorStore = MemoryVectorStore( embeddings: OpenAIEmbeddings(apiKey: openaiApiKey), @@ -116,25 +151,28 @@ await vectorStore.addDocuments( ], ); -// 2. Construct a RAG prompt template +// 2. Define the retrieval chain +final retriever = vectorStore.asRetriever(); +final setupAndRetrieval = Runnable.fromMap({ + 'context': retriever.pipe( + Runnable.mapInput((docs) => docs.map((d) => d.pageContent).join('\n')), + ), + 'question': Runnable.passthrough(), +}); + +// 3. Construct a RAG prompt template final promptTemplate = ChatPromptTemplate.fromTemplates([ (ChatMessageType.system, 'Answer the question based on only the following context:\n{context}'), (ChatMessageType.human, '{question}'), ]); -// 3. Create a Runnable that combines the retrieved documents into a single string -final docCombiner = Runnable.fromFunction, String>((docs, _) { - return docs.map((final d) => d.pageContent).join('\n'); -}); - -// 4. Define the RAG pipeline -final chain = Runnable.fromMap({ - 'context': vectorStore.asRetriever().pipe(docCombiner), - 'question': Runnable.passthrough(), -}) +// 4. 
Define the final chain
+final model = ChatOpenAI(apiKey: openaiApiKey);
+const outputParser = StringOutputParser();
+final chain = setupAndRetrieval
     .pipe(promptTemplate)
-    .pipe(ChatOpenAI(apiKey: openaiApiKey))
-    .pipe(StringOutputParser());
+    .pipe(model)
+    .pipe(outputParser);
 
 // 5. Run the pipeline
 final res = await chain.invoke('Who created LangChain.dart?');
@@ -142,18 +180,50 @@ print(res);
 // David created LangChain.dart
 ```
 
-In this chain we add some extra logic around retrieving context from a vector store.
+In this case, the composed chain is:
+
+```dart
+final chain = setupAndRetrieval
+    .pipe(promptTemplate)
+    .pipe(model)
+    .pipe(outputParser);
+```
+
+To explain this, we can first see that the prompt template above takes in `context` and `question` as values to be substituted into the prompt. Before building the prompt, we want to retrieve the documents relevant to the question and include them as part of the context.
+
+As a preliminary step, we've set up the retriever using an in-memory store, which can retrieve documents based on a query. The retriever is a runnable component as well, so it can be chained together with other components, but you can also run it separately:
 
-We first instantiate our vector store and add some documents to it. Then we define our prompt, which takes in two input variables:
+```dart
+final res1 = await retriever.invoke('Who created LangChain.dart?');
+print(res1);
+// [Document{pageContent: David ported LangChain to Dart in LangChain.dart},
+// Document{pageContent: LangChain was created by Harrison, metadata: {}}]
+```
+
+We then use a `RunnableMap` to prepare the inputs expected by the prompt. The `retriever` performs the document search, a `RunnableMapInput` combines the retrieved documents into a single string for the `context` key, and a `RunnablePassthrough` forwards the user's question under the `question` key:
 
-- `context` -> this is a string which is returned from our vector store based on a semantic search from the input.
-- `question` -> this is the question we want to ask.
+```dart
+final setupAndRetrieval = Runnable.fromMap({
+  'context': retriever.pipe(
+    Runnable.mapInput((docs) => docs.map((d) => d.pageContent).join('\n')),
+  ),
+  'question': Runnable.passthrough(),
+});
+```
 
-In our `chain`, we use a `RunnableMap` which is special type of runnable that takes an object of runnables and executes them all in parallel. It then returns an object with the same keys as the input object, but with the values replaced with the output of the runnables.
+To review, the complete chain is:
 
-In our case, it has two sub-chains to get the data required by our prompt:
+```dart
+final chain = setupAndRetrieval
+    .pipe(promptTemplate)
+    .pipe(model)
+    .pipe(outputParser);
+```
 
-- `context` -> this is a `RunnableFunction` which takes the input from the `.invoke()` call, makes a request to our vector store, and returns the retrieved documents combined in a single String.
-- `question` -> this uses a `RunnablePassthrough` which simply passes whatever the input was through to the next step, and in our case it returns it to the key in the object we defined.
+With the flow being:
+1. The first step creates a `RunnableMap` with two entries. The first entry, `context`, will contain the combined document results fetched by the retriever. The second entry, `question`, will contain the user's original question. To pass on the question, we use a `RunnablePassthrough` to copy this entry.
+2. 
Feed the map from the step above to the `promptTemplate` component. It uses the user input (the `question` entry) and the retrieved documents (the `context` entry) to construct a prompt, and outputs a `PromptValue`.
+3. The `model` component takes the generated prompt and passes it into the OpenAI chat model for evaluation. The output generated by the model is a `ChatResult` object.
+4. Finally, the `outputParser` component takes in the `ChatResult` and transforms it into a Dart `String`, which is returned from the invoke method.
 
-Finally, we chain together the prompt, model, and output parser as before.
+![RAG Pipeline](img/rag_pipeline.png)
diff --git a/docs/expression_language/img/pipeline.png b/docs/expression_language/img/pipeline.png
new file mode 100644
index 00000000..f500b05f
Binary files /dev/null and b/docs/expression_language/img/pipeline.png differ
diff --git a/docs/expression_language/img/rag_pipeline.png b/docs/expression_language/img/rag_pipeline.png
new file mode 100644
index 00000000..f5056e2d
Binary files /dev/null and b/docs/expression_language/img/rag_pipeline.png differ
diff --git a/docs/expression_language/interface.md b/docs/expression_language/interface.md
index f3dbb065..9b7085d8 100644
--- a/docs/expression_language/interface.md
+++ b/docs/expression_language/interface.md
@@ -1,10 +1,12 @@
-# Interface
+# Runnable interface
 
-In an effort to make it as easy as possible to create custom chains, we've implemented a `Runnable` interface that most components implement. This is a standard interface with a few different methods, which makes it easy to define custom chains as well as making it possible to invoke them in a standard way. The standard interface exposed includes:
+To make it as easy as possible to create custom chains, LangChain provides a `Runnable` interface that most components implement, including chat models, LLMs, output parsers, retrievers, prompt templates, and more.
 
-- `invoke`: call the chain on an input.
-- `stream`: stream back chunks of the response.
-- `batch`: call the chain on a list of inputs.
+This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way. The standard interface includes:
+
+- `invoke`: call the chain on an input and return the output.
+- `stream`: call the chain on an input and stream the output.
+- `batch`: call the chain on a list of inputs and return a list of outputs.
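+
+For example, all three methods can be called on the same chain. A minimal sketch (assuming a `chain` like the prompt + model + parser sequence defined below):
+
+```dart
+// Call the chain on a single input and wait for the full output.
+final res = await chain.invoke({'topic': 'bears'});
+
+// Call the chain on a single input and stream the output chunks.
+await for (final chunk in chain.stream({'topic': 'bears'})) {
+  stdout.write(chunk);
+}
+
+// Call the chain on several inputs at once.
+final results = await chain.batch([
+  {'topic': 'bears'},
+  {'topic': 'cats'},
+]);
+```
+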
 The type of the input and output varies by component:
@@ -12,58 +14,32 @@ The type of the input and output varies by component:
 |-----------------------------|------------------------|------------------------|
 | `PromptTemplate`            | `Map<String, dynamic>` | `PromptValue`          |
 | `ChatMessagePromptTemplate` | `Map<String, dynamic>` | `PromptValue`          |
-| `Retriever`                 | `String`               | `List<Document>`       |
-| `DocumentTransformer`       | `List<Document>`       | `List<Document>`       |
 | `LLM`                       | `PromptValue`          | `LLMResult`            |
 | `ChatModel`                 | `PromptValue`          | `ChatResult`           |
-| `Chain`                     | `Map<String, dynamic>` | `Map<String, dynamic>` |
-| `OutputParser`              | Runnable input type    | Parser output type     |
+| `OutputParser`              | Any object             | Parser output type     |
+| `Retriever`                 | `String`               | `List<Document>`       |
+| `DocumentTransformer`       | `List<Document>`       | `List<Document>`       |
 | `Tool`                      | `Map<String, dynamic>` | `String`               |
-| `RunnableSequence`          | Fist input type        | Last output type       |
-| `RunnableMap`               | Runnable input type    | `Map<String, dynamic>` |
-| `RunnableBinding`           | Runnable input type    | Runnable output type   |
-| `RunnableFunction`          | Runnable input type    | Runnable output type   |
-| `RunnableRouter`            | Runnable input type    | Runnable output type   |
-| `RunnablePassthrough`       | Runnable input type    | Runnable input type    |
-| `RunnableItemFromMap`       | `Map<String, dynamic>` | Runnable output type   |
-| `RunnableMapFromInput`      | Runnable input type    | `Map<String, dynamic>` |
-| `RunnableMapInput`          | Runnable input type    | Runnable output type   |
-
-You can combine `Runnable` objects into sequences in three ways:
-
-- Calling `.pipe` method which takes another `Runnable` as an argument. E.g.:
-
-```dart
-final chain = promptTemplate.pipe(chatModel);
-```
-
-- Using the `|` operator. This is a convenience method that calls `.pipe` under the hood (note that it offers less type safety than `.pipe` because of Dart limitations). E.g.:
-
-```dart
-final chain = promptTemplate | chatModel;
-```
+| `Chain`                     | `Map<String, dynamic>` | `Map<String, dynamic>` |
 
-- Using the `Runnable.fromList` static method with a list of `Runnable`, which will run in sequence when invoked. E.g.:
-
-```dart
-final chain = Runnable.fromList([promptTemplate, chatModel]);
-```
+There are also several useful primitives for working with runnables, which you can read about in [this section](/expression_language/primitives.md).
 
 ## Runnable interface
 
 Let's take a look at these methods! To do so, we'll create a super simple `PromptTemplate` + `ChatModel` chain.
 
 ```dart
-final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
 final model = ChatOpenAI(apiKey: openaiApiKey);
 
 final promptTemplate = ChatPromptTemplate.fromTemplate(
   'Tell me a joke about {topic}',
 );
 
-final chain = promptTemplate | model | StringOutputParser();
+final chain = promptTemplate.pipe(model).pipe(StringOutputParser());
 ```
 
+In this example, we use the method `pipe` to combine runnables into a sequence. You can read more about this in the [RunnableSequence: Chaining runnables](/expression_language/primitives/sequence.md) section.
+
 ### Invoke
 
 The `invoke` method takes an input and returns the output of invoking the chain on that input.
 
@@ -139,302 +115,3 @@ print(res);
 //['Why did the bear break up with his girlfriend? Because he couldn't bear the relationship anymore!,',
 // 'Why don't cats play poker in the jungle? Because there's too many cheetahs!']
-
-## Runnable types
-
-The `Runnable` interface is implemented by most components (models, prompt templates, retrievers, etc.). However, there are a few special types of `Runnable` that facilitate the creation of custom chains. 
- -### RunnableSequence - -A `RunnableSequence` allows you to run multiple `Runnable` objects sequentially, passing the output of the previous `Runnable` to the next one. - -As mentioned above, you can create a `RunnableSequence` in three ways: - -- `.pipe` method -- `|` operator -- `Runnable.fromList` static method - -When you call `invoke` on a `RunnableSequence`, it will invoke each `Runnable` in the sequence in order, passing the output of the previous `Runnable` to the next one. The output of the last `Runnable` in the sequence is returned. - -You can think of a `RunnableSequence` as the replacement for `SequentialChain`. - -Example: - -```dart -final openaiApiKey = Platform.environment['OPENAI_API_KEY']; -final model = ChatOpenAI(apiKey: openaiApiKey); - -final promptTemplate = ChatPromptTemplate.fromTemplate( - 'Tell me a joke about {topic}', -); - -// The following three chains are equivalent: -final chain1 = promptTemplate | model | StringOutputParser(); -final chain2 = promptTemplate.pipe(model).pipe(StringOutputParser()); -final chain3 = Runnable.fromList( - [promptTemplate, model, StringOutputParser()], -); - -final res = await chain1.invoke({'topic': 'bears'}); -print(res); -// Why don't bears wear shoes?\n\nBecause they have bear feet! -``` - -### RunnableMap - -A `RunnableMap` allows you to run multiple `Runnable` objects in parallel on the same input returning a map of the results. - -You can create a `RunnableMap` using the `Runnable.fromMap` static method. - -When you call `invoke` on a `RunnableMap`, it will invoke each `Runnable` in the map in parallel, passing the same input to each one. The output of each `Runnable` is returned in a map, where the keys are the names of the outputs. - -Example: - -```dart -final openaiApiKey = Platform.environment['OPENAI_API_KEY']; -final model = ChatOpenAI(apiKey: openaiApiKey); - -final promptTemplate1 = ChatPromptTemplate.fromTemplate( - 'What is the city {person} is from?', -); -final promptTemplate2 = ChatPromptTemplate.fromTemplate( - 'How old is {person}?', -); -final promptTemplate3 = ChatPromptTemplate.fromTemplate( - 'Is {city} a good city for a {age} years old person?', -); -const stringOutputParser = StringOutputParser(); - -final chain = Runnable.fromMap({ - 'city': promptTemplate1 | model | stringOutputParser, - 'age': promptTemplate2 | model | stringOutputParser, -}) | promptTemplate3 | model | stringOutputParser; - -final res = await chain.invoke({'person': 'Elon Musk'}); -print(res); -// It is subjective to determine whether Pretoria, South Africa, is a good city for a 50-year-old person as it depends on individual preferences and needs. -``` - -### RunnableBinding - -A `RunnableBinding` allows you to run a `Runnable` object with a set of options. - -You can create a `RunnableBinding` using the `Runnable.bind` method. - -When you call `invoke` on a `RunnableBinding`, it will invoke the `Runnable` with the options passed to `bind`. - -Example: - -```dart -final openaiApiKey = Platform.environment['OPENAI_API_KEY']; -final model = ChatOpenAI(apiKey: openaiApiKey); - -final promptTemplate = ChatPromptTemplate.fromTemplate( - 'Tell me a joke about {foo}', -); - -final chain = promptTemplate | - model.bind(const ChatOpenAIOptions(stop: ['\n'])) | - StringOutputParser(); - -final res = await chain.invoke({'foo': 'bears'}); -print(res); -// Why don't bears wear shoes? -``` - -### RunnableFunction - -A `RunnableFunction` allows you to run a Dart function as part of a chain. 
- -You can create a `RunnableFunction` using the `Runnable.fromFunction` static method. - -When you call `invoke` on a `RunnableFunction`, it will invoke the function, passing the input to it. The output of the function is returned. - -Example: - -```dart -final openaiApiKey = Platform.environment['OPENAI_API_KEY']; -final model = ChatOpenAI(apiKey: openaiApiKey); - -final promptTemplate = ChatPromptTemplate.fromTemplate( - 'How much is {a} + {b}?', -); - -final chain = Runnable.fromMap({ - 'a': Runnable.fromFunction(( - final Map input, - final options, - ) async { - final foo = input['foo'] ?? ''; - return '${foo.length}'; - }), - 'b': Runnable.fromFunction(( - final Map input, - final options, - ) async { - final foo = input['foo'] ?? ''; - final bar = input['bar'] ?? ''; - return '${bar.length * foo.length}'; - }), - }) | - promptTemplate | - model | - StringOutputParser(); - -final res = await chain.invoke({'foo': 'foo', 'bar': 'bar'}); -print(res); -// 3 + 9 = 12 -``` - -### RunnableRouter - -A `RunnableRouter` takes the input it receives and routes it to the runnable specified by the `router` function. - -You can create a `RunnableRouter` using the `Runnable.router` static method. - -When you call `invoke` on a `RunnableRouter`, it will take the input it receives and return the output of the `router` function. - -Example: -```dart -final router = Runnable.fromRouter((Map input, _) { - return switch(input['topic'] as String) { - 'langchain' => langchainChain, - 'anthropic' => anthropicChain, - _ => generalChain, - }; -}); -final fullChain = Runnable.fromMap({ - 'topic': classificationChain, - 'question': Runnable.getItemFromMap('question'), - }).pipe(router); -final res2 = await fullChain.invoke({ - 'question': 'how do I use Anthropic?', -}); -print(res2); -// As Dario Amodei told me, using Anthropic is a straightforward process that... -``` - -Check the [Routing guide](cookbook/routing.md) for more information. - -### RunnablePassthrough - -A `RunnablePassthrough` takes the input it receives and passes it through as output. - -You can create a `RunnablePassthrough` using the `Runnable.passthrough` static method. - -When you call `invoke` on a `RunnablePassthrough`, it will return the input it receives. - -Example: - -```dart -final openaiApiKey = Platform.environment['OPENAI_API_KEY']; -final model = ChatOpenAI(apiKey: openaiApiKey); - -final promptTemplate = ChatPromptTemplate.fromTemplate( - 'Tell me a joke about {foo}', -); - -final map = Runnable.fromMap({ - 'foo': Runnable.passthrough(), -}); -final chain = map | promptTemplate | model | StringOutputParser(); - -final res = await chain.invoke('bears'); -print(res); -// Why don't bears wear shoes? Because they have bear feet! -``` - -### RunnableItemFromMap - -A `RunnableItemFromMap` allows you to get a value from the input. - -You can create a `RunnableItemFromMap` using the `Runnable.getItemFromMap` static method. - -When you call `invoke` on a `RunnableItemFromMap`, it will take the input it receives and returns the value of the given key. 
- -Example: - -```dart -final promptTemplate = ChatPromptTemplate.fromTemplate(''' -Answer the question based only on the following context: -{context} - -Question: {question} - -Answer in the following language: {language}'''); - -final chain = Runnable.fromMap({ - 'context': Runnable.getItemFromMap('question') | - (retriever | Runnable.fromFunction((docs, _) => docs.join('\n'))), - 'question': Runnable.getItemFromMap('question'), - 'language': Runnable.getItemFromMap('language'), - }) | - promptTemplate | - model | - StringOutputParser(); - -final res = await chain.invoke({ - 'question': 'What payment methods do you accept?', - 'language': 'es_ES', -}); -print(res); -// Aceptamos los siguientes métodos de pago: iDEAL, PayPal y tarjeta de crédito. -``` - -### RunnableMapFromInput - -A `RunnableMapFromInput` allows you to output a map with the given key and the input as value. - -You can create a `RunnableMapFromInput` using the `Runnable.getMapFromInput` static method. - -When you call `invoke` on a `RunnableMapFromInput`, it will take the input it receives and returns a map with the given key and the input as value. - -It is equivalent to: - -```dart -Runnable.fromMap({ - 'key': Runnable.passthrough(), -}) -``` - -Example: - -```dart -final openaiApiKey = Platform.environment['OPENAI_API_KEY']; -final model = ChatOpenAI(apiKey: openaiApiKey); - -final promptTemplate = ChatPromptTemplate.fromTemplate( - 'Tell me a joke about {foo}', -); - -final chain = Runnable.getMapFromInput('foo') | - promptTemplate | - model | - StringOutputParser(); - -final res = await chain.invoke('bears'); -print(res); -// Why don't bears wear shoes? Because they have bear feet! -``` - -### RunnableMapInput - -A `RunnableMapInput` allows you to map the input to a different value. - -You can create a `RunnableMapInput` using the `Runnable.mapInput` static method. - -When you call `invoke` on a `RunnableMapInput`, it will take the input it receives and returns the output returned by the given `inputMapper` function. - -Example: - -```dart -final agent = Agent.fromRunnable( - Runnable.mapInput( - (final AgentPlanInput planInput) => { - 'input': planInput.inputs['input'], - 'agent_scratchpad': buildScratchpad(planInput.intermediateSteps), - }, - ).pipe(prompt).pipe(model).pipe(outputParser), - tools: [tool], -); -``` diff --git a/docs/expression_language/primitives.md b/docs/expression_language/primitives.md new file mode 100644 index 00000000..89d618e4 --- /dev/null +++ b/docs/expression_language/primitives.md @@ -0,0 +1,13 @@ +# Primitives + +In addition to various components that are usable with LCEL, LangChain also includes various primitives that help pass around and format data, bind arguments, invoke custom logic, and more. + +This section goes into greater depth on where and how some of these components are useful. 
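+
+As a quick preview, several of these primitives often appear together in a single chain. A minimal sketch (assuming `retriever`, `promptTemplate`, and `model` are defined as in the pages linked below):
+
+```dart
+final chain = Runnable.fromMap({
+  // Map: run both entries concurrently on the same input.
+  'context': retriever.pipe(
+    // Mapper: combine the retrieved documents into a single string.
+    Runnable.mapInput((docs) => docs.map((d) => d.pageContent).join('\n')),
+  ),
+  // Passthrough: forward the user question unchanged.
+  'question': Runnable.passthrough(),
+})
+    // Sequence: pipe the result through prompt, model, and output parser.
+    .pipe(promptTemplate)
+    .pipe(model)
+    .pipe(StringOutputParser());
+```
+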
+ +- [Sequence: Chaining runnables](/expression_language/primitives/sequence.md) +- [Map: Formatting inputs & concurrency](/expression_language/primitives/map.md) +- [Passthrough: Passing inputs through](/expression_language/primitives/passthrough.md) +- [Mapper: Mapping inputs](/expression_language/primitives/mapper.md) +- [Function: Run custom logic](/expression_language/primitives/function.md) +- [Binding: Configuring runnables](/expression_language/primitives/binding.md) +- [Router: Routing inputs](/expression_language/primitives/router.md) diff --git a/docs/expression_language/primitives/binding.md b/docs/expression_language/primitives/binding.md new file mode 100644 index 00000000..3f6b9730 --- /dev/null +++ b/docs/expression_language/primitives/binding.md @@ -0,0 +1,112 @@ +# RunnableBinding: Configuring runnables at runtime + +Sometimes we want to invoke a `Runnable` within a `Runnable` sequence with constant options that are not part of the output of the preceding `Runnable` in the sequence, and which are not part of the user input. We can use `Runnable.bind()` to pass these options in. + +Suppose we have a simple prompt + model sequence: + +```dart +final model = ChatOpenAI(apiKey: openaiApiKey); +const outputParser = StringOutputParser(); + +final promptTemplate = ChatPromptTemplate.fromTemplates([ + (ChatMessageType.system, + 'Write out the following equation using algebraic symbols then solve it. ' + 'Use the format\n\nEQUATION:...\nSOLUTION:...\n\n'), + (ChatMessageType.human, '{equation_statement}'), +]); + +final chain = Runnable.getMapFromInput('equation_statement') + .pipe(promptTemplate) + .pipe(model) + .pipe(outputParser); + +final res = await chain.invoke('x raised to the third plus seven equals 12'); +print(res); +// EQUATION: \(x^3 + 7 = 12\) +// +// SOLUTION: +// Subtract 7 from both sides: +// \(x^3 = 5\) +// +// Take the cube root of both sides: +// \(x = \sqrt[3]{5}\) +``` + +and want to call the model with certain `stop` words: + +```dart +final chain2 = Runnable.getMapFromInput('equation_statement') + .pipe(promptTemplate) + .pipe(model.bind(ChatOpenAIOptions(stop: ['SOLUTION']))) + .pipe(outputParser); +final res2 = await chain2.invoke('x raised to the third plus seven equals 12'); +print(res2); +// EQUATION: \( x^3 + 7 = 12 \) +``` + +You can use this pattern to configure different options for the same runnable without having to create a new instance. For example, you can use different models for different prompts: + +```dart +final chatModel = ChatOpenAI(apiKey: openaiApiKey); +const outputParser = StringOutputParser(); +final prompt1 = PromptTemplate.fromTemplate('How are you {name}?'); +final prompt2 = PromptTemplate.fromTemplate('How old are you {name}?'); + +final chain = Runnable.fromMap({ + 'q1': prompt1 | + chatModel.bind(ChatOpenAIOptions(model: 'gpt-4-turbo')) | + outputParser, + 'q2': prompt2 | + chatModel.bind(ChatOpenAIOptions(model: 'gpt-3.5-turbo')) | + outputParser, +}); + +final res = await chain.invoke({'name': 'David'}); +print(res); +// {q1: Hello! I'm just a computer program, so I don't have feelings, +// q2: I am an AI digital assistant, so I do not have an age like humans do.} +``` + +Another similar use case is to use different `temperature` settings for different parts of the chain. You can easily do this by using `model.bind(ChatOpenAIOptions(temperature: 1))` as shown above. + +## Attaching functions + +One particularly useful application of `Runnable.bind()` is to attach the functions that the model can call. 
+```dart
+final model = ChatOpenAI(apiKey: openaiApiKey);
+final outputParser = JsonOutputFunctionsParser();
+
+final promptTemplate = ChatPromptTemplate.fromTemplates([
+  (ChatMessageType.system, 'Write out the following equation using algebraic symbols then solve it.'),
+  (ChatMessageType.human, '{equation_statement}'),
+]);
+
+const function = ChatFunction(
+  name: 'solver',
+  description: 'Formulates and solves an equation',
+  parameters: {
+    'type': 'object',
+    'properties': {
+      'equation': {
+        'type': 'string',
+        'description': 'The algebraic expression of the equation',
+      },
+      'solution': {
+        'type': 'string',
+        'description': 'The solution to the equation',
+      },
+    },
+    'required': ['equation', 'solution'],
+  },
+);
+
+final chain = Runnable.getMapFromInput('equation_statement')
+    .pipe(promptTemplate)
+    .pipe(model.bind(ChatOpenAIOptions(functions: [function])))
+    .pipe(outputParser);
+
+final res = await chain.invoke('x raised to the third plus seven equals 12');
+print(res);
+// {equation: x^3 + 7 = 12, solution: x = 1}
+```
diff --git a/docs/expression_language/primitives/function.md b/docs/expression_language/primitives/function.md
new file mode 100644
index 00000000..e0b621fd
--- /dev/null
+++ b/docs/expression_language/primitives/function.md
@@ -0,0 +1,175 @@
+# Function: Run custom logic
+
+As we discussed in the [Mapper: Mapping input values](/expression_language/primitives/mapper.md) section, it is common to need to map the output value of a previous runnable to a new value that conforms to the input requirements of the next runnable. `Runnable.mapInput`, `Runnable.mapInputStream`, `Runnable.getItemFromMap`, and `Runnable.getMapFromInput` are the easiest ways to do that with minimal boilerplate. However, sometimes you may need more control over the input and output values. This is where `Runnable.fromFunction` comes in.
+
+The main differences between `Runnable.mapInput` and `Runnable.fromFunction` are:
+- `Runnable.fromFunction` allows you to define separate logic for invoke vs stream.
+- `Runnable.fromFunction` allows you to access the invocation options.
+
+## Runnable.fromFunction
+
+In the following example, we use `Runnable.fromFunction` to log the output value of the previous `Runnable`. Note that we print different messages depending on whether the chain is invoked or streamed.
+
+```dart
+Runnable logOutput(String stepName) {
+  return Runnable.fromFunction(
+    invoke: (input, options) {
+      print('Output from step "$stepName":\n$input\n---');
+      return Future.value(input);
+    },
+    stream: (inputStream, options) {
+      return inputStream.map((input) {
+        print('Chunk from step "$stepName":\n$input\n---');
+        return input;
+      });
+    },
+  );
+}
+
+final promptTemplate = ChatPromptTemplate.fromTemplates(const [
+  (
+    ChatMessageType.system,
+    'Write out the following equation using algebraic symbols then solve it. 
' + 'Use the format:\nEQUATION:...\nSOLUTION:...\n', + ), + (ChatMessageType.human, '{equation_statement}'), +]); + +final chain = Runnable.getMapFromInput('equation_statement') + .pipe(logOutput('getMapFromInput')) + .pipe(promptTemplate) + .pipe(logOutput('promptTemplate')) + .pipe(ChatOpenAI(apiKey: openaiApiKey)) + .pipe(logOutput('chatModel')) + .pipe(const StringOutputParser()) + .pipe(logOutput('outputParser')); +``` + +When we invoke the chain, we get the following output: +```dart +await chain.invoke('x raised to the third plus seven equals 12'); +// Output from step "getMapFromInput": +// {equation_statement: x raised to the third plus seven equals 12} +// --- +// Output from step "promptTemplate": +// System: Write out the following equation using algebraic symbols then solve it. Use the format +// +// EQUATION:... +// SOLUTION:... +// +// Human: x raised to the third plus seven equals 12 +// --- +// Output from step "chatModel": +// ChatResult{ +// id: chatcmpl-9JcVxKcryIhASLnpSRMXkOE1t1R9G, +// output: AIChatMessage{ +// content: +// EQUATION: \( x^3 + 7 = 12 \) +// SOLUTION: +// Subtract 7 from both sides of the equation: +// \( x^3 = 5 \) +// +// Take the cube root of both sides: +// \( x = \sqrt[3]{5} \) +// +// Therefore, the solution is \( x = \sqrt[3]{5} \), +// }, +// finishReason: FinishReason.stop, +// metadata: { +// model: gpt-3.5-turbo-0125, +// created: 1714463309, +// system_fingerprint: fp_3b956da36b +// }, +// usage: LanguageModelUsage{ +// promptTokens: 47, +// responseTokens: 76, +// totalTokens: 123 +// }, +// streaming: false +// } +// --- +// Output from step "outputParser": +// EQUATION: \( x^3 + 7 = 12 \) +// +// SOLUTION: +// Subtract 7 from both sides of the equation: +// \( x^3 = 5 \) +// +// Take the cube root of both sides: +// \( x = \sqrt[3]{5} \) +// +// Therefore, the solution is \( x = \sqrt[3]{5} \) +``` + +When we stream the chain, we get the following output: +```dart +chain.stream('x raised to the third plus seven equals 12').listen((_){}); +// Chunk from step "getMapFromInput": +// {equation_statement: x raised to the third plus seven equals 12} +// --- +// Chunk from step "promptTemplate": +// System: Write out the following equation using algebraic symbols then solve it. Use the format: +// EQUATION:... +// SOLUTION:... 
+// +// Human: x raised to the third plus seven equals 12 +// --- +// Chunk from step "chatModel": +// ChatResult{ +// id: chatcmpl-9JcdKMy2yBlJhW2fxVu43Qn0gqofK, +// output: AIChatMessage{ +// content: E, +// }, +// finishReason: FinishReason.unspecified, +// metadata: { +// model: gpt-3.5-turbo-0125, +// created: 1714463766, +// system_fingerprint: fp_3b956da36b +// }, +// usage: LanguageModelUsage{}, +// streaming: true +// } +// --- +// Chunk from step "outputParser": +// E +// --- +// Chunk from step "chatModel": +// ChatResult{ +// id: chatcmpl-9JcdKMy2yBlJhW2fxVu43Qn0gqofK, +// output: AIChatMessage{ +// content: QU, +// }, +// finishReason: FinishReason.unspecified, +// metadata: { +// model: gpt-3.5-turbo-0125, +// created: 1714463766, +// system_fingerprint: fp_3b956da36b +// }, +// usage: LanguageModelUsage{}, +// streaming: true +// } +// --- +// Chunk from step "outputParser": +// QU +// --- +// Chunk from step "chatModel": +// ChatResult{ +// id: chatcmpl-9JcdKMy2yBlJhW2fxVu43Qn0gqofK, +// output: AIChatMessage{ +// content: ATION, +// }, +// finishReason: FinishReason.unspecified, +// metadata: { +// model: gpt-3.5-turbo-0125, +// created: 1714463766, +// system_fingerprint: fp_3b956da36b +// }, +// usage: LanguageModelUsage{}, +// streaming: true +// } +// --- +// Chunk from step "outputParser": +// ATION +// --- +// ... +``` diff --git a/docs/expression_language/primitives/map.md b/docs/expression_language/primitives/map.md new file mode 100644 index 00000000..07835781 --- /dev/null +++ b/docs/expression_language/primitives/map.md @@ -0,0 +1,109 @@ +# RunnableMap: Formatting inputs & concurrency + +The `RunnableMap` primitive is essentially a map whose values are runnables. It runs all of its values concurrently, and each value is called with the overall input of the `RunnableMap`. The final return value is a map with the results of each value under its appropriate key. + +It is useful for running operations concurrently, but can also be useful for manipulating the output of one `Runnable` to match the input format of the next `Runnable` in a sequence. + +Here the input to prompt is expected to be a map with keys “context” and “question”. The user input is just the question. So we need to get the context using our retriever and passthrough the user input under the “question” key. + +```dart +final vectorStore = MemoryVectorStore( + embeddings: OpenAIEmbeddings(apiKey: openaiApiKey), +); +await vectorStore.addDocuments( + documents: [ + Document(pageContent: 'LangChain was created by Harrison'), + Document(pageContent: 'David ported LangChain to Dart in LangChain.dart'), + ], +); +final retriever = vectorStore.asRetriever(); +final promptTemplate = ChatPromptTemplate.fromTemplates([ + (ChatMessageType.system, 'Answer the question based on only the following context:\n{context}'), + (ChatMessageType.human, '{question}'), +]); +final model = ChatOpenAI(apiKey: openaiApiKey); +const outputParser = StringOutputParser(); + +final retrievalChain = Runnable.fromMap({ + 'context': retriever, + 'question': Runnable.passthrough(), +}).pipe(promptTemplate).pipe(model).pipe(outputParser); + +final res = await retrievalChain.invoke('Who created LangChain.dart?'); +print(res); +// David created LangChain.dart. +``` + +## Using Runnable.getItemFromMap as shorthand + +Sometimes you need to extract one value from a map and pass it to the next `Runnable`. You can use `Runnable.getItemFromMap` to do this. It takes the input map and returns the value of the provided key. 
+```dart
+final vectorStore = MemoryVectorStore(
+  embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
+);
+await vectorStore.addDocuments(
+  documents: [
+    const Document(pageContent: 'LangChain was created by Harrison'),
+    const Document(
+      pageContent: 'David ported LangChain to Dart in LangChain.dart',
+    ),
+  ],
+);
+final retriever = vectorStore.asRetriever();
+final promptTemplate = ChatPromptTemplate.fromTemplates(const [
+  (
+    ChatMessageType.system,
+    'Answer the question based on only the following context:\n{context}\n'
+    'Answer in the following language: {language}',
+  ),
+  (ChatMessageType.human, '{question}'),
+]);
+final model = ChatOpenAI(apiKey: openaiApiKey);
+const outputParser = StringOutputParser();
+
+final retrievalChain = Runnable.fromMap<Map<String, dynamic>>({
+  'context': Runnable.getItemFromMap('question').pipe(retriever),
+  'question': Runnable.getItemFromMap('question'),
+  'language': Runnable.getItemFromMap('language'),
+}).pipe(promptTemplate).pipe(model).pipe(outputParser);
+
+final res = await retrievalChain.invoke({
+  'question': 'Who created LangChain.dart?',
+  'language': 'Spanish',
+});
+print(res);
+// David portó LangChain a Dart en LangChain.dart
+```
+
+## Running steps concurrently
+
+`RunnableMap` makes it easy to execute multiple `Runnables` concurrently and to return the output of these Runnables as a map.
+
+```dart
+final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
+final model = ChatOpenAI(apiKey: openaiApiKey);
+const outputParser = StringOutputParser();
+
+final jokeChain = PromptTemplate.fromTemplate('tell me a joke about {topic}')
+    .pipe(model)
+    .pipe(outputParser);
+final poemChain =
+    PromptTemplate.fromTemplate('write a 2-line poem about {topic}')
+        .pipe(model)
+        .pipe(outputParser);
+
+final mapChain = Runnable.fromMap<Map<String, dynamic>>({
+  'joke': jokeChain,
+  'poem': poemChain,
+});
+
+final res = await mapChain.invoke({
+  'topic': 'bear',
+});
+print(res);
+// {joke: Why did the bear bring a flashlight to the party? Because he wanted to be the "light" of the party!,
+// poem: In the forest's hush, the bear prowls wide, A silent guardian, a force of nature's pride.}
+```
+
+Each branch of the `RunnableMap` is still run on the same isolate, but they are run concurrently. In the example above, the two requests to the OpenAI API are made concurrently, without waiting for the first to finish before starting the second.
diff --git a/docs/expression_language/primitives/mapper.md b/docs/expression_language/primitives/mapper.md
new file mode 100644
index 00000000..2fb57295
--- /dev/null
+++ b/docs/expression_language/primitives/mapper.md
@@ -0,0 +1,144 @@
+# Mapper: Mapping input values
+
+It is common to need to map the output value of a previous runnable to a new value that conforms to the input requirements of the next runnable. This is where `Runnable.mapInput` comes in.
+
+## Runnable.mapInput
+
+`Runnable.mapInput` allows you to define a function that maps the input value to a new value.
+
+In the following example, we retrieve a list of `Document` objects from our vector store, and we want to combine them into a single string to feed it into our prompt. To do this, we use `Runnable.mapInput` to implement the combination logic. 
+```dart
+final vectorStore = MemoryVectorStore(
+  embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
+);
+await vectorStore.addDocuments(
+  documents: [
+    Document(pageContent: 'LangChain was created by Harrison'),
+    Document(pageContent: 'David ported LangChain to Dart in LangChain.dart'),
+  ],
+);
+
+final retriever = vectorStore.asRetriever();
+final setupAndRetrieval = Runnable.fromMap({
+  'context': retriever.pipe(
+    Runnable.mapInput((docs) => docs.map((d) => d.pageContent).join('\n')),
+  ),
+  'question': Runnable.passthrough(),
+});
+
+final promptTemplate = ChatPromptTemplate.fromTemplates(const [
+  (ChatMessageType.system, 'Answer the question based on only the following context:\n{context}'),
+  (ChatMessageType.human, '{question}'),
+]);
+
+final model = ChatOpenAI(apiKey: openaiApiKey);
+const outputParser = StringOutputParser();
+final chain = setupAndRetrieval
+    .pipe(promptTemplate)
+    .pipe(model)
+    .pipe(outputParser);
+
+final res = await chain.invoke('Who created LangChain.dart?');
+print(res);
+// David created LangChain.dart
+```
+
+## Runnable.mapInputStream
+
+By default, when running a chain using `stream` instead of `invoke`, `Runnable.mapInput` will be called for every item in the input stream. If you need more control over the input stream, you can use `Runnable.mapInputStream` instead, which takes the input stream as a parameter and returns a new stream.
+
+In the following example, the model streams the output in chunks and the output parser processes each of them individually. However, we want our chain to output only the last chunk. We can use `Runnable.mapInputStream` to get the last chunk from the input stream.
+
+```dart
+final model = ChatOpenAI(
+  apiKey: openAiApiKey,
+  defaultOptions: ChatOpenAIOptions(
+    responseFormat: ChatOpenAIResponseFormat(
+      type: ChatOpenAIResponseFormatType.jsonObject,
+    ),
+  ),
+);
+final parser = JsonOutputParser();
+final mapper = Runnable.mapInputStream((Stream<Map<String, dynamic>> inputStream) async* {
+  yield await inputStream.last;
+});
+
+final chain = model.pipe(parser).pipe(mapper);
+
+final stream = chain.stream(
+  PromptValue.string(
+    'Output a list of the countries france, spain and japan and their '
+    'populations in JSON format. Use a dict with an outer key of '
+    '"countries" which contains a list of countries. '
+    'Each country should have the key "name" and "population"',
+  ),
+);
+await stream.forEach((final chunk) => print('$chunk|'));
+// {countries: [{name: France, population: 65273511}, {name: Spain, population: 46754778}, {name: Japan, population: 126476461}]}|
+```
+
+> Note: for more complex use-cases where you want to define separate logic for when the chain is run using `invoke` or `stream`, you can use `Runnable.fromFunction`.
+
+## Runnable.getItemFromMap
+
+Sometimes the previous runnable returns a map, and you want to get a value from it to feed it to the next runnable. You can use `Runnable.getItemFromMap` to get a value from an input map.
+
+In the following example, we want to feed the question to our retriever, but the input is a map with several other values. We can use `Runnable.getItemFromMap` to get the question from the input map, as well as to propagate the other values to the next runnable. 
+```dart
+final retrievalChain = Runnable.fromMap<Map<String, dynamic>>({
+  'context': Runnable.getItemFromMap('question').pipe(retriever),
+  'question': Runnable.getItemFromMap('question'),
+  'language': Runnable.getItemFromMap('language'),
+}).pipe(promptTemplate).pipe(model).pipe(outputParser);
+
+final res = await retrievalChain.invoke({
+  'question': 'Who created LangChain.dart?',
+  'language': 'Spanish',
+});
+print(res);
+// David portó LangChain a Dart en LangChain.dart
+```
+
+> Note: this is equivalent to
+> `Runnable.mapInput<Map<String, dynamic>, RunOutput>((input) => input[key])`
+
+## Runnable.getMapFromInput
+
+Sometimes the previous runnable returns a single item, but the next runnable expects a map. You can use `Runnable.getMapFromInput` to format the input for the next runnable.
+
+In the following example, we want our chain input type to be a String, but the prompt template expects a map. We can use `Runnable.getMapFromInput` to format the input for the prompt template.
+
+```dart
+final model = ChatOpenAI(apiKey: openaiApiKey);
+const outputParser = StringOutputParser();
+
+final promptTemplate = ChatPromptTemplate.fromTemplates([
+  (
+    ChatMessageType.system,
+    'Write out the following equation using algebraic symbols then solve it. '
+    'Use the format\n\nEQUATION:...\nSOLUTION:...\n\n',
+  ),
+  (ChatMessageType.human, '{equation_statement}'),
+]);
+
+final chain = Runnable.getMapFromInput('equation_statement')
+    .pipe(promptTemplate)
+    .pipe(model)
+    .pipe(outputParser);
+
+final res = await chain.invoke('x raised to the third plus seven equals 12');
+print(res);
+// EQUATION: \(x^3 + 7 = 12\)
+//
+// SOLUTION:
+// Subtract 7 from both sides:
+// \(x^3 = 5\)
+//
+// Take the cube root of both sides:
+// \(x = \sqrt[3]{5}\)
+```
+
+> Note: this is equivalent to
+> `Runnable.mapInput<RunInput, Map<String, dynamic>>((input) => {key: input})`
diff --git a/docs/expression_language/primitives/passthrough.md b/docs/expression_language/primitives/passthrough.md
new file mode 100644
index 00000000..55f3a6af
--- /dev/null
+++ b/docs/expression_language/primitives/passthrough.md
@@ -0,0 +1,54 @@
+# Passthrough: Passing inputs through
+
+`RunnablePassthrough` on its own allows you to pass inputs unchanged. This is typically used in conjunction with `RunnableMap` to pass data through to a new key in the map.
+
+See the example below:
+
+```dart
+final runnable = Runnable.fromMap<Map<String, dynamic>>({
+  'passed': Runnable.passthrough(),
+  'modified': Runnable.mapInput((input) => (input['num'] as int) + 1),
+});
+
+final res = await runnable.invoke({'num': 1});
+print(res);
+// {passed: {num: 1}, modified: 2}
+```
+
+As seen above, the `passed` key was called with `RunnablePassthrough`, so it simply passed on `{'num': 1}`.
+
+We also set a second key in the map, `modified`. This uses `Runnable.mapInput` to add 1 to `num`, which results in the `modified` key having a value of 2.
+
+## Retrieval Example
+
+In the example below, we see a use case where we use `RunnablePassthrough` along with `RunnableMap`. 
+ +```dart +final vectorStore = MemoryVectorStore( + embeddings: OpenAIEmbeddings(apiKey: openaiApiKey), +); +await vectorStore.addDocuments( + documents: [ + Document(pageContent: 'LangChain was created by Harrison'), + Document(pageContent: 'David ported LangChain to Dart in LangChain.dart'), + ], +); +final retriever = vectorStore.asRetriever(); +final promptTemplate = ChatPromptTemplate.fromTemplates([ + (ChatMessageType.system, 'Answer the question based on only the following context:\n{context}'), + (ChatMessageType.human, '{question}'), +]); +final model = ChatOpenAI(apiKey: openaiApiKey); +const outputParser = StringOutputParser(); + +final retrievalChain = Runnable.fromMap({ + 'context': retriever, + 'question': Runnable.passthrough(), +}).pipe(promptTemplate).pipe(model).pipe(outputParser); + +final res = await retrievalChain.invoke('Who created LangChain.dart?'); +print(res); +// David created LangChain.dart. +``` + +Here the input to prompt is expected to be a map with keys “context” and “question”. The user input is just the question. So we need to get the context using our retriever and passthrough the user input under the “question” key. In this case, the RunnablePassthrough allows us to pass on the user’s question to the prompt and model. diff --git a/docs/expression_language/cookbook/routing.md b/docs/expression_language/primitives/router.md similarity index 100% rename from docs/expression_language/cookbook/routing.md rename to docs/expression_language/primitives/router.md diff --git a/docs/expression_language/primitives/sequence.md b/docs/expression_language/primitives/sequence.md new file mode 100644 index 00000000..15728ce6 --- /dev/null +++ b/docs/expression_language/primitives/sequence.md @@ -0,0 +1,77 @@ +# RunnableSequence: Chaining runnables + +One key advantage of the `Runnable` interface is that any two runnables can be “chained” together into sequences. The output of the previous runnable’s `.invoke()` call is passed as input to the next runnable. This can be done using the `.pipe()` method (or the `|` operator, which is a convenient shorthand for `.pipe()`). The resulting `RunnableSequence` is itself a runnable, which means it can be invoked, streamed, or piped just like any other runnable. + +> Note: when using the `|` operator, the output type of the last runnable is always resolved to `Object` because of [Dart limitations](https://github.com/dart-lang/language/issues/1044). If you need to preserve the output type, use the `.pipe()` method instead. + +## The pipe operator + +To show off how this works, let’s go through an example. We’ll walk through a common pattern in LangChain: using a prompt template to format input into a chat model, and finally converting the chat message output into a string with an output parser. + +```dart +final promptTemplate = ChatPromptTemplate.fromTemplate( + 'Tell me a joke about {topic}', +); +final model = ChatOpenAI(apiKey: openaiApiKey); +const outputParser = StringOutputParser(); + +final chain = promptTemplate.pipe(model).pipe(outputParser); +``` + +Prompts and models are both runnable, and the output type from the prompt call is the same as the input type of the chat model, so we can chain them together. We can then invoke the resulting sequence like any other runnable: + +```dart +final res = await chain.invoke({'topic': 'bears'}); +print(res); +// Why don't bears wear socks? +// Because they have bear feet! 
+```
+
+## Formatting inputs & output
+
+We can even combine this chain with more runnables to create another chain. This may involve some input/output formatting using other types of runnables, depending on the required inputs and outputs of the chain components.
+
+For example, let's say we wanted to compose the joke generating chain with another chain that evaluates whether the generated joke was funny.
+
+We would need to be careful with how we format the input into the next chain. In the example below, we use a `RunnableMap`, which runs all of its values concurrently and returns a map with the results, which can then be passed to the prompt template.
+
+```dart
+final analysisPrompt = ChatPromptTemplate.fromTemplate(
+  'is this a funny joke? {joke}',
+);
+final composedChain = Runnable.fromMap({
+  'joke': chain,
+}).pipe(analysisPrompt).pipe(model).pipe(outputParser);
+
+final res1 = await composedChain.invoke({'topic': 'bears'});
+print(res1);
+// Some people may find this joke funny, especially if they enjoy puns or wordplay...
+```
+
+Instead of using `Runnable.fromMap`, we can use the convenience method `Runnable.getMapFromInput`, which automatically creates a `RunnableMap` that places the input value into the map under the specified key.
+
+```dart
+final composedChain2 = chain
+    .pipe(Runnable.getMapFromInput('joke'))
+    .pipe(analysisPrompt)
+    .pipe(model)
+    .pipe(outputParser);
+```
+
+Another option is to use `Runnable.mapInput`, which allows you to transform the input value using the provided function.
+
+```dart
+final composedChain3 = chain
+    .pipe(Runnable.mapInput((joke) => {'joke': joke}))
+    .pipe(analysisPrompt)
+    .pipe(model)
+    .pipe(outputParser);
+```
+
+## Runnable.fromList
+
+You can also create a `RunnableSequence` from a list of runnables using `Runnable.fromList`.
+
+```dart
+final chain = Runnable.fromList([promptTemplate, chatModel]);
+```
diff --git a/docs/expression_language/streaming.md b/docs/expression_language/streaming.md
new file mode 100644
index 00000000..8b4b720f
--- /dev/null
+++ b/docs/expression_language/streaming.md
@@ -0,0 +1,259 @@
+# Streaming With LangChain
+
+Streaming is critical in making applications based on LLMs feel responsive to end-users.
+
+Important LangChain primitives like LLMs, parsers, prompts, retrievers, and agents implement the LangChain [Runnable Interface](/expression_language/interface.md).
+
+This guide will show you how to use `.stream()` to stream the final output of the chain.
+
+## Using Stream
+
+All `Runnable` objects implement a method called `stream`.
+
+This method is designed to stream the final output in chunks, yielding each chunk as soon as it is available.
+
+Streaming is only possible if all steps in the program know how to process an **input stream**; i.e., process an input chunk one at a time, and yield a corresponding output chunk.
+
+The complexity of this processing can vary, from straightforward tasks like emitting tokens produced by an LLM, to more challenging ones like streaming parts of JSON results before the entire JSON is complete.
+
+The best place to start exploring streaming is with the single most important component in LLM apps – the models themselves!
+
+## LLMs and Chat Models
+
+Large language models and their chat variants are the primary bottleneck in LLM-based apps.
+
+Large language models can take **several seconds** to generate a complete response to a query. This is far slower than the **~200-300 ms** threshold at which an application feels responsive to an end user. 
+The key strategy to make the application feel more responsive is to show intermediate progress; e.g., to stream the output from the model token by token.
+
+```dart
+final model = ChatOpenAI(apiKey: openAiApiKey);
+
+final stream = model.stream(PromptValue.string('Hello! Tell me about yourself.'));
+final chunks = <ChatResult>[];
+await for (final chunk in stream) {
+  chunks.add(chunk);
+  stdout.write('${chunk.output.content}|');
+}
+// Hello|!| I| am| a| language| model| AI| created| by| Open|AI|,|...
+```
+
+Let's have a look at one of the raw chunks:
+
+```dart
+print(chunks.first);
+// ChatResult{
+//   id: chatcmpl-9IHQvyTl9fyVmF7P6zamGaX1XAN6d,
+//   output: AIChatMessage{
+//     content: Hello,
+//   },
+//   finishReason: FinishReason.unspecified,
+//   metadata: {
+//     model: gpt-3.5-turbo-0125,
+//     created: 1714143945,
+//     system_fingerprint: fp_3b956da36b
+//   },
+//   streaming: true
+// }
+```
+
+We got back a `ChatResult` instance as usual, but containing only a part of the full response (`Hello`).
+
+We can identify results that are streamed by checking the `streaming` field. The result objects are additive by design – one can simply add them up using the `.concat()` method to get the state of the response so far!
+
+```dart
+final result = chunks.sublist(0, 6).reduce((prev, next) => prev.concat(next));
+print(result);
+// ChatResult{
+//   id: chatcmpl-9IHQvyTl9fyVmF7P6zamGaX1XAN6d,
+//   output: AIChatMessage{
+//     content: Hello! I am a language model
+//   },
+//   finishReason: FinishReason.unspecified,
+//   metadata: {
+//     model: gpt-3.5-turbo-0125,
+//     created: 1714143945,
+//     system_fingerprint: fp_3b956da36b
+//   },
+//   streaming: true
+// }
+```
+
+## Chains
+
+Virtually all LLM applications involve more steps than just a call to a language model.
+
+Let's build a simple chain using LangChain Expression Language (LCEL) that combines a prompt, a model and a parser, and verify that streaming works.
+
+We will use `StringOutputParser` to parse the output from the model. This is a simple parser that extracts the string output from the result returned by the model.
+
+> LCEL is a declarative way to specify a "program" by chaining together different LangChain primitives. Chains created using LCEL benefit from an automatic implementation of stream, allowing streaming of the final output. In fact, chains created with LCEL implement the entire standard Runnable interface.
+
+```dart
+final model = ChatOpenAI(apiKey: openAiApiKey);
+final prompt = ChatPromptTemplate.fromTemplate('Tell me a joke about {topic}');
+const parser = StringOutputParser();
+
+final chain = prompt.pipe(model).pipe(parser);
+
+final stream = chain.stream({'topic': 'parrot'});
+await stream.forEach((final chunk) => stdout.write('$chunk|'));
+// |Why| don|'t| you| ever| play| hide| and| seek| with| a| par|rot|?|
+// |Because| they| always| squ|awk| when| they| find| you|!||
+```
+
+You might notice above that the parser actually doesn't block the streaming output from the model, and instead processes each chunk individually. Many of the LCEL primitives also support this kind of transform-style passthrough streaming, which can be very convenient when constructing apps.
+
+> You do not have to use the LangChain Expression Language to use LangChain and can instead rely on a standard imperative programming approach by calling invoke, batch or stream on each component individually, assigning the results to variables and then using them downstream as you see fit.
+>
+> If that works for your needs, then that's fine by us 👌! 
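+
+For instance, the chain above can be written imperatively, calling each component yourself. A minimal sketch of the same prompt + model + parser steps (using the `prompt`, `model` and `parser` instances defined above):
+
+```dart
+// Format the prompt, call the model, and parse the output step by step.
+final promptValue = await prompt.invoke({'topic': 'parrot'});
+final chatResult = await model.invoke(promptValue);
+final output = await parser.invoke(chatResult);
+print(output);
+```
+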
+
+## Working with Input Streams
+
+What if you wanted to stream JSON from the output as it was being generated?
+
+If you were to rely on `json.decode` to parse the partial JSON, the parsing would fail, as the partial JSON wouldn’t be valid JSON.
+
+You’d likely be at a complete loss about what to do and claim that it wasn’t possible to stream JSON.
+
+Well, it turns out there is a way to do it – the parser needs to operate on the input stream, and attempt to “auto-complete” the partial JSON into a valid state.
+
+Let’s see such a parser in action to understand what this means.
+
+```dart
+final model = ChatOpenAI(
+  apiKey: openAiApiKey,
+  defaultOptions: const ChatOpenAIOptions(
+    responseFormat: ChatOpenAIResponseFormat(
+      type: ChatOpenAIResponseFormatType.jsonObject,
+    ),
+  ),
+);
+final parser = JsonOutputParser<ChatResult>();
+
+final chain = model.pipe(parser);
+
+final stream = chain.stream(
+  PromptValue.string(
+    'Output a list of the countries france, spain and japan and their '
+    'populations in JSON format. Use a dict with an outer key of '
+    '"countries" which contains a list of countries. '
+    'Each country should have the key "name" and "population"',
+  ),
+);
+await stream.forEach((final chunk) => print('$chunk|'));
+// {}|
+// {countries: []}|
+// {countries: [{}]}|
+// {countries: [{name: }]}|
+// {countries: [{name: France}]}|
+// {countries: [{name: France, population: 670}]}|
+// {countries: [{name: France, population: 670760}]}|
+// {countries: [{name: France, population: 67076000}]}|
+// {countries: [{name: France, population: 67076000}, {}]}|
+// {countries: [{name: France, population: 67076000}, {name: }]}|
+// {countries: [{name: France, population: 67076000}, {name: Spain}]}|
+// {countries: [{name: France, population: 67076000}, {name: Spain, population: 467}]}|
+// {countries: [{name: France, population: 67076000}, {name: Spain, population: 467237}]}|
+// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}]}|
+// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {}]}|
+// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {name: }]}|
+// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {name: Japan}]}|
+// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {name: Japan, population: 126}]}|
+// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {name: Japan, population: 126476}]}|
+// {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {name: Japan, population: 126476461}]}|
+```
+
+### Transforming Streams
+
+Now, instead of returning the complete JSON object, we want to extract the country names from the JSON as they are being generated. We can use `Runnable.mapInputStream` to transform the stream.
+
+```dart
+final mapper = Runnable.mapInputStream((Stream<Map<String, dynamic>> inputStream) {
+  return inputStream.map((input) {
+    final countries = (input['countries'] as List?)?.cast<Map<String, dynamic>>() ?? [];
+    final countryNames = countries
+        .map((country) => country['name'] as String?)
+        .where((c) => c != null && c.isNotEmpty);
+    return countryNames.join(', ');
+  }).distinct();
+});
+
+final chain = model.pipe(parser).pipe(mapper);
+
+final stream = chain.stream(
+  PromptValue.string(
+    'Output a list of the countries france, spain and japan and their '
+    'populations in JSON format. Use a dict with an outer key of '
+    '"countries" which contains a list of countries. '
+    'Each country should have the key "name" and "population"',
+  ),
+);
+await stream.forEach(print);
+// France
+// France, Spain
+// France, Spain, Japan
+```
+
+## Non-streaming components
+
+The following runnables cannot process individual input chunks and instead aggregate the streaming input from the previous step into a single value before processing it:
+- `PromptTemplate`
+- `ChatPromptTemplate`
+- `LLM`
+- `ChatModel`
+- `Retriever`
+- `Tool`
+- `RunnableFunction`
+- `RunnableRouter`
+
+Let’s see what happens when we try to stream them. 🤨
+
+```dart
+final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
+
+final vectorStore = MemoryVectorStore(
+  embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
+);
+await vectorStore.addDocuments(
+  documents: const [
+    Document(pageContent: 'LangChain was created by Harrison'),
+    Document(
+      pageContent: 'David ported LangChain to Dart in LangChain.dart',
+    ),
+  ],
+);
+final retriever = vectorStore.asRetriever();
+
+await retriever.stream('Who created LangChain.dart?').forEach(print);
+// [Document{pageContent: David ported LangChain to Dart in LangChain.dart},
+//  Document{pageContent: LangChain was created by Harrison}]
+```
+
+The stream just yielded the final result from that component.
+
+This is OK 🥹! Not all components have to implement streaming – in some cases streaming is either unnecessary, difficult or just doesn’t make sense.
+
+An LCEL chain constructed using non-streaming components will still be able to stream in many cases, with streaming of partial output starting after the last non-streaming step in the chain.
+
+```dart
+final promptTemplate = ChatPromptTemplate.fromTemplates(const [
+  (
+    ChatMessageType.system,
+    'Answer the question based on only the following context:\n{context}',
+  ),
+  (ChatMessageType.human, '{question}'),
+]);
+final model = ChatOpenAI(apiKey: openaiApiKey);
+const outputParser = StringOutputParser();
+
+final retrievalChain = Runnable.fromMap({
+  'context': retriever,
+  'question': Runnable.passthrough(),
+}).pipe(promptTemplate).pipe(model).pipe(outputParser);
+
+await retrievalChain
+    .stream('Who created LangChain.dart?')
+    .forEach((chunk) => stdout.write('$chunk|'));
+// |David| created| Lang|Chain|.dart|.||
+```
diff --git a/examples/docs_examples/bin/expression_language/cookbook/streaming.dart b/examples/docs_examples/bin/expression_language/cookbook/streaming.dart
new file mode 100644
index 00000000..7af0bb43
--- /dev/null
+++ b/examples/docs_examples/bin/expression_language/cookbook/streaming.dart
@@ -0,0 +1,201 @@
+// ignore_for_file: avoid_print
+import 'dart:io';
+
+import 'package:langchain/langchain.dart';
+import 'package:langchain_openai/langchain_openai.dart';
+
+void main(final List<String> arguments) async {
+  await _languageModels();
+  await _chains();
+  await _inputStreams();
+  await _inputStreamMapper();
+  await _nonStreamingComponents();
+}
+
+Future<void> _languageModels() async {
+  final openAiApiKey = Platform.environment['OPENAI_API_KEY'];
+  final model = ChatOpenAI(apiKey: openAiApiKey);
+
+  final stream =
+      model.stream(PromptValue.string('Hello! Tell me about yourself.'));
+  final chunks = <ChatResult>[];
+  await for (final chunk in stream) {
+    chunks.add(chunk);
+    stdout.write('${chunk.output.content}|');
+  }
+  // Hello|!| I| am| a| language| model| AI| created| by| Open|AI|,|...
+
+  print(chunks.first);
+  // ChatResult{
+  //   id: chatcmpl-9IHQvyTl9fyVmF7P6zamGaX1XAN6d,
+  //   output: AIChatMessage{
+  //     content: Hello,
+  //   },
+  //   finishReason: FinishReason.unspecified,
+  //   metadata: {
+  //     model: gpt-3.5-turbo-0125,
+  //     created: 1714143945,
+  //     system_fingerprint: fp_3b956da36b
+  //   },
+  //   streaming: true
+  // }
+
+  final result = chunks.sublist(0, 6).reduce((prev, next) => prev.concat(next));
+  print(result);
+  // ChatResult{
+  //   id: chatcmpl-9IHQvyTl9fyVmF7P6zamGaX1XAN6d,
+  //   output: AIChatMessage{
+  //     content: Hello! I am a language model
+  //   },
+  //   finishReason: FinishReason.unspecified,
+  //   metadata: {
+  //     model: gpt-3.5-turbo-0125,
+  //     created: 1714143945,
+  //     system_fingerprint: fp_3b956da36b
+  //   },
+  //   streaming: true
+  // }
+}
+
+Future<void> _chains() async {
+  final openAiApiKey = Platform.environment['OPENAI_API_KEY'];
+
+  final model = ChatOpenAI(apiKey: openAiApiKey);
+  final prompt =
+      ChatPromptTemplate.fromTemplate('Tell me a joke about {topic}');
+  const parser = StringOutputParser();
+
+  final chain = prompt.pipe(model).pipe(parser);
+
+  final stream = chain.stream({'topic': 'parrot'});
+  await stream.forEach((final chunk) => stdout.write('$chunk|'));
+  // |Why| don|'t| you| ever| play| hide| and| seek| with| a| par|rot|?|
+  // |Because| they| always| squ|awk| when| they| find| you|!||
+}
+
+Future<void> _inputStreams() async {
+  final openAiApiKey = Platform.environment['OPENAI_API_KEY'];
+
+  final model = ChatOpenAI(
+    apiKey: openAiApiKey,
+    defaultOptions: const ChatOpenAIOptions(
+      responseFormat: ChatOpenAIResponseFormat(
+        type: ChatOpenAIResponseFormatType.jsonObject,
+      ),
+    ),
+  );
+  final parser = JsonOutputParser<ChatResult>();
+
+  final chain = model.pipe(parser);
+
+  final stream = chain.stream(
+    PromptValue.string(
+      'Output a list of the countries france, spain and japan and their '
+      'populations in JSON format. Use a dict with an outer key of '
+      '"countries" which contains a list of countries. '
+      'Each country should have the key "name" and "population"',
+    ),
+  );
+  await stream.forEach((final chunk) => print('$chunk|'));
+  // {}|
+  // {countries: []}|
+  // {countries: [{}]}|
+  // {countries: [{name: }]}|
+  // {countries: [{name: France}]}|
+  // {countries: [{name: France, population: 670}]}|
+  // {countries: [{name: France, population: 670760}]}|
+  // {countries: [{name: France, population: 67076000}]}|
+  // {countries: [{name: France, population: 67076000}, {}]}|
+  // {countries: [{name: France, population: 67076000}, {name: }]}|
+  // {countries: [{name: France, population: 67076000}, {name: Spain}]}|
+  // {countries: [{name: France, population: 67076000}, {name: Spain, population: 467}]}|
+  // {countries: [{name: France, population: 67076000}, {name: Spain, population: 467237}]}|
+  // {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}]}|
+  // {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {}]}|
+  // {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {name: }]}|
+  // {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {name: Japan}]}|
+  // {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {name: Japan, population: 126}]}|
+  // {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {name: Japan, population: 126476}]}|
+  // {countries: [{name: France, population: 67076000}, {name: Spain, population: 46723749}, {name: Japan, population: 126476461}]}|
+}
+
+Future<void> _inputStreamMapper() async {
+  final openAiApiKey = Platform.environment['OPENAI_API_KEY'];
+
+  final model = ChatOpenAI(
+    apiKey: openAiApiKey,
+    defaultOptions: const ChatOpenAIOptions(
+      responseFormat: ChatOpenAIResponseFormat(
+        type: ChatOpenAIResponseFormatType.jsonObject,
+      ),
+    ),
+  );
+  final parser = JsonOutputParser<ChatResult>();
+  final mapper =
+      Runnable.mapInputStream((Stream<Map<String, dynamic>> inputStream) {
+    return inputStream.map((input) {
+      final countries =
+          (input['countries'] as List?)?.cast<Map<String, dynamic>>() ?? [];
+      final countryNames = countries
+          .map((country) => country['name'] as String?)
+          .where((c) => c != null && c.isNotEmpty);
+      return countryNames.join(', ');
+    }).distinct();
+  });
+
+  final chain = model.pipe(parser).pipe(mapper);
+
+  final stream = chain.stream(
+    PromptValue.string(
+      'Output a list of the countries france, spain and japan and their '
+      'populations in JSON format. Use a dict with an outer key of '
+      '"countries" which contains a list of countries. '
+      'Each country should have the key "name" and "population"',
+    ),
+  );
+  await stream.forEach(print);
+  // France
+  // France, Spain
+  // France, Spain, Japan
+}
+
+Future<void> _nonStreamingComponents() async {
+  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
+
+  final vectorStore = MemoryVectorStore(
+    embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
+  );
+  await vectorStore.addDocuments(
+    documents: const [
+      Document(pageContent: 'LangChain was created by Harrison'),
+      Document(
+        pageContent: 'David ported LangChain to Dart in LangChain.dart',
+      ),
+    ],
+  );
+  final retriever = vectorStore.asRetriever();
+
+  await retriever.stream('Who created LangChain.dart?').forEach(print);
+  // [Document{pageContent: David ported LangChain to Dart in LangChain.dart},
+  //  Document{pageContent: LangChain was created by Harrison}]
+
+  final promptTemplate = ChatPromptTemplate.fromTemplates(const [
+    (
+      ChatMessageType.system,
+      'Answer the question based on only the following context:\n{context}',
+    ),
+    (ChatMessageType.human, '{question}'),
+  ]);
+  final model = ChatOpenAI(apiKey: openaiApiKey);
+  const outputParser = StringOutputParser();
+
+  final retrievalChain = Runnable.fromMap({
+    'context': retriever,
+    'question': Runnable.passthrough(),
+  }).pipe(promptTemplate).pipe(model).pipe(outputParser);
+
+  await retrievalChain
+      .stream('Who created LangChain.dart?')
+      .forEach((chunk) => stdout.write('$chunk|'));
+  // |David| created| Lang|Chain|.dart|.||
+}
diff --git a/examples/docs_examples/bin/expression_language/get_started.dart b/examples/docs_examples/bin/expression_language/get_started.dart
index e145fc31..5ccc2505 100644
--- a/examples/docs_examples/bin/expression_language/get_started.dart
+++ b/examples/docs_examples/bin/expression_language/get_started.dart
@@ -62,6 +62,39 @@ Future<void> _promptModelOutputParser() async {
   print(parsed);
   // Why did the ice cream go to therapy?
   // Because it had a rocky road!
+
+  final input = {'topic': 'ice cream'};
+
+  final res1 = await promptTemplate.invoke(input);
+  print(res1.toChatMessages());
+  // [HumanChatMessage{
+  //   content: ChatMessageContentText{
+  //     text: Tell me a joke about ice cream,
+  //   },
+  // }]
+
+  final res2 = await promptTemplate.pipe(model).invoke(input);
+  print(res2);
+  // ChatResult{
+  //   id: chatcmpl-9J37Tnjm1dGUXqXBF98k7jfexATZW,
+  //   output: AIChatMessage{
+  //     content: Why did the ice cream cone go to therapy? Because it had too many sprinkles of emotional issues!,
+  //   },
+  //   finishReason: FinishReason.stop,
+  //   metadata: {
+  //     model: gpt-3.5-turbo-0125,
+  //     created: 1714327251,
+  //     system_fingerprint: fp_3b956da36b
+  //   },
+  //   usage: LanguageModelUsage{
+  //     promptTokens: 14,
+  //     promptBillableCharacters: null,
+  //     responseTokens: 21,
+  //     responseBillableCharacters: null,
+  //     totalTokens: 35
+  //   },
+  //   streaming: false
+  // }
 }
 
 Future<void> _ragSearch() async {
@@ -72,15 +105,24 @@
     embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
   );
   await vectorStore.addDocuments(
-    documents: [
-      const Document(pageContent: 'LangChain was created by Harrison'),
-      const Document(
+    documents: const [
+      Document(pageContent: 'LangChain was created by Harrison'),
+      Document(
         pageContent: 'David ported LangChain to Dart in LangChain.dart',
       ),
     ],
   );
 
-  // 2. Construct a RAG prompt template
+  // 2. Define the retrieval chain
+  final retriever = vectorStore.asRetriever();
+  final setupAndRetrieval = Runnable.fromMap({
+    'context': retriever.pipe(
+      Runnable.mapInput((docs) => docs.map((d) => d.pageContent).join('\n')),
+    ),
+    'question': Runnable.passthrough(),
+  });
+
+  // 3. Construct a RAG prompt template
   final promptTemplate = ChatPromptTemplate.fromTemplates(const [
     (
       ChatMessageType.system,
@@ -89,20 +131,11 @@ Future<void> _ragSearch() async {
     (ChatMessageType.human, '{question}'),
   ]);
 
-  // 3. Create a Runnable that combines the retrieved documents into a single string
-  final docCombiner =
-      Runnable.fromFunction<List<Document>, String>((final docs, final _) {
-    return docs.map((final d) => d.pageContent).join('\n');
-  });
-
-  // 4. Define the RAG pipeline
-  final chain = Runnable.fromMap({
-    'context': vectorStore.asRetriever().pipe(docCombiner),
-    'question': Runnable.passthrough(),
-  })
-      .pipe(promptTemplate)
-      .pipe(ChatOpenAI(apiKey: openaiApiKey))
-      .pipe(const StringOutputParser());
+  // 4. Define the final chain
+  final model = ChatOpenAI(apiKey: openaiApiKey);
+  const outputParser = StringOutputParser();
+  final chain =
+      setupAndRetrieval.pipe(promptTemplate).pipe(model).pipe(outputParser);
 
   // 5. Run the pipeline
   final res = await chain.invoke('Who created LangChain.dart?');
diff --git a/examples/docs_examples/bin/expression_language/interface.dart b/examples/docs_examples/bin/expression_language/interface.dart
index 529c10cf..f678f18a 100644
--- a/examples/docs_examples/bin/expression_language/interface.dart
+++ b/examples/docs_examples/bin/expression_language/interface.dart
@@ -2,26 +2,13 @@
 import 'dart:io';
 
 import 'package:langchain/langchain.dart';
-import 'package:langchain_chroma/langchain_chroma.dart';
-import 'package:langchain_community/langchain_community.dart';
 import 'package:langchain_openai/langchain_openai.dart';
 
 void main(final List<String> arguments) async {
-  // Runnable interface
   await _runnableInterfaceInvoke();
   await _runnableInterfaceStream();
   await _runnableInterfaceBatch();
   await _runnableInterfaceBatchOptions();
-
-  // Runnable types
-  await _runnableTypesRunnableSequence();
-  await _runnableTypesRunnableMap();
-  await _runnableTypesRunnableBinding();
-  await _runnableTypesRunnableFunction();
-  await _runnableTypesRunnablePassthrough();
-  await _runnableTypesRunnableItemFromMap();
-  await _runnableTypesRunnableMapFromInput();
-  await _runnableTypesRunnableMapInput();
 }
 
 Future<void> _runnableInterfaceInvoke() async {
@@ -32,7 +19,7 @@
     'Tell me a joke about {topic}',
   );
 
-  final chain = promptTemplate | model | const StringOutputParser();
+  final chain = promptTemplate.pipe(model).pipe(const StringOutputParser());
 
   final res = await chain.invoke({'topic': 'bears'});
   print(res);
@@ -47,7 +34,7 @@ Future<void> _runnableInterfaceStream() async {
     'Tell me a joke about {topic}',
   );
 
-  final chain = promptTemplate | model | const StringOutputParser();
+  final chain = promptTemplate.pipe(model).pipe(const StringOutputParser());
 
   final stream = chain.stream({'topic': 'bears'});
 
@@ -117,236 +104,3 @@ Future<void> _runnableInterfaceBatchOptions() async {
   //['Why did the bear break up with his girlfriend? Because he couldn't bear the relationship anymore!,',
   //  'Why don't cats play poker in the jungle? Because there's too many cheetahs!']
 }
-
-Future<void> _runnableTypesRunnableSequence() async {
-  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
-  final model = ChatOpenAI(apiKey: openaiApiKey);
-
-  final promptTemplate = ChatPromptTemplate.fromTemplate(
-    'Tell me a joke about {topic}',
-  );
-
-  // The following three chains are equivalent:
-  final chain1 = promptTemplate | model | const StringOutputParser();
-  // final chain2 = promptTemplate.pipe(model).pipe(const StringOutputParser());
-  // final chain3 = Runnable.fromList(
-  //   [promptTemplate, model, StringOutputParser()],
-  // );
-
-  final res = await chain1.invoke({'topic': 'bears'});
-  print(res);
-  // Why don't bears wear shoes?\n\nBecause they have bear feet!
-}
-
-Future<void> _runnableTypesRunnableMap() async {
-  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
-  final model = ChatOpenAI(apiKey: openaiApiKey);
-
-  final promptTemplate1 = ChatPromptTemplate.fromTemplate(
-    'What is the city {person} is from?',
-  );
-  final promptTemplate2 = ChatPromptTemplate.fromTemplate(
-    'How old is {person}?',
-  );
-  final promptTemplate3 = ChatPromptTemplate.fromTemplate(
-    'Is {city} a good city for a {age} years old person?',
-  );
-  const stringOutputParser = StringOutputParser();
-
-  final chain = Runnable.fromMap({
-        'city': promptTemplate1 | model | stringOutputParser,
-        'age': promptTemplate2 | model | stringOutputParser,
-      }) |
-      promptTemplate3 |
-      model |
-      stringOutputParser;
-
-  final res = await chain.invoke({'person': 'Elon Musk'});
-  print(res);
-  // It is subjective to determine whether Pretoria, South Africa, is a good
-  // city for a 50-year-old person as it depends on individual preferences and
-  // needs.
-}
-
-Future<void> _runnableTypesRunnableBinding() async {
-  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
-  final model = ChatOpenAI(apiKey: openaiApiKey);
-
-  final promptTemplate = ChatPromptTemplate.fromTemplate(
-    'Tell me a joke about {foo}',
-  );
-
-  final chain = promptTemplate |
-      model.bind(const ChatOpenAIOptions(stop: ['\n'])) |
-      const StringOutputParser();
-
-  final res = await chain.invoke({'foo': 'bears'});
-  print(res);
-  // Why don't bears wear shoes?
-}
-
-Future<void> _runnableTypesRunnableFunction() async {
-  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
-  final model = ChatOpenAI(apiKey: openaiApiKey);
-
-  final promptTemplate = ChatPromptTemplate.fromTemplate(
-    'How much is {a} + {b}?',
-  );
-
-  final chain = Runnable.fromMap({
-        'a': Runnable.fromFunction((
-          final Map<String, dynamic> input,
-          final options,
-        ) async {
-          final foo = input['foo'] ?? '';
-          return '${foo.length}';
-        }),
-        'b': Runnable.fromFunction((
-          final Map<String, dynamic> input,
-          final options,
-        ) async {
-          final foo = input['foo'] ?? '';
-          final bar = input['bar'] ?? '';
-          return '${bar.length * foo.length}';
-        }),
-      }) |
-      promptTemplate |
-      model |
-      const StringOutputParser();
-
-  final res = await chain.invoke({'foo': 'foo', 'bar': 'bar'});
-  print(res);
-  // 3 + 9 = 12
-}
-
-Future<void> _runnableTypesRunnablePassthrough() async {
-  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
-  final model = ChatOpenAI(apiKey: openaiApiKey);
-
-  final promptTemplate = ChatPromptTemplate.fromTemplate(
-    'Tell me a joke about {foo}',
-  );
-
-  final map = Runnable.fromMap({
-    'foo': Runnable.passthrough(),
-  });
-  final chain = map | promptTemplate | model | const StringOutputParser();
-
-  final res = await chain.invoke('bears');
-  print(res);
-  // Why don't bears wear shoes? Because they have bear feet!
-}
-
-Future<void> _runnableTypesRunnableItemFromMap() async {
-  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
-  final embeddings = OpenAIEmbeddings(apiKey: openaiApiKey);
-  final vectorStore = Chroma(embeddings: embeddings);
-  await vectorStore.addDocuments(
-    documents: const [
-      Document(pageContent: 'Payment methods: iDEAL, PayPal and credit card'),
-      Document(pageContent: 'Free shipping: on orders over 30€'),
-    ],
-  );
-
-  final model = ChatOpenAI(apiKey: openaiApiKey);
-  final retriever = vectorStore.asRetriever();
-
-  final promptTemplate = ChatPromptTemplate.fromTemplate('''
-Answer the question based only on the following context:
-{context}
-
-Question: {question}
-
-Answer in the following language: {language}''');
-
-  final chain = Runnable.fromMap({
-        'context': Runnable.getItemFromMap('question') |
-            (retriever |
-                Runnable.fromFunction(
-                  (final docs, final _) => docs.join('\n'),
-                )),
-        'question': Runnable.getItemFromMap('question'),
-        'language': Runnable.getItemFromMap('language'),
-      }) |
-      promptTemplate |
-      model |
-      const StringOutputParser();
-
-  final res = await chain.invoke({
-    'question': 'What payment methods do you accept?',
-    'language': 'es_ES',
-  });
-  print(res);
-  // Aceptamos los siguientes métodos de pago: iDEAL, PayPal y tarjeta de
-  // crédito.
-}
-
-Future<void> _runnableTypesRunnableMapFromInput() async {
-  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
-  final model = ChatOpenAI(apiKey: openaiApiKey);
-
-  final promptTemplate = ChatPromptTemplate.fromTemplate(
-    'Tell me a joke about {foo}',
-  );
-
-  final chain = Runnable.getMapFromInput('foo') |
-      promptTemplate |
-      model |
-      const StringOutputParser();
-
-  final res = await chain.invoke('bears');
-  print(res);
-  // Why don't bears wear shoes? Because they have bear feet!
-}
-
-Future<void> _runnableTypesRunnableMapInput() async {
-  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
-
-  final prompt = ChatPromptTemplate.fromPromptMessages([
-    SystemChatMessagePromptTemplate.fromTemplate(
-      'You are a helpful assistant',
-    ),
-    HumanChatMessagePromptTemplate.fromTemplate('{input}'),
-    const MessagesPlaceholder(variableName: 'agent_scratchpad'),
-  ]);
-
-  final tool = CalculatorTool();
-
-  final model = ChatOpenAI(
-    apiKey: openaiApiKey,
-    defaultOptions: const ChatOpenAIOptions(temperature: 0),
-  ).bind(ChatOpenAIOptions(functions: [tool.toChatFunction()]));
-
-  const outputParser = OpenAIFunctionsAgentOutputParser();
-
-  List<ChatMessage> buildScratchpad(final List<AgentStep> intermediateSteps) {
-    return intermediateSteps
-        .map((final s) {
-          return s.action.messageLog +
-              [
-                ChatMessage.function(
-                  name: s.action.tool,
-                  content: s.observation,
-                ),
-              ];
-        })
-        .expand((final m) => m)
-        .toList(growable: false);
-  }
-
-  final agent = Agent.fromRunnable(
-    Runnable.mapInput(
-      (final AgentPlanInput planInput) => {
-        'input': planInput.inputs['input'],
-        'agent_scratchpad': buildScratchpad(planInput.intermediateSteps),
-      },
-    ).pipe(prompt).pipe(model).pipe(outputParser),
-    tools: [tool],
-  );
-  final executor = AgentExecutor(agent: agent);
-
-  final res = await executor.invoke({
-    'input': 'What is 40 raised to the 0.43 power?',
-  });
-  print(res['output']);
-}
diff --git a/examples/docs_examples/bin/expression_language/primitives/binding.dart b/examples/docs_examples/bin/expression_language/primitives/binding.dart
new file mode 100644
index 00000000..d19d4ec9
--- /dev/null
+++ b/examples/docs_examples/bin/expression_language/primitives/binding.dart
@@ -0,0 +1,115 @@
+// ignore_for_file: avoid_print
+import 'dart:io';
+
+import 'package:langchain/langchain.dart';
+import 'package:langchain_openai/langchain_openai.dart';
+
+void main(final List<String> arguments) async {
+  await _binding();
+  await _differentModels();
+  await _functionCalling();
+}
+
+Future<void> _binding() async {
+  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
+  final model = ChatOpenAI(apiKey: openaiApiKey);
+  const outputParser = StringOutputParser();
+
+  final promptTemplate = ChatPromptTemplate.fromTemplates(const [
+    (
+      ChatMessageType.system,
+      'Write out the following equation using algebraic symbols then solve it. '
+      'Use the format\n\nEQUATION:...\nSOLUTION:...\n\n',
+    ),
+    (ChatMessageType.human, '{equation_statement}'),
+  ]);
+
+  final chain = Runnable.getMapFromInput('equation_statement')
+      .pipe(promptTemplate)
+      .pipe(model)
+      .pipe(outputParser);
+
+  final res = await chain.invoke('x raised to the third plus seven equals 12');
+  print(res);
+  // EQUATION: \(x^3 + 7 = 12\)
+  //
+  // SOLUTION:
+  // Subtract 7 from both sides:
+  // \(x^3 = 5\)
+  //
+  // Take the cube root of both sides:
+  // \(x = \sqrt[3]{5}\)
+
+  final chain2 = Runnable.getMapFromInput('equation_statement')
+      .pipe(promptTemplate)
+      .pipe(model.bind(const ChatOpenAIOptions(stop: ['SOLUTION'])))
+      .pipe(outputParser);
+  final res2 =
+      await chain2.invoke('x raised to the third plus seven equals 12');
+  print(res2);
+  // EQUATION: \( x^3 + 7 = 12 \)
+}
+
+Future<void> _differentModels() async {
+  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
+  final chatModel = ChatOpenAI(
+    apiKey: openaiApiKey,
+  );
+  const outputParser = StringOutputParser();
+  final prompt1 = PromptTemplate.fromTemplate('How are you {name}?');
+  final prompt2 = PromptTemplate.fromTemplate('How old are you {name}?');
+  final chain = Runnable.fromMap({
+    'q1': prompt1 |
+        chatModel.bind(const ChatOpenAIOptions(model: 'gpt-4-turbo')) |
+        outputParser,
+    'q2': prompt2 |
+        chatModel.bind(const ChatOpenAIOptions(model: 'gpt-3.5-turbo')) |
+        outputParser,
+  });
+  final res = await chain.invoke({'name': 'David'});
+  print(res);
+  // {q1: Hello! I'm just a computer program, so I don't have feelings,
+  //  q2: I am an AI digital assistant, so I do not have an age like humans do.}
+}
+
+Future<void> _functionCalling() async {
+  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
+  final model = ChatOpenAI(apiKey: openaiApiKey);
+  final outputParser = JsonOutputFunctionsParser();
+
+  final promptTemplate = ChatPromptTemplate.fromTemplates(const [
+    (
+      ChatMessageType.system,
+      'Write out the following equation using algebraic symbols then solve it.'
+    ),
+    (ChatMessageType.human, '{equation_statement}'),
+  ]);
+
+  const function = ChatFunction(
+    name: 'solver',
+    description: 'Formulates and solves an equation',
+    parameters: {
+      'type': 'object',
+      'properties': {
+        'equation': {
+          'type': 'string',
+          'description': 'The algebraic expression of the equation',
+        },
+        'solution': {
+          'type': 'string',
+          'description': 'The solution to the equation',
+        },
+      },
+      'required': ['equation', 'solution'],
+    },
+  );
+
+  final chain = Runnable.getMapFromInput('equation_statement')
+      .pipe(promptTemplate)
+      .pipe(model.bind(const ChatOpenAIOptions(functions: [function])))
+      .pipe(outputParser);
+
+  final res = await chain.invoke('x raised to the third plus seven equals 12');
+  print(res);
+  // {equation: x^3 + 7 = 12, solution: x = 1}
+}
diff --git a/examples/docs_examples/bin/expression_language/primitives/function.dart b/examples/docs_examples/bin/expression_language/primitives/function.dart
new file mode 100644
index 00000000..8c631877
--- /dev/null
+++ b/examples/docs_examples/bin/expression_language/primitives/function.dart
@@ -0,0 +1,169 @@
+// ignore_for_file: avoid_print
+import 'dart:io';
+
+import 'package:langchain/langchain.dart';
+import 'package:langchain_openai/langchain_openai.dart';
+
+void main(final List<String> arguments) async {
+  await _function();
+}
+
+Future<void> _function() async {
+  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
+
+  Runnable<T, RunnableOptions, T> logOutput<T extends Object>(String stepName) {
+    return Runnable.fromFunction<T, T>(
+      invoke: (input, options) {
+        print('Output from step "$stepName":\n$input\n---');
+        return Future.value(input);
+      },
+      stream: (inputStream, options) {
+        return inputStream.map((input) {
+          print('Chunk from step "$stepName":\n$input\n---');
+          return input;
+        });
+      },
+    );
+  }
+
+  final promptTemplate = ChatPromptTemplate.fromTemplates(const [
+    (
+      ChatMessageType.system,
+      'Write out the following equation using algebraic symbols then solve it. '
+      'Use the format:\nEQUATION:...\nSOLUTION:...\n',
+    ),
+    (ChatMessageType.human, '{equation_statement}'),
+  ]);
+
+  final chain = Runnable.getMapFromInput('equation_statement')
+      .pipe(logOutput('getMapFromInput'))
+      .pipe(promptTemplate)
+      .pipe(logOutput('promptTemplate'))
+      .pipe(ChatOpenAI(apiKey: openaiApiKey))
+      .pipe(logOutput('chatModel'))
+      .pipe(const StringOutputParser())
+      .pipe(logOutput('outputParser'));
+
+  // await chain.invoke('x raised to the third plus seven equals 12');
+  // Output from step "getMapFromInput":
+  // {equation_statement: x raised to the third plus seven equals 12}
+  // ---
+  // Output from step "promptTemplate":
+  // System: Write out the following equation using algebraic symbols then solve it. Use the format
+  //
+  // EQUATION:...
+  // SOLUTION:...
+  //
+  // Human: x raised to the third plus seven equals 12
+  // ---
+  // Output from step "chatModel":
+  // ChatResult{
+  //   id: chatcmpl-9JcVxKcryIhASLnpSRMXkOE1t1R9G,
+  //   output: AIChatMessage{
+  //     content:
+  //     EQUATION: \( x^3 + 7 = 12 \)
+  //     SOLUTION:
+  //     Subtract 7 from both sides of the equation:
+  //     \( x^3 = 5 \)
+  //
+  //     Take the cube root of both sides:
+  //     \( x = \sqrt[3]{5} \)
+  //
+  //     Therefore, the solution is \( x = \sqrt[3]{5} \),
+  //   },
+  //   finishReason: FinishReason.stop,
+  //   metadata: {
+  //     model: gpt-3.5-turbo-0125,
+  //     created: 1714463309,
+  //     system_fingerprint: fp_3b956da36b
+  //   },
+  //   usage: LanguageModelUsage{
+  //     promptTokens: 47,
+  //     responseTokens: 76,
+  //     totalTokens: 123
+  //   },
+  //   streaming: false
+  // }
+  // ---
+  // Output from step "outputParser":
+  // EQUATION: \( x^3 + 7 = 12 \)
+  //
+  // SOLUTION:
+  // Subtract 7 from both sides of the equation:
+  // \( x^3 = 5 \)
+  //
+  // Take the cube root of both sides:
+  // \( x = \sqrt[3]{5} \)
+  //
+  // Therefore, the solution is \( x = \sqrt[3]{5} \)
+
+  chain.stream('x raised to the third plus seven equals 12').listen((_) {});
+  // Chunk from step "getMapFromInput":
+  // {equation_statement: x raised to the third plus seven equals 12}
+  // ---
+  // Chunk from step "promptTemplate":
+  // System: Write out the following equation using algebraic symbols then solve it. Use the format:
+  // EQUATION:...
+  // SOLUTION:...
+  //
+  // Human: x raised to the third plus seven equals 12
+  // ---
+  // Chunk from step "chatModel":
+  // ChatResult{
+  //   id: chatcmpl-9JcdKMy2yBlJhW2fxVu43Qn0gqofK,
+  //   output: AIChatMessage{
+  //     content: E,
+  //   },
+  //   finishReason: FinishReason.unspecified,
+  //   metadata: {
+  //     model: gpt-3.5-turbo-0125,
+  //     created: 1714463766,
+  //     system_fingerprint: fp_3b956da36b
+  //   },
+  //   usage: LanguageModelUsage{},
+  //   streaming: true
+  // }
+  // ---
+  // Chunk from step "outputParser":
+  // E
+  // ---
+  // Chunk from step "chatModel":
+  // ChatResult{
+  //   id: chatcmpl-9JcdKMy2yBlJhW2fxVu43Qn0gqofK,
+  //   output: AIChatMessage{
+  //     content: QU,
+  //   },
+  //   finishReason: FinishReason.unspecified,
+  //   metadata: {
+  //     model: gpt-3.5-turbo-0125,
+  //     created: 1714463766,
+  //     system_fingerprint: fp_3b956da36b
+  //   },
+  //   usage: LanguageModelUsage{},
+  //   streaming: true
+  // }
+  // ---
+  // Chunk from step "outputParser":
+  // QU
+  // ---
+  // Chunk from step "chatModel":
+  // ChatResult{
+  //   id: chatcmpl-9JcdKMy2yBlJhW2fxVu43Qn0gqofK,
+  //   output: AIChatMessage{
+  //     content: ATION,
+  //   },
+  //   finishReason: FinishReason.unspecified,
+  //   metadata: {
+  //     model: gpt-3.5-turbo-0125,
+  //     created: 1714463766,
+  //     system_fingerprint: fp_3b956da36b
+  //   },
+  //   usage: LanguageModelUsage{},
+  //   streaming: true
+  // }
+  // ---
+  // Chunk from step "outputParser":
+  // ATION
+  // ---
+  // ...
+}
diff --git a/examples/docs_examples/bin/expression_language/primitives/map.dart b/examples/docs_examples/bin/expression_language/primitives/map.dart
new file mode 100644
index 00000000..5f563aee
--- /dev/null
+++ b/examples/docs_examples/bin/expression_language/primitives/map.dart
@@ -0,0 +1,112 @@
+// ignore_for_file: avoid_print
+import 'dart:io';
+
+import 'package:langchain/langchain.dart';
+import 'package:langchain_openai/langchain_openai.dart';
+
+void main(final List<String> arguments) async {
+  await _map();
+  await _getItem();
+  await _concurrency();
+}
+
+Future<void> _map() async {
+  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
+
+  final vectorStore = MemoryVectorStore(
+    embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
+  );
+  await vectorStore.addDocuments(
+    documents: [
+      const Document(pageContent: 'LangChain was created by Harrison'),
+      const Document(
+        pageContent: 'David ported LangChain to Dart in LangChain.dart',
+      ),
+    ],
+  );
+  final retriever = vectorStore.asRetriever();
+  final promptTemplate = ChatPromptTemplate.fromTemplates(const [
+    (
+      ChatMessageType.system,
+      'Answer the question based on only the following context:\n{context}',
+    ),
+    (ChatMessageType.human, '{question}'),
+  ]);
+  final model = ChatOpenAI(apiKey: openaiApiKey);
+  const outputParser = StringOutputParser();
+
+  final retrievalChain = Runnable.fromMap({
+    'context': retriever,
+    'question': Runnable.passthrough(),
+  }).pipe(promptTemplate).pipe(model).pipe(outputParser);
+
+  final res = await retrievalChain.invoke('Who created LangChain.dart?');
+  print(res);
+  // David created LangChain.dart.
+}
+
+Future<void> _getItem() async {
+  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
+
+  final vectorStore = MemoryVectorStore(
+    embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
+  );
+  await vectorStore.addDocuments(
+    documents: const [
+      Document(pageContent: 'LangChain was created by Harrison'),
+      Document(
+        pageContent: 'David ported LangChain to Dart in LangChain.dart',
+      ),
+    ],
+  );
+  final retriever = vectorStore.asRetriever();
+  final promptTemplate = ChatPromptTemplate.fromTemplates(const [
+    (
+      ChatMessageType.system,
+      'Answer the question based on only the following context:\n{context}\n'
+      'Answer in the following language: {language}',
+    ),
+    (ChatMessageType.human, '{question}'),
+  ]);
+  final model = ChatOpenAI(apiKey: openaiApiKey);
+  const outputParser = StringOutputParser();
+
+  final retrievalChain = Runnable.fromMap<Map<String, dynamic>>({
+    'context': Runnable.getItemFromMap('question').pipe(retriever),
+    'question': Runnable.getItemFromMap('question'),
+    'language': Runnable.getItemFromMap('language'),
+  }).pipe(promptTemplate).pipe(model).pipe(outputParser);
+
+  final res = await retrievalChain.invoke({
+    'question': 'Who created LangChain.dart?',
+    'language': 'Spanish',
+  });
+  print(res);
+  // David portó LangChain a Dart en LangChain.dart
+}
+
+Future<void> _concurrency() async {
+  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
+  final model = ChatOpenAI(apiKey: openaiApiKey);
+  const outputParser = StringOutputParser();
+
+  final jokeChain = PromptTemplate.fromTemplate('tell me a joke about {topic}')
+      .pipe(model)
+      .pipe(outputParser);
+  final poemChain =
+      PromptTemplate.fromTemplate('write a 2-line poem about {topic}')
+          .pipe(model)
+          .pipe(outputParser);
+
+  final mapChain = Runnable.fromMap<Map<String, dynamic>>({
+    'joke': jokeChain,
+    'poem': poemChain,
+  });
+
+  final res = await mapChain.invoke({
+    'topic': 'bear',
+  });
+  print(res);
+  // {joke: Why did the bear bring a flashlight to the party? Because he wanted to be the "light" of the party!,
+  //  poem: In the forest's hush, the bear prowls wide, A silent guardian, a force of nature's pride.}
+}
diff --git a/examples/docs_examples/bin/expression_language/primitives/mapper.dart b/examples/docs_examples/bin/expression_language/primitives/mapper.dart
new file mode 100644
index 00000000..818ed0d7
--- /dev/null
+++ b/examples/docs_examples/bin/expression_language/primitives/mapper.dart
@@ -0,0 +1,160 @@
+// ignore_for_file: avoid_print
+import 'dart:io';
+
+import 'package:langchain/langchain.dart';
+import 'package:langchain_openai/langchain_openai.dart';
+
+void main(final List<String> arguments) async {
+  await _mapInput();
+  await _mapInputStream();
+  await _getItemFromMap();
+  await _getMapFromInput();
+}
+
+Future<void> _mapInput() async {
+  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
+
+  // 1. Create a vector store and add documents to it
+  final vectorStore = MemoryVectorStore(
+    embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
+  );
+  await vectorStore.addDocuments(
+    documents: const [
+      Document(pageContent: 'LangChain was created by Harrison'),
+      Document(
+        pageContent: 'David ported LangChain to Dart in LangChain.dart',
+      ),
+    ],
+  );
+
+  // 2. Define the retrieval chain
+  final retriever = vectorStore.asRetriever();
+  final setupAndRetrieval = Runnable.fromMap({
+    'context': retriever.pipe(
+      Runnable.mapInput((docs) => docs.map((d) => d.pageContent).join('\n')),
+    ),
+    'question': Runnable.passthrough(),
+  });
+
+  // 3. Construct a RAG prompt template
+  final promptTemplate = ChatPromptTemplate.fromTemplates(const [
+    (
+      ChatMessageType.system,
+      'Answer the question based on only the following context:\n{context}',
+    ),
+    (ChatMessageType.human, '{question}'),
+  ]);
+
+  // 4. Define the final chain
+  final model = ChatOpenAI(apiKey: openaiApiKey);
+  const outputParser = StringOutputParser();
+  final chain =
+      setupAndRetrieval.pipe(promptTemplate).pipe(model).pipe(outputParser);
+
+  // 5. Run the pipeline
+  final res = await chain.invoke('Who created LangChain.dart?');
+  print(res);
+  // David created LangChain.dart
+}
+
+Future<void> _mapInputStream() async {
+  final openAiApiKey = Platform.environment['OPENAI_API_KEY'];
+
+  final model = ChatOpenAI(
+    apiKey: openAiApiKey,
+    defaultOptions: const ChatOpenAIOptions(
+      responseFormat: ChatOpenAIResponseFormat(
+        type: ChatOpenAIResponseFormatType.jsonObject,
+      ),
+    ),
+  );
+  final parser = JsonOutputParser<ChatResult>();
+  final mapper = Runnable.mapInputStream(
+      (Stream<Map<String, dynamic>> inputStream) async* {
+    yield await inputStream.last;
+  });
+
+  final chain = model.pipe(parser).pipe(mapper);
+
+  final stream = chain.stream(
+    PromptValue.string(
+      'Output a list of the countries france, spain and japan and their '
+      'populations in JSON format. Use a dict with an outer key of '
+      '"countries" which contains a list of countries. '
+      'Each country should have the key "name" and "population"',
+    ),
+  );
+  await stream.forEach((final chunk) => print('$chunk|'));
+  // {countries: [{name: France, population: 65273511}, {name: Spain, population: 46754778}, {name: Japan, population: 126476461}]}|
+}
+
+Future<void> _getItemFromMap() async {
+  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
+
+  final vectorStore = MemoryVectorStore(
+    embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
+  );
+  await vectorStore.addDocuments(
+    documents: const [
+      Document(pageContent: 'LangChain was created by Harrison'),
+      Document(
+        pageContent: 'David ported LangChain to Dart in LangChain.dart',
+      ),
+    ],
+  );
+  final retriever = vectorStore.asRetriever();
+  final promptTemplate = ChatPromptTemplate.fromTemplates(const [
+    (
+      ChatMessageType.system,
+      'Answer the question based on only the following context:\n{context}\n'
+      'Answer in the following language: {language}',
+    ),
+    (ChatMessageType.human, '{question}'),
+  ]);
+  final model = ChatOpenAI(apiKey: openaiApiKey);
+  const outputParser = StringOutputParser();
+
+  final retrievalChain = Runnable.fromMap<Map<String, dynamic>>({
+    'context': Runnable.getItemFromMap('question').pipe(retriever),
+    'question': Runnable.getItemFromMap('question'),
+    'language': Runnable.getItemFromMap('language'),
+  }).pipe(promptTemplate).pipe(model).pipe(outputParser);
+
+  final res = await retrievalChain.invoke({
+    'question': 'Who created LangChain.dart?',
+    'language': 'Spanish',
+  });
+  print(res);
+  // David portó LangChain a Dart en LangChain.dart
+}
+
+Future<void> _getMapFromInput() async {
+  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
+  final model = ChatOpenAI(apiKey: openaiApiKey);
+  const outputParser = StringOutputParser();
+
+  final promptTemplate = ChatPromptTemplate.fromTemplates(const [
+    (
+      ChatMessageType.system,
+      'Write out the following equation using algebraic symbols then solve it. '
+      'Use the format\n\nEQUATION:...\nSOLUTION:...\n\n',
+    ),
+    (ChatMessageType.human, '{equation_statement}'),
+  ]);
+
+  final chain = Runnable.getMapFromInput('equation_statement')
+      .pipe(promptTemplate)
+      .pipe(model)
+      .pipe(outputParser);
+
+  final res = await chain.invoke('x raised to the third plus seven equals 12');
+  print(res);
+  // EQUATION: \(x^3 + 7 = 12\)
+  //
+  // SOLUTION:
+  // Subtract 7 from both sides:
+  // \(x^3 = 5\)
+  //
+  // Take the cube root of both sides:
+  // \(x = \sqrt[3]{5}\)
+}
diff --git a/examples/docs_examples/bin/expression_language/primitives/passthrough.dart b/examples/docs_examples/bin/expression_language/primitives/passthrough.dart
new file mode 100644
index 00000000..aa858365
--- /dev/null
+++ b/examples/docs_examples/bin/expression_language/primitives/passthrough.dart
@@ -0,0 +1,56 @@
+// ignore_for_file: avoid_print
+import 'dart:io';
+
+import 'package:langchain/langchain.dart';
+import 'package:langchain_openai/langchain_openai.dart';
+
+void main(final List<String> arguments) async {
+  await _passthrough();
+  await _retrieval();
+}
+
+Future<void> _passthrough() async {
+  final runnable = Runnable.fromMap<Map<String, dynamic>>({
+    'passed': Runnable.passthrough(),
+    'modified': Runnable.mapInput((input) => (input['num'] as int) + 1),
+  });
+
+  final res = await runnable.invoke({'num': 1});
+  print(res);
+  // {passed: {num: 1}, modified: 2}
+}
+
+Future<void> _retrieval() async {
+  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
+
+  final vectorStore = MemoryVectorStore(
+    embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
+  );
+  await vectorStore.addDocuments(
+    documents: const [
+      Document(pageContent: 'LangChain was created by Harrison'),
+      Document(
+        pageContent: 'David ported LangChain to Dart in LangChain.dart',
+      ),
+    ],
+  );
+  final retriever = vectorStore.asRetriever();
+  final promptTemplate = ChatPromptTemplate.fromTemplates(const [
+    (
+      ChatMessageType.system,
+      'Answer the question based on only the following context:\n{context}',
+    ),
+    (ChatMessageType.human, '{question}'),
+  ]);
+  final model = ChatOpenAI(apiKey: openaiApiKey);
+  const outputParser = StringOutputParser();
+
+  final retrievalChain = Runnable.fromMap({
+    'context': retriever,
+    'question': Runnable.passthrough(),
+  }).pipe(promptTemplate).pipe(model).pipe(outputParser);
+
+  final res = await retrievalChain.invoke('Who created LangChain.dart?');
+  print(res);
+  // David created LangChain.dart.
+}
diff --git a/examples/docs_examples/bin/expression_language/primitives/sequence.dart b/examples/docs_examples/bin/expression_language/primitives/sequence.dart
new file mode 100644
index 00000000..94ab1c84
--- /dev/null
+++ b/examples/docs_examples/bin/expression_language/primitives/sequence.dart
@@ -0,0 +1,53 @@
+// ignore_for_file: avoid_print
+import 'dart:io';
+
+import 'package:langchain/langchain.dart';
+import 'package:langchain_openai/langchain_openai.dart';
+
+void main(final List<String> arguments) async {
+  await _pipe();
+}
+
+Future<void> _pipe() async {
+  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
+
+  final promptTemplate = ChatPromptTemplate.fromTemplate(
+    'Tell me a joke about {topic}',
+  );
+  final model = ChatOpenAI(apiKey: openaiApiKey);
+  const outputParser = StringOutputParser();
+
+  final chain = promptTemplate.pipe(model).pipe(outputParser);
+
+  final res = await chain.invoke({'topic': 'bears'});
+  print(res);
+  // Why don't bears wear socks?
+  // Because they have bear feet!
+
+  final analysisPrompt = ChatPromptTemplate.fromTemplate(
+    'is this a funny joke? {joke}',
+  );
+
+  final composedChain = Runnable.fromMap({
+    'joke': chain,
+  }).pipe(analysisPrompt).pipe(model).pipe(outputParser);
+  final res1 = await composedChain.invoke({'topic': 'bears'});
+  print(res1);
+  // Some people may find this joke funny, especially if they enjoy puns or wordplay...
+
+  final composedChain2 = chain
+      .pipe(Runnable.getMapFromInput('joke'))
+      .pipe(analysisPrompt)
+      .pipe(model)
+      .pipe(outputParser);
+  final res2 = await composedChain2.invoke({'topic': 'bears'});
+  print(res2);
+
+  final composedChain3 = chain
+      .pipe(Runnable.mapInput((joke) => {'joke': joke}))
+      .pipe(analysisPrompt)
+      .pipe(model)
+      .pipe(outputParser);
+  final res3 = await composedChain3.invoke({'topic': 'bears'});
+  print(res3);
+}
diff --git a/examples/docs_examples/bin/readme.dart b/examples/docs_examples/bin/readme.dart
index 46b243da..cfe934f6 100644
--- a/examples/docs_examples/bin/readme.dart
+++ b/examples/docs_examples/bin/readme.dart
@@ -27,15 +27,24 @@ Future<void> _rag() async {
     embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
   );
   await vectorStore.addDocuments(
-    documents: [
-      const Document(pageContent: 'LangChain was created by Harrison'),
-      const Document(
+    documents: const [
+      Document(pageContent: 'LangChain was created by Harrison'),
+      Document(
        pageContent: 'David ported LangChain to Dart in LangChain.dart',
       ),
     ],
   );
 
-  // 2. Construct a RAG prompt template
+  // 2. Define the retrieval chain
+  final retriever = vectorStore.asRetriever();
+  final setupAndRetrieval = Runnable.fromMap({
+    'context': retriever.pipe(
+      Runnable.mapInput((docs) => docs.map((d) => d.pageContent).join('\n')),
+    ),
+    'question': Runnable.passthrough(),
+  });
+
+  // 3. Construct a RAG prompt template
   final promptTemplate = ChatPromptTemplate.fromTemplates(const [
     (
       ChatMessageType.system,
@@ -44,20 +53,11 @@
     (ChatMessageType.human, '{question}'),
   ]);
 
-  // 3. Create a Runnable that combines the retrieved documents into a single string
-  final docCombiner =
-      Runnable.fromFunction<List<Document>, String>((final docs, final _) {
-    return docs.map((final d) => d.pageContent).join('\n');
-  });
-
-  // 4. Define the RAG pipeline
-  final chain = Runnable.fromMap({
-    'context': vectorStore.asRetriever().pipe(docCombiner),
-    'question': Runnable.passthrough(),
-  })
-      .pipe(promptTemplate)
-      .pipe(ChatOpenAI(apiKey: openaiApiKey))
-      .pipe(const StringOutputParser());
+  // 4. Define the final chain
+  final model = ChatOpenAI(apiKey: openaiApiKey);
+  const outputParser = StringOutputParser();
+  final chain =
+      setupAndRetrieval.pipe(promptTemplate).pipe(model).pipe(outputParser);
 
   // 5. Run the pipeline
   final res = await chain.invoke('Who created LangChain.dart?');
diff --git a/packages/langchain/README.md b/packages/langchain/README.md
index 07d1ae9c..a5a92339 100644
--- a/packages/langchain/README.md
+++ b/packages/langchain/README.md
@@ -123,25 +123,28 @@ await vectorStore.addDocuments(
   ],
 );
 
-// 2. Construct a RAG prompt template
+// 2. Define the retrieval chain
+final retriever = vectorStore.asRetriever();
+final setupAndRetrieval = Runnable.fromMap({
+  'context': retriever.pipe(
+    Runnable.mapInput((docs) => docs.map((d) => d.pageContent).join('\n')),
+  ),
+  'question': Runnable.passthrough(),
+});
+
+// 3. Construct a RAG prompt template
 final promptTemplate = ChatPromptTemplate.fromTemplates([
   (ChatMessageType.system, 'Answer the question based on only the following context:\n{context}'),
   (ChatMessageType.human, '{question}'),
 ]);
 
-// 3. Create a Runnable that combines the retrieved documents into a single string
-final docCombiner = Runnable.fromFunction<List<Document>, String>((docs, _) {
-  return docs.map((d) => d.pageContent).join('\n');
-});
-
-// 4. Define the RAG pipeline
-final chain = Runnable.fromMap({
-  'context': vectorStore.asRetriever().pipe(docCombiner),
-  'question': Runnable.passthrough(),
-})
+// 4. Define the final chain
+final model = ChatOpenAI(apiKey: openaiApiKey);
+const outputParser = StringOutputParser();
+final chain = setupAndRetrieval
     .pipe(promptTemplate)
-    .pipe(ChatOpenAI(apiKey: openaiApiKey))
-    .pipe(StringOutputParser());
+    .pipe(model)
+    .pipe(outputParser);
 
 // 5. Run the pipeline
 final res = await chain.invoke('Who created LangChain.dart?');