docs: Update LCEL docs
davidmigloz committed Feb 17, 2024
1 parent 89f7b0b commit ab3ab57
Showing 6 changed files with 371 additions and 105 deletions.
1 change: 1 addition & 0 deletions docs/_sidebar.md
@@ -3,6 +3,7 @@
- [Quickstart](/get_started/quickstart.md)
- [Security](/get_started/security.md)
- [LangChain Expression Language](/expression_language/expression_language.md)
- [Get started](/expression_language/get_started.md)
- [Interface](/expression_language/interface.md)
- Cookbook
- [Prompt + LLM](/expression_language/cookbook/prompt_llm_parser.md)
8 changes: 5 additions & 3 deletions docs/expression_language/expression_language.md
@@ -1,6 +1,8 @@
# LangChain Expression Language (LCEL)

LangChain Expression Language or LCEL is a declarative way to easily compose chains together. Any chain constructed this way will automatically have full sync, async, and streaming support.
LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains (we’ve seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:

- [Interface](/expression_language/interface.md): The base interface shared by all LCEL objects.
- Cookbook: Examples of common LCEL usage patterns.
- **Streaming support:** When you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means, e.g., that we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens (see the sketch after this list).
- **Optimized parallel execution:** Whenever your LCEL chains have steps that can be executed in parallel (e.g. if you fetch documents from multiple retrievers), we automatically do it for the smallest possible latency.
- **Retries and fallbacks:** Configure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale.
- **Access intermediate results:** For more complex chains it’s often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain.
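
As a quick illustration of the streaming point above, here is a minimal sketch. It assumes the `stream` method from the LCEL [Interface](/expression_language/interface.md), an `OPENAI_API_KEY` environment variable, and the "prompt + model + output parser" chain from the [Get started](/expression_language/get_started.md) page; how the output is chunked depends on the provider.

```dart
// Minimal streaming sketch: print parsed output chunks as they arrive.
import 'dart:io';

import 'package:langchain/langchain.dart';
import 'package:langchain_openai/langchain_openai.dart';

Future<void> main() async {
  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
  final promptTemplate = ChatPromptTemplate.fromTemplate(
    'Tell me a joke about {topic}',
  );
  final model = ChatOpenAI(apiKey: openaiApiKey);
  const outputParser = StringOutputParser<AIChatMessage>();
  final chain = promptTemplate.pipe(model).pipe(outputParser);

  // Each chunk is an already-parsed String fragment.
  await for (final chunk in chain.stream({'topic': 'ice cream'})) {
    stdout.write(chunk);
  }
}
```
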
159 changes: 159 additions & 0 deletions docs/expression_language/get_started.md
@@ -0,0 +1,159 @@
# Get started

LCEL makes it easy to build complex chains from basic components, and supports out-of-the-box functionality such as streaming, parallelism, and logging.

## Basic example: prompt + model + output parser

The most basic and common use case is chaining a prompt template and a model together. To see how this works, let’s create a chain that takes a topic and generates a joke:

```dart
final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
final promptTemplate = ChatPromptTemplate.fromTemplate(
  'Tell me a joke about {topic}',
);
final model = ChatOpenAI(apiKey: openaiApiKey);
const outputParser = StringOutputParser<AIChatMessage>();
final chain = promptTemplate.pipe(model).pipe(outputParser);
final res = await chain.invoke({'topic': 'ice cream'});
print(res);
// Why did the ice cream truck break down?
// Because it had too many "scoops"!
```

Notice this line of the code, where we piece together the different components into a single chain using LCEL:

```dart
final chain = promptTemplate.pipe(model).pipe(outputParser);
```

The `.pipe()` method (or `|` operator) is similar to a Unix pipe: it chains the different components together, feeding the output of one component as input into the next.
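
The same chain can also be written with the `|` operator, which is a shorthand for `.pipe()`:

```dart
// Equivalent chain built with the | operator instead of .pipe().
final chain = promptTemplate | model | outputParser;
```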

In this chain the user input is passed to the prompt template, then the prompt template output is passed to the model, then the model output is passed to the output parser. Let’s take a look at each component individually to really understand what’s going on.

## 1. Prompt

`promptTemplate` is a `BasePromptTemplate`, which means it takes in a map of template variables and produces a `PromptValue`. A `PromptValue` is a wrapper around a completed prompt that can be passed to either an `LLM` (which takes a string as input) or a `ChatModel` (which takes a sequence of messages as input). It can work with either language model type because it defines logic both for producing `ChatMessage`s and for producing a string.

```dart
final promptValue = await promptTemplate.invoke({'topic': 'ice cream'});
final messages = promptValue.toChatMessages();
print(messages);
// [HumanChatMessage{
// content: ChatMessageContentText{
// text: Tell me a joke about ice cream,
// },
// }]
final string = promptValue.toString();
print(string);
// Human: Tell me a joke about ice cream
```

## 2. Model

The `PromptValue` is then passed to `model`. In this case our `model` is a `ChatModel`, meaning it will output a `ChatMessage`.

```dart
final chatOutput = await model.invoke(promptValue);
print(chatOutput.firstOutput);
// AIChatMessage{
// content: Why did the ice cream truck break down?
// Because it couldn't make it over the rocky road!,
// }
```

If our model was an `LLM`, it would output a `String`.

```dart
final llm = OpenAI(apiKey: openaiApiKey);
final llmOutput = await llm.invoke(promptValue);
print(llmOutput.firstOutput);
// Why did the ice cream go to therapy?
// Because it had a rocky road!
```

## 3. Output parser

And lastly, we pass our `model` output to the `outputParser`, which is a `BaseOutputParser`, meaning it takes either a `String` or a `ChatMessage` as input. The `StringOutputParser` specifically simply converts any input into a `String`.

```dart
final parsed = await outputParser.invoke(chatOutput);
print(parsed);
// Why did the ice cream go to therapy?
// Because it had a rocky road!
```

## 4. Entire Pipeline

To follow the steps:

1. We pass in user input on the desired topic as `{'topic': 'ice cream'}`
2. The `promptTemplate` component takes the user input and uses the `topic` value to construct a `PromptValue`.
3. The `model` component takes the generated prompt and passes it to the OpenAI chat model for evaluation. The generated output from the model is a `ChatMessage` object (specifically an `AIChatMessage`).
4. Finally, the `outputParser` component takes in a `ChatMessage`, and transforms this into a `String`, which is returned from the invoke method.

Note that if you’re curious about the output of any component, you can always test out a smaller version of the chain such as `promptTemplate` or `promptTemplate.pipe(model)` to see the intermediate results.
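
For example, invoking just the prompt and model steps returns the raw model output before it is parsed (a minimal sketch reusing the `promptTemplate` and `model` defined above; the exact joke will vary):

```dart
// Run only the first two steps of the chain to inspect the raw output.
final partialChain = promptTemplate.pipe(model);
final intermediate = await partialChain.invoke({'topic': 'ice cream'});
print(intermediate.firstOutput);
// AIChatMessage{
//   content: <the unparsed joke>,
// }
```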

## RAG Search Example

For our next example, we want to run a retrieval-augmented generation chain to add some context when responding to questions.

```dart
final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
// 1. Create a vector store and add documents to it
final vectorStore = MemoryVectorStore(
  embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
);
await vectorStore.addDocuments(
  documents: [
    Document(pageContent: 'LangChain was created by Harrison'),
    Document(pageContent: 'David ported LangChain to Dart in LangChain.dart'),
  ],
);
// 2. Construct a RAG prompt template
final promptTemplate = ChatPromptTemplate.fromTemplates([
  (ChatMessageType.system, 'Answer the question based on only the following context:\n{context}'),
  (ChatMessageType.human, '{question}'),
]);
// 3. Create a Runnable that combines the retrieved documents into a single string
final docCombiner = Runnable.fromFunction<List<Document>, String>((docs, _) {
  return docs.map((final d) => d.pageContent).join('\n');
});
// 4. Define the RAG pipeline
final chain = Runnable.fromMap<String>({
  'context': vectorStore.asRetriever().pipe(docCombiner),
  'question': Runnable.passthrough(),
})
    .pipe(promptTemplate)
    .pipe(ChatOpenAI(apiKey: openaiApiKey))
    .pipe(StringOutputParser());
// 5. Run the pipeline
final res = await chain.invoke('Who created LangChain.dart?');
print(res);
// David created LangChain.dart
```

In this chain we add some extra logic around retrieving context from a vector store.

We first instantiate our vector store and add some documents to it. Then we define our prompt, which takes in two input variables:

- `context` -> this is a string returned from our vector store, based on a semantic search over the input question.
- `question` -> this is the question we want to ask.

In our `chain`, we use a `RunnableMap`, which is a special type of runnable that takes a map of runnables and executes them all in parallel. It then returns a map with the same keys, where each value is replaced by the output of the corresponding runnable.

In our case, it has two sub-chains to get the data required by our prompt:

- `context` -> this sub-chain uses the input from the `.invoke()` call to query our vector store via a retriever, and then a `RunnableFunction` combines the retrieved documents into a single `String`.
- `question` -> this uses a `RunnablePassthrough`, which simply passes the input through unchanged, so the original question ends up under the `question` key (see the sketch after this list).
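
To see exactly what the prompt receives, you can invoke just the map step on its own (a minimal sketch reusing the `vectorStore` and `docCombiner` defined above; the exact `context` value depends on the retrieved documents):

```dart
// Run only the RunnableMap to inspect the values that feed the prompt.
final mapStep = Runnable.fromMap<String>({
  'context': vectorStore.asRetriever().pipe(docCombiner),
  'question': Runnable.passthrough(),
});
final mapOutput = await mapStep.invoke('Who created LangChain.dart?');
print(mapOutput);
// {context: <retrieved documents combined into one String>,
//  question: Who created LangChain.dart?}
```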

Finally, we chain together the prompt, model, and output parser as before.
111 changes: 111 additions & 0 deletions examples/docs_examples/bin/expression_language/get_started.dart
@@ -0,0 +1,111 @@
// ignore_for_file: avoid_print
import 'dart:io';

import 'package:langchain/langchain.dart';
import 'package:langchain_chroma/langchain_chroma.dart';
import 'package:langchain_openai/langchain_openai.dart';

void main(final List<String> arguments) async {
  await _promptModelOutputParser();
  await _ragSearch();
}

Future<void> _promptModelOutputParser() async {
  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];

  final promptTemplate = ChatPromptTemplate.fromTemplate(
    'Tell me a joke about {topic}',
  );
  final model = ChatOpenAI(apiKey: openaiApiKey);
  const outputParser = StringOutputParser<AIChatMessage>();

  final chain = promptTemplate.pipe(model).pipe(outputParser);

  final res = await chain.invoke({'topic': 'ice cream'});
  print(res);
  // Why did the ice cream truck break down?
  // Because it had too many "scoops"!

  // 1. Prompt

  final promptValue = await promptTemplate.invoke({'topic': 'ice cream'});

  final messages = promptValue.toChatMessages();
  print(messages);
  // [HumanChatMessage{
  // content: ChatMessageContentText{
  // text: Tell me a joke about ice cream,
  // },
  // }]

  final string = promptValue.toString();
  print(string);
  // Human: Tell me a joke about ice cream

  // 2. Model

  final chatOutput = await model.invoke(promptValue);
  print(chatOutput.firstOutput);
  // AIChatMessage{
  // content: Why did the ice cream truck break down?
  // Because it couldn't make it over the rocky road!,
  // }

  final llm = OpenAI(apiKey: openaiApiKey);
  final llmOutput = await llm.invoke(promptValue);
  print(llmOutput.firstOutput);
  // Why did the ice cream go to therapy?
  // Because it had a rocky road!

  // 3. Output parser

  final parsed = await outputParser.invoke(chatOutput);
  print(parsed);
  // Why did the ice cream go to therapy?
  // Because it had a rocky road!
}

Future<void> _ragSearch() async {
  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];

  // 1. Create a vector store and add documents to it
  final vectorStore = MemoryVectorStore(
    embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
  );
  await vectorStore.addDocuments(
    documents: [
      const Document(pageContent: 'LangChain was created by Harrison'),
      const Document(
        pageContent: 'David ported LangChain to Dart in LangChain.dart',
      ),
    ],
  );

  // 2. Construct a RAG prompt template
  final promptTemplate = ChatPromptTemplate.fromTemplates(const [
    (
      ChatMessageType.system,
      'Answer the question based on only the following context:\n{context}',
    ),
    (ChatMessageType.human, '{question}'),
  ]);

  // 3. Create a Runnable that combines the retrieved documents into a single string
  final docCombiner =
      Runnable.fromFunction<List<Document>, String>((final docs, final _) {
    return docs.map((final d) => d.pageContent).join('\n');
  });

  // 4. Define the RAG pipeline
  final chain = Runnable.fromMap<String>({
    'context': vectorStore.asRetriever().pipe(docCombiner),
    'question': Runnable.passthrough(),
  })
      .pipe(promptTemplate)
      .pipe(ChatOpenAI(apiKey: openaiApiKey))
      .pipe(const StringOutputParser());

  // 5. Run the pipeline
  final res = await chain.invoke('Who created LangChain.dart?');
  print(res);
  // David created LangChain.dart
}
65 changes: 42 additions & 23 deletions examples/docs_examples/bin/readme.dart
@@ -2,46 +2,65 @@
import 'dart:io';

import 'package:langchain/langchain.dart';
import 'package:langchain_google/langchain_google.dart';
import 'package:langchain_openai/langchain_openai.dart';

void main(final List<String> arguments) async {
await _callLLM();
await _chains();
await _rag();
}

Future<void> _callLLM() async {
final openAiApiKey = Platform.environment['OPENAI_API_KEY'];
final llm = OpenAI(apiKey: openAiApiKey);
final result = await llm('Hello world!');
final googleApiKey = Platform.environment['GOOGLE_API_KEY'];
final model = ChatGoogleGenerativeAI(apiKey: googleApiKey);
final prompt = PromptValue.string('Hello world!');
final result = await model.invoke(prompt);
print(result);
// Hello everyone! I'm new here and excited to be part of this community.
}

Future<void> _chains() async {
Future<void> _rag() async {
final openaiApiKey = Platform.environment['OPENAI_API_KEY'];

final promptTemplate1 = ChatPromptTemplate.fromTemplate(
'What is the city {person} is from? Only respond with the name of the city.',
// 1. Create a vector store and add documents to it
final vectorStore = MemoryVectorStore(
embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
);
final promptTemplate2 = ChatPromptTemplate.fromTemplate(
'What country is the city {city} in? Respond in {language}.',
await vectorStore.addDocuments(
documents: [
const Document(pageContent: 'LangChain was created by Harrison'),
const Document(
pageContent: 'David ported LangChain to Dart in LangChain.dart',
),
],
);

final model = ChatOpenAI(apiKey: openaiApiKey);
const stringOutputParser = StringOutputParser();
// 2. Construct a RAG prompt template
final promptTemplate = ChatPromptTemplate.fromTemplates(const [
(
ChatMessageType.system,
'Answer the question based on only the following context:\n{context}',
),
(ChatMessageType.human, '{question}'),
]);

final chain = Runnable.fromMap({
'city': promptTemplate1 | model | stringOutputParser,
'language': Runnable.getItemFromMap('language'),
}) |
promptTemplate2 |
model |
stringOutputParser;

final res = await chain.invoke({
'person': 'Rafael Nadal',
'language': 'Spanish',
// 3. Create a Runnable that combines the retrieved documents into a single string
final docCombiner =
Runnable.fromFunction<List<Document>, String>((final docs, final _) {
return docs.map((final d) => d.pageContent).join('\n');
});

// 4. Define the RAG pipeline
final chain = Runnable.fromMap<String>({
'context': vectorStore.asRetriever().pipe(docCombiner),
'question': Runnable.passthrough(),
})
.pipe(promptTemplate)
.pipe(ChatOpenAI(apiKey: openaiApiKey))
.pipe(const StringOutputParser());

// 5. Run the pipeline
final res = await chain.invoke('Who created LangChain.dart?');
print(res);
// La ciudad de Manacor se encuentra en España.
// David created LangChain.dart
}
