diff --git a/docs/_sidebar.md b/docs/_sidebar.md
index fea432e5..220e8138 100644
--- a/docs/_sidebar.md
+++ b/docs/_sidebar.md
@@ -3,6 +3,7 @@
  - [Quickstart](/get_started/quickstart.md)
  - [Security](/get_started/security.md)
- [LangChain Expression Language](/expression_language/expression_language.md)
+  - [Get started](/expression_language/get_started.md)
  - [Interface](/expression_language/interface.md)
  - Cookbook
    - [Prompt + LLM](/expression_language/cookbook/prompt_llm_parser.md)
diff --git a/docs/expression_language/expression_language.md b/docs/expression_language/expression_language.md
index e611b5c7..d6d9494d 100644
--- a/docs/expression_language/expression_language.md
+++ b/docs/expression_language/expression_language.md
@@ -1,6 +1,8 @@
# LangChain Expression Language (LCEL)

-LangChain Expression Language or LCEL is a declarative way to easily compose chains together. Any chain constructed this way will automatically have full sync, async, and streaming support.
+LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains (we’ve seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:

-- [Interface](/expression_language/interface.md): The base interface shared by all LCEL objects.
-- Cookbook: Examples of common LCEL usage patterns.
+- **Streaming support:** When you build your chains with LCEL, you get the best possible time-to-first-token (the time elapsed until the first chunk of output comes out). For some chains this means, e.g., that we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens (see the sketch after this list).
+- **Optimized parallel execution:** Whenever your LCEL chains have steps that can be executed in parallel (e.g. if you fetch documents from multiple retrievers), we automatically do it for the smallest possible latency.
+- **Retries and fallbacks:** Configure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale.
+- **Access intermediate results:** For more complex chains, it’s often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain.
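To make the streaming point concrete, here is a minimal sketch. It assumes a `chain` built like the basic example in the Get started guide below, and the `stream` method exposed by LCEL runnables (`stdout` comes from `dart:io`):

```dart
// Minimal streaming sketch: `chain` is assumed to be an LCEL chain such as
// promptTemplate.pipe(model).pipe(StringOutputParser()).
// Each parsed chunk is written out as soon as the model emits it.
final stream = chain.stream({'topic': 'ice cream'});
await for (final chunk in stream) {
  stdout.write(chunk);
}
```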
diff --git a/docs/expression_language/get_started.md b/docs/expression_language/get_started.md
new file mode 100644
index 00000000..0bf28ed0
--- /dev/null
+++ b/docs/expression_language/get_started.md
@@ -0,0 +1,159 @@
# Get started

LCEL makes it easy to build complex chains from basic components, and supports out-of-the-box functionality such as streaming, parallelism, and logging.

## Basic example: prompt + model + output parser

The most basic and common use case is chaining a prompt template and a model together. To see how this works, let’s create a chain that takes a topic and generates a joke:

```dart
final openaiApiKey = Platform.environment['OPENAI_API_KEY'];

final promptTemplate = ChatPromptTemplate.fromTemplate(
  'Tell me a joke about {topic}',
);
final model = ChatOpenAI(apiKey: openaiApiKey);
const outputParser = StringOutputParser();

final chain = promptTemplate.pipe(model).pipe(outputParser);

final res = await chain.invoke({'topic': 'ice cream'});
print(res);
// Why did the ice cream truck break down?
// Because it had too many "scoops"!
```

Notice the line of code where we piece together the different components into a single chain using LCEL:

```dart
final chain = promptTemplate.pipe(model).pipe(outputParser);
```

The `.pipe()` method (or `|` operator) is similar to a Unix pipe operator: it chains together the different components, feeding the output of one component as input into the next component.

In this chain, the user input is passed to the prompt template; the prompt template’s output is then passed to the model, and the model’s output is passed to the output parser. Let’s take a look at each component individually to really understand what’s going on.

## 1. Prompt

`promptTemplate` is a `BasePromptTemplate`, which means it takes in a map of template variables and produces a `PromptValue`. A `PromptValue` is a wrapper around a completed prompt that can be passed to either an `LLM` (which takes a string as input) or a `ChatModel` (which takes a sequence of messages as input). It can work with either language model type because it defines logic both for producing `ChatMessage`s and for producing a string.

```dart
final promptValue = await promptTemplate.invoke({'topic': 'ice cream'});

final messages = promptValue.toChatMessages();
print(messages);
// [HumanChatMessage{
//   content: ChatMessageContentText{
//     text: Tell me a joke about ice cream,
//   },
// }]

final string = promptValue.toString();
print(string);
// Human: Tell me a joke about ice cream
```

## 2. Model

The `PromptValue` is then passed to `model`. In this case our `model` is a `ChatModel`, meaning it will output a `ChatMessage`.

```dart
final chatOutput = await model.invoke(promptValue);
print(chatOutput.firstOutput);
// AIChatMessage{
//   content: Why did the ice cream truck break down?
//            Because it couldn't make it over the rocky road!,
// }
```

If our model were an `LLM`, it would output a `String` instead.

```dart
final llm = OpenAI(apiKey: openaiApiKey);
final llmOutput = await llm.invoke(promptValue);
print(llmOutput.firstOutput);
// Why did the ice cream go to therapy?
// Because it had a rocky road!
```

## 3. Output parser

And lastly, we pass our `model` output to the `outputParser`, which is a `BaseOutputParser`, meaning it takes either a `String` or a `ChatMessage` as input. The `StringOutputParser` specifically simply converts any input into a `String`.

```dart
final parsed = await outputParser.invoke(chatOutput);
print(parsed);
// Why did the ice cream go to therapy?
// Because it had a rocky road!
```

## 4. Entire Pipeline

To follow the steps along:

1. We pass in the user input on the desired topic as `{'topic': 'ice cream'}`.
2. The `promptTemplate` component takes the user input and uses the `topic` to construct the prompt, producing a `PromptValue`.
3. The `model` component takes the generated prompt and passes it into the OpenAI chat model for evaluation. The generated output from the model is a `ChatMessage` object (specifically an `AIChatMessage`).
4. Finally, the `outputParser` component takes in a `ChatMessage` and transforms it into a `String`, which is returned from the `invoke` method.

Note that if you’re curious about the output of any component, you can always test out a smaller version of the chain, such as `promptTemplate` or `promptTemplate.pipe(model)`, to see the intermediate results.
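For instance, here is a minimal sketch that inspects the intermediate `ChatMessage` by invoking just the prompt + model sub-chain (it reuses the `promptTemplate` and `model` defined above; the exact printed representation may differ):

```dart
// Invoke only the prompt + model sub-chain to inspect the raw ChatMessage
// before it reaches the output parser.
final subChain = promptTemplate.pipe(model);
final intermediate = await subChain.invoke({'topic': 'ice cream'});
print(intermediate.firstOutput);
// AIChatMessage{
//   content: ...,
// }
```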
## RAG Search Example

For our next example, we want to run a retrieval-augmented generation chain to add some context when responding to questions.

```dart
final openaiApiKey = Platform.environment['OPENAI_API_KEY'];

// 1. Create a vector store and add documents to it
final vectorStore = MemoryVectorStore(
  embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
);
await vectorStore.addDocuments(
  documents: [
    Document(pageContent: 'LangChain was created by Harrison'),
    Document(pageContent: 'David ported LangChain to Dart in LangChain.dart'),
  ],
);

// 2. Construct a RAG prompt template
final promptTemplate = ChatPromptTemplate.fromTemplates([
  (ChatMessageType.system, 'Answer the question based on only the following context:\n{context}'),
  (ChatMessageType.human, '{question}'),
]);

// 3. Create a Runnable that combines the retrieved documents into a single string
final docCombiner = Runnable.fromFunction<List<Document>, String>((docs, _) {
  return docs.map((final d) => d.pageContent).join('\n');
});

// 4. Define the RAG pipeline
final chain = Runnable.fromMap({
  'context': vectorStore.asRetriever().pipe(docCombiner),
  'question': Runnable.passthrough(),
})
    .pipe(promptTemplate)
    .pipe(ChatOpenAI(apiKey: openaiApiKey))
    .pipe(StringOutputParser());

// 5. Run the pipeline
final res = await chain.invoke('Who created LangChain.dart?');
print(res);
// David created LangChain.dart
```

In this chain we add some extra logic around retrieving context from a vector store.

We first instantiate our vector store and add some documents to it. Then we define our prompt, which takes in two input variables:

- `context` -> this is a string which is returned from our vector store based on a semantic search using the input.
- `question` -> this is the question we want to ask.

In our `chain`, we use a `RunnableMap`, which is a special type of runnable that takes a map of runnables and executes them all in parallel. It then returns a map with the same keys as the input map, but with the values replaced with the outputs of the corresponding runnables.

In our case, it has two sub-chains to get the data required by our prompt:

- `context` -> the retriever is invoked with the input from the `.invoke()` call, and the retrieved documents are combined into a single `String` by our `RunnableFunction` (`docCombiner`).
- `question` -> this uses a `RunnablePassthrough`, which simply passes the input through unchanged to the next step; in our case, it supplies the question to the `question` key of the map we defined.

Finally, we chain together the prompt, model, and output parser as before.
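Building on the “access intermediate results” point from the LCEL overview, the same pipeline can be rearranged so that it returns the retrieved context alongside the final answer. The following is a sketch, not part of the original example; it assumes that `Runnable.fromMap` and `Runnable.getItemFromMap` compose as shown elsewhere in these docs:

```dart
// Sketch: return both the answer and the retrieved context.
// Reuses the vectorStore, docCombiner, and promptTemplate defined above.
final chainWithContext = Runnable.fromMap({
  'context': vectorStore.asRetriever().pipe(docCombiner),
  'question': Runnable.passthrough(),
}).pipe(
  Runnable.fromMap({
    'answer': promptTemplate
        .pipe(ChatOpenAI(apiKey: openaiApiKey))
        .pipe(StringOutputParser()),
    'context': Runnable.getItemFromMap('context'),
  }),
);

final out = await chainWithContext.invoke('Who created LangChain.dart?');
print(out['answer']); // David created LangChain.dart
print(out['context']); // The combined content of the retrieved documents
```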
diff --git a/examples/docs_examples/bin/expression_language/get_started.dart b/examples/docs_examples/bin/expression_language/get_started.dart
new file mode 100644
index 00000000..2ba32c80
--- /dev/null
+++ b/examples/docs_examples/bin/expression_language/get_started.dart
@@ -0,0 +1,111 @@
// ignore_for_file: avoid_print
import 'dart:io';

import 'package:langchain/langchain.dart';
import 'package:langchain_openai/langchain_openai.dart';

void main(final List<String> arguments) async {
  await _promptModelOutputParser();
  await _ragSearch();
}

Future<void> _promptModelOutputParser() async {
  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];

  final promptTemplate = ChatPromptTemplate.fromTemplate(
    'Tell me a joke about {topic}',
  );
  final model = ChatOpenAI(apiKey: openaiApiKey);
  const outputParser = StringOutputParser();

  final chain = promptTemplate.pipe(model).pipe(outputParser);

  final res = await chain.invoke({'topic': 'ice cream'});
  print(res);
  // Why did the ice cream truck break down?
  // Because it had too many "scoops"!

  // 1. Prompt

  final promptValue = await promptTemplate.invoke({'topic': 'ice cream'});

  final messages = promptValue.toChatMessages();
  print(messages);
  // [HumanChatMessage{
  //   content: ChatMessageContentText{
  //     text: Tell me a joke about ice cream,
  //   },
  // }]

  final string = promptValue.toString();
  print(string);
  // Human: Tell me a joke about ice cream

  // 2. Model

  final chatOutput = await model.invoke(promptValue);
  print(chatOutput.firstOutput);
  // AIChatMessage{
  //   content: Why did the ice cream truck break down?
  //            Because it couldn't make it over the rocky road!,
  // }

  final llm = OpenAI(apiKey: openaiApiKey);
  final llmOutput = await llm.invoke(promptValue);
  print(llmOutput.firstOutput);
  // Why did the ice cream go to therapy?
  // Because it had a rocky road!

  // 3. Output parser

  final parsed = await outputParser.invoke(chatOutput);
  print(parsed);
  // Why did the ice cream go to therapy?
  // Because it had a rocky road!
}

Future<void> _ragSearch() async {
  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];

  // 1. Create a vector store and add documents to it
  final vectorStore = MemoryVectorStore(
    embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
  );
  await vectorStore.addDocuments(
    documents: [
      const Document(pageContent: 'LangChain was created by Harrison'),
      const Document(
        pageContent: 'David ported LangChain to Dart in LangChain.dart',
      ),
    ],
  );

  // 2. Construct a RAG prompt template
  final promptTemplate = ChatPromptTemplate.fromTemplates(const [
    (
      ChatMessageType.system,
      'Answer the question based on only the following context:\n{context}',
    ),
    (ChatMessageType.human, '{question}'),
  ]);

  // 3. Create a Runnable that combines the retrieved documents into a single string
  final docCombiner =
      Runnable.fromFunction<List<Document>, String>((final docs, final _) {
    return docs.map((final d) => d.pageContent).join('\n');
  });

  // 4. Define the RAG pipeline
  final chain = Runnable.fromMap({
    'context': vectorStore.asRetriever().pipe(docCombiner),
    'question': Runnable.passthrough(),
  })
      .pipe(promptTemplate)
      .pipe(ChatOpenAI(apiKey: openaiApiKey))
      .pipe(const StringOutputParser());

  // 5. Run the pipeline
  final res = await chain.invoke('Who created LangChain.dart?');
  print(res);
  // David created LangChain.dart
}
diff --git a/examples/docs_examples/bin/readme.dart b/examples/docs_examples/bin/readme.dart
index 9883cfaa..46b243da 100644
--- a/examples/docs_examples/bin/readme.dart
+++ b/examples/docs_examples/bin/readme.dart
@@ -2,46 +2,65 @@
import 'dart:io';

import 'package:langchain/langchain.dart';
+import 'package:langchain_google/langchain_google.dart';
import 'package:langchain_openai/langchain_openai.dart';

void main(final List<String> arguments) async {
  await _callLLM();
-  await _chains();
+  await _rag();
}

Future<void> _callLLM() async {
-  final openAiApiKey = Platform.environment['OPENAI_API_KEY'];
-  final llm = OpenAI(apiKey: openAiApiKey);
-  final result = await llm('Hello world!');
+  final googleApiKey = Platform.environment['GOOGLE_API_KEY'];
+  final model = ChatGoogleGenerativeAI(apiKey: googleApiKey);
+  final prompt = PromptValue.string('Hello world!');
+  final result = await model.invoke(prompt);
  print(result);
  // Hello everyone! I'm new here and excited to be part of this community.
}

-Future<void> _chains() async {
+Future<void> _rag() async {
  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];

-  final promptTemplate1 = ChatPromptTemplate.fromTemplate(
-    'What is the city {person} is from? Only respond with the name of the city.',
+  // 1. Create a vector store and add documents to it
+  final vectorStore = MemoryVectorStore(
+    embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
  );
-  final promptTemplate2 = ChatPromptTemplate.fromTemplate(
-    'What country is the city {city} in? Respond in {language}.',
+  await vectorStore.addDocuments(
+    documents: [
+      const Document(pageContent: 'LangChain was created by Harrison'),
+      const Document(
+        pageContent: 'David ported LangChain to Dart in LangChain.dart',
+      ),
+    ],
  );
-  final model = ChatOpenAI(apiKey: openaiApiKey);
-  const stringOutputParser = StringOutputParser();

+  // 2. Construct a RAG prompt template
+  final promptTemplate = ChatPromptTemplate.fromTemplates(const [
+    (
+      ChatMessageType.system,
+      'Answer the question based on only the following context:\n{context}',
+    ),
+    (ChatMessageType.human, '{question}'),
+  ]);

-  final chain = Runnable.fromMap({
-    'city': promptTemplate1 | model | stringOutputParser,
-    'language': Runnable.getItemFromMap('language'),
-  }) |
-      promptTemplate2 |
-      model |
-      stringOutputParser;
-
-  final res = await chain.invoke({
-    'person': 'Rafael Nadal',
-    'language': 'Spanish',
+  // 3. Create a Runnable that combines the retrieved documents into a single string
+  final docCombiner =
+      Runnable.fromFunction<List<Document>, String>((final docs, final _) {
+    return docs.map((final d) => d.pageContent).join('\n');
  });
+
+  // 4. Define the RAG pipeline
+  final chain = Runnable.fromMap({
+    'context': vectorStore.asRetriever().pipe(docCombiner),
+    'question': Runnable.passthrough(),
+  })
+      .pipe(promptTemplate)
+      .pipe(ChatOpenAI(apiKey: openaiApiKey))
+      .pipe(const StringOutputParser());
+
+  // 5. Run the pipeline
+  final res = await chain.invoke('Who created LangChain.dart?');
  print(res);
-  // La ciudad de Manacor se encuentra en España.
+  // David created LangChain.dart
}
diff --git a/packages/langchain/README.md b/packages/langchain/README.md
index de1ac9d0..8c0f2828 100644
--- a/packages/langchain/README.md
+++ b/packages/langchain/README.md
@@ -10,53 +10,33 @@ Build powerful LLM-based Dart/Flutter applications.

## What is LangChain.dart?
-> Check out the announcement post: [Introducing LangChain.dart 🦜️🔗](https://blog.langchaindart.com/introducing-langchain-dart-6b1d34fc41ef)
+LangChain.dart is a Dart port of the popular [LangChain](https://github.com/hwchase17/langchain) Python framework created by [Harrison Chase](https://www.linkedin.com/in/harrison-chase-961287118).

-LangChain.dart is a Dart port of the popular [LangChain](https://github.com/hwchase17/langchain)
-Python framework created by [Harrison Chase](https://www.linkedin.com/in/harrison-chase-961287118).
-
-LangChain provides a set of ready-to-use components for working with language models and the
-concept of chains, which allows to "chain" components together to formulate more advanced use cases
-around LLMs.
+LangChain provides a set of ready-to-use components for working with language models and the concept of chains, which allows you to "chain" components together to formulate more advanced use cases around LLMs.

The components can be grouped into a few core modules:

![LangChain.dart](https://raw.githubusercontent.com/davidmigloz/langchain_dart/main/docs/img/langchain.dart.png)

-- 📃 **Model I/O:** streamlines the interaction between the model inputs (prompt templates), the
-  Language Model (abstracting different providers), and the model output (output parsers).
-- 📚 **Retrieval:** assists in loading user data (document loaders), modifying it (document
-  transformers and embedding models), storing (vector stores), and retrieving when needed
-  (retrievers).
-- 🔗 **Chains:** a way to compose multiple components or other chains into a single pipeline.
-- 🧠 **Memory:** equips chains or agents with both short-term and long-term memory capabilities,
-  facilitating recall of prior interactions with the user.
-- 🤖 **Agents:** "Bots" that harness LLMs to perform tasks. They serve as the link between LLM and the
-  tools (web search, calculators, database lookup, etc.). They determine what has to be
-  accomplished and the tools that are more suitable for the specific task.
+- 📃 **Model I/O:** streamlines the interaction between the model inputs (prompt templates), the Language Model (abstracting different providers under a unified API), and the model output (output parsers).
+- 📚 **Retrieval:** assists in loading user data (document loaders), transforming it (document transformers and embedding models), storing it (vector stores), and retrieving it when needed (retrievers).
+- 🤖 **Agents:** "bots" that harness LLMs to perform tasks. They serve as the link between the LLM and the tools (web search, calculators, database lookup, etc.). They determine what has to be accomplished and which tools are most suitable for the specific task.
+
+The different components can be composed together using the LangChain Expression Language (LCEL).

## Motivation

-Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP), serving as
-essential components in a wide range of applications, such as question-answering, summarization,
-translation, and text generation.
+Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP), serving as essential components in a wide range of applications, such as question-answering, summarization, translation, and text generation.

-The adoption of LLMs is creating a new tech stack in its wake. However, emerging libraries and
-tools are predominantly being developed for the Python and JavaScript ecosystems. As a result, the
-number of applications leveraging LLMs in these ecosystems has grown exponentially.
+The adoption of LLMs is creating a new tech stack in its wake. However, emerging libraries and tools are predominantly being developed for the Python and JavaScript ecosystems. As a result, the number of applications leveraging LLMs in these ecosystems has grown exponentially.

-In contrast, the Dart / Flutter ecosystem has not experienced similar growth, which can likely be
-attributed to the scarcity of Dart and Flutter libraries that streamline the complexities
-associated with working with LLMs.
+In contrast, the Dart / Flutter ecosystem has not experienced similar growth, which can likely be attributed to the scarcity of Dart and Flutter libraries that streamline the complexities associated with working with LLMs.

-LangChain.dart aims to fill this gap by abstracting the intricacies of working with LLMs in Dart
-and Flutter, enabling developers to harness their combined potential effectively.
+LangChain.dart aims to fill this gap by abstracting the intricacies of working with LLMs in Dart and Flutter, enabling developers to harness their combined potential effectively.

## Packages

-LangChain.dart has a modular design where the core [langchain](https://pub.dev/packages/langchain)
-package provides the LangChain API and each integration with a model provider, database, etc. is
-provided by a separate package.
+LangChain.dart has a modular design where the core [langchain](https://pub.dev/packages/langchain) package provides the LangChain API, and each 3rd-party integration with a model provider, database, etc. is provided by a separate package.

| Package | Version | Description |
|---------------------------------------------------------------------|------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|

@@ -82,8 +62,7 @@ Functionality provided by each package:
| [langchain_chroma](https://pub.dev/packages/langchain_chroma) | | | | ✔ | | | |
| [langchain_supabase](https://pub.dev/packages/langchain_supabase) | | | | ✔ | | | |

-The following packages are maintained (and used internally) by LangChain.dart,
-although they can also be used independently:
+The following packages are maintained (and used internally) by LangChain.dart, although they can also be used independently:

| Package | Version | Description |
|-----------------------------------------------------------|---------------------------------------------------------------------------------------------------------------|-----------------------------------------------|

@@ -96,77 +75,75 @@ although they can also be used independently:
## Getting started

-To start using LangChain.dart, add `langchain` as a dependency to your `pubspec.yaml` file.
-Also, include the dependencies for the specific integrations you want to use
-(e.g.`langchain_openai`):
+To start using LangChain.dart, add `langchain` as a dependency to your `pubspec.yaml` file. Also, include the dependencies for the specific integrations you want to use (e.g. `langchain_openai` or `langchain_google`):

```yaml
dependencies:
  langchain: {version}
  langchain_openai: {version}
+  langchain_google: {version}
+  ...
```
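Then import the libraries in your Dart code (these are the same imports used by the examples in this repository; pull in only the integration packages you actually depend on):

```dart
import 'package:langchain/langchain.dart';
import 'package:langchain_google/langchain_google.dart';
import 'package:langchain_openai/langchain_openai.dart';
```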
-The most basic building block of LangChain.dart is calling an LLM on some prompt:
+The most basic building block of LangChain.dart is calling an LLM on some prompt. LangChain.dart provides a unified interface for calling different LLMs. For example, we can use `ChatGoogleGenerativeAI` to call Google's Gemini model:

```dart
-final llm = OpenAI(apiKey: openaiApiKey);
+final model = ChatGoogleGenerativeAI(apiKey: googleApiKey);
final prompt = PromptValue.string('Hello world!');
-final result = await openai.invoke(prompt);
+final result = await model.invoke(prompt);
// Hello everyone! I'm new here and excited to be part of this community.
```

-But you can build complex pipelines by chaining together multiple components.
-
-For example, the following pipeline does the following:
-1. Asks the model where the given person is from.
-2. Uses the answer to ask the model to return the country where the city is located in the given language.
+But the power of LangChain.dart comes from chaining multiple components together to implement complex use cases. For example, we can build a RAG (Retrieval-Augmented Generation) pipeline that accepts a user query, retrieves relevant documents from a vector store, formats them using a prompt template, invokes the model, and parses the output:

```dart
-final promptTemplate1 = ChatPromptTemplate.fromTemplate(
-  'What is the city {person} is from? Only respond with the name of the city.',
+// 1. Create a vector store and add documents to it
+final vectorStore = MemoryVectorStore(
+  embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
);
-final promptTemplate2 = ChatPromptTemplate.fromTemplate(
-  'What country is the city {city} in? Respond in {language}.',
+await vectorStore.addDocuments(
+  documents: [
+    Document(pageContent: 'LangChain was created by Harrison'),
+    Document(pageContent: 'David ported LangChain to Dart in LangChain.dart'),
+  ],
);
-final model = ChatOpenAI(apiKey: openaiApiKey);
-const stringOutputParser = StringOutputParser();
+// 2. Construct a RAG prompt template
+final promptTemplate = ChatPromptTemplate.fromTemplates([
+  (ChatMessageType.system, 'Answer the question based on only the following context:\n{context}'),
+  (ChatMessageType.human, '{question}'),
+]);

-final chain = Runnable.fromMap({
-  'city': promptTemplate1 | model | stringOutputParser,
-  'language': Runnable.getItemFromMap('language'),
-}) |
-promptTemplate2 |
-model |
-stringOutputParser;
-
-final res = await chain.invoke({
-  'person': 'Rafael Nadal',
-  'language': 'Spanish',
+// 3. Create a Runnable that combines the retrieved documents into a single string
+final docCombiner = Runnable.fromFunction<List<Document>, String>((docs, _) {
+  return docs.map((final d) => d.pageContent).join('\n');
});
+
+// 4. Define the RAG pipeline
+final chain = Runnable.fromMap({
+  'context': vectorStore.asRetriever().pipe(docCombiner),
+  'question': Runnable.passthrough(),
+})
+    .pipe(promptTemplate)
+    .pipe(ChatOpenAI(apiKey: openaiApiKey))
+    .pipe(StringOutputParser());
+
+// 5. Run the pipeline
+final res = await chain.invoke('Who created LangChain.dart?');
print(res);
-// La ciudad de Manacor se encuentra en España.
+// David created LangChain.dart
```

-This is just a very simple example of a pipeline using
-[LangChain Expression Language (LCEL)](https://langchaindart.com/#/expression_language/expression_language).
-You can construct far more intricate pipelines by connecting various components,
-such as a Retrieval-Augmented Generation (RAG) pipeline that would accept a user
-query, retrieve relevant documents from a vector store, format them using
-templates, prompt the model, and parse the output in a specific manner using an
-output parser.
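Note that every `.pipe()` call can equivalently be written with the `|` operator, which some developers find more readable. As a sketch, the final composition of the RAG pipeline above could also be expressed like this (same components, only the composition syntax changes):

```dart
// The same RAG pipeline composed with the | operator instead of .pipe().
final chain = Runnable.fromMap({
      'context': vectorStore.asRetriever().pipe(docCombiner),
      'question': Runnable.passthrough(),
    }) |
    promptTemplate |
    ChatOpenAI(apiKey: openaiApiKey) |
    StringOutputParser();
```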
-
## Documentation

-- [LangChain conceptual guide](https://docs.langchain.com/docs)
- [LangChain.dart documentation](https://langchaindart.com)
- [Sample apps](https://github.com/davidmigloz/langchain_dart/tree/main/examples)
- [LangChain.dart blog](https://blog.langchaindart.com)
- [Project board](https://github.com/users/davidmigloz/projects/2/views/1)

-## Support
+## Community

-Having trouble? Get help in the official [LangChain.dart Discord](https://discord.gg/x4qbhqecVR).
+Stay up-to-date on the latest news and updates in the field, have great discussions, and get help in the official [LangChain.dart Discord](https://discord.gg/x4qbhqecVR).

## Contribute

|                                                                         |
|-------------------------------------------------------------------------|
| We are looking for collaborators to join the core group of maintainers. |

-New contributors welcome! Check out our
-[Contributors Guide](https://github.com/davidmigloz/langchain_dart/blob/main/CONTRIBUTING.md) for
-help getting started.
+New contributors welcome! Check out our [Contributors Guide](https://github.com/davidmigloz/langchain_dart/blob/main/CONTRIBUTING.md) for help getting started.

-Join us on [Discord](https://discord.gg/x4qbhqecVR) to meet other maintainers. We'll help you get
-your first contribution in no time!
+Join us on [Discord](https://discord.gg/x4qbhqecVR) to meet other maintainers. We'll help you get your first contribution in no time!

## Related projects