feat: Add Fallback support for Runnables (#501)
Co-authored-by: David Miguel <me@davidmiguel.com>
Ganeshsivakumar and davidmigloz committed Jul 24, 2024
1 parent 5fed8db commit 5887858
Showing 8 changed files with 589 additions and 11 deletions.
1 change: 1 addition & 0 deletions docs/_sidebar.md
@@ -14,6 +14,7 @@
- [Binding: Configuring runnables](/expression_language/primitives/binding.md)
- [Router: Routing inputs](/expression_language/primitives/router.md)
- [Streaming](/expression_language/streaming.md)
- [Fallbacks](/expression_language/fallbacks.md)
- Cookbook
- [Prompt + LLM](/expression_language/cookbook/prompt_llm_parser.md)
- [Multiple chains](/expression_language/cookbook/multiple_chains.md)
135 changes: 135 additions & 0 deletions docs/expression_language/fallbacks.md
@@ -0,0 +1,135 @@
# Fallbacks

When working with language models, you may often encounter issues with the underlying APIs, such as rate limits or downtime. As you move your LLM applications into production, it therefore becomes increasingly important to have contingencies for errors. That's why we've introduced the concept of fallbacks.

Crucially, fallbacks can be applied not only at the LLM level but at the level of the whole runnable. This is important because different models often require different prompts. So if your call to OpenAI fails, you don't just want to send the same prompt to Anthropic; you probably want to use a different prompt template, for example.
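
The sketch below illustrates this idea with two `ChatOpenAI` models; the prompt templates and model names here are illustrative, not part of this commit:

```dart
// Each chain pairs a model with the prompt written for it.
final primaryPrompt = ChatPromptTemplate.fromTemplate(
  'You are a concise assistant. {question}',
);
final fallbackPrompt = ChatPromptTemplate.fromTemplate(
  'Answer briefly and plainly: {question}',
);

final primaryChain = primaryPrompt.pipe(
  ChatOpenAI(defaultOptions: const ChatOpenAIOptions(model: 'gpt-4o')),
);
final fallbackChain = fallbackPrompt.pipe(
  ChatOpenAI(defaultOptions: const ChatOpenAIOptions(model: 'gpt-3.5-turbo')),
);

// The fallback wraps the whole prompt + model sequence, so when the
// primary chain fails, the fallback model receives its own prompt.
final chain = primaryChain.withFallbacks([fallbackChain]);
```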

## Handling LLM API errors with fallbacks

This is perhaps the most common use case for fallbacks. A request to an LLM API can fail for a variety of reasons: the API could be down, you could have hit a rate limit, or any number of other things. This situation can be handled using fallbacks.

Fallbacks can be created using the `withFallbacks()` method on the runnable you are working with. For example, `final runnableWithFallbacks = mainRunnable.withFallbacks([fallback1, fallback2])` creates a `RunnableWithFallback` with a list of fallbacks. When it is invoked, `mainRunnable` is called first; if it fails, the fallbacks are invoked sequentially until one of them returns output. If `mainRunnable` succeeds and returns output, the fallbacks are never called.
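
In sketch form (with `mainRunnable`, `fallback1`, `fallback2`, and `input` standing in for your own runnables and input):

```dart
final runnableWithFallbacks = mainRunnable.withFallbacks([fallback1, fallback2]);

// Invocation order: mainRunnable first; on error, fallback1; on error, fallback2.
final res = await runnableWithFallbacks.invoke(input);
```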

## Fallback for chat models

```dart
// The fake model will throw an error during invoke, so the fallback model will be called.
final fakeOpenAIModel = ChatOpenAI(
  defaultOptions: const ChatOpenAIOptions(model: 'tomato'),
);
final latestModel = ChatOpenAI(
  defaultOptions: const ChatOpenAIOptions(model: 'gpt-4o'),
);
final modelWithFallbacks = fakeOpenAIModel.withFallbacks([latestModel]);
final prompt = PromptValue.string('Explain why sky is blue in 2 lines');
final res = await modelWithFallbacks.invoke(prompt);
print(res);
/*
{
  "ChatResult": {
    "id": "chatcmpl-9nKBcFNkzo5qUrdNB92b36J0d1meA",
    "output": {
      "AIChatMessage": {
        "content": "The sky appears blue because molecules in the Earth's atmosphere scatter shorter wavelength blue light from the sun more effectively than longer wavelengths like red. This scattering process is known as Rayleigh scattering.",
        "toolCalls": []
      }
    },
    "finishReason": "FinishReason.stop",
    "metadata": {
      "model": "gpt-4o-2024-05-13",
      "created": 1721542696,
      "system_fingerprint": "fp_400f27fa1f"
    },
    "usage": {
      "LanguageModelUsage": {
        "promptTokens": 16,
        "promptBillableCharacters": null,
        "responseTokens": 36,
        "responseBillableCharacters": null,
        "totalTokens": 52
      }
    },
    "streaming": false
  }
}
*/
```

Note: if the options provided when invoking the runnable with fallbacks are not compatible with some of the fallbacks, they will be ignored. If you want to use different options for different fallbacks, provide them as `defaultOptions` when instantiating the fallbacks or use `bind()`.
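
For instance, a minimal sketch of giving a fallback its own fixed options via `bind()` (the model names and options here are illustrative):

```dart
final primary = ChatOpenAI(
  defaultOptions: const ChatOpenAIOptions(model: 'gpt-4o'),
);
// bind() fixes the options this fallback will use, independently of any
// options passed when invoking the runnable with fallbacks.
final fallback = ChatOpenAI().bind(
  const ChatOpenAIOptions(model: 'gpt-3.5-turbo', temperature: 0),
);
final modelWithFallbacks = primary.withFallbacks([fallback]);
```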

## Fallbacks for RunnableSequences with batch

```dart
final fakeOpenAIModel = ChatOpenAI(
  defaultOptions: const ChatOpenAIOptions(model: 'tomato'),
);
final latestModel = ChatOpenAI(
  defaultOptions: const ChatOpenAIOptions(model: 'gpt-4o'),
);
final promptTemplate =
    ChatPromptTemplate.fromTemplate('tell me a joke about {topic}');
final badChain = promptTemplate.pipe(fakeOpenAIModel);
final goodChain = promptTemplate.pipe(latestModel);
final chainWithFallbacks = badChain.withFallbacks([goodChain]);
final res = await chainWithFallbacks.batch(
  [
    {'topic': 'bears'},
    {'topic': 'cats'},
  ],
);
print(res);
/*
[
  {
    "id": "chatcmpl-9nKncT4IpAxbUxrEqEKGB0XUeyGRI",
    "output": {
      "content": "Sure! How about this one?\n\nWhy did the bear bring a suitcase to the forest?\n\nBecause it wanted to pack a lunch! 🐻🌲",
      "toolCalls": []
    },
    "finishReason": "FinishReason.stop",
    "metadata": {
      "model": "gpt-4o-2024-05-13",
      "created": 1721545052,
      "system_fingerprint": "fp_400f27fa1f"
    },
    "usage": {
      "promptTokens": 13,
      "promptBillableCharacters": null,
      "responseTokens": 31,
      "responseBillableCharacters": null,
      "totalTokens": 44
    },
    "streaming": false
  },
  {
    "id": "chatcmpl-9nKnc58FpXFTPkzZfm2hHxJ5VSQQh",
    "output": {
      "content": "Sure, here's a cat joke for you:\n\nWhy was the cat sitting on the computer?\n\nBecause it wanted to keep an eye on the mouse!",
      "toolCalls": []
    },
    "finishReason": "FinishReason.stop",
    "metadata": {
      "model": "gpt-4o-2024-05-13",
      "created": 1721545052,
      "system_fingerprint": "fp_c4e5b6fa31"
    },
    "usage": {
      "promptTokens": 13,
      "promptBillableCharacters": null,
      "responseTokens": 29,
      "responseBillableCharacters": null,
      "totalTokens": 42
    },
    "streaming": false
  }
]
*/
```
181 changes: 181 additions & 0 deletions examples/docs_examples/bin/expression_language/fallbacks.dart
@@ -0,0 +1,181 @@
// ignore_for_file: avoid_print
import 'dart:io';

import 'package:langchain/langchain.dart';
import 'package:langchain_openai/langchain_openai.dart';

void main() async {
  await _modelWithFallbacks();
  await _modelWithMultipleFallbacks();
  await _chainWithFallbacks();
}

Future<void> _modelWithFallbacks() async {
  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];

  final fakeOpenAIModel = ChatOpenAI(
    defaultOptions: const ChatOpenAIOptions(model: 'tomato'),
  );

  final latestModel = ChatOpenAI(
    apiKey: openaiApiKey,
    defaultOptions: const ChatOpenAIOptions(model: 'gpt-4o'),
  );

  final modelWithFallbacks = fakeOpenAIModel.withFallbacks([latestModel]);

  final prompt = PromptValue.string('Explain why sky is blue in 2 lines');

  final res = await modelWithFallbacks.invoke(prompt);
  print(res);
  /*
  {
    "ChatResult": {
      "id": "chatcmpl-9nKBcFNkzo5qUrdNB92b36J0d1meA",
      "output": {
        "AIChatMessage": {
          "content": "The sky appears blue because molecules in the Earth's atmosphere scatter shorter wavelength blue light from the sun more effectively than longer wavelengths like red. This scattering process is known as Rayleigh scattering.",
          "toolCalls": []
        }
      },
      "finishReason": "FinishReason.stop",
      "metadata": {
        "model": "gpt-4o-2024-05-13",
        "created": 1721542696,
        "system_fingerprint": "fp_400f27fa1f"
      },
      "usage": {
        "LanguageModelUsage": {
          "promptTokens": 16,
          "promptBillableCharacters": null,
          "responseTokens": 36,
          "responseBillableCharacters": null,
          "totalTokens": 52
        }
      },
      "streaming": false
    }
  }
  */
}

Future<void> _modelWithMultipleFallbacks() async {
  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];

  final fakeOpenAIModel1 =
      ChatOpenAI(defaultOptions: const ChatOpenAIOptions(model: 'tomato'));

  final fakeOpenAIModel2 =
      ChatOpenAI(defaultOptions: const ChatOpenAIOptions(model: 'potato'));

  final latestModel = ChatOpenAI(
    apiKey: openaiApiKey,
    defaultOptions: const ChatOpenAIOptions(model: 'gpt-4o'),
  );

  final modelWithFallbacks =
      fakeOpenAIModel1.withFallbacks([fakeOpenAIModel2, latestModel]);

  final prompt = PromptValue.string('Explain why sky is blue in 2 lines');

  final res = await modelWithFallbacks.invoke(prompt);
  print(res);
  /*
  {
    "id": "chatcmpl-9nLKW345nrh0nzmw18iO35XnoQ2jo",
    "output": {
      "content": "The sky appears blue due to Rayleigh scattering, where shorter blue wavelengths of sunlight are scattered more than other colors by the molecules in Earth's atmosphere. This scattering disperses blue light in all directions, making the sky look blue.",
      "toolCalls": []
    },
    "finishReason": "FinishReason.stop",
    "metadata": {
      "model": "gpt-4o-2024-05-13",
      "created": 1721547092,
      "system_fingerprint": "fp_c4e5b6fa31"
    },
    "usage": {
      "promptTokens": 16,
      "promptBillableCharacters": null,
      "responseTokens": 45,
      "responseBillableCharacters": null,
      "totalTokens": 61
    },
    "streaming": false
  }
  */
}

Future<void> _chainWithFallbacks() async {
  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];

  final fakeOpenAIModel = ChatOpenAI(
    defaultOptions: const ChatOpenAIOptions(model: 'tomato'),
  );

  final latestModel = ChatOpenAI(
    apiKey: openaiApiKey,
    defaultOptions: const ChatOpenAIOptions(model: 'gpt-4o'),
  );

  final promptTemplate =
      ChatPromptTemplate.fromTemplate('tell me a joke about {topic}');

  final badChain = promptTemplate.pipe(fakeOpenAIModel);
  final goodChain = promptTemplate.pipe(latestModel);

  final chainWithFallbacks = badChain.withFallbacks([goodChain]);

  final res = await chainWithFallbacks.batch(
    [
      {'topic': 'bears'},
      {'topic': 'cats'},
    ],
  );
  print(res);
  /*
  [
    {
      "id": "chatcmpl-9nKncT4IpAxbUxrEqEKGB0XUeyGRI",
      "output": {
        "content": "Sure! How about this one?\n\nWhy did the bear bring a suitcase to the forest?\n\nBecause it wanted to pack a lunch! 🐻🌲",
        "toolCalls": []
      },
      "finishReason": "FinishReason.stop",
      "metadata": {
        "model": "gpt-4o-2024-05-13",
        "created": 1721545052,
        "system_fingerprint": "fp_400f27fa1f"
      },
      "usage": {
        "promptTokens": 13,
        "promptBillableCharacters": null,
        "responseTokens": 31,
        "responseBillableCharacters": null,
        "totalTokens": 44
      },
      "streaming": false
    },
    {
      "id": "chatcmpl-9nKnc58FpXFTPkzZfm2hHxJ5VSQQh",
      "output": {
        "content": "Sure, here's a cat joke for you:\n\nWhy was the cat sitting on the computer?\n\nBecause it wanted to keep an eye on the mouse!",
        "toolCalls": []
      },
      "finishReason": "FinishReason.stop",
      "metadata": {
        "model": "gpt-4o-2024-05-13",
        "created": 1721545052,
        "system_fingerprint": "fp_c4e5b6fa31"
      },
      "usage": {
        "promptTokens": 13,
        "promptBillableCharacters": null,
        "responseTokens": 29,
        "responseBillableCharacters": null,
        "totalTokens": 42
      },
      "streaming": false
    }
  ]
  */
}