feat!: Support different logic for streaming in RunnableFunction #394

Merged 1 commit into main on Apr 30, 2024

Conversation

@davidmigloz (Owner) commented Apr 30, 2024

It is common to need to map the output value of a previous runnable to a new value that conforms to the input requirements of the next runnable. Runnable.mapInput, Runnable.mapInputStream, Runnable.getItemFromMap, and Runnable.getMapFromInput are the easiest ways to do that with minimal boilerplate. However, sometimes you may need more control over the input and output values. This is where Runnable.fromFunction comes in.
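For context, a minimal sketch of the simpler helper (assuming the langchain_dart Runnable API used elsewhere in this PR): Runnable.mapInput applies a plain function to each value, with the same logic for invoke and stream and no access to the invocation options.

```dart
// Sketch only: assumes the langchain_dart Runnable API shown in this PR.
// Runnable.mapInput maps each input value with a plain function; the same
// function runs for both invoke and stream, and it cannot see the options.
final upperCase = Runnable.mapInput<String, String>(
  (input) => input.toUpperCase(),
);

// Usage (hypothetical):
// final result = await upperCase.invoke('hello');
```

When you need per-mode behavior or the options object, Runnable.fromFunction (below) is the right tool.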

The main differences between Runnable.mapInput and Runnable.fromFunction are:

  • Runnable.fromFunction allows you to define separate logic for invoke vs stream.
  • Runnable.fromFunction allows you to access the invocation options.

In the following example, we use Runnable.fromFunction to log the output value of the previous Runnable. Note that we print different messages depending on whether the chain is invoked or streamed.

Runnable<T, RunnableOptions, T> logOutput<T extends Object>(String stepName) {
  return Runnable.fromFunction<T, T>(
    invoke: (input, options) {
      print('Output from step "$stepName":\n$input\n---');
      return Future.value(input);
    },
    stream: (inputStream, options) {
      return inputStream.map((input) {
        print('Chunk from step "$stepName":\n$input\n---');
        return input;
      });
    },
  );
}

final promptTemplate = ChatPromptTemplate.fromTemplates(const [
  (
    ChatMessageType.system,
    'Write out the following equation using algebraic symbols then solve it. '
        'Use the format:\nEQUATION:...\nSOLUTION:...\n',
  ),
  (ChatMessageType.human, '{equation_statement}'),
]);

final chain = Runnable.getMapFromInput<String>('equation_statement')
    .pipe(logOutput('getMapFromInput'))
    .pipe(promptTemplate)
    .pipe(logOutput('promptTemplate'))
    .pipe(ChatOpenAI(apiKey: openaiApiKey))
    .pipe(logOutput('chatModel'))
    .pipe(const StringOutputParser())
    .pipe(logOutput('outputParser'));

When we invoke the chain, we get the following output:

await chain.invoke('x raised to the third plus seven equals 12');
// Output from step "getMapFromInput":
// {equation_statement: x raised to the third plus seven equals 12}
// ---
// Output from step "promptTemplate":
// System: Write out the following equation using algebraic symbols then solve it. Use the format
//
// EQUATION:...
// SOLUTION:...
//
// Human: x raised to the third plus seven equals 12
// ---
// Output from step "chatModel":
// ChatResult{
//   id: chatcmpl-9JcVxKcryIhASLnpSRMXkOE1t1R9G,
//   output: AIChatMessage{
//     content:
//       EQUATION: \( x^3 + 7 = 12 \)
//       SOLUTION:
//       Subtract 7 from both sides of the equation:
//       \( x^3 = 5 \)
//
//       Take the cube root of both sides:
//       \( x = \sqrt[3]{5} \)
//
//       Therefore, the solution is \( x = \sqrt[3]{5} \),
//   },
//   finishReason: FinishReason.stop,
//   metadata: {
//     model: gpt-3.5-turbo-0125,
//     created: 1714463309,
//     system_fingerprint: fp_3b956da36b
//   },
//   usage: LanguageModelUsage{
//     promptTokens: 47,
//     responseTokens: 76,
//     totalTokens: 123
//   },
//   streaming: false
// }
// ---
// Output from step "outputParser":
// EQUATION: \( x^3 + 7 = 12 \)
//
// SOLUTION:
// Subtract 7 from both sides of the equation:
// \( x^3 = 5 \)
//
// Take the cube root of both sides:
// \( x = \sqrt[3]{5} \)
//
// Therefore, the solution is \( x = \sqrt[3]{5} \)

When we stream the chain, we get the following output:

chain.stream('x raised to the third plus seven equals 12').listen((_){});
// Chunk from step "getMapFromInput":
// {equation_statement: x raised to the third plus seven equals 12}
// ---
// Chunk from step "promptTemplate":
// System: Write out the following equation using algebraic symbols then solve it. Use the format:
// EQUATION:...
// SOLUTION:...
// 
// Human: x raised to the third plus seven equals 12
// ---
// Chunk from step "chatModel":
// ChatResult{
//   id: chatcmpl-9JcdKMy2yBlJhW2fxVu43Qn0gqofK, 
//   output: AIChatMessage{
//     content: E,
//   },
//   finishReason: FinishReason.unspecified,
//   metadata: {
//     model: gpt-3.5-turbo-0125, 
//     created: 1714463766, 
//     system_fingerprint: fp_3b956da36b
//   },
//   usage: LanguageModelUsage{},
//   streaming: true
// }
// ---
// Chunk from step "outputParser":
// E
// ---
// Chunk from step "chatModel":
// ChatResult{
//   id: chatcmpl-9JcdKMy2yBlJhW2fxVu43Qn0gqofK, 
//   output: AIChatMessage{
//     content: QU,
//   },
//   finishReason: FinishReason.unspecified,
//   metadata: {
//     model: gpt-3.5-turbo-0125, 
//     created: 1714463766, 
//     system_fingerprint: fp_3b956da36b
//   },
//   usage: LanguageModelUsage{},
//   streaming: true
// }
// ---
// Chunk from step "outputParser":
// QU
// ---
// Chunk from step "chatModel":
// ChatResult{
//   id: chatcmpl-9JcdKMy2yBlJhW2fxVu43Qn0gqofK, 
//   output: AIChatMessage{
//     content: ATION,
//   },
//   finishReason: FinishReason.unspecified,
//   metadata: {
//     model: gpt-3.5-turbo-0125, 
//     created: 1714463766, 
//     system_fingerprint: fp_3b956da36b
//   },
//   usage: LanguageModelUsage{},
//   streaming: true
// }
// ---
// Chunk from step "outputParser":
// ATION
// ---
// ...

Migration guide

In most cases, you can simply use Runnable.mapInput where you were previously using Runnable.fromFunction. E.g.:

Before:

final chain = Runnable.fromMap<String>({
  'context': retriever | Runnable.fromFunction((docs, options) => docs.join('\n')),
  'question': Runnable.passthrough(),
}) | promptTemplate | model | StringOutputParser();

After:

final chain = Runnable.fromMap<String>({
  'context': retriever | Runnable.mapInput((docs) => docs.join('\n')),
  'question': Runnable.passthrough(),
}) | promptTemplate | model | StringOutputParser();

If you use a tear-off instead of passing a lambda, you can now pass it directly. E.g.:

String combineDocuments(
  final List<Document> documents, {
  final String separator = '\n\n',
}) {
  return documents.map((final d) => d.pageContent).join(separator);
}

Before:

final context = Runnable.fromMap({
    'context': Runnable.getItemFromMap<String>('standalone_question') |
        retriever |
        Runnable.fromFunction<List<Document>, String>(
          (final docs, final _) => combineDocuments(docs),
        ),
    'question': Runnable.getItemFromMap('standalone_question'),
});

After:

final context = Runnable.fromMap({
    'context': Runnable.getItemFromMap<String>('standalone_question') |
        retriever |
        Runnable.mapInput<List<Document>, String>(combineDocuments),
    'question': Runnable.getItemFromMap('standalone_question'),
});

If you need to access the invocation options, or you need different logic for invoke vs stream, you can use Runnable.fromFunction as shown in the example above.

For more info, check the LangChain Expression Language docs.

@davidmigloz davidmigloz self-assigned this Apr 30, 2024
@davidmigloz davidmigloz added this to the v0.6.0 milestone Apr 30, 2024
@davidmigloz davidmigloz added the t:enhancement (New feature or request), c:lcel (LangChain Expression Language), and p:langchain_core (langchain_core package) labels Apr 30, 2024
@davidmigloz davidmigloz merged commit 8bb2b8e into main Apr 30, 2024
1 check passed
@davidmigloz davidmigloz deleted the runnable-function branch April 30, 2024 09:28