Replies: 2 comments
-
Hey @qiankunli! I'm here to help you with any bugs, questions, or contributions you have in mind. Let's tackle this together!

Yes, the LlamaIndex Workflow supports streaming the LLM's output using the event streaming API: a step writes events to the context's stream with `ctx.write_event_to_stream()`, and the caller reads them with `handler.stream_events()` while the workflow is still running. In your example, the step that calls the LLM can forward each streamed chunk to the front end as an event.
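Here is a minimal sketch of the producing side (the `TokenEvent` class, model name, and field names are illustrative assumptions, not fixed API):

```python
from llama_index.core.workflow import (
    Context,
    Event,
    StartEvent,
    StopEvent,
    Workflow,
    step,
)
from llama_index.llms.openai import OpenAI


class TokenEvent(Event):
    """Hypothetical event carrying one streamed token delta."""
    delta: str


class StreamingWorkflow(Workflow):
    @step
    async def generate(self, ctx: Context, ev: StartEvent) -> StopEvent:
        llm = OpenAI(model="gpt-4o-mini")
        # astream_complete yields partial responses as the LLM produces them
        gen = await llm.astream_complete(ev.query)
        full = ""
        async for chunk in gen:
            full += chunk.delta or ""
            # push each token onto the workflow's event stream
            ctx.write_event_to_stream(TokenEvent(delta=chunk.delta or ""))
        return StopEvent(result=full)
```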
-
@qiankunli use the event streaming API. For example:
https://docs.llamaindex.ai/en/stable/module_guides/workflow/#streaming-events
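A minimal sketch of the consuming side, reusing the `StreamingWorkflow` and `TokenEvent` from the reply above (those names are illustrative, not part of the library):

```python
import asyncio


async def main() -> None:
    wf = StreamingWorkflow(timeout=60)
    # run() returns a handler immediately; the workflow executes in the background
    handler = wf.run(query="What is LlamaIndex?")
    # stream_events() yields events as steps write them, before the run finishes
    async for ev in handler.stream_events():
        if isinstance(ev, TokenEvent):
            print(ev.delta, end="", flush=True)
    # awaiting the handler returns the workflow's final StopEvent result
    result = await handler
    print("\n---", result)


asyncio.run(main())
```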
-
In many cases, a Workflow step is an LLM call. For such steps, does Workflow provide any support for streaming the LLM's output through methods like `async_response_gen`? That way, we could stream the responses of a multi-step/multi-turn LLM inference task to the front end, improving the user experience.

If `async_response_gen` supports this, can you explain how it works, and how the LLM response is injected into the workflow through an LLM call?
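For context, this is the kind of streaming I mean outside a Workflow (a minimal sketch; the chat-engine setup and the "data" directory are illustrative):

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

index = VectorStoreIndex.from_documents(SimpleDirectoryReader("data").load_data())
chat_engine = index.as_chat_engine()


async def ask(question: str) -> None:
    # astream_chat returns a streaming chat response object
    response = await chat_engine.astream_chat(question)
    # async_response_gen yields text deltas as the LLM produces them
    async for token in response.async_response_gen():
        print(token, end="", flush=True)
```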