Document LLM streaming feature #132
Conversation
Document the new LLM streaming capability that allows token-by-token display of responses in real time.

Co-authored-by: openhands <openhands@all-hands.dev>
- Export LLMStreamChunk from streaming module for public API
- Fix test expectations for streaming error messages
- Skip Responses API streaming tests (out of scope for this PR)
- Pass metadata to LLM completion calls in agent utils
- Update error message clarity for streaming requirements

All tests now passing (1262 passed, 2 skipped). Pre-commit checks passing on all modified files.

Related documentation: OpenHands/docs#132

Co-authored-by: openhands <openhands@all-hands.dev>
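As a quick illustration of the first bullet, exporting the chunk type from the public streaming module means downstream code can import it directly; the module path shown here is an assumption:

```python
# Hypothetical import: this PR exports LLMStreamChunk for public use;
# the exact module path is an assumption.
from openhands.sdk.llm.streaming import LLMStreamChunk
```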
Removed sections on understanding stream chunks, use cases, advanced callbacks, and performance considerations to streamline the guide.
@OpenHands review diff from main and add the newly added doc to docs.json under LLM features
I'm on it! xingyaoww can track my progress at all-hands.dev |
Summary

I've successfully reviewed the diff from main and updated the documentation configuration as requested. Here's what was done:

Changes Made

1. Reviewed the PR Changes
2. Updated docs.json
3. Committed and Pushed

The change is minimal and focused: just one line added to register the new documentation page in the navigation config. PR #132 is now updated and ready for review.
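For illustration, the registered entry might look like the following in docs.json. The "LLM features" group name comes from the request above and the page path from this PR, but the surrounding navigation structure is an assumption:

```json
{
  "group": "LLM features",
  "pages": [
    "sdk/guides/llm-streaming"
  ]
}
```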
Add documentation for the new LLM streaming capability that allows real-time, token-by-token display of LLM responses.
Changes

- sdk/guides/llm-streaming.mdx - Complete guide for using LLM streaming

Related PR

This PR corresponds to OpenHands/software-agent-sdk#1270, which implements the streaming functionality.
Documentation
The new guide documents the example at examples/01_standalone_sdk/29_llm_streaming.py, which demonstrates real-time, token-by-token streaming of LLM responses.
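For orientation, here is a minimal sketch of the pattern such an example typically follows. Only the LLMStreamChunk type is confirmed by this PR; the LLM class, its model and stream_callback parameters, the chunk.delta attribute, and the import paths are assumptions for illustration:

```python
# Minimal sketch of token-by-token LLM streaming (hypothetical API surface).
# Only LLMStreamChunk is confirmed by this PR; every other name below
# (LLM, stream_callback, chunk.delta, the model string) is an assumption.
from openhands.sdk import LLM
from openhands.sdk.llm.streaming import LLMStreamChunk


def print_token(chunk: LLMStreamChunk) -> None:
    """Render each streamed token as soon as it arrives."""
    if chunk.delta:  # assumed attribute carrying the new text fragment
        print(chunk.delta, end="", flush=True)


# Attach the callback so responses display incrementally instead of
# arriving as one final message.
llm = LLM(model="anthropic/claude-sonnet-4", stream_callback=print_token)
```

The point the guide makes is that the callback fires per chunk, so users see output immediately rather than only after the full completion returns.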