Great project, hoping to support chain's formatted output. #51
hi! that sounds like a good idea! do you maybe have a format in mind? Maybe the streaming output could be something like
and at the end, it can be:
let me know what you think @gaye746560359
Your idea is great and meets the requirements. Alternatively, you could follow the response format of chat.openai.com and use something similar, e.g. `{"token": ["hello", ...], "sources": [...]}`, or add custom response prefixes and suffixes, e.g. `{"prefix": "We have found the following answers for you:", "token": ["hello", ...], "sources": [...], "suffix": "This information is confidential."}`. Performance, response speed, and ease of reading the API response should also be considered.
For example, one response: `{"prefix": "We have found the following answers for you:", "token": ["hello"], "sources": [...], "suffix": "This information is confidential."}`
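As a sketch of how the suggested format could be streamed (not the project's actual implementation), the server could emit newline-delimited JSON, one key per chunk; `tokens` and `sources` here are hypothetical inputs standing in for the LLM token stream and the chain's source documents:

```python
import json


def stream_chain_response(tokens, sources,
                          prefix="We have found the following answers for you:",
                          suffix="This information is confidential."):
    # yield newline-delimited JSON chunks: the prefix first, then one
    # chunk per token, then the sources, and finally the suffix
    yield json.dumps({"prefix": prefix}) + "\n"
    for token in tokens:
        yield json.dumps({"token": token}) + "\n"
    yield json.dumps({"sources": sources}) + "\n"
    yield json.dumps({"suffix": suffix}) + "\n"


# a web client can parse each streamed line independently:
chunks = list(stream_chain_response(["hello", " world"], ["doc1.pdf"]))
```

Each chunk is valid JSON on its own, so the client never has to buffer the whole response before rendering tokens.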
@gaye746560359 hi! i've added support for JSON streaming in
let me know if it solves your use case! feel free to open new issues for any improvements or bug fixes. cheers!
Some chains may output the sources of their answers. It would be helpful if the streaming output included this content in JSON format, so it can be easily extracted and displayed on the web.
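On the consuming side, a sketch of the extraction the issue asks for: assuming a hypothetical newline-delimited JSON stream where each chunk carries either part of the answer (`"token"`) or its `"sources"`, a client could separate the two like this:

```python
import json


def parse_stream(lines):
    # hypothetical client-side helper: collect streamed JSON chunks
    # into the answer text and the list of source documents
    answer, sources = [], []
    for line in lines:
        chunk = json.loads(line)
        if "token" in chunk:
            answer.append(chunk["token"])
        if "sources" in chunk:
            sources.extend(chunk["sources"])
    return "".join(answer), sources


text, srcs = parse_stream(['{"token": "hi"}', '{"sources": ["a.md"]}'])
```

The web page can append `text` as tokens arrive and render `srcs` separately once the sources chunk lands.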