Allow caching generators and async generators #4927
Conversation
All the demos for this PR have been deployed at https://huggingface.co/spaces/gradio-pr-deploys/pr-4927-all-demos

You can install the changes in this PR by running:

pip install https://gradio-builds.s3.amazonaws.com/9137f1caa015ba361ab2dacdd9b0071645a692ef/gradio-3.36.1-py3-none-any.whl
Looks good to me @abidlabs - nice that we can finally support this

Thanks for the awesome fast review @freddyaboulton
Previously, we couldn't cache examples if the function was a generator or an async generator. This fixes that by iterating the generator to completion and using the last yielded value as the output. I also added a print statement that reports the status of caching, so if a generator takes a long time to finish iterating, users can notice that something went wrong and terminate (as suggested by @hysts).
Example:
This will be useful in gr.ChatInterface so that users can cache examples for streaming chatbots.

Closes: #3570
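The core idea described above — exhausting a sync or async generator and keeping only its final yield — can be sketched roughly as follows. This is a hypothetical helper for illustration, not Gradio's actual implementation; the function name `last_output` and its signature are assumptions.

```python
import asyncio
import inspect

def last_output(fn, *args):
    """Call fn and return its final result.

    If fn is a generator or async generator function, iterate it to
    completion and keep only the last yielded value (the approach this
    PR takes when caching examples); otherwise just call fn directly.
    """
    if inspect.isasyncgenfunction(fn):
        async def consume():
            last = None
            async for value in fn(*args):
                last = value
            return last
        # Drive the async generator to completion on a fresh event loop.
        return asyncio.run(consume())
    if inspect.isgeneratorfunction(fn):
        last = None
        for value in fn(*args):
            last = value
        return last
    return fn(*args)
```

For a streaming chatbot function that yields progressively longer responses, this returns only the fully accumulated final message, which is exactly what you would want stored in the example cache.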