PC-1044 Streaming runs #422
Conversation
* stream post run query function
* add stream RunIO type + RunResult type
* track streaming indices and whether streaming is required
* handle new streaming chunk
* integrate streaming into playground
* remove stray console logs
* frontend build
@paulcjh I wonder if we should make the streaming endpoint in the container a separate endpoint, as we're doing with the backend API, since previously we've tried to keep
* update stream function to accumulate chunks until valid JSON
* integrate image streaming into playground (note: this may break for other run types)
* build frontend
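The "accumulate chunks until valid JSON" step above can be sketched roughly like this (a minimal Python sketch; the function and variable names are illustrative, not from the PR — the actual stream function lives in the frontend):

```python
import json

def accumulate_json_chunks(chunks):
    """Buffer text fragments from a stream, yielding a parsed object as
    soon as the accumulated buffer is valid JSON, then resetting the
    buffer for the next object.
    """
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        try:
            obj = json.loads(buffer)
        except json.JSONDecodeError:
            continue  # buffer is still an incomplete JSON document; keep accumulating
        buffer = ""
        yield obj
```

For example, `list(accumulate_json_chunks(['{"a"', ': 1}', '{"b":', ' 2}']))` yields the two objects `{"a": 1}` and `{"b": 2}`, even though each arrived split across chunks.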
* custom pipeline run output column with tooltip and dropdown streaming mode select
* integrate into playground
* frontend build
The only other thought I had (and maybe we can always do this later) was whether it makes sense to return the full run result schema every time, or just the raw outputs, since I have a feeling all of the serialising of JSON in and out of Pydantic objects may add a bit of overhead. We could always add a separate endpoint for that later, though. Maybe also an endpoint where the results are streamed directly from the resource, without going via catalyst and pcore.
* made streaming mode dropdown styling a bit better
* added frontend build js
* unnest streamPostRun
* rudimentary implementation of chat interface with streaming
* add responseTime + fix localStorage saving
* refactor streaming index logic into hook
* fix undefined in new hook
* make chat both stream and non-stream compatible
* build frontend
* fixes
* Making streaming response more robust
* Create helper function for handling stream responses
* Add async version of helper func
* Small tweaks
* improve error handling
* cleanup
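The "helper function for handling stream responses" plus its async counterpart could look something like this (a hedged sketch, assuming newline-delimited JSON events; the helper names are hypothetical, not taken from the PR):

```python
import json

def iter_stream_events(lines):
    """Sync helper: parse newline-delimited JSON events from a stream,
    skipping blank keep-alive lines."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

async def aiter_stream_events(lines):
    """Async version of the same helper, for async iterables of lines
    (e.g. an async HTTP response body)."""
    async for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)
```

Keeping the sync and async versions side by side with identical parsing logic is the usual way to serve both blocking callers and async request handlers without duplicating error handling.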
* port over markdown component from dashboard
* render markdown in chat messages
* build frontend
* Move Stream class into diff module
* Move stream run into its own route
* cleanup
* blacken
d4e2b29 to 131f593
* Small refactor
* Add test
* Add pytest-asyncio dependency
* Fix ordering of outputs
* formatting
* fix formatting in other file
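The "Add test" / "Fix ordering of outputs" commits, together with the pytest-asyncio dependency, suggest a test along these lines (a hypothetical sketch; `fake_stream` is an illustrative stand-in, not the PR's actual streaming run):

```python
import asyncio

import pytest

async def fake_stream():
    """Hypothetical stand-in for a streaming run: yields indexed chunks."""
    for i in range(3):
        await asyncio.sleep(0)  # yield control back to the event loop
        yield {"index": i, "value": f"chunk-{i}"}

@pytest.mark.asyncio
async def test_stream_preserves_output_order():
    # Collect everything the stream yields and check ordering is intact.
    received = [chunk async for chunk in fake_stream()]
    assert [c["index"] for c in received] == [0, 1, 2]
```

pytest-asyncio is what lets pytest collect and run `async def` tests like this one; without it, the coroutine would never be awaited.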
Nice to get this in! We've checked over this a fair few times but had a last look and LGTM
Pull request outline
Checklist:
Added:
Changed:
Removed:
Related issues:
none