Initial methods for saving results from ML backend API #327
Merged
When a job is queued, its job.run() method runs as a background task with Celery. The task collects all images that need processing and sends batches of POST requests to the ML backend API configured on the selected Pipeline. Results returned from the ML backend API are interpreted and saved as Django model instances. The request and response payloads are defined and validated using Pydantic model schemas.
At this time, requests are expected to be processed and returned within a reasonable amount of time (60 seconds), so batches are kept very small. In the future, larger batches will need a different mechanism for checking status and retrieving results from each ML backend API.
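As a rough illustration of the validation step, here is a minimal sketch using hypothetical schema and field names (the real project defines its own Pydantic models for the ML backend payloads):

```python
from pydantic import BaseModel

# Hypothetical schemas illustrating the pattern; names are assumptions,
# not the project's actual models.
class DetectionRequest(BaseModel):
    pipeline: str
    source_image_ids: list[int]

class Detection(BaseModel):
    source_image_id: int
    label: str
    score: float

class DetectionResponse(BaseModel):
    detections: list[Detection]

# A response body from the ML backend is validated before any Django
# model instances are created from it. Invalid payloads raise a
# ValidationError instead of silently producing bad rows.
payload = {"detections": [{"source_image_id": 1, "label": "moth", "score": 0.93}]}
response = DetectionResponse(**payload)
```

Validating at the API boundary keeps malformed backend responses from ever reaching the database layer.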
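One possible shape for that future mechanism is a submit-then-poll loop against a status endpoint, rather than holding a single 60-second request open. This is only a sketch under assumed function names, not the project's API:

```python
import time

def poll_for_results(check_status, fetch_results, job_id,
                     interval=5.0, timeout=600.0):
    """Hypothetical polling loop for large async batches: submit the
    batch elsewhere, then poll a status endpoint until it reports done.

    check_status and fetch_results stand in for HTTP calls to the
    ML backend's (assumed) status and results endpoints.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check_status(job_id) == "done":
            return fetch_results(job_id)
        time.sleep(interval)
    raise TimeoutError(f"ML backend job {job_id} did not finish in {timeout}s")
```

This decouples batch size from request duration: the 60-second limit applies only to each lightweight status check, not to the whole batch.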
The location and structure of the Celery background task calls should also be reviewed. Does each Pipeline type get its own run() method? Should job.run() be synchronous, with the methods it calls being the asynchronous tasks? Then we could use Celery's group and chain primitives based on the specific processing that needs to happen. It is important that we save results periodically and update the user interface.
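The "synchronous orchestrator, save per batch" idea could look roughly like the following. Function names are hypothetical, and the per-batch call is shown inline where a real implementation might dispatch a Celery subtask (e.g. via group or chain):

```python
def run_job(image_ids, process_batch, save_results, batch_size=4):
    """Hypothetical synchronous orchestrator: process images in small
    batches, persisting after each batch so the UI can show progress.

    process_batch stands in for the per-batch ML backend call (which
    could become a Celery subtask); save_results stands in for writing
    Django model instances.
    """
    saved = 0
    for start in range(0, len(image_ids), batch_size):
        batch = image_ids[start:start + batch_size]
        results = process_batch(batch)  # would be an async subtask in Celery
        save_results(results)           # periodic save keeps the UI current
        saved += len(results)
    return saved
```

Saving inside the loop, rather than once at the end, is what lets partial results appear in the interface while a long job is still running.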