Implement complete example of an ML backend #798
Conversation
@mohamedelabbas1996 I'll be opening this PR soon for review; it contains the changes I've made to make the ML backend framework easier to customize.
Copilot reviewed 20 out of 22 changed files in this pull request and generated 3 comments.
Files not reviewed (2)
- processing_services/example/requirements.txt: Language not supported
- processing_services/minimal/Dockerfile: Language not supported
Comments suppressed due to low confidence (1)
processing_services/example/api/test.py:30
- The test hardcodes expected classification labels which might lead to flaky tests if the classifier output is non-deterministic; consider mocking the classifier output for consistent results.
expected_labels = ["lynx, catamount", "beaver"]
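One way to address this review comment is to mock the classifier so the assertion no longer depends on live model output. The sketch below is illustrative only: `Classifier` and `get_labels` are hypothetical stand-ins, not the actual classes in `processing_services/example/api`.

```python
# Hypothetical sketch: make the test deterministic by patching the classifier
# instead of asserting on live (possibly non-deterministic) model output.
from unittest.mock import patch

class Classifier:
    """Stand-in for the real classifier; normally runs a model."""
    def predict(self, image_path):
        raise RuntimeError("would run the real, non-deterministic model")

def get_labels(image_path):
    # Stand-in for the pipeline code under test.
    return [p["label"] for p in Classifier().predict(image_path)]

def test_labels_with_mocked_classifier():
    fake = [
        {"label": "lynx, catamount", "score": 0.91},
        {"label": "beaver", "score": 0.88},
    ]
    # Patch at the class level so any instance returns the fixed response.
    with patch.object(Classifier, "predict", return_value=fake):
        assert get_labels("image.jpg") == ["lynx, catamount", "beaver"]
```

With the mock in place, the expected labels are pinned by the test itself rather than by whatever the model happens to emit.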
Copilot reviewed 20 out of 22 changed files in this pull request and generated no comments.
Files not reviewed (2)
- processing_services/example/requirements.txt: Language not supported
- processing_services/minimal/Dockerfile: Language not supported
Comments suppressed due to low confidence (3)
processing_services/minimal/api/utils.py:39
- Consider using a more specific exception type (e.g. ValueError) instead of a generic Exception for invalid input.
raise Exception("Specify a URL or path to fetch file from.")
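The suggested fix is a one-line change to the exception type. The sketch below uses `fetch_file` as a hypothetical stand-in for the helper in `processing_services/minimal/api/utils.py`:

```python
# Sketch of the suggested change: raise a specific exception type so callers
# can catch invalid arguments distinctly from unexpected failures.
def fetch_file(url=None, path=None):
    if url is None and path is None:
        # ValueError signals bad input, unlike a bare Exception.
        raise ValueError("Specify a URL or path to fetch file from.")
    return url or path
```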
processing_services/example/api/test.py:30
- [nitpick] Hardcoded expected classification labels might be brittle if model outputs change; consider deriving expected values dynamically or using a fixed mock response.
expected_labels = ["lynx, catamount", "beaver"]
processing_services/docker-compose.yml:23
- Verify that the port mapping change for the ml_backend_example service (mapping to port 2005) is intentional and does not conflict with other services.
- "2005:2000"
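For context on this comment, a docker-compose port mapping is `"host:container"`, so the change keeps the service listening on 2000 inside the container while exposing it on host port 2005. This is a hypothetical excerpt, not the repo's actual `docker-compose.yml`:

```yaml
# Illustrative fragment: host port 2005 avoids colliding with another
# service that may already bind host port 2000.
services:
  ml_backend_example:
    ports:
      - "2005:2000"   # host:container
```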
@mohamedelabbas1996 I was playing around with adding the Darsa flat-bug detector. My latest commit does some preliminary basic work: cloning the repo, setting up the docker environment, and adding the detector pipeline. The pipeline does run, but predicting on the test images in the database doesn't seem to produce bounding boxes (it might be a problem with the way I'm loading the image, since on Colab the image I'm using should produce at least 1 bounding box). In the pipeline I'm using the

These are the logs showing what the detector is predicting when running the model on CPU locally:

(Here's a Colab link showing what the expected output should be: https://colab.research.google.com/drive/1GNVH4y8hrG49-2kqVDy0oxrHGoXxHc_P?usp=sharing -- there should be 1 bounding box.)

Relates to #412
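One common cause of "runs on Colab but no detections locally" is the image arriving in a different mode or channel order than the model expects. A minimal, hedged sketch of normalizing the input before inference (PIL and NumPy only; the flat-bug detector call itself is omitted since its API isn't shown here, and `load_image_rgb` is an assumed helper name):

```python
# Sketch: force every input image to 3-channel RGB before inference, ruling
# out mode mismatches (RGBA, grayscale, palette) as the cause of empty output.
from io import BytesIO

import numpy as np
from PIL import Image

def load_image_rgb(source_bytes: bytes) -> np.ndarray:
    """Decode image bytes and coerce to RGB (drops alpha, expands grayscale)."""
    img = Image.open(BytesIO(source_bytes)).convert("RGB")
    return np.asarray(img)  # shape (H, W, 3), dtype uint8
```

Comparing the array shape, dtype, and value range against what the Colab notebook feeds the model would confirm or rule out the loading path as the culprit.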
…void shadowing the FlatBugDetector model
TODOs:
TL;DR: To close this PR, the goal is to have complete READMEs explaining how to make a new processing service (PS) and run it, and a practical example of a working PS (ideally a zero-shot detector AND classifier).
@mihow The PR has been updated to include the zero-shot object detector example. It works quite nicely! It identifies the specimens as butterflies/insects/moths. I've also updated the READMEs with all the details. In a follow-up PR we can address the following points:
mihow
left a comment
Epic work Vanessa! Merging!

Summary
Create a template/framework for running ML backends locally. Users can define detectors, classifiers, and pipelines wrapped in the ML backend FastAPI.
For a detailed description of how to use the framework, see the READMEs: `README.md` and `processing_services/README.md`.

`processing_services` contains 2 apps:

- `example`: demos how to add custom pipelines/algorithms.
- `minimal`: a simple ML backend for basic testing of the processing service API. This minimal app also runs within the main Antenna docker compose stack.

The ML backends can now be run as a separate docker compose stack (i.e. with one or both of the `example` and `minimal` processing service(s) running).

NOTE: The `ml_backend` service inside of the main Antenna docker compose stack is built from the `minimal` ML backend app.

Related Issues
#802
Screenshots
Deployment Notes
See `processing_services/example/README.md`