Auto annotation GPU support #2546
Conversation
@jahaniam Thank you for the contribution, I'll test the PR today.
Co-authored-by: Andrey Zhavoronkov <andrey.zhavoronkov@intel.com>
Looking forward to more collaborations. I will gradually add GPU support for other models as well. Next in line is Mask R-CNN, or adding support for the Detectron2 framework (by Facebook).
@jahaniam, awesome contribution. I will look at the PR as well. Give us some time. We really want to add your changes before our next release.
serverless/tensorflow/faster_rcnn_inception_v2_coco_gpu/nuclio/function.yaml
@nmanovic I searched a lot but I couldn't find the exact place where the responses of the nuclio functions are processed. For example, for this Faster R-CNN function, we send an image to the nuclio function and get the labels and bounding boxes (rectangle type with points) as a response. But where is this information processed in CVAT? I would appreciate it if you could point me to the line of code that processes the response from nuclio (or sends the request to nuclio). This would help me a lot with bug fixing and further contributing to CVAT.
https://github.com/openvinotoolkit/cvat/tree/develop/cvat/apps/lambda_manager
@jahaniam, great contribution! Thank you very much for your time. 👍
Thank you for open-sourcing this awesome software and being so responsive.
Can you reopen the PR please?
Motivation and context
Being able to run a model on a GPU is one of the main advantages of using a deep-learning-based auto-annotation tool.
GPU support was added in nuclio version 1.5.0 by this PR (although their documentation has not been updated yet). By setting the GPU resource limit to 1 or any positive number, the flag `--gpus=all` is added during container creation. The documentation was also wrong: it told users to download the latest nuclio while the dashboard in docker-compose was running nuclio version 1.4.8, which results in a version mismatch and build issues depending on the versions.
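For reference, a minimal sketch of how the GPU resource limit might look in the function spec; the exact contents of the `function.yaml` added in this PR differ (image, build steps, triggers), and the fragment below only illustrates the `nvidia.com/gpu` limit that nuclio translates into `--gpus=all` on the docker platform:

```yaml
# Sketch only: illustrative fragment of a GPU-enabled nuclio function spec,
# not the exact function.yaml from this PR.
spec:
  handler: main:handler
  runtime: python:3.6
  resources:
    limits:
      # Any positive number makes nuclio (>= 1.5.0) start the container with --gpus=all.
      nvidia.com/gpu: 1
```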
Fixed a bug in the Faster R-CNN TensorFlow code when the image is PNG or grayscale (i.e. when the image has more or fewer than 3 channels). This bug might exist in some of the other models as well; I might open more PRs in the future.
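For illustration, a minimal sketch of the kind of fix involved (the helper name `to_rgb_array` and the decoding path are assumptions, not the PR's actual code): force the decoded image to 3 RGB channels before it reaches the model, so RGBA PNGs and grayscale inputs match the (H, W, 3) shape the detector expects.

```python
# Hypothetical sketch of the channel-normalization fix; not the PR's exact code.
import io

import numpy as np
from PIL import Image


def to_rgb_array(image_bytes: bytes) -> np.ndarray:
    """Decode an uploaded image and guarantee an (H, W, 3) uint8 array."""
    image = Image.open(io.BytesIO(image_bytes))
    if image.mode != "RGB":
        # Covers grayscale ("L"), palette ("P") and RGBA PNG inputs alike.
        image = image.convert("RGB")
    return np.asarray(image)
```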
This PR adds one sample model and opens the path for future models; as we go, users can add more models here.
We could still send images in batches as a future optimization (not implemented).
This will also cover the following issues, and some more:
- Nuclio Automatic/SemiAutomatic AI Tool Functions not running on GPU (#2489)
- Serverless component nuclio doesn't support nvidia-docker / GPU acceleration (#1997)
- improved documentation for component 'Semi-automatic and automatic an…' (#2273)
- Could not get models from the server (#2541)
How has this been tested?
By checking the docker logs, and by docker exec-ing into the container to make sure nvidia-smi works and the code runs on the GPU.
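For illustration, one way to confirm from inside the container that TensorFlow actually sees the GPU (a sketch; it assumes the function image ships TensorFlow 1.x with CUDA support, which is not stated explicitly in this PR description):

```python
# Quick GPU visibility check inside the running function container (TF 1.x API).
import tensorflow as tf

print(tf.test.is_gpu_available())   # True if a usable CUDA device is visible
print(tf.test.gpu_device_name())    # e.g. "/device:GPU:0", or "" if none found
```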
Checklist
- I submit my changes into the develop branch
- I have increased versions of npm packages if necessary (cvat-core, cvat-data and cvat-ui)
License
I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.