Dockerfile #51
/assign
Can I work on this?
Update to CUDA 11.3 for new graphics card support.
Your latest requirements return this for me :/
Sorry, I made a mistake. Change …
Thanks, I'll try that. You also seem to have exposed ports and created a server.py in your latest revision. Could you explain a little about how you implemented this setup? Thanks!
I've just made a simple API for this project. The code is a mess. I was lazy and I'm not really a Python fan, so I just made it work.
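For a rough idea of what such a server.py wrapper might look like, here is a minimal sketch; the /swap route, port 5000, form field names, and the run_inference helper are illustrative assumptions, not the actual code:

```python
# Minimal sketch of a Flask wrapper around the inference code.
# Route, port, field names, and run_inference are assumptions.
import io

from flask import Flask, request, send_file

app = Flask(__name__)

def run_inference(source: bytes, target: bytes) -> bytes:
    # Placeholder: the real server would call the face-swap model here
    # (roughly what inference.py does) and return JPEG bytes.
    raise NotImplementedError

@app.route("/swap", methods=["POST"])
def swap():
    # Two uploaded images: the face to transplant and the target picture
    source = request.files["source"].read()
    target = request.files["target"].read()
    result = run_inference(source, target)
    return send_file(io.BytesIO(result), mimetype="image/jpeg")

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the port is reachable through Docker's port mapping
    app.run(host="0.0.0.0", port=5000)
```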
Thanks for sharing! I'm new to using Docker, so I'll see if I can get it running. This has been a great learning process for me!
I get this error now. It looks like a CUDA library is not where mxnet expects it on Ubuntu. I also tried switching back to devel from runtime (which I noticed you were using for the 18.04 version) and still got the same result.
Do you have CUDA installed on your PC? It must be the same version as in the Docker image (11.3) or newer. You also need to install the NVIDIA container runtime and start the container with `docker run --gpus device=...`; see the docs here: https://github.com/NVIDIA/nvidia-docker/wiki. I'm using docker-compose.
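Something like this minimal sketch (the service name, image tag, and port mapping are placeholders rather than the original file):

```yaml
version: "3.8"

services:
  ghost:                       # placeholder service name
    image: ghost               # placeholder image tag
    ports:
      - "5000:5000"            # assumed API port for server.py
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1         # or device_ids: ["0"] to pin a specific GPU
              capabilities: [gpu]
```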
Thanks for the info, I have 11.7 locally. I'll investigate nvidia-container-toolkit.
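For anyone following along, the usual Ubuntu setup steps from NVIDIA's install guide look roughly like this (the apt repository setup from the linked wiki is omitted, and older setups used the nvidia-docker2 package instead):

```bash
# Install the toolkit (after adding NVIDIA's apt repository per their docs)
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Sanity check: the container should see the host GPU
# (the CUDA image tag is illustrative and may have been rotated)
docker run --rm --gpus all nvidia/cuda:11.3.1-base-ubuntu20.04 nvidia-smi
```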
I'm gonna be honest: from everything I'd heard about Docker, I always imagined it would be the most practical way to get something like this up and running, but in this instance I think it's actually more straightforward for me to set up a venv and run from there, as I already have experience with that. Thanks for all your time and help!
:D You haven't picked the simplest Docker image for learning. In most cases what you say is true, and it's even true in my case: after you build the image, you can push it to a repository, and if another device has Docker, a CUDA-compatible GPU, and nvidia-container-toolkit, you can just pull the whole image and run it without any further configuration or installation. Simple, as you wrote. Docker is an excellent solution if you need to deploy this kind of service on multiple devices. You also get the same OS, pip packages, and so on, because it's all baked into the image, so no surprises: every image behaves the same.
Haha, true. I did have a go in the end with nvidia-container-toolkit but hit another niche snag that I think is probably just down to my ancient mobile GPU. It was a great learning experience in any case, and I think I'll be able to get other projects up and running very quickly with what I've learned.
I also made a Dockerfile, though I had to use mxnet and onnx on CPU: https://hub.docker.com/r/wawa9000/ghost (models for inference are included)
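A rough sketch of what a CPU-only variant along those lines might look like (the base image, package choices, and requirements file are illustrative, not taken from that Docker Hub image):

```dockerfile
# Sketch of a CPU-only image: plain Python base, no CUDA required
FROM python:3.8-slim

WORKDIR /app
COPY . .

# CPU builds of the inference backends, plus the project's dependencies
RUN pip install --no-cache-dir mxnet onnxruntime -r requirements.txt

# Arguments to inference.py can be appended at `docker run` time
ENTRYPOINT ["python3.8", "inference.py"]
```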
@Dutch77 Hi, can you share an example webpage or Postman settings to test the API?
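Against the hypothetical endpoint sketched earlier in the thread, a test call would look something like this (the route, port, and form fields are the same assumptions as in that sketch, not the actual API):

```bash
# Hypothetical test call matching the sketched /swap endpoint
curl -X POST http://localhost:5000/swap \
  -F "source=@source_face.jpg" \
  -F "target=@target_image.jpg" \
  -o result.jpg
```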
Can you share an example of how to run the Docker container?
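Going by the GPU notes earlier in the thread, a plausible invocation would be (the image tag and port are assumptions):

```bash
# "ghost" is whatever tag the image was built with (docker build -t ghost .);
# --gpus all requires nvidia-container-toolkit on the host
docker run --rm --gpus all -p 5000:5000 ghost
```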
Traceback (most recent call last): …
`docker run ghost` gives: Traceback (most recent call last): …
Can you tell me whether we need to install the exact requirements file to get it working, or use Docker?
Not an issue, but I think this could come in handy for someone :)
Dockerfile (see the sketch below) and the inference command to run inside the container:

```bash
python3.8 inference.py --target_path {PATH_TO_IMAGE} --image_to_image True
```
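A minimal sketch of such a Dockerfile, consistent with the details mentioned in the thread (CUDA 11.3 devel base, Python 3.8, the server.py API); the exact base image tag, requirements file, and port are assumptions:

```dockerfile
# Sketch only: CUDA 11.3 devel base for GPU builds of mxnet et al.
FROM nvidia/cuda:11.3.1-cudnn8-devel-ubuntu20.04

# Python 3.8 and pip
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3.8 python3-pip && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt

COPY . .

# Assumed API port for server.py
EXPOSE 5000

CMD ["python3.8", "server.py"]
```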