
Dockerfile #51

Open
Dutch77 opened this issue Nov 4, 2022 · 19 comments

Comments

@Dutch77

Dutch77 commented Nov 4, 2022

Not an issue, but I think this could come in handy for someone :)

Dockerfile

FROM nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04

RUN apt-get update && apt-get install -y software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa -y
RUN apt-get update && apt-get install -y \
    wget \
    python3.8 \
    python3.8-distutils \
    ffmpeg \
    libsm6 \
    libxext6

RUN wget https://bootstrap.pypa.io/get-pip.py

RUN python3.8 get-pip.py

COPY requirements.txt requirements.txt

RUN pip install -r requirements.txt

python3.8 inference.py --target_path {PATH_TO_IMAGE} --image_to_image True
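A possible way to build and run this image for one-off inference (the tag and mount path here are illustrative; note the Dockerfile above only copies requirements.txt, so the project source must also be copied in or mounted):

```shell
# Build the image from the project root (tag name is an assumption)
docker build -t ghost:cuda10.1 .

# Run inference, mounting the project source into the container
# (paths are assumptions, not from the original comment)
docker run --rm --gpus all \
    -v "$(pwd):/workspace" -w /workspace \
    ghost:cuda10.1 \
    python3.8 inference.py --target_path examples/target.jpg --image_to_image True
```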

@jabhishek87

/assign

@jabhishek87

Can I work on this?

@Dutch77
Author

Dutch77 commented Jan 30, 2023

Update to CUDA 11.3 for newer graphics card support:

FROM nvidia/cuda:11.3.0-cudnn8-runtime-ubuntu20.04

ENV TZ=Europe/Prague
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

WORKDIR /workspace/
RUN apt-get update && apt-get install -y software-properties-common && \
add-apt-repository ppa:deadsnakes/ppa -y && \
 apt-get update && apt-get install -y \
    wget \
    vim \
    python3.8 \
    python3.8-distutils \
    ffmpeg \
    libsm6 \
    libxext6 && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

RUN wget https://bootstrap.pypa.io/get-pip.py

RUN python3.8 get-pip.py

COPY ./ /workspace/

RUN pip install -r requirements.txt

EXPOSE 80
ENV PORT 80

ENTRYPOINT []
CMD ["python3.8", "/workspace/server.py"]

requirements.txt

numpy==1.21.5
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.12.1+cu113
-f https://download.pytorch.org/whl/torch_stable.html
torchvision==0.13.1+cu113
opencv-python
onnx==1.9.0
onnxruntime-gpu==1.9.0
mxnet-cu113
scikit-image
insightface==0.2.1
requests==2.25.1
kornia==0.5.4
dill
wandb
protobuf==3.20
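To build and start the API image above, one possible sequence (a sketch assuming nvidia-container-toolkit is installed on the host; the tag and host port are illustrative):

```shell
# Build from the project root (tag name is an assumption)
docker build -t ghost:cuda11.3 .

# The server listens on port 80 inside the container (see EXPOSE / ENV PORT)
docker run --rm --gpus all -p 8080:80 ghost:cuda11.3
```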

@dyargici

dyargici commented Feb 4, 2023

Your latest requirements return this for me :/

  Downloading onnxruntime_gpu-1.9.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (95.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 95.6/95.6 MB 2.4 MB/s eta 0:00:00
ERROR: Could not find a version that satisfies the requirement mxnet-cu113mkl (from versions: none)
ERROR: No matching distribution found for mxnet-cu113mkl
The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1

@Dutch77
Author

Dutch77 commented Feb 4, 2023

Sorry, I made a mistake. Change mxnet-cu113mkl to mxnet-cu113; it should work. @dyargici

@dyargici

dyargici commented Feb 4, 2023

> Sorry, I made a mistake. Change mxnet-cu113mkl to mxnet-cu113; it should work. @dyargici

Thanks, I'll try that. You also seem to have exposed ports and created a server.py in your latest revision, could you explain a little about how you implemented this setup? Thanks!

@Dutch77
Author

Dutch77 commented Feb 4, 2023

I've just made a simple API for this project. Code is a mess. I was lazy and I'm not really a python fan, so I've just made it work.
ghost.zip

@dyargici

dyargici commented Feb 4, 2023

Thanks for sharing! I'm new to using docker so I'll see if I can get it running. This has been a great learning process for me!

@dyargici

dyargici commented Feb 8, 2023

I get this error now. It looks like a CUDA library is not where mxnet expects it on Ubuntu. I also tried switching back to devel from runtime (which I noticed you were using for the 18.04 version) and still got the same result.

  File "/workspace/inference.py", line 13, in <module>
    from coordinate_reg.image_infer import Handler
  File "/workspace/coordinate_reg/image_infer.py", line 4, in <module>
    import mxnet as mx
  File "/usr/local/lib/python3.8/dist-packages/mxnet/__init__.py", line 23, in <module>
    from .context import Context, current_context, cpu, gpu, cpu_pinned
  File "/usr/local/lib/python3.8/dist-packages/mxnet/context.py", line 23, in <module>
    from .base import classproperty, with_metaclass, _MXClassPropertyMetaClass
  File "/usr/local/lib/python3.8/dist-packages/mxnet/base.py", line 356, in <module>
    _LIB = _load_lib()
  File "/usr/local/lib/python3.8/dist-packages/mxnet/base.py", line 347, in _load_lib
    lib = ctypes.CDLL(lib_path[0], ctypes.RTLD_LOCAL)
  File "/usr/lib/python3.8/ctypes/__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libcuda.so.1: cannot open shared object file: No such file or directory

@Dutch77
Author

Dutch77 commented Feb 8, 2023

Do you have CUDA installed on your PC? The driver must support the same or a newer CUDA version than the image uses (11.3). You also need to install nvidia-container-toolkit and pass the GPUs through to the container:

docker run --gpus device= .... see docs here https://github.com/NVIDIA/nvidia-docker/wiki

I'm using docker-compose, something like this:

version: '2'
services:
  ghost-face-swap:
    build: .
    environment:
      PORT: '8080'
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
              count: all
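With the compose file above saved as docker-compose.yml, the service can be built and started like this (a sketch; the exec check assumes nvidia-smi is available inside the image):

```shell
docker compose build
docker compose up -d

# Verify the GPU is visible inside the running container
# (service name taken from the compose file above)
docker compose exec ghost-face-swap nvidia-smi
```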

@dyargici

dyargici commented Feb 8, 2023

Thanks for the info, I have 11.7 locally. I'll investigate nvidia-container-toolkit

@dyargici

dyargici commented Feb 8, 2023

I'm gonna be honest: from everything I'd heard about Docker, I always imagined it would be the most practical way to get something like this up and running, but in this instance it's actually more straightforward for me to set up a venv and run from there, since I already have experience with that.

Thanks for all your time and help!

@Dutch77
Author

Dutch77 commented Feb 8, 2023

:D You haven't picked the simplest Docker image for learning. In most cases what you say is true, and it's even true in my case: once you build the image, you can push it to a repository, and any other device with Docker, a CUDA-compatible GPU, and nvidia-container-toolkit can just pull the whole image and run it without any further configuration or installation. Simple, as you wrote. Docker is an excellent solution if you need to deploy this kind of service on multiple devices. You also get the same OS, pip packages, and so on baked into the image, so there are no surprises; every image behaves the same.

@dyargici

dyargici commented Feb 9, 2023

Haha, true. I did have a go in the end with nvidia-container-toolkit but hit another niche snag that I think is probably just due to my having an ancient mobile GPU. Was actually a great learning experience for me in any case and I think I'll be able to get other projects up and running very quickly with the things I've learned.

@thegenerativegeneration

I also made a Dockerfile, though I had to use mxnet and onnx on CPU:

https://hub.docker.com/r/wawa9000/ghost (models for inference are included)

@ASparkOfFire

@Dutch77 Hi, can you share an example webpage or Postman settings to test the API?

@lDark-Moonl

Can you share an example of how to run the Docker container?

@robbsaber

docker run ghost

Traceback (most recent call last):
  File "/workspace/server.py", line 1, in <module>
    from flask import Flask, request, make_response, send_from_directory
ModuleNotFoundError: No module named 'flask'
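Since flask is imported by server.py but does not appear in the requirements.txt posted earlier, one possible fix is to add it and rebuild (leaving the version unpinned here is an assumption, as is the tag name):

```shell
# Add the missing dependency, then rebuild and rerun the image
echo "flask" >> requirements.txt
docker build -t ghost .
docker run --gpus all -p 8080:80 ghost
```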

@parth7326

> I've just made a simple API for this project. Code is a mess. I was lazy and I'm not really a python fan, so I've just made it work.
> ghost.zip

Can you tell me whether we need to install the exact requirements file to get it working, or use Docker?

8 participants