Dockerize the "Remote GPU" service #224

Merged 11 commits on Jul 10, 2020.
Changes from 3 commits
28 changes: 28 additions & 0 deletions Dockerfile
@@ -0,0 +1,28 @@
FROM nvcr.io/nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04

RUN DEBIAN_FRONTEND=noninteractive apt-get -qq update \
&& DEBIAN_FRONTEND=noninteractive apt-get -qqy install curl python3-pip python3-tk ffmpeg git less nano libsm6 libxext6 libxrender-dev \
&& rm -rf /var/lib/apt/lists/*

ARG PYTORCH_WHEEL="https://download.pytorch.org/whl/cu100/torch-1.0.0-cp36-cp36m-linux_x86_64.whl"
ARG FACE_ALIGNMENT_GIT="git+https://github.com/1adrianb/face-alignment"
ARG AVATARIFY_COMMIT="01db88c8580b982278ae944b89b3bfab5d98c1dd"
ARG FOMM_COMMIT="efbe0a6f17b38360ff9a446fddfbb3ce5493534c"

RUN git clone https://github.com/alievk/avatarify.git /app/avatarify && cd /app/avatarify && git checkout ${AVATARIFY_COMMIT} \
&& git clone https://github.com/alievk/first-order-model.git /app/avatarify/fomm && cd /app/avatarify/fomm && git checkout ${FOMM_COMMIT}

WORKDIR /app/avatarify

RUN bash scripts/download_data.sh

RUN pip3 install ${PYTORCH_WHEEL} ${FACE_ALIGNMENT_GIT} -r requirements.txt \
&& pip3 install ${PYTORCH_WHEEL} ${FACE_ALIGNMENT_GIT} -r fomm/requirements.txt \
&& rm -rf /root/.cache/pip

ENV PYTHONPATH="/app/avatarify:/app/avatarify/fomm"

EXPOSE 5557
EXPOSE 5558

CMD ["python3", "afy/cam_fomm.py", "--config", "fomm/config/vox-adv-256.yaml", "--checkpoint", "vox-adv-cpk.pth.tar", "--virt-cam", "9", "--relative", "--adapt_scale", "--is-worker"]
23 changes: 21 additions & 2 deletions README.md
@@ -74,6 +74,8 @@ Download model's weights from [Dropbox](https://www.dropbox.com/s/t7h24l6wx9vret
#### Linux
Linux uses `v4l2loopback` to create a virtual camera.

### Native installation

<!--- 1. Install [CUDA](https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64). --->
1. Download [Miniconda Python 3.7](https://docs.conda.io/en/latest/miniconda.html#linux-installers) and install using command:
```bash
@@ -87,6 +89,18 @@ bash scripts/install.sh
```
3. [Download network weights](#download-network-weights) and place the `vox-adv-cpk.pth.tar` file in the `avatarify` directory (don't unpack it).

### Docker installation
1. Install Docker following the [documentation](https://docs.docker.com/engine/install/). Then follow this [post-install step](https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user) to make Docker available for your user.
2. To use the GPU (highly recommended), install the NVIDIA drivers and [nvidia-docker](https://github.com/NVIDIA/nvidia-docker#quickstart). A quick check that containers can see the GPU is shown after these steps.
3. Clone `avatarify`:
```bash
git clone https://github.com/alievk/avatarify.git
```
4. Build the Docker image:
```bash
cd avatarify
docker build -t avatarify .
```
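If the GPU setup from step 2 works, the container should see the card; a quick sanity check (a sketch assuming the `avatarify` tag from the build step and the PyTorch wheel installed in the image) is:

```bash
# Should print "True" when the NVIDIA drivers and nvidia-docker are configured correctly.
docker run --gpus all --rm avatarify python3 -c "import torch; print(torch.cuda.is_available())"
```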

#### Mac
*(!) Note*: we found out that in versions after [v4.6.8 (March 23, 2020)](https://zoom.us/client/4.6.19178.0323/ZoomInstaller.pkg) Zoom disabled support for virtual cameras on Mac. To use Avatarify in Zoom you can choose from 2 options:
- Install [Zoom v4.6.8](https://zoom.us/client/4.6.19178.0323/ZoomInstaller.pkg) which is the last version that supports virtual cameras
@@ -137,7 +151,7 @@ The steps 10-11 are required only once during setup.

#### Remote GPU

You can offload the heavy work to [Google Colab](https://colab.research.google.com/github/alievk/avatarify/blob/master/avatarify.ipynb) or a [server with a GPU](https://github.com/alievk/avatarify/wiki/Remote-GPU) and use your laptop just to communicate the video stream. Both the server and the client can be run either natively or in Docker.
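For example, with this PR you can run the dockerized worker on the GPU server and a native client on the laptop. This is only a sketch using the flags added to `run.sh`; pointing the client at the worker's address is done with the connection options described on the wiki page above.

```bash
# On the GPU server: start the dockerized worker (publishes ports 5557 and 5558).
bash run.sh --docker --is-worker --gpus

# On the laptop: start the client, which streams your webcam to the worker.
bash run.sh --is-client
```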

## Setup avatars
Avatarify comes with a standard set of avatars of famous people, but you can extend this set simply by copying your avatars into the `avatars` folder.
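For example, assuming a portrait photo of your choice:

```bash
cp path/to/my_avatar.jpg avatars/
```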
@@ -158,11 +172,16 @@ The run script will create virtual camera `/dev/video9`. You can change these se
<!--It is supposed that there is only one web cam connected to the computer at `/dev/video0`.-->
You can use the `v4l2-ctl --list-devices` command to list all video devices in your system.

With a native installation, run:
```bash
bash run.sh
```

With a Docker installation, run:
```bash
bash run.sh --docker
```
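The Docker path accepts the same role flags as the native one. For instance, a sketch of keeping everything on one machine, with a dockerized worker and a dockerized client on the host network (based on the `--gpus`, `--is-worker` and `--is-local-client` options in `run.sh`):

```bash
# Terminal 1: dockerized worker with GPU access
bash run.sh --docker --gpus --is-worker

# Terminal 2: dockerized client using host networking
bash run.sh --docker --is-local-client
```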

The `cam` and `avatarify` windows will pop up. The `cam` window is for controlling your face position and `avatarify` is for the avatar animation preview. Please follow these [recommendations](#driving-your-avatar) to drive your avatars.

#### Mac
104 changes: 84 additions & 20 deletions run.sh
@@ -5,11 +5,16 @@
ENABLE_CONDA=1
ENABLE_VCAM=1
KILL_PS=1
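# Docker-related options: --docker runs avatarify in a container, --is-worker / --is-client
# select the remote-GPU role, and --is-local-client runs the client container with host networking.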
USE_DOCKER=0
IS_WORKER=0
IS_CLIENT=0
DOCKER_IS_LOCAL_CLIENT=0

FOMM_CONFIG=fomm/config/vox-adv-256.yaml
FOMM_CKPT=vox-adv-cpk.pth.tar

ARGS=""
DOCKER_ARGS=""

while (( "$#" )); do
case "$1" in
@@ -26,6 +31,31 @@ while (( "$#" )); do
KILL_PS=0
shift
;;
--docker)
USE_DOCKER=1
shift
;;
--gpus)
DOCKER_ARGS="$DOCKER_ARGS --gpus all"
shift
;;
--is-worker)
IS_WORKER=1
ARGS="$ARGS $1"
DOCKER_ARGS="$DOCKER_ARGS -p 5557:5557 -p 5558:5558"
shift
;;
--is-client)
IS_CLIENT=1
ARGS="$ARGS $1"
shift
;;
--is-local-client)
IS_CLIENT=1
DOCKER_IS_LOCAL_CLIENT=1
ARGS="$ARGS --is-client"
shift
;;
*|-*|--*)
ARGS="$ARGS $1"
shift
@@ -35,28 +65,62 @@ done

eval set -- "$ARGS"

if [[ $USE_DOCKER == 0 ]]; then

if [[ $KILL_PS == 1 ]]; then
kill -9 $(ps aux | grep 'afy/cam_fomm.py' | awk '{print $2}') 2> /dev/null
fi

source scripts/settings.sh

if [[ $ENABLE_VCAM == 1 ]]; then
bash scripts/create_virtual_camera.sh
fi

if [[ $ENABLE_CONDA == 1 ]]; then
source $(conda info --base)/etc/profile.d/conda.sh
conda activate $CONDA_ENV_NAME
fi

export PYTHONPATH=$PYTHONPATH:$(pwd):$(pwd)/fomm

python afy/cam_fomm.py \
--config $FOMM_CONFIG \
--checkpoint $FOMM_CKPT \
--virt-cam $CAMID_VIRT \
--relative \
--adapt_scale \
$@
else
xhost +local:root
source scripts/settings.sh

if [[ $ENABLE_VCAM == 1 ]]; then
bash scripts/create_virtual_camera.sh
fi

if [[ $DOCKER_IS_LOCAL_CLIENT == 1 ]]; then
DOCKER_ARGS="$DOCKER_ARGS --network=host"
elif [[ $IS_CLIENT == 1 ]]; then
DOCKER_ARGS="$DOCKER_ARGS -p 5557:5554 -p 5557:5558"
fi


# Mount the repo so downloaded model weights are cached on the host, share the avatars folder,
# and pass the X11 display through for the preview windows.
docker run $DOCKER_ARGS -it --rm --privileged \
-v $PWD:/root/.torch/models \
-v $PWD/avatars:/app/avatarify/avatars \
--env="DISPLAY" \
--env="QT_X11_NO_MITSHM=1" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
avatarify python3 afy/cam_fomm.py \
--config $FOMM_CONFIG \
--checkpoint $FOMM_CKPT \
--virt-cam $CAMID_VIRT \
--relative \
--adapt_scale \
$@

xhost -local:root
fi