Dockerize the "Remote GPU" service #224

Merged: 11 commits, Jul 10, 2020
28 changes: 28 additions & 0 deletions Dockerfile
@@ -0,0 +1,28 @@
FROM nvcr.io/nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04

RUN DEBIAN_FRONTEND=noninteractive apt-get -qq update \
&& DEBIAN_FRONTEND=noninteractive apt-get -qqy install curl python3-pip python3-tk ffmpeg git less nano libsm6 libxext6 libxrender-dev \
&& rm -rf /var/lib/apt/lists/*

ARG PYTORCH_WHEEL="https://download.pytorch.org/whl/cu100/torch-1.0.0-cp36-cp36m-linux_x86_64.whl"
ARG FACE_ALIGNMENT_GIT="git+https://github.com/1adrianb/face-alignment"
ARG AVATARIFY_COMMIT="01db88c8580b982278ae944b89b3bfab5d98c1dd"
ARG FOMM_COMMIT="efbe0a6f17b38360ff9a446fddfbb3ce5493534c"

RUN git clone https://github.com/alievk/avatarify.git /app/avatarify && cd /app/avatarify && git checkout ${AVATARIFY_COMMIT} \
&& git clone https://github.com/alievk/first-order-model.git /app/avatarify/fomm && cd /app/avatarify/fomm && git checkout ${FOMM_COMMIT}

WORKDIR /app/avatarify

RUN bash scripts/download_data.sh

RUN pip3 install ${PYTORCH_WHEEL} ${FACE_ALIGNMENT_GIT} -r requirements.txt \
&& pip3 install ${PYTORCH_WHEEL} ${FACE_ALIGNMENT_GIT} -r fomm/requirements.txt \
&& rm -rf /root/.cache/pip

ENV PYTHONPATH="/app/avatarify:/app/avatarify/fomm"

EXPOSE 5557
EXPOSE 5558

CMD ["python3", "afy/cam_fomm.py", "--config", "fomm/config/vox-adv-256.yaml", "--checkpoint", "vox-adv-cpk.pth.tar", "--virt-cam", "9", "--relative", "--adapt_scale", "--is-worker"]
20 changes: 19 additions & 1 deletion README.md
@@ -32,6 +32,7 @@ Created by: GitHub community.
- [Mac](#mac)
- [Windows](#windows)
- [Remote GPU](#remote-gpu)
- [Docker](#docker)
- [Setup avatars](#setup-avatars)
- [Run](#run)
- [Linux](#linux-1)
@@ -87,6 +88,7 @@ bash scripts/install.sh
```
3. [Download network weights](#download-network-weights) and place the `vox-adv-cpk.pth.tar` file in the `avatarify` directory (don't unpack it).


#### Mac
*(!) Note*: we found out that in versions after [v4.6.8 (March 23, 2020)](https://zoom.us/client/4.6.19178.0323/ZoomInstaller.pkg) Zoom disabled support for virtual cameras on Mac. To use Avatarify in Zoom you can choose from 2 options:
- Install [Zoom v4.6.8](https://zoom.us/client/4.6.19178.0323/ZoomInstaller.pkg) which is the last version that supports virtual cameras
@@ -137,8 +139,23 @@ The steps 10-11 are required only once during setup.

#### Remote GPU

You can offload the heavy work to [Google Colab](https://colab.research.google.com/github/alievk/avatarify/blob/master/avatarify.ipynb) or a [server with a GPU](https://github.com/alievk/avatarify/wiki/Remote-GPU) and use your laptop just to stream the video. Both the worker (the GPU side) and the client can run natively or inside Docker.
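
For example, with the flags added in this PR, a minimal remote setup looks roughly like this (a sketch; the exact connection and tunneling steps are described on the Remote GPU wiki page):

```bash
# On the GPU server: start the dockerized worker, which serves on ports 5557/5558.
bash run.sh --docker --is-worker

# On the laptop: start the client and point it at the server as described
# in the Remote GPU wiki (for example via an SSH tunnel).
bash run.sh --is-client
```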

#### Docker
Docker images are only available for Linux.

1. Install Docker following the [documentation](https://docs.docker.com/engine/install/). Then follow this [post-installation step](https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user) so Docker can be used by your non-root user.
2. For GPU support (highly recommended): install the NVIDIA drivers and [nvidia docker](https://github.com/NVIDIA/nvidia-docker#quickstart).
3. Clone `avatarify` and install its dependencies (v4l2loopback kernel module):
```bash
git clone https://github.com/alievk/avatarify.git
cd avatarify
bash scripts/install_docker.sh
```
4. Build the Docker image (`run.sh` expects it to be tagged `avatarify`; see the example after this list):
```bash
docker build -t avatarify .
```
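
If the build succeeds, the image shows up under the `avatarify` tag, which is the name `run.sh` expects; a quick check and a first run (a sketch):

```bash
docker image ls avatarify
# Start a dockerized worker through run.sh:
bash run.sh --docker --is-worker
```
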
## Setup avatars
Avatarify comes with a standard set of avatars of famous people, but you can extend this set by simply copying your avatars into the `avatars` folder.

@@ -162,6 +179,7 @@ Run:
```bash
bash run.sh
```
If you don't have a GPU, add the `--no-gpus` flag. To run with Docker, add the `--docker` flag.
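
For example (flag combinations taken from `run.sh` in this PR; a sketch, not an exhaustive list):

```bash
# Everything in Docker on the local machine
bash run.sh --docker

# Docker without a GPU
bash run.sh --docker --no-gpus

# Dockerized worker (e.g. on a GPU server) and a dockerized local client
bash run.sh --docker --is-worker
bash run.sh --docker --is-local-client
```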

The `cam` and `avatarify` windows will pop up. The `cam` window is for controlling your face position and `avatarify` is for the avatar animation preview. Please follow these [recommendations](#driving-your-avatar) to drive your avatars.

125 changes: 106 additions & 19 deletions run.sh
@@ -5,11 +5,17 @@
ENABLE_CONDA=1
ENABLE_VCAM=1
KILL_PS=1
USE_DOCKER=0
IS_WORKER=0
IS_CLIENT=0
DOCKER_IS_LOCAL_CLIENT=0
DOCKER_NO_GPU=0

FOMM_CONFIG=fomm/config/vox-adv-256.yaml
FOMM_CKPT=vox-adv-cpk.pth.tar

ARGS=""
DOCKER_ARGS=""

while (( "$#" )); do
case "$1" in
@@ -26,6 +32,31 @@ while (( "$#" )); do
KILL_PS=0
shift
;;
--docker)
USE_DOCKER=1
shift
;;
--no-gpus)
DOCKER_NO_GPU=1
shift
;;
--is-worker)
IS_WORKER=1
ARGS="$ARGS $1"
DOCKER_ARGS="$DOCKER_ARGS -p 5557:5557 -p 5558:5558"
shift
;;
--is-client)
IS_CLIENT=1
ARGS="$ARGS $1"
shift
;;
--is-local-client)
IS_CLIENT=1
DOCKER_IS_LOCAL_CLIENT=1
ARGS="$ARGS --is-client"
shift
;;
*|-*|--*)
ARGS="$ARGS $1"
shift
@@ -35,28 +66,84 @@ done

eval set -- "$ARGS"

if [[ $USE_DOCKER == 0 ]]; then

if [[ $KILL_PS == 1 ]]; then
kill -9 $(ps aux | grep 'afy/cam_fomm.py' | awk '{print $2}') 2> /dev/null
fi

source scripts/settings.sh

if [[ $ENABLE_VCAM == 1 ]]; then
bash scripts/create_virtual_camera.sh
fi

if [[ $ENABLE_CONDA == 1 ]]; then
source $(conda info --base)/etc/profile.d/conda.sh
conda activate $CONDA_ENV_NAME
fi

export PYTHONPATH=$PYTHONPATH:$(pwd):$(pwd)/fomm

python afy/cam_fomm.py \
--config $FOMM_CONFIG \
--checkpoint $FOMM_CKPT \
--virt-cam $CAMID_VIRT \
--relative \
--adapt_scale \
$@
else

source scripts/settings.sh

if [[ $ENABLE_VCAM == 1 ]]; then
bash scripts/create_virtual_camera.sh
fi

if [[ $DOCKER_NO_GPU == 0 ]]; then
DOCKER_ARGS="$DOCKER_ARGS --gpus all"
fi

if [[ $DOCKER_IS_LOCAL_CLIENT == 1 ]]; then
DOCKER_ARGS="$DOCKER_ARGS --network=host"
elif [[ $IS_CLIENT == 1 ]]; then
DOCKER_ARGS="$DOCKER_ARGS -p 5557:5554 -p 5557:5558"
fi

if [[ $IS_WORKER == 0 ]]; then
xhost +local:root
docker run $DOCKER_ARGS -it --rm --privileged \
-v $PWD:/root/.torch/models \
-v $PWD/avatars:/app/avatarify/avatars \
--env="DISPLAY" \
--env="QT_X11_NO_MITSHM=1" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
avatarify python3 afy/cam_fomm.py \
--config $FOMM_CONFIG \
--checkpoint $FOMM_CKPT \
--virt-cam $CAMID_VIRT \
--relative \
--adapt_scale \
$@
xhost -local:root

else
docker run $DOCKER_ARGS -it --rm --privileged \
-v $PWD:/root/.torch/models \
-v $PWD/avatars:/app/avatarify/avatars \
avatarify python3 afy/cam_fomm.py \
--config $FOMM_CONFIG \
--checkpoint $FOMM_CKPT \
--virt-cam $CAMID_VIRT \
--relative \
--adapt_scale \
$@
fi


fi
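
To make the Docker branch above easier to follow: for `--docker --is-local-client` on a machine with a GPU, the script composes roughly the following command (a reconstruction for illustration; `$CAMID_VIRT` comes from `scripts/settings.sh`, and the X11 options let the preview windows from inside the container appear on the host display):

```bash
xhost +local:root
docker run --gpus all --network=host -it --rm --privileged \
  -v $PWD:/root/.torch/models \
  -v $PWD/avatars:/app/avatarify/avatars \
  --env="DISPLAY" --env="QT_X11_NO_MITSHM=1" \
  --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
  avatarify python3 afy/cam_fomm.py \
    --config fomm/config/vox-adv-256.yaml \
    --checkpoint vox-adv-cpk.pth.tar \
    --virt-cam $CAMID_VIRT \
    --relative --adapt_scale --is-client
xhost -local:root
```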
9 changes: 9 additions & 0 deletions scripts/install_docker.sh
@@ -0,0 +1,9 @@
if [[ ! $@ =~ "no-vcam" ]]; then
rm -rf v4l2loopback 2> /dev/null
git clone https://github.com/umlaeute/v4l2loopback
echo "--- Installing v4l2loopback (sudo privelege required)"
cd v4l2loopback
make && sudo make install
sudo depmod -a
cd ..
fi
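
After `make install`, one way to check that the module loads is to probe it manually (a sketch; the virtual camera itself is normally created by `scripts/create_virtual_camera.sh`, which `run.sh` calls):

```bash
# exclusive_caps=1 is a standard v4l2loopback module option; the exact options
# Avatarify uses are defined in scripts/create_virtual_camera.sh.
sudo modprobe v4l2loopback exclusive_caps=1
ls /dev/video*
```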