Ever wanted cool and unique filters for your video call? You found them!
This repository provides a virtual Linux webcam* that applies an artistic neural style transfer to your webcam video.
It can be used like any other webcam in all kinds of video conferencing tools, such as Zoom, Skype, Discord, or Teams.
Styles you have trained yourself with the code provided by artistic neural style transfer can be used, too.
Automatic GPU-dependent TensorRT optimization is applied to achieve high frame rates.
*Only tested with Ubuntu 18.04 and 20.04 so far.
(An installation tutorial without Docker is given at the end of the document.)
- Have a good NVIDIA graphics card with a driver of version 465.31 or newer installed. Some older driver versions work, but not all of them. With a GeForce 2080 Ti I could achieve 24 fps for the artistic style transfer at a resolution of 1280x720.
- Have Ubuntu 18.04 or 20.04 installed (it likely also works with other Linux distributions, but I have not tested that yet).
- Install Docker: `curl https://get.docker.com | sh && sudo systemctl --now enable docker`
- Install NVIDIA Docker.
- Install docker-compose: `sudo curl -L "https://github.com/docker/compose/releases/download/1.29.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose && sudo chmod +x /usr/local/bin/docker-compose`
- Add your current user to the docker group: `sudo groupadd docker && sudo usermod -aG docker $USER`
  Then log out and log back in.
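To check whether the docker group is already active in your current session, you can test the output of `id -nG`. A small sketch (the `group_active` helper is my own illustration, not part of this repository):

```shell
# Check whether a group name appears in a list of group names
# (e.g. the output of `id -nG` for the current session).
group_active() {
  group="$1"; shift
  case " $* " in
    *" $group "*) return 0 ;;
    *) return 1 ;;
  esac
}

# Example: group_active docker $(id -nG) && echo "docker group active"
```

If the group is not active yet, logging out and back in (or rebooting) refreshes it.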
- Download the style models. Extract the file and copy the folder `style_transfer_models` to `./data`.
- Set `VIDEO_INPUT` in `docker/docker-compose-nvidia.yml` to your webcam device (defaults to `/dev/video0`). Use `v4l2-ctl --list-devices` to find your device. Consider also adapting the other environment variables in this file.
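If `v4l2-ctl --list-devices` lists several cameras, you can pick the first device node of a camera by name. A sketch (the helper and the sample output are illustrative, not part of this repository):

```shell
# Print the first /dev/video* node listed under a device whose name
# matches the given pattern, from `v4l2-ctl --list-devices`-style output.
first_video_node() {
  awk -v pat="$1" '
    /^[^[:space:]]/ { matched = ($0 ~ pat) }   # header line with device name
    matched && /\/dev\/video/ { gsub(/[[:space:]]/, ""); print; exit }
  '
}

# Example with the kind of output `v4l2-ctl --list-devices` produces:
printf 'HD Webcam (usb-0000:00:14.0-1):\n\t/dev/video0\n\t/dev/video1\n' \
  | first_video_node "Webcam"   # → /dev/video0
```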
- Change to the docker directory with `cd *path to repository*/docker/` and run `docker-compose -f docker-compose-nvidia.yml build`.
  (If your Linux gets updated to a newer kernel at some later point, you have to run `docker-compose -f docker-compose-nvidia.yml build --no-cache` to get the script working again.)
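The kernel-update caveat can be automated by storing the kernel version at build time and comparing it before each start. A sketch (the helper and the version-file path are my own choice, not used by the repository):

```shell
# Return 0 (rebuild needed) when the stored kernel version differs from
# the running one; record the current version whenever they differ.
needs_rebuild() {
  version_file="$1"
  current="$(uname -r)"
  stored="$(cat "$version_file" 2>/dev/null)"
  if [ "$stored" != "$current" ]; then
    printf '%s\n' "$current" > "$version_file"
    return 0
  fi
  return 1
}

# Example:
# needs_rebuild ~/.cache/stylecam-kernel \
#   && docker-compose -f docker-compose-nvidia.yml build --no-cache
```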
- Change to the docker directory: `cd *path to repository*/docker/`.
- For artistic style transfer, run `docker-compose -f docker-compose-nvidia.yml run stylecam`. You might have to start it a second time if it does not find `/dev/video13`.
  Starting the program for the first time will take several minutes, since the networks are optimized for your GPU.
  If you encounter an out-of-memory error during this optimization, just restart. If you encounter an error concerning permissions for /dev/video12 or /dev/video13, run `sudo chmod 777 /dev/video1*`.
- The new webcam device is `/dev/video12`. Test it with `ffplay /dev/video12`.
- Stop the program with CTRL+C.
- If your real webcam input is now very slow, just restart the system. (I'm working on a better solution.)
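Since the virtual devices may take a moment to appear on the first start, a small wait loop can replace the "start it a second time" workaround. A sketch (the helper is my own, not part of this repository):

```shell
# Wait until a device node (or any path) appears, up to a timeout in seconds.
wait_for_device() {
  dev="$1"; timeout="${2:-10}"
  i=0
  while [ ! -e "$dev" ]; do
    i=$((i + 1))
    [ "$i" -gt "$timeout" ] && return 1
    sleep 1
  done
  return 0
}

# Example: wait_for_device /dev/video13 15 || echo "virtual cam not up yet"
```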
Enter 1+BACKSPACE to deactivate and activate styling.
The program can iterate over all styles provided in the artistic style transfer model dir (`-s`) and in corresponding subdirs.
Enter 2+BACKSPACE to load the previous style.
Enter 3+BACKSPACE to load the next style.
Some style models achieve better results if the styled image is smaller or larger. This does not change the video output size.
Enter 4+BACKSPACE to decrease the scale factor of the model input. This will increase the frame rate.
Enter 5+BACKSPACE to increase the scale factor of the model input. This will decrease the frame rate.
Enter 6+BACKSPACE to decrease the noise suppression factor. This might lead to annoying noise.
Enter 7+BACKSPACE to increase the noise suppression factor. This might lead to blurred faces.
Press CTRL+C to exit.
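Conceptually, the previous/next style keys just walk the sorted list of model files with wraparound. A shell sketch of that selection logic (the helper and the file names are illustrative, not the repository's actual code):

```shell
# Print the model file that follows `current` in sorted order,
# wrapping around to the first one at the end of the list.
next_style() {
  dir="$1"; current="$2"
  first=""; found=""
  for f in $(ls "$dir" | sort); do
    [ -z "$first" ] && first="$f"
    if [ -n "$found" ]; then printf '%s\n' "$f"; return 0; fi
    [ "$f" = "$current" ] && found=1
  done
  # current was the last entry (or not found): wrap around to the first
  printf '%s\n' "$first"
}

# Example: next_style ./data/style_transfer_models current_model_name
```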
Put additional artistic style transfer models into the directory provided with the `-s` flag (defaults to `./data/style_transfer_models`). You can train your own styles with the code provided by artistic neural style transfer.
This work builds upon:
hipersayanX's akvcam
fangfufu's Linux-Fake-Background-Webcam
Leon Gatys et al.'s
and Justin Johnson et al.'s artistic neural style transfer
The neural style transfer programming project team of the summer semester 2020.
Many thanks for your contributions.
To support this project, you can make a donation to its current maintainer:
- The akvcam has to be installed. Please follow their wiki to install it.
  In contrast to their documentation, on Ubuntu 18.04 the driver is located at `/lib/modules/$(uname -r)/updates/dkms/akvcam.ko`.
- Copy the akvcam configuration files: `sudo mkdir -p /etc/akvcam && sudo cp akvcam_config/* /etc/akvcam/`
  The akvcam output device is now located at `/dev/video3` (this is the one you have to provide to the fakecam script).
  The akvcam capture device is now located at `/dev/video2` (this is the one you have to choose in the software that displays your webcam video).
- Have a good graphics card with a driver of version 465.31 or newer installed. With a GeForce 2080 Ti we could achieve 24 fps for the artistic style transfer at a resolution of 1280x720.
- Install the CUDA libraries, version 11.0 or newer.
- Install the TensorRT Python wheels, version newer than 8.0.0.3.
- Install the Python packages given in requirements.txt.
- Download the style models. Extract the file and copy the folders to `./data`.
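To verify the version requirements above (CUDA 11.0 or newer, TensorRT newer than 8.0.0.3), a small version-comparison helper based on `sort -V` can be handy. A sketch (the helper is my own, not part of this repository):

```shell
# Return 0 when version $1 is greater than or equal to version $2.
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Example: check the installed TensorRT python wheel (import name `tensorrt`):
# trt_version=$(python3 -c 'import tensorrt; print(tensorrt.__version__)')
# version_ge "$trt_version" 8.0.0.4 || echo "TensorRT too old"
```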
- Make sure the video device kernel module is loaded: `sudo modprobe videodev`
- Load the akvcam driver: `sudo insmod /lib/modules/$(uname -r)/updates/dkms/akvcam.ko`
- Run the fakecam program: `python3 src/main.py -w /dev/video1 -v /dev/video3`
  `-w` is the path to the real webcam device (you might have to adapt this one).
  `-v` is the path to the virtual akvcam output device.
  Use `--help` to see further options.
- Stop the fakecam program with CTRL+C.
- Unload the akvcam driver: `sudo rmmod akvcam`
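The load/run/unload steps can be wrapped so the driver is always removed again, even if the program crashes. A sketch (the wrapper function and its defaults are my own, not part of this repository):

```shell
# Load akvcam, run the fakecam program, and always unload the driver on exit.
run_stylecam() {
  webcam="${1:-/dev/video1}"
  virtual="${2:-/dev/video3}"
  sudo insmod "/lib/modules/$(uname -r)/updates/dkms/akvcam.ko"
  trap 'sudo rmmod akvcam' EXIT
  python3 src/main.py -w "$webcam" -v "$virtual"
}

# Example: run_stylecam /dev/video0 /dev/video3
```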