Changes in this fork:
- Added an option to change the background color for easier compositing into OBS and other capture tools.
- Added options to directly adjust the pose vector parameters.
Sample:
python main.py --character amelria --debug --bgcolor green --posefix_x -2 --posefix_y -0.15 --posefix_z 1.55
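The `--bgcolor` option presumably composites the character's alpha channel over a solid color so OBS can chroma-key it out. A minimal sketch of that alpha-over-color blend, one pixel at a time (the function name and the RGBA tuple layout are illustrative assumptions, not this repo's actual code):

```python
def composite_over_color(rgba_pixel, bg_rgb):
    """Alpha-blend one RGBA pixel (0-255 channels) over an opaque background color."""
    r, g, b, a = rgba_pixel
    alpha = a / 255.0
    # Standard "over" operator: foreground weighted by alpha, background by (1 - alpha).
    return tuple(round(fg * alpha + bg * (1.0 - alpha))
                 for fg, bg in zip((r, g, b), bg_rgb))

GREEN = (0, 255, 0)
# A fully transparent pixel becomes the background color (ready for chroma keying):
print(composite_over_color((255, 255, 255, 0), GREEN))  # -> (0, 255, 0)
# A fully opaque pixel keeps its own color:
print(composite_over_color((10, 20, 30, 255), GREEN))   # -> (10, 20, 30)
```

In practice this blend would run vectorized over the whole frame rather than per pixel; the sketch only shows the arithmetic behind the option.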
- Character face generation using facial landmarks and GANs
- Chat with your own webtoon and cartoon characters on Google Meet, Zoom, etc.!
- It works great no matter how many accessories you add!
- Unfortunately, it may not run in real time on GPUs below an RTX 2070.
- Python >= 3.8
- PyTorch >= 1.7
- pyvirtualcam
- mediapipe
- opencv-python
- ※ This project requires an OBS installation before use
- Please follow the installation order below!
- To use the OBS virtual camera, you must install OBS Studio first.
pip install -r requirements.txt
- The OBS virtual camera must be installed before pyvirtualcam (included in the requirements) can be used.
- These models are provided by the original talking-head-anime-2 project.
- Put the following files in the pretrained folder.
combiner.pt
eyebrow_decomposer.pt
eyebrow_morphing_combiner.pt
face_morpher.pt
two_algo_face_rotator.pt
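Before launching, it can help to verify that all five checkpoints actually landed in the pretrained folder. A small stdlib sketch (the helper itself is not part of this repo; only the file list comes from above):

```python
from pathlib import Path

REQUIRED_MODELS = [
    "combiner.pt",
    "eyebrow_decomposer.pt",
    "eyebrow_morphing_combiner.pt",
    "face_morpher.pt",
    "two_algo_face_rotator.pt",
]

def missing_models(pretrained_dir="pretrained"):
    """Return the required checkpoint files that are not present yet."""
    root = Path(pretrained_dir)
    return [name for name in REQUIRED_MODELS if not (root / name).is_file()]

if __name__ == "__main__":
    missing = missing_models()
    if missing:
        print("Missing checkpoints:", ", ".join(missing))
```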
Put the character image in the character folder
- The character image files must meet the following requirements:
- Must include an alpha channel (a .png file)
- Must contain only one humanoid character
- The character must be facing forward
- The character's head should fit within the center 128 x 128 pixel area (the image is resized to 256 x 256 by default, so the head must fit within the central 128 x 128 of that 256 x 256 frame)
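The first requirement (an alpha channel) can be checked without a full image library by reading the PNG IHDR color type: 6 is truecolor+alpha, 4 is grayscale+alpha. A stdlib sketch (this quick check is an illustration, not the repo's validation code, and it does not detect palette PNGs whose transparency lives in a tRNS chunk):

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_has_alpha(header_bytes):
    """Check the PNG IHDR color type for an alpha channel (4 = gray+alpha, 6 = RGBA)."""
    if not header_bytes.startswith(PNG_SIG) or header_bytes[12:16] != b"IHDR":
        return False
    # Byte 25 = IHDR color type: sig(8) + length(4) + "IHDR"(4) + width(4) + height(4) + bit depth(1).
    return header_bytes[25] in (4, 6)

# Hand-built header of a 256x256 RGBA image (no pixel data needed for this check):
rgba_header = PNG_SIG + struct.pack(">I", 13) + b"IHDR" + struct.pack(">IIBB", 256, 256, 8, 6)
print(png_has_alpha(rgba_header))  # -> True
```

In a real pipeline you would read the first 26 bytes of the character file and run this check before loading the image.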
5. python main.py --output_webcam
- If you want to see how the facial features are actually captured, add the --debug option and run it.
1. Find the character you want using a search engine.
2. Crop the image to a 1:1 aspect ratio so that the character's face is in the center.
   - Image cropping site (this is not an ad)
3. Remove the background and create an alpha channel.
   - Background removal site (this is not an ad)
4. Done! Put the image in the character folder and execute:
python main.py --output_webcam --character (filename only, without ".png")
│
├── character/ - character images
├── pretrained/ - save pretrained models
├── tha2/ - Talking Head Anime2 Library source files
├── facial_points.py - facial feature point constants
├── main.py - main script to execute
├── models.py - GAN model definitions
├── pose.py - converts facial landmarks to a pose vector
└── utils.py - utility functions for image pre/postprocessing
python main.py --output_webcam
python main.py --character (file name under the character folder, without the ".png" extension)
python main.py --debug
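The commands above all funnel through `main.py`'s argument parsing. A sketch of how the documented flags could be declared with `argparse` (the defaults and help strings here are guesses for illustration, not the repo's actual values):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="EasyVtuber launcher (illustrative sketch)")
    parser.add_argument("--character", default=None,
                        help="character file name under character/ without .png")
    parser.add_argument("--output_webcam", action="store_true",
                        help="send frames to the OBS virtual camera")
    parser.add_argument("--debug", action="store_true",
                        help="show how facial features are captured")
    parser.add_argument("--bgcolor", default=None,
                        help="solid background color (this fork's option)")
    return parser

args = build_parser().parse_args(["--character", "amelria", "--debug", "--bgcolor", "green"])
print(args.character, args.output_webcam, args.debug, args.bgcolor)  # -> amelria False True green
```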
python main.py --input video_file_path --output_dir frame_directory_to_save
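Frames extracted from a video need stable, sortable file names. A stdlib sketch of zero-padded naming such an output mode could use (the `frame_{:05d}.png` pattern is an assumption for illustration, not confirmed from this repo):

```python
from pathlib import Path

def frame_paths(output_dir, n_frames, pattern="frame_{:05d}.png"):
    """Build sortable, zero-padded output paths for extracted video frames."""
    out = Path(output_dir)
    return [out / pattern.format(i) for i in range(n_frames)]

paths = frame_paths("frames", 3)
print([p.name for p in paths])  # -> ['frame_00000.png', 'frame_00001.png', 'frame_00002.png']
```

Zero padding keeps frames in order under a plain lexicographic sort, which matters when reassembling them later.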
Please refer to the original branch.
- Thanks to the 스캐터랩 (Scatter Lab) 이루다 (Iruda) team and 똘순이 MK1 for permitting the use of their images.
- Thanks to 순수한 불순물 for permitting image use, to mentor 성민석 and campers 박성호 and 박범수 for helping make the README sample video late into the night, and to mentor 김보찬 for advice on the project's direction. Thank you all!
- EasyVtuber is based on TalkingHeadAnime2
- The tha2 folder and the pretrained model files come from the original author's repo; please check its license before using them.