
Use a virtual avatar in any conference app that uses a camera.

itemx/EasyVtuber-IX

EasyVtuber

Changes in this fork:

  1. Added an option to change the background color, for easier mixing into OBS and other capture tools.
  2. Added options to directly offset the pose vector parameters.

Sample:

python main.py --character amelria --debug --bgcolor green --posefix_x -2 --posefix_y -0.15 --posefix_z 1.55
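The --posefix_* options above add fixed offsets to the head pose before inference. A minimal sketch of the idea, assuming a pose vector whose first three components are the x/y/z rotation (the function name and vector layout are illustrative, not this repo's actual code):

```python
import numpy as np

# Illustrative sketch only: apply constant offsets (like --posefix_x/_y/_z)
# to the rotation components of a pose vector before it is fed to the model.
def apply_posefix(pose, fix_x=0.0, fix_y=0.0, fix_z=0.0):
    """Return a copy of `pose` with (x, y, z) rotation components offset."""
    pose = np.asarray(pose, dtype=np.float64).copy()
    pose[:3] += np.array([fix_x, fix_y, fix_z])
    return pose

pose = np.zeros(6)                        # toy pose vector: rotation + extras
fixed = apply_posefix(pose, -2.0, -0.15, 1.55)
print(fixed[:3])                          # first three components now offset
```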

Original readme

  • Character Face Generation using Facial landmarks and GANs
  • Chat with your own webtoons and cartoon characters on Google Meets, Zoom, etc!
  • It works great no matter how many accessories you add!
  • Unfortunately, it may not run in real time on GPUs below an RTX 2070.



Demo



Requirements

  • Python >= 3.8
  • Pytorch >= 1.7
  • pyvirtualcam
  • mediapipe
  • opencv-python



Quick Start

  • ※ This project requires OBS to be installed before use.
  • Please follow the installation order below!
  1. Install OBS studio

    • To use OBS virtualcam, you must install OBS Studio first.
  2. pip install -r requirements.txt

    • The OBS virtual camera must be installed for pyvirtualcam (included in the requirements) to work.
  3. Download the pretrained models

    • These models are provided by the original talking-head-anime-2
    • Put the following files in the pretrained folder.
      • combiner.pt
      • eyebrow_decomposer.pt
      • eyebrow_morphing_combiner.pt
      • face_morpher.pt
      • two_algo_face_rotator.pt
  4. Put the character image in the character folder

    • The character image files must meet the following requirements:
      • Must include an alpha channel (saved as a .png file)
      • Must contain only one humanoid character
      • The character must be facing forward
      • The character's head should fit within the central 128 x 128 pixel area (the image is resized to 256 x 256 by default, so the head must fit within 128 x 128 at that scale)

    The example image is referenced from TalkingHeadAnime2
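A quick way to sanity-check an image against the requirements above, as a minimal sketch (not part of this repo) that operates on an H x W x C array such as the one cv2.imread(path, cv2.IMREAD_UNCHANGED) returns:

```python
import numpy as np

# Illustrative helper, not repo code: verify the image has an alpha channel
# and a square shape before it is resized to 256 x 256.
def check_character_image(img):
    h, w = img.shape[:2]
    if img.ndim != 3 or img.shape[2] != 4:
        return "missing alpha channel (save as an RGBA .png)"
    if h != w:
        return "image should be square (it is resized to 256 x 256)"
    return "ok"

rgba = np.zeros((256, 256, 4), dtype=np.uint8)   # toy RGBA image
print(check_character_image(rgba))               # -> ok
rgb = np.zeros((256, 256, 3), dtype=np.uint8)    # no alpha channel
print(check_character_image(rgb))
```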

  5. python main.py --output_webcam

  • If you want to see how the facial features are actually captured, add the --debug option and run it.



How to make Custom Character

  1. Find the character you want on search engines.

    • The image should satisfy the requirements above (e.g. via a Google image search).

  2. Crop the image to a 1:1 aspect ratio so that the character's face is in the center.
  3. Remove the background and create an alpha channel.
  4. Done!
    • Put the image in the character folder and execute python main.py --output_webcam --character (filename only, without ".png")
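The cropping step above can be sketched as a center crop on the image array; any photo editor or library does the equivalent:

```python
import numpy as np

# Illustrative sketch: crop an H x W image to a centered square (1:1 aspect
# ratio) by slicing the larger dimension symmetrically.
def center_crop_square(img):
    h, w = img.shape[:2]
    side = min(h, w)
    top = (h - side) // 2
    left = (w - side) // 2
    return img[top:top + side, left:left + side]

img = np.zeros((480, 640, 4), dtype=np.uint8)    # toy RGBA frame
print(center_crop_square(img).shape)             # -> (480, 480, 4)
```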



Folder Structure

      │
      ├── character/ - character images 
      ├── pretrained/ - save pretrained models 
      ├── tha2/ - Talking Head Anime2 Library source files 
      ├── facial_points.py - facial feature point constants
      ├── main.py - main script to execute
      ├── models.py - GAN model definitions
      ├── pose.py - converts facial landmarks to a pose vector
      └── utils.py - utility functions for image pre/post-processing



Usage

Sending to virtual webcam

  • python main.py --output_webcam

Choose to use a specific character

  • python main.py --character (File name under character folder without ".png" extension)

Check facial features

  • python main.py --debug

Video file inference

  • python main.py --input video_file_path --output_dir directory_to_save_frames
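For video file inference, the output directory receives one image per input frame. A sketch of a zero-padded naming scheme (the exact layout is an assumption, not necessarily what main.py produces):

```python
from pathlib import Path

# Illustrative sketch: generate per-frame output paths so that files sort
# in frame order (000000.png, 000001.png, ...).
def frame_paths(output_dir, n_frames):
    out = Path(output_dir)
    return [out / f"{i:06d}.png" for i in range(n_frames)]

paths = frame_paths("frames_out", 3)
print([p.name for p in paths])   # -> ['000000.png', '000001.png', '000002.png']
```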



TODOs

Please refer to the original branch.

Thanks to

(Not translated; see the original repository.)



Acknowledgements

  • EasyVtuber is based on TalkingHeadAnime2
  • The tha2 folder and the pretrained model files come from the original author's repo; check its license before using them.
