Real time interactive streaming digital human
Updated Jun 22, 2024 - Python
(Windows/Linux) Local WebUI with neural network models (LLM, Stable Diffusion, AudioCraft, AudioLDM2, TTS, Bark, Whisper, Demucs, LibreTranslate, ZeroScope2, TripoSR, Shap-E, GLIGEN, Wav2Lip, Roop, Rembg, CodeFormer, Moondream 2) in Python (Gradio interface)
Wav2Lip UHQ extension for Automatic1111
PaddlePaddle GAN library, including many interesting applications such as First-Order Motion transfer, Wav2Lip, picture repair, image editing, photo2cartoon, image style transfer, GPEN, and more.
Orchestrating AI for stunning lip-synced videos. Effortless workflow, exceptional results, all in one place.
GUI to sync video mouth movements to match audio, utilizing wav2lip-hq. Completed as part of a technical interview.
PyTorch Implementation for Paper "Emotionally Enhanced Talking Face Generation" (ICCVW'23 and ACM-MMW'23)
This project is dedicated to advancing the field of animatronic robots by enabling them to generate lifelike facial expressions, pushing the boundaries of what's possible in human-robot interaction.
AIStreameur: make your favorite people stream!
The LipSync-Wav2Lip-Project repository is a comprehensive solution for lip synchronization in videos using the Wav2Lip deep learning model. This open-source project includes code that lets users seamlessly synchronize lip movements with audio tracks.
Wav2Lip UHQ Improvement with ControlNet 1.1
Lip Synchronization (Wav2Lip).
This repository hosts the code used by Apollo during Wav2Lip's inference process.
IN4U - A web service for interview practice