RealVideo

RealVideo is a WebSocket-based video calling system that supports text input. It uses the GLM-4.5-AirX and GLM-TTS models to generate audio responses, and autoregressive diffusion to generate the corresponding video frames. The system has a modular design and a clean code structure. See the accompanying blog post for more details.

Example Video

demo1.mp4
demo2.mp4
demo4.mp4

Features

  • Text Input: Supports text message input.
  • AI Voice Response: Integrates GLM-4.5-AirX and GLM-TTS models to generate voice responses.
  • Lip Sync: Generates real-time conversational video based on any input image and audio.
  • Real-time Communication: WebSocket-based real-time bidirectional communication.

Download

Model | Download Links
RealVideo | 🤗 Hugging Face · 🤖 ModelScope

Quick Start

1. Requirements

  • Python 3.10 - 3.12
  • pip3
  • Modern browser (supporting WebSocket and Web Audio API)
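
An optional pre-flight guard for the Python requirement above (a convenience check, not part of the repo):

# Guard for the supported interpreter range (3.10 - 3.12, per the list above).
import sys

if not ((3, 10) <= sys.version_info[:2] <= (3, 12)):
    raise SystemExit(
        f"RealVideo requires Python 3.10-3.12; found {sys.version.split()[0]}"
    )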

2. Install Dependencies

pip3 install -r requirements.txt
huggingface-cli download Wan-AI/Wan2.2-S2V-14B --local-dir-use-symlinks False --local-dir wan_models/Wan2.2-S2V-14B
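
If you prefer to fetch the weights from Python, the huggingface_hub library that huggingface-cli wraps offers an equivalent:

# Same download as the CLI command above, via the huggingface_hub API.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Wan-AI/Wan2.2-S2V-14B",
    local_dir="wan_models/Wan2.2-S2V-14B",
)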

3. Configure API Key

Before using, please set the ZAI API key:

export ZAI_API_KEY="your_actual_api_key_here"

Then update the model path in config/config.py:

PATH_TO_YOUR_MODEL = "zai-org/RealVideo/model.pt"  # Replace with your model path
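
A minimal sketch of a startup guard for the key; this is illustrative only, and the actual config/config.py may be structured differently:

# Illustrative check that the key was exported; not the repo's actual config code.
import os

ZAI_API_KEY = os.environ.get("ZAI_API_KEY")
if not ZAI_API_KEY:
    raise RuntimeError("ZAI_API_KEY is not set; export it before starting the service")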

4. Start the Service

Specify the GPUs you wish to use and run the startup script. At least two GPUs with 80 GB of memory each (e.g., H100 or H200) are required.

For example:

CUDA_VISIBLE_DEVICES=0,1 bash ./scripts/run_app.sh

One GPU will be used for the VAE service, while the remaining GPUs will be automatically allocated for parallel computation of the DiT service.
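
The allocation described above can be sketched as follows; the variable names are illustrative and not taken from the repo's code:

# Sketch of the GPU split: the first visible GPU hosts the VAE service,
# the remaining GPUs run the DiT service in parallel.
import os

devices = [d for d in os.environ.get("CUDA_VISIBLE_DEVICES", "").split(",") if d]
if len(devices) < 2:
    raise SystemExit("At least 2 GPUs are required (1 for VAE, 1+ for DiT)")

vae_device, dit_devices = devices[0], devices[1:]
print(f"VAE -> cuda:{vae_device}; DiT workers -> {dit_devices}")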

The table below shows reference times (in ms) for DiT to generate one block. If the time is within 500 ms, smooth real-time generation can be achieved. Numbers in parentheses indicate the time taken with compilation enabled.

DiT sp size / Denoising steps | 2 | 4
1 | 563.84 ms (442.61 ms) | 943.13 ms (723.06 ms)
2 | 384.86 ms | 655.92 ms (527.11 ms)
4 | 306.39 ms | 513.72 ms (480.68 ms)
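
To make the real-time threshold concrete, here is the table's data checked against the 500 ms budget, using compiled times where reported and uncompiled times otherwise:

# Block-generation times from the table vs. the 500 ms real-time budget.
REALTIME_BUDGET_MS = 500.0

block_times_ms = {  # (DiT sp size, denoising steps) -> ms per block
    (1, 2): 442.61, (1, 4): 723.06,
    (2, 2): 384.86, (2, 4): 527.11,
    (4, 2): 306.39, (4, 4): 480.68,
}
for (sp, steps), t in sorted(block_times_ms.items()):
    verdict = "real-time" if t <= REALTIME_BUDGET_MS else "too slow"
    print(f"sp={sp}, steps={steps}: {t:.2f} ms -> {verdict}")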

5. Access the Application

Open the frontend in a modern browser that supports WebSocket and the Web Audio API (see Requirements).

Usage Instructions

  1. Set Avatar and Voice: Use the file upload button to upload an image to set the avatar, or upload a speech audio file longer than 3 seconds for voice cloning.
  2. Connect WebSocket: Click the "Connect" button to establish the WebSocket connection.
  3. Text Input: Enter a message in the text box and press Enter or click "Send" to send the message.
  4. Real-time Response: The real-time generated video response will be displayed on the left.
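
For reference, a minimal client mirroring steps 2-4 (connect, send text, receive a streamed response). The endpoint URL and message format here are assumptions for illustration, not the repo's documented protocol:

# Hypothetical endpoint; the real URL and schema come from the deployed app.
import asyncio
import websockets  # pip install websockets

async def main() -> None:
    async with websockets.connect("ws://localhost:8000/ws") as ws:
        await ws.send("Hello!")   # step 3: text input
        reply = await ws.recv()   # step 4: server streams generated media back
        print(f"received {len(reply)} bytes")

asyncio.run(main())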

Technical Highlights

  • Model Integration: Convenient, fast voice cloning and text-to-speech, turning text input into audio output.
  • Modular Design: Clear code structure, easy to maintain and extend.
  • Real-time Performance: Optimized audio processing and real-time video generation algorithms.

Acknowledgements

This project utilizes open-source libraries and models, including Wan-AI's Wan2.2-S2V (downloaded in the install step above).
