Yisheng He*, Xiaodong Gu*, Xiaodan Ye, Chao Xu, Zhengyi Zhao, Yuan Dong†, Weihao Yuan†, Zilong Dong, Liefeng Bo
If you are familiar with Chinese, you can read the Chinese version of this documentation.
- Ultra-realistic 3D Avatar Creation from One Image in Seconds
- Super-fast Cross-platform Animating and Rendering on Any Device
- Low-latency SDK for Realtime Interactive Chatting Avatar
Chat_Demo.mp4
[April 21, 2025] We have released the WebGL Interactive Chatting Avatar SDK on OpenAvatarChat (including LLM, ASR, TTS, and Avatar), with which you can freely chat with the 3D digital human generated by LAM! 🔥
[April 19, 2025] We have released the Audio2Expression model, which can animate the generated LAM avatar from audio input! 🔥
[April 10, 2025] We have released the demo on ModelScope Space!
- Release LAM-small trained on VFHQ and NeRSemble.
- Release Hugging Face space.
- Release ModelScope space.
- Release LAM-large trained on a self-constructed large-scale dataset.
- Release WebGL renderer for cross-platform animating and rendering.
- Release audio-driven model: Audio2Expression.
- Release Interactive Chatting Avatar SDK with OpenAvatarChat, including LLM, ASR, TTS, and Avatar.
git clone https://github.com/aigc3d/LAM.git
cd LAM
# Install with CUDA 12.1
sh ./scripts/install/install_cu121.sh
# Or install with CUDA 11.8
sh ./scripts/install/install_cu118.sh
Model | Training Data | HuggingFace | ModelScope | Reconstruction Time | A100 (A & R) | XiaoMi 14 Phone (A & R) |
---|---|---|---|---|---|---|
LAM-20K | VFHQ | TBD | TBD | 1.4 s | 562.9 FPS | 110+ FPS |
LAM-20K | VFHQ + NeRSemble | Link | Link | 1.4 s | 562.9 FPS | 110+ FPS |
LAM-20K | Our large dataset | TBD | TBD | 1.4 s | 562.9 FPS | 110+ FPS |
(A & R: Animating & Rendering)
# Download Assets
huggingface-cli download 3DAIGC/LAM-assets --local-dir ./tmp
tar -xf ./tmp/LAM_assets.tar && rm ./tmp/LAM_assets.tar
tar -xf ./tmp/thirdparty_models.tar && rm -r ./tmp/
# Download Model Weights
huggingface-cli download 3DAIGC/LAM-20K --local-dir ./model_zoo/lam_models/releases/lam/lam-20k/step_045500/
pip3 install modelscope
# Download Assets
modelscope download --model "Damo_XR_Lab/LAM-assets" --local_dir "./tmp/"
tar -xf ./tmp/LAM_assets.tar && rm ./tmp/LAM_assets.tar
tar -xf ./tmp/thirdparty_models.tar && rm -r ./tmp/
# Download Model Weights
modelscope download "Damo_XR_Lab/LAM-20K" --local_dir "./model_zoo/lam_models/releases/lam/lam-20k/step_045500/"
python app_lam.py
sh ./scripts/inference.sh ${CONFIG} ${MODEL_NAME} ${IMAGE_PATH_OR_FOLDER} ${MOTION_SEQ}
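For reference, an invocation might look like the sketch below. The config and asset paths are hypothetical stand-ins (only the model-weights path comes from the download step above); substitute your own values. The `echo` previews the fully expanded command without running inference:

```shell
# Hypothetical example values -- replace with your local paths.
CONFIG=./configs/inference/lam-20k.yaml            # hypothetical config path
MODEL_NAME=./model_zoo/lam_models/releases/lam/lam-20k/step_045500/
IMAGE_PATH_OR_FOLDER=./assets/sample_input/        # hypothetical input image folder
MOTION_SEQ=./assets/sample_motion/                 # hypothetical motion sequence

# Preview the expanded command; drop the `echo` to actually run it.
echo sh ./scripts/inference.sh "$CONFIG" "$MODEL_NAME" "$IMAGE_PATH_OR_FOLDER" "$MOTION_SEQ"
```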
This work is built on many amazing research works and open-source projects:
Thanks for their excellent work and great contributions.
Welcome to follow our other interesting works:
@article{he2025LAM,
  title={LAM: Large Avatar Model for One-shot Animatable Gaussian Head},
  author={
    Yisheng He and Xiaodong Gu and Xiaodan Ye and Chao Xu and Zhengyi Zhao and Yuan Dong and Weihao Yuan and Zilong Dong and Liefeng Bo
  },
  journal={arXiv preprint arXiv:2502.17796},
  year={2025}
}