English Version | 中文版本
A cross-platform real-time video-driven motion capture and 3D virtual avatar generation system for VTuber/Live/AR/VR.
Executable packages are provided for Windows and macOS (including Apple Silicon); on Linux it can be run from source.
Download now (zip archive, no installation required)
(This is a multilingual application supporting Chinese and English.)
Undergraduate graduation project.
🌟 A polished GUI (thanks to the Material Design 3 dynamic color system), with dark-mode support
🌟 Easy to use: just drag and drop to import an avatar model
add-model-drag.mp4
🌟 The motion-forwarding system supports the WebXR API (HTTPS only, for VR and AR)
webxr-ar-demo.mp4
🌟 A model viewer with a bone controller and an outfit-changing tool
🌟 Can be captured into OBS for live streaming
🌟 Supports full-body motion capture
🌟 Automatically detects the skeleton type and completes bone mapping (for all VRM files and Mixamo-format FBX files)
🌟 Drives FBX, GLB, and GLTF model files with various skeleton types via manual bone mapping
🌟 No dedicated GPU required; runs smoothly even on an eight-year-old PC (i7-4790K/GTX 770/16 GB RAM)
🌟 Built on web technologies, with thanks to Mediapipe and Kalidokit for technical support
🌟 Face
🌟 Upper body
🌟 Upper body with hands
🌟 Full body
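The automatic bone mapping for Mixamo-format FBX files mentioned above can be illustrated with a small sketch. This is an assumption about how such a mapping typically works (a name-table lookup after stripping the `mixamorig:` prefix), not SysMocap's actual code; the table below covers only the main body bones.

```javascript
// Hypothetical sketch: map Mixamo bone names onto VRM humanoid bone names.
// Mixamo skeletons prefix every bone with "mixamorig:" (some exporters
// emit "mixamorig" without the colon), e.g. "mixamorig:LeftUpLeg".
const MIXAMO_TO_VRM = {
  Hips: "hips", Spine: "spine", Spine1: "chest", Spine2: "upperChest",
  Neck: "neck", Head: "head",
  LeftArm: "leftUpperArm", LeftForeArm: "leftLowerArm", LeftHand: "leftHand",
  RightArm: "rightUpperArm", RightForeArm: "rightLowerArm", RightHand: "rightHand",
  LeftUpLeg: "leftUpperLeg", LeftLeg: "leftLowerLeg", LeftFoot: "leftFoot",
  RightUpLeg: "rightUpperLeg", RightLeg: "rightLowerLeg", RightFoot: "rightFoot",
};

function mapMixamoBone(name) {
  const stripped = name.replace(/^mixamorig:?/, "");
  // Unknown bones return null and fall back to manual mapping.
  return MIXAMO_TO_VRM[stripped] ?? null;
}

console.log(mapMixamoBone("mixamorig:LeftUpLeg")); // "leftUpperLeg"
console.log(mapMixamoBone("mixamorig:Tail"));      // null
```

A skeleton whose root carries this prefix can thus be detected and mapped without user interaction, which is consistent with the "automatic detection" feature above.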
Additional notes for macOS users:
- You need to set Gatekeeper to Anywhere in System Settings (run `sudo spctl --master-disable` in a terminal)
- If you see the message “SysMocap” is damaged and can’t be opened. You should move it to the Trash., run `sudo xattr -r -d com.apple.quarantine /Applications/SysMocap.app` in a terminal
```shell
git clone https://github.com/xianfei/SysMocap.git
cd SysMocap
npm i
npm start
```
- If you run into any problems, please let us know in an issue
- HTTP & HTTPS use the same port for motion-capture data forwarding.
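Serving HTTP and HTTPS on one port is typically done by peeking at the first byte of each incoming connection: a TLS ClientHello starts with the handshake record type `0x16`, while plain HTTP starts with an ASCII request method. The sketch below illustrates that classification step; it is an assumption about the technique, not SysMocap's actual forwarding code.

```javascript
// Hypothetical sketch: decide whether a connection's first chunk is the
// start of a TLS handshake or a plain-text HTTP request.
// TLS records begin with a content-type byte; 0x16 means "handshake".
// HTTP requests begin with an ASCII method such as "GET" or "POST".
function classifyFirstChunk(buf) {
  if (buf.length === 0) return "unknown";
  return buf[0] === 0x16 ? "tls" : "http";
}

console.log(classifyFirstChunk(Buffer.from([0x16, 0x03, 0x01]))); // "tls"
console.log(classifyFirstChunk(Buffer.from("GET / HTTP/1.1\r\n"))); // "http"

// Wiring sketch (commented out; server names are hypothetical): a raw
// net.Server peeks the first chunk, pushes it back with socket.unshift(),
// and hands the socket to either an http.Server or an https.Server.
// netServer.on("connection", (socket) => {
//   socket.once("data", (chunk) => {
//     socket.unshift(chunk);
//     const target = classifyFirstChunk(chunk) === "tls" ? httpsServer : httpServer;
//     target.emit("connection", socket);
//   });
// });
```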
(For other skeleton types, you can perform manual mapping and coordinate-system conversion in this application)
- Hips (main node: both position and rotation; rotation only for all other nodes)
- Neck
- Chest
- Spine
- RightUpperArm
- RightLowerArm
- LeftUpperArm
- LeftLowerArm
- LeftUpperLeg
- LeftLowerLeg
- RightUpperLeg
- RightLowerLeg
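The coordinate-system conversion mentioned above usually means accounting for handedness: if the model's skeleton uses the opposite handedness from the capture data, each bone's rotation must be mirrored. A minimal sketch of one such conversion, assuming quaternion rotations and a Z-axis flip (an illustration, not SysMocap's actual retargeting code):

```javascript
// Hypothetical sketch: convert a rotation quaternion between right- and
// left-handed coordinate systems that differ by a flipped Z axis.
// Mirroring across the XY plane maps (x, y, z, w) to (-x, -y, z, w):
// rotations about Z are preserved, rotations about X and Y are reversed.
function flipHandednessZ(q) {
  return { x: -q.x, y: -q.y, z: q.z, w: q.w };
}

// Example: a rotation with equal components is mirrored component-wise.
const q = { x: 0.5, y: 0.5, z: 0.5, w: 0.5 };
console.log(flipHandednessZ(q)); // { x: -0.5, y: -0.5, z: 0.5, w: 0.5 }
```

Applying such a function to every mapped bone (and additionally negating the X and Y components of the Hips position) is one common way to retarget between conventions.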
- Settings page and global settings utils
- Add play/pause button and progress bar when mocap from video
- Support bones binding for glTF/glb
- Support rendering glTF/glb models
- Support binding when bone names are non-uniform
- Model library: add user's custom 3D models
- Live plug-in / interface for Open Broadcaster Software
- Output video (using e.g. libffmpeg)
- Support per-frame rendering without dropping frames
- Support client-server architecture for online video mocap (in the cloud)
- Support Material Design 3 color system (color picking)
- Mocap data forwarding via network
- Adapt for Linux and macOS
BibTeX:
@INPROCEEDINGS{9974484,
author={Song, Wenfeng and Wang, Xianfei and Gao, Yang and Hao, Aimin and Hou, Xia},
booktitle={2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
title={Real-time Expressive Avatar Animation Generation based on Monocular Videos},
year={2022},
volume={},
number={},
pages={429-434},
doi={10.1109/ISMAR-Adjunct57072.2022.00092}}
GB/T 7714 (suitable for theses at Chinese universities):
Song W, Wang X, Gao Y, et al. Real-time Expressive Avatar Animation Generation based on Monocular Videos[C]//2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE Computer Society, 2022: 429-434.