-
Thank you for your interest in the project.

Q: Since the render pipeline takes blendshapes, does it operate on Live2D-like standards?
Q: "wasn't quite understanding what the pose data you get from iFacialMocap ends up looking like."
Q: At what method can I swap in raw blendshape data from my own local service and have everything operate normally?

This is where the pose vector is used in ifacialmocap_puppeteer: https://github.com/pkhungurn/talking-head-anime-3-demo/blob/main/tha3/app/ifacialmocap_puppeteer.py#L327, and this is where it is used in the manual_poser: https://github.com/pkhungurn/talking-head-anime-3-demo/blob/main/tha3/app/manual_poser.py#L387. From the manual_poser, you can backtrack to see what the pose vector looks like. From the ifacialmocap_puppeteer, you can backtrack to see how a pose vector is derived from iFacialMocap input; this path goes through IFacialMocapPoserConverter25 (https://github.com/pkhungurn/talking-head-anime-3-demo/blob/main/tha3/mocap/ifacialmocap_poser_converter_25.py#L82) at some point.

Q: Also, where in the code are the various generated neural-network outputs layered on top of each other for avatar creation?
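Since the pose vector keeps coming up, here is a minimal sketch of the idea: a flat list of floats, one per pose parameter. The parameter names and ordering below are illustrative only, not the real ordering used by tha3; backtrack from manual_poser.py, as suggested above, for the authoritative list.

```python
# ILLUSTRATIVE parameter names -- the real names/ordering come from the
# tha3 poser itself, not from this sketch.
PARAM_NAMES = [
    "eyebrow_troubled_left", "eyebrow_troubled_right",
    "eye_wink_left", "eye_wink_right",
    "mouth_aaa",
    "head_x", "head_y", "neck_z",
]

def make_pose_vector(values: dict) -> list:
    """Build a flat pose vector from a {name: float} dict; unset params default to 0.0."""
    return [float(values.get(name, 0.0)) for name in PARAM_NAMES]

# Wink the left eye fully, open the mouth halfway, leave everything else neutral.
pose = make_pose_vector({"eye_wink_left": 1.0, "mouth_aaa": 0.5})
```

The point of the sketch is just that the poser consumes one flat numeric vector per frame, so any capture source that can fill such a vector can drive it.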
-
Thanks for the help. If you'd enable discussions for this repo and move this there, I think useful material will accumulate as people come across the project and contribute; I'll probably continue to post questions and results down the line.
-
This is a pretty cool project. I was wondering if you could describe the interface for the motion capture a bit, for people who want to build off of it. I'm playing around with other ways of producing 3D face data, such as MediaPipe with https://github.com/yeemachine/kalidokit, to be able to run capture on the desktop.
Since the render pipeline takes blendshapes, does it operate on Live2D-like standards? I've looked at ifacialmocap_puppeteer.py and ifacialposeconverter25 a bit, but wasn't quite understanding what the pose data you get from iFacialMocap ends up looking like. At what method can I swap in raw blendshape data from my own local service and have everything operate normally? Are there any other auxiliary things from iFacialMocap that I need to include for the GUI to work?
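To make the "swap in my own source" idea concrete, here is a hedged sketch. It assumes the converter consumes a {blendshape_name: float} dictionary keyed by ARKit blendshape names (which is what iFacialMocap streams from the iPhone); whether the converter expects exactly this dict shape should be verified against ifacialmocap_poser_converter_25.py before relying on it.

```python
# A tiny subset of ARKit's 52 blendshape names, for illustration.
ARKIT_KEYS = ["eyeBlinkLeft", "eyeBlinkRight", "jawOpen", "mouthSmileLeft"]

def frame_from_local_service(raw: dict) -> dict:
    """Clamp arbitrary tracker output (e.g. MediaPipe/Kalidokit values) into
    ARKit-style coefficients in [0, 1]; missing keys default to 0.0."""
    return {k: min(1.0, max(0.0, float(raw.get(k, 0.0)))) for k in ARKIT_KEYS}

# A local tracker might emit out-of-range or partial data; normalize it first.
frame = frame_from_local_service({"jawOpen": 1.7, "eyeBlinkLeft": 0.3})
```

If your local service produces a dictionary like `frame` each tick, the remaining work is feeding it in wherever the puppeteer currently hands iFacialMocap data to the pose converter.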
Also, where in the code are the various generated neural-network outputs layered on top of each other for avatar creation?
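For context on the layering question: one common way intermediate images get combined is alpha-over compositing. The snippet below is a generic sketch of that operation on flat per-channel value lists, not code from the repo; how THA3 actually merges its network outputs has to be checked in the source.

```python
def alpha_over(fg, bg, alpha):
    """Composite a foreground layer over a background layer, per channel:
    out = alpha * fg + (1 - alpha) * bg, with alpha in [0, 1]."""
    return [a * f + (1.0 - a) * b for f, b, a in zip(fg, bg, alpha)]

# Half-transparent white over black gives mid-gray.
blended = alpha_over([1.0], [0.0], [0.5])
```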