Dear authors,

Many thanks for sharing such amazing work. Without retraining, would it be possible to test this tool on 3D scans of subjects unseen during training? The training data, consisting of pairs of (raw scans, motion parameters), is not straightforward to obtain for a given subject.

Thanks!
Hi,
Thanks for your interest in our work!
Unfortunately, this work does not generalize to unseen subjects. It only generalizes to unseen poses of the training subjects.
If you are interested, POP or MetaAvatar generalize to unseen subjects to a certain extent.
About the paired training data: once you have raw scan sequences, body motion parameters are not that difficult to obtain. 1) Render the raw scans from different views, 2) run 2D or 3D human pose estimation on these images, and 3) fuse the results to get a basic motion parameter estimate.
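A minimal sketch of the fusion step (3), under the assumption that each view's pose estimator returns per-joint 3D positions in camera coordinates and that camera-to-world extrinsics are known (the function name and naive averaging strategy are illustrative, not the authors' actual pipeline):

```python
import numpy as np

def fuse_views(per_view_joints, rotations, translations):
    """Fuse per-view 3D joint estimates into a common world frame.

    per_view_joints: list of (J, 3) arrays, joints in camera coordinates
    rotations:       list of (3, 3) camera-to-world rotation matrices
    translations:    list of (3,) camera-to-world translation vectors
    returns:         (J, 3) fused joint positions in world coordinates
    """
    world = [
        joints @ R.T + t  # transform this view's joints into the world frame
        for joints, R, t in zip(per_view_joints, rotations, translations)
    ]
    # Naive fusion: average each joint across views. A real pipeline would
    # weight views by estimator confidence and reject outliers.
    return np.mean(world, axis=0)

# Example: two views of the same joint with slightly different estimates.
j1 = np.array([[0.0, 0.0, 1.0]])
j2 = np.array([[0.2, 0.0, 1.0]])
I, z = np.eye(3), np.zeros(3)
fused = fuse_views([j1, j2], [I, I], [z, z])  # -> [[0.1, 0.0, 1.0]]
```

The fused joints could then be fed to an optimization (e.g. fitting a body model such as SMPL) to recover the final motion parameters.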
Indeed, this could be feasible if you have sequences of raw scans; however, I was hoping to find a way to animate a raw 3D scan given only a single static scan. I will have a look at the two mentioned works. Thanks a lot for the hints!