
Learning an Intrinsic Garment Space for Interactive Authoring of Garment Animation

This is the demo code for training a motion invariant encoding network. The following diagram provides an overview of the network structure.

For more information, please visit the project page.


The project's directory structure is shown below. The data set is in the data_set folder and includes cloth meshes (generated by Maya Qualoth), the garment template, character animations, and skeletons. Some supporting files can be found in support. The code is in the training folder. The shape feature descriptor and the motion invariant encoding network are saved in nnet.

├─data_set
│  ├─anim
│  ├─case
│  ├─garment
│  └─skeleton
├─nnet
│  ├─basis
│  └─mie
├─support
└─training
   ├─eval_basis
   ├─eval_mie
   ├─info_basis
   └─info_mie

In the training folder there are several Python scripts that implement the training process. We also provide a data set for testing, generated from a dancing animation sequence and a skirt.


You can run the scripts from 01 to 05; afterwards, a *.net file will be produced. This is the shape feature descriptor. Once the loss of the descriptor is low enough, you can run scripts 06, 07, and 08 to obtain the motion invariant encoding network. In addition, scripts 051 and 081 are used for evaluation. If everything goes well, the exported meshes will look like the following figures.

(Script 01 is used to split the *.npy file and is optional.)
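The README does not show how script 01 performs the split; as a rough illustration only, here is a minimal NumPy sketch of chunking a per-frame animation array along the frame axis. The function name, array shape, and chunking scheme are assumptions, not the repository's actual code.

```python
import numpy as np

def split_frames(seq, chunk=2):
    """Split a (frames, verts, 3) animation array into chunks of frames.

    Hypothetical stand-in for what the optional 01 script might do with
    the raw *.npy file; the real chunking scheme may differ.
    """
    return [seq[i:i + chunk] for i in range(0, len(seq), chunk)]

# Toy data: 5 frames of a 4-vertex mesh.
seq = np.zeros((5, 4, 3))
parts = split_frames(seq, chunk=2)
print([p.shape[0] for p in parts])  # → [2, 2, 1]
```

The last chunk is simply shorter when the frame count is not divisible by the chunk size.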

The result for a specific frame after running script 051: the yellow skirt is our output and the blue one is the ground truth.


The output from script 081 is painted red, and the green one is the ground truth.
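Beyond the visual overlays, a simple way to quantify how close an exported mesh is to the ground truth is a mean per-vertex distance. This is a generic sketch, not the paper's evaluation metric, and assumes both meshes share vertex ordering.

```python
import numpy as np

def mean_vertex_error(pred, gt):
    """Mean Euclidean distance between corresponding vertices of two meshes.

    Assumes pred and gt are (verts, 3) arrays with identical vertex order;
    this is an illustrative metric, not the one used by the 051/081 scripts.
    """
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

pred = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
gt   = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
print(mean_vertex_error(pred, gt))  # → 0.5
```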

