A Semi-Supervised Data Augmentation Approach using 3D Graphical Engines (ECCVW2018)
ScanAva Generation Toolkit

This is the code for the following paper:

Shuangjun Liu, Sarah Ostadabbas, "A Semi-Supervised Data Augmentation Approach using 3D Graphical Engines," ECCV Workshops 2018.

ScanAva procedure sample data

Contact: Shuangjun Liu, Sarah Ostadabbas

Contents

1. Requirements

  • Install Blender 2.79. Tested on 2.79; higher versions should also work.

  • Install SciPy for Blender's bundled Python interpreter. You can do this by installing the same version of Python and then copying the SciPy package into Blender's packages folder, blenderFd/python/lib/python3.2/site-packages. Alternatively, point the code to the path of a py3.5 SciPy package.

  • Download the living room part of the LSUN dataset and set its folder path inside GenDataFromDesFunV4.py, for example '/home/jun/datasets/lsun/living_room_tr_img/'. Any background JPEG images can be used for this.
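The second SciPy option above (pointing the code at an external package path) can be sketched as follows; the path shown is the Discovery Cluster example used later in this README, so substitute your own site-packages location:

```python
import sys

# Example path to a Python 3.5 site-packages directory with SciPy installed;
# replace with the location of your own installation.
scipy_site_packages = "/home/sehgal.n/miniconda3/envs/py35/lib/python3.5/site-packages"

# Make the external site-packages visible to Blender's bundled interpreter;
# this must run before any `import scipy` statement.
if scipy_site_packages not in sys.path:
    sys.path.append(scipy_site_packages)
```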

2. Human scan

We employ a Kinect v1 with a rotator for 3D scanning. You can use any scanning device you like.

3. Human rigging

We provide a template in the code under samples. Open the Blender file and delete the mesh, keeping only the skeleton. Import the scan you want and fit the skeleton to its pose, then use the automatic-weights functionality to rig the skeleton to the mesh. (Please refer to the Blender documentation for details on rigging.)

4. Dataset generation

Put the code and Blender files in one folder if you use the default relative paths.

"main.bash": Edit "bldLs" to include the Blender files you wish to generate datasets for.
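As a sketch of what the edited "bldLs" might look like (the .blend file names below are placeholders, not files shipped with the repository; the loop is shown as a dry run that only prints which file would be processed):

```shell
#!/bin/bash
# Hypothetical bldLs list of rigged scans to generate datasets for;
# replace with your own .blend file names.
bldLs=("subject1.blend" "subject2.blend")

for bld in "${bldLs[@]}"; do
    # main.bash would hand each file to execute.bash / Blender;
    # shown here as a dry run that only prints the file name.
    echo "processing ${bld}"
done
```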

"execute.bash": Change the file being run to either "genDescFromRRv3.py" or "genDescFromUni_v2.py", depending on whether you want the poses to come from RR or from uniform, independent random joint angles.

"genDescFromRRv3.py" / "genDescFromUni_v2.py": All necessary parameters to edit are in the "USER PARAMS" section at the top of the code.

"blender_folder_path": Set this to the location of your "augmentation_code" folder on the Discovery Cluster, e.g. '/home/sehgal.n/augmentation_code'.

"py35_package_path": Set this to the site-packages location of your py35 Miniconda installation (as discussed in the previous section), e.g. '/home/sehgal.n/miniconda3/envs/py35/lib/python3.5/site-packages'.

"degree": Default is 0. You can also set degree=35 for the security-camera viewpoint.

"Npose": Default is 2000. You can modify it as desired.

By default, the dataset is generated inside the current folder. You can change this by setting the dsFd parameter in "genDescFromRRv3.py".
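Taken together, the "USER PARAMS" section described above might look like the following sketch; every path is a placeholder for your own setup, and only the variable names mentioned in this README are used:

```python
# Hypothetical "USER PARAMS" section mirroring the settings described above;
# all paths are placeholders for your own environment.
blender_folder_path = "/home/sehgal.n/augmentation_code"  # augmentation_code location
py35_package_path = (
    "/home/sehgal.n/miniconda3/envs/py35/lib/python3.5/site-packages"
)  # site-packages of the py35 Miniconda environment
dsFd = "."      # output folder for the generated dataset (default: current folder)
degree = 0      # camera elevation; set to 35 for the security-camera viewpoint
Npose = 2000    # number of poses to generate
```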

5. Train and test human pose estimation model

We test our generation method with the state-of-the-art stacked hourglass 2D human pose estimation model. If you want to train it, please download our generated ScanAva dataset and test against our corresponding real-world 2D image set, AC2d.

We provide pretrained models trained on seven people, and also on a single person with or without white noise or a Gaussian filter. For the seven-people version, we test against the corresponding real 2D images of the same persons we synthesized from.

For the single-person version, we trained on synthetic people and tested against the same people wearing clothes that never appear in the synthetic training set, to evaluate generalization to a specific person.

We also provide test results from a model trained on 10,000 SURREAL samples, as well as from the pretrained hourglass model provided by the original work, "umich-stacked-hourglass".

For the adaptation versions, the test set should be preprocessed with the same adaptation. For example, a Gaussian filter should be applied to the test set when using a model trained with the Gaussian filter.

In the model names, "wn" stands for white noise and "gauFt" stands for Gaussian filter.
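As an illustration of matching the adaptation at test time, the two preprocessing steps could be applied to a test image as in the sketch below; the function names, the filter width sigma=2, and the noise standard deviation of 10 gray levels are assumptions for illustration, not values from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def apply_gaussian(img, sigma=2.0):
    """Blur an H x W x C image; sigma=2 is an assumed value, not from the paper."""
    # Filter only the spatial axes, leaving the channel axis untouched.
    return gaussian_filter(img, sigma=(sigma, sigma, 0))

def apply_white_noise(img, std=10.0, seed=0):
    """Add zero-mean white noise; std=10 gray levels is an assumed value."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, std, img.shape)
    return np.clip(noisy, 0, 255).astype(img.dtype)

# Toy test image: bright square on a black background.
img = np.zeros((64, 64, 3), dtype=np.uint8)
img[24:40, 24:40] = 255

blurred = apply_gaussian(img)     # use with the gauFt-trained model
noisy = apply_white_noise(img)    # use with the wn-trained model
```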

The final_preds.h5 file provides the prediction results ("preds"), the ground truth ("joints_gt"), and the torso length ('lenTorso'), so you can compute PCK from these data.
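As a sketch of computing PCK from those arrays (the array names follow the h5 keys above, loaded e.g. via h5py.File; the threshold factor of 0.5 times the torso length is an assumed choice, not specified in this README):

```python
import numpy as np

def pck(preds, joints_gt, len_torso, alpha=0.5):
    """PCK: fraction of predicted joints within alpha * torso length of ground truth.

    preds, joints_gt: (N, J, 2) arrays of 2D joint coordinates.
    len_torso: (N,) array of per-image torso lengths.
    alpha: threshold factor (0.5 here is an assumed choice).
    """
    # Euclidean distance per joint, shape (N, J).
    dists = np.linalg.norm(preds - joints_gt, axis=-1)
    # Compare each distance against that image's own threshold.
    correct = dists <= alpha * len_torso[:, None]
    return correct.mean()

# Toy example: one image, two joints, torso length 100 px.
preds = np.array([[[10.0, 10.0], [200.0, 200.0]]])
gts = np.array([[[12.0, 10.0], [100.0, 100.0]]])
torso = np.array([100.0])
print(pck(preds, gts, torso))  # joint 0 within 50 px, joint 1 not -> 0.5
```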

Citation

If you use this code, please cite the following:

@article{liu2018semi,
  title={A Semi-Supervised Data Augmentation Approach using 3D Graphical Engines},
  author={Liu, Shuangjun and Ostadabbas, Sarah},
  journal={9th International Workshop on Human Behavior Understanding: at ECCV’18, arXiv preprint arXiv:1808.02595},
  year={2018}
}

License

  • This code is for non-commercial purposes only. For other uses, please contact ACLab at NEU.
  • No maintenance service is provided.