This is the repo for GAPS: Geometry-Aware, Physics-Based, Self-Supervised Neural Garment Draping
Recent neural, physics-based modeling of garment deformations delivers faster and more visually appealing results than existing methods. However, these formulations rely on material-specific parameters to control garment inextensibility, which produces unrealistic results with physically implausible stretching. Often, the draped garment is also pushed inside the body; this is corrected either by expensive post-processing, which adds further inconsistent stretching, or by training a separate model for each body type, which limits scalability. Additionally, the flawed skinning process used by existing methods produces incorrect results on loose garments.
In this paper, we introduce a geometrical constraint to the existing formulation that is collision-aware and imposes garment inextensibility wherever possible. Thus, we obtain realistic results where draped clothes stretch only while covering bigger body regions. Furthermore, we propose a geometry-aware garment skinning method by defining a body-garment closeness measure which works for all garment types, especially the loose ones.
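For intuition only, here is a minimal sketch of what a generic edge-length (inextensibility) penalty looks like in TensorFlow. The function name and arguments are placeholders, and this is not the exact collision-aware GAPS constraint described in the paper:

import tensorflow as tf

# Hedged sketch: a generic edge-length penalty, not the GAPS formulation.
# `verts` is a (V, 3) float tensor of draped garment vertices, `edges` an
# (E, 2) int tensor of mesh edge indices, and `rest_lengths` an (E,) tensor
# of edge lengths in the garment's rest (template) pose.
def inextensibility_penalty(verts, edges, rest_lengths):
    v0 = tf.gather(verts, edges[:, 0])        # (E, 3) first endpoint of each edge
    v1 = tf.gather(verts, edges[:, 1])        # (E, 3) second endpoint
    cur_lengths = tf.norm(v1 - v0, axis=-1)   # (E,) deformed edge lengths
    # Penalise deviation from the rest lengths, i.e. discourage stretching.
    return tf.reduce_mean(tf.square(cur_lengths - rest_lengths))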
Requirements:
- python 3.8
- tensorflow 2.10
- numpy 1.23.0
- scipy 1.10.1
Create a conda virtual environment (recommended):
conda create -n gaps python=3.8
conda activate gaps
python -m pip install -r requirements.txt
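For reference, a requirements.txt consistent with the versions listed above might look like the following; the file shipped in the repo is authoritative:

tensorflow==2.10.*
numpy==1.23.0
scipy==1.10.1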
We use the SMPL body model. To download it, follow these steps:
- Sign in at https://smpl.is.tue.mpg.de
- Download SMPL version 1.0.0 for Python 2.7 (10 shape PCs)
- Extract SMPL_python_v.1.0.0.zip and copy smpl/models/basicModel_f_lbs_10_207_0_v1.0.0.pkl into assets/SMPL
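To sanity-check that the model file ended up in the right place, a small load such as the one below can help. The path mirrors the step above, the latin1 encoding is needed because the pickle was written with Python 2, and loading the original SMPL pickle may additionally require the chumpy package:

import pickle

with open('assets/SMPL/basicModel_f_lbs_10_207_0_v1.0.0.pkl', 'rb') as f:
    smpl = pickle.load(f, encoding='latin1')  # Python-2 pickle, hence latin1
print(smpl['v_template'].shape)  # template mesh vertices, (6890, 3) for SMPL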
We use sequences from AMASS to test our model. To download the sequences, follow these steps:
- Sign in at https://amass.is.tue.mpg.de
- Download the body data for the CMU motions (SMPL+H G)
- Extract CMU.tar.bz2 into assets/CMU
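Each AMASS sequence is a .npz archive that can be inspected with numpy. The path below is only an example of a CMU sequence, and the key names follow the SMPL+H G format:

import numpy as np

# Example path inside the extracted archive; pick any sequence you downloaded.
seq = np.load('assets/CMU/01/01_01_poses.npz')
print(seq['poses'].shape)   # (n_frames, 156) SMPL+H pose parameters per frame
print(seq['betas'].shape)   # (16,) body shape coefficients
print(seq['trans'].shape)   # (n_frames, 3) global root translation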
To train the model, run:
python -u train_gaps.py --config config/train.ini
To generate garment meshes for a given motion sequence, adjust the settings in eval.ini and run:
python run_sequences.py
train.ini and eval.ini contain the training and prediction configurations, respectively.
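Both are plain INI files, so they can be inspected or edited programmatically with Python's configparser. The sketch below simply dumps every section and key without assuming any particular option names:

import configparser

cfg = configparser.ConfigParser()
cfg.read('config/train.ini')  # or the eval configuration file
for section in cfg.sections():
    print(f'[{section}]')
    for key, value in cfg[section].items():
        print(f'  {key} = {value}')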
If you find our work useful, please cite it as:
@inproceedings{rc2024gaps,
title = {GAPS: Geometry-Aware, Physics-Based, Self-Supervised Neural Garment Draping},
author = {Chen, Ruochen and Parashar, Shaifali and Chen, Liming},
booktitle = {International Conference on 3D Vision (3DV)},
year = {2024}
}