LEMON: Learning 3D Human-Object Interaction Relation from 2D Images (CVPR2024)

PyTorch implementation of LEMON: Learning 3D Human-Object Interaction Relation from 2D Images. The repository will gradually release the training, evaluation, and inference code, pre-trained models, and the 3DIR dataset.

📖 To Do List

    • Release the pre-trained LEMON.
    • Release the inference, training, and evaluation code.
    • Release the 3DIR dataset.

📋 Table of content

  1. ❗ Overview
  2. 💡 Requirements
  3. 📖 Dataset
  4. ✏️ Usage
    1. Environment
    2. Demo
    3. Training
    4. Evaluation
  5. ✉️ Statement
  6. 🔍 Citation

❗ Overview

LEMON seeks to parse 3D HOI elements from 2D images:


💡 Requirements

(1) Download the SMPL-H models used in the AMASS project and put them under the folder smpl_models/smplh/.
(2) Download smpl_neutral_geodesic_dist.npy and put it under the folder smpl_models/; it is used to compute the geo metric.
(3) Download the pre-trained HRNet and put the .pth file under the folder tools/models/hrnet/config/hrnet/.
(4) Download the pre-trained LEMON (DGCNN as backbone) and put the .pt files under the folder checkpoints/; we release checkpoints trained with and without curvatures.
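After these downloads, the expected layout should look roughly like the following sketch (assembled from the paths above; exact file names depend on what you download):

```
lemon_3d/
├── smpl_models/
│   ├── smplh/                           # SMPL-H models from AMASS
│   └── smpl_neutral_geodesic_dist.npy   # for the geo metric
├── tools/models/hrnet/config/hrnet/     # pre-trained HRNet .pth file
└── checkpoints/                         # LEMON .pt files (with / without curvature)
```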

📖 Dataset


The 3DIR dataset includes the following data:
(1) HOI images with human and object masks.
(2) Dense 3D human contact annotations.
(3) Dense 3D object affordance annotations.
(4) Pseudo-SMPLH parameters.
(5) Annotations of the human-object spatial relation.

Download the 3DIR dataset from Google Drive or Baidu Pan (key: 3DIR). Please refer to Data/DATA.md for more details of 3DIR.

✏️ Usage

Environment

First, clone this repository and create a conda environment:

git clone https://github.com/yyvhang/lemon_3d.git
cd lemon_3d
conda create -n lemon python=3.9 -y
conda activate lemon
# install PyTorch 2.0.1 (CUDA 11.8)
conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.8 -c pytorch -c nvidia

Then, install the other dependencies:

pip install -r requirements.txt

Demo

The following command runs LEMON on an HOI pair. To infer without curvature, modify the parameters in config/infer.yaml: change the checkpoint path and set curvature to False.

python inference.py --outdir Demo/output
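For reference, the curvature-related part of config/infer.yaml might look like the fragment below (the exact key names and the checkpoint filename are assumptions; check the file shipped with the repo):

```yaml
checkpoint: checkpoints/lemon_wo_curvature.pt  # hypothetical filename
curvature: False
```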

For the visualizations in the main paper, we use Blender to render the human and the proxy sphere, and refer to IAG-Net for the object visualization.
Note: if you use the model with curvature, you need to compute curvatures for the human and object geometry. For convenience, we recommend CloudCompare or trimesh.curvature. In our tests, LEMON works well with curvatures computed by either method.

Training

To train LEMON, run the following command; the parameters can be modified in config/train.yaml.

bash train.sh

Evaluation

Run the following command to evaluate the model; the settings are in config/eval.yaml.

python eval.py --yaml config/eval.yaml
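For context, the geo metric (which uses smpl_neutral_geodesic_dist.npy from the Requirements section) is typically a geodesic contact error: the mean geodesic distance from each predicted contact vertex to its nearest ground-truth contact vertex. A minimal NumPy sketch under that assumption (the function name and threshold are illustrative, not the repo's API):

```python
import numpy as np

def geodesic_contact_error(geo_dist, pred, gt, thresh=0.5):
    """geo_dist: (V, V) pairwise geodesic distances on the SMPL mesh.
    pred, gt: (V,) contact probabilities / binary labels per vertex."""
    pred_idx = np.where(pred >= thresh)[0]
    gt_idx = np.where(gt >= thresh)[0]
    if len(pred_idx) == 0 or len(gt_idx) == 0:
        return 0.0 if len(pred_idx) == len(gt_idx) else np.inf
    # distance from each predicted contact vertex to the nearest GT contact vertex
    nearest = geo_dist[np.ix_(pred_idx, gt_idx)].min(axis=1)
    return float(nearest.mean())

# toy example: 4 vertices on a path graph with unit edge lengths
d = np.array([[0, 1, 2, 3],
              [1, 0, 1, 2],
              [2, 1, 0, 1],
              [3, 2, 1, 0]], dtype=float)
gt = np.array([0, 1, 0, 0])    # ground-truth contact at vertex 1
pred = np.array([0, 0, 1, 0])  # predicted contact at vertex 2
print(geodesic_contact_error(d, pred, gt))  # 1.0
```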

If you use LEMON as a comparison baseline, please state whether curvature is used.

✉️ Statement

This project is for research purposes only; please contact us for a commercial license. For any other questions, please contact yyuhang@mail.ustc.edu.cn.

🔍 Citation

@article{yang2023lemon,
  title={LEMON: Learning 3D Human-Object Interaction Relation from 2D Images},
  author={Yang, Yuhang and Zhai, Wei and Luo, Hongchen and Cao, Yang and Zha, Zheng-Jun},
  journal={arXiv preprint arXiv:2312.08963},
  year={2023}
}