This repo accompanies the paper "PredNet: a simple Human Motion Prediction Network for Human-Robot Interaction", to be published at ETFA 2021.
- Installation
- Datasets
- Repeating experiments from the paper
- Dependencies
- Cite
- License
- Acknowledgement
## Installation

- Install MuJoCo 2.0+.
- Using conda, install this repo's dependencies:

```shell
conda env create -f env_<OS>.yml
```

where `<OS>` is either `linux` or `windows`.
- Activate the conda env before running any of the code below:

```shell
conda activate motion_prediction
```

- Install Blender OR OpenSCAD.
- Add the directory of Blender OR OpenSCAD to `$PATH`, or set `BLENDER_OR_OPENSCAD_PATH` in `experiments/config.py`. It will accordingly be used by Trimesh to calculate the intersection volume necessary for the VOE.
Note: Blender 2.92 was tested and used to produce the results in the paper.
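If you prefer configuring the path in code, the snippet below sketches what the `BLENDER_OR_OPENSCAD_PATH` entry in `experiments/config.py` could look like. Only the variable name comes from this README; the lookup logic is an illustrative assumption.

```python
# Hypothetical sketch of the boolean-engine path entry in experiments/config.py.
# Only the name BLENDER_OR_OPENSCAD_PATH is documented by the repo; the
# fallback logic below is an assumption for illustration.
import shutil

# Prefer whatever binary is already on $PATH; an empty string means
# Trimesh must find the engine on $PATH by itself.
BLENDER_OR_OPENSCAD_PATH = (
    shutil.which("blender")      # e.g. /opt/blender-2.92/blender
    or shutil.which("openscad")  # or an OpenSCAD binary
    or ""
)
```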
## Datasets

Two datasets are supported by this repo:
- HRI synthetic dataset
- Mogaze real-world dataset
The HRI synthetic data developed in this work are located under `data/hri_data`. The data contain three scenario types: co-operating, co-existing, and noise. To visualize a scenario, in a terminal with the `motion_prediction` conda env active, run:
```shell
python utils/data_hri_vis/hri_utils.py --scenario combined --dataset_type train
```

Parameters:
- `--scenario` or `-s`: scenario to visualize:
  - `co_operating`: human waiting for the robot to stop, then working directly beside the robot's workspace
  - `co_existing`: human working far away from the robot
  - `noise`: human walking beside the robot
  - `combined`: all of the above
- `--dataset_type` or `-t`: dataset type to use:
  - `train`: data used in training the model
  - `test`: data used in testing the model (the scenarios differ slightly from the above; please check the paper)
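The flag interface above can be mirrored in a few lines of `argparse`; this is an illustrative sketch of the documented CLI, not the actual parser inside `hri_utils.py`.

```python
# Illustrative argparse mirror of the visualization CLI described above.
# The real parser in utils/data_hri_vis/hri_utils.py may differ.
import argparse

parser = argparse.ArgumentParser(description="Visualize an HRI scenario")
parser.add_argument("--scenario", "-s", default="combined",
                    choices=["co_operating", "co_existing", "noise", "combined"])
parser.add_argument("--dataset_type", "-t", default="train",
                    choices=["train", "test"])

# Same invocation as the example command above:
args = parser.parse_args(["-s", "combined", "-t", "train"])
print(args.scenario, args.dataset_type)  # -> combined train
```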
The Mogaze dataset contains daily human manipulation scenarios captured in the real world. We currently provide the Mogaze data in the repo to ease repeating the experiment results. However, the data can also be downloaded from the official Mogaze repo. Place any downloaded scenario in its own folder under `data`, e.g., for `p1_1`:

```
data/p1_1/p1_1_gaze_data.hdf5
data/p1_1/p1_1_human_data.hdf5
```
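A quick stdlib-only way to check that a downloaded scenario sits in the layout shown above; `mogaze_files` is a hypothetical helper for illustration, not part of the repo.

```python
# Sanity-check that a Mogaze scenario follows the expected data/ layout.
# mogaze_files is a hypothetical helper, not part of this repo.
from pathlib import Path

def mogaze_files(scenario: str, root: str = "data") -> list[Path]:
    base = Path(root) / scenario
    return [base / f"{scenario}_gaze_data.hdf5",
            base / f"{scenario}_human_data.hdf5"]

for f in mogaze_files("p1_1"):
    print(f, "found" if f.exists() else "missing")
```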
To visualize the data, run:
```shell
python utils/data_mogaze_vis/mogaze_utils.py -s p2_1
```

Parameters:

- `--scenario` or `-s`: scenario to visualize:
  - `p1_1`: Mogaze user 1, scenario 1 (used for training)
  - `p2_1`: Mogaze user 2, scenario 1 (used for testing)
## Repeating experiments from the paper

Run `experiments/experiments_run.py` with the parameters:

- `--architecture` or `-a`: model architecture to use, either `red` or `prednet`
- `--experiment` or `-e`: experiment to run. Possible experiment keys are:
  - `msm_with_goal`: HRI Multiple Scenario data (i.e., training includes data from the noise, co-existing, and co-operating scenarios), with the goal position at the input
  - `msm_without_goal`: same as above, but without the goal position at the input
  - `ssm_co-existing`: HRI Single Scenario data using only co-existing data, with goal
  - `ssm_co-operating`: HRI Single Scenario data using only co-operating data, with goal
  - `ssm_noise`: HRI Single Scenario data using only noise data, with goal
  - `mogaze`: Mogaze data, with the goal at the input
  - `mogaze_without_goal`: same as above, but without the goal at the input
Note: make sure your conda env is activated!
```shell
# Training "PredNet MSM" on HRI data:
python experiments/experiments_run.py -a prednet -e msm_with_goal

# Training "PredNet MSM" on Mogaze data:
python experiments/experiments_run.py -a prednet -e mogaze

# Training "RED" on HRI data:
python experiments/experiments_run.py -a red

# Training "RED" on Mogaze data:
python experiments/experiments_run.py -a red -e mogaze
```

You can test the trained models with two metrics:
- Mean Absolute Error (MAE)
- Volumetric Occupancy Error (VOE).
Note: by default, the configurations in `config.py` are set to reproduce the paper results.
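For intuition, the MAE over predicted vs. ground-truth joint coordinates can be sketched in a few lines. The paper defines the exact averaging; this flat-list version is an assumed illustration only.

```python
# Toy illustration of a mean absolute error between predicted and
# ground-truth joint coordinates. The exact MAE definition used in the
# paper may average differently; this is an assumption for intuition.
def mae(pred, target):
    """Per-coordinate mean absolute error over two equal-length flat lists."""
    assert len(pred) == len(target), "sequences must align"
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

pred   = [0.0, 1.0, 2.0, 3.0]   # e.g. flattened joint positions
target = [0.0, 1.5, 2.0, 2.5]
print(mae(pred, target))  # -> 0.25
```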
- For HRI: e.g., to calculate the MAE for PredNet MSM on all scenario types:

```shell
python experiments/mae_hri.py -a prednet --action combined --avoid_goal False
```

Parameters:

- `--architecture` or `-a`: trained model architecture to use, either `red` or `prednet`
- `--action` or `-ac`: action/scenario type. Possible keys are: `co_existing`, `co_operating`, `noise`, `combined`
- `--avoid_goal` or `-ag`: true to avoid passing the goal position at the input

By default, the trained models provided with the repo and selected checkpoints are used, giving the same values as reported in the paper. To change the configurations, check `config.py`.
- For the Mogaze dataset:

```shell
# For PredNet MSM
python experiments/mae_mogaze.py -a prednet -ag False

# For RED
python experiments/mae_mogaze.py -a red -ag False
```

Parameters: same as above; however, the `--action` parameter is not required.
- For HRI:

```shell
# Run VOE on PredNet MSM for all HRI scenarios
python experiments/voe_hri.py -a prednet --action combined --avoid_goal False

# Run VOE on PredNet MSMwG for all HRI scenarios
python experiments/voe_hri.py -a prednet --action combined --avoid_goal True

# Run VOE on RED for all HRI scenarios
python experiments/voe_hri.py -a red --action combined --avoid_goal False
```

- For the Mogaze dataset:
```shell
# VOE of PredNet MSM
python experiments/voe_mogaze.py -a prednet --avoid_goal False

# VOE of PredNet MSMwG
python experiments/voe_mogaze.py -a prednet --avoid_goal True

# VOE of RED
python experiments/voe_mogaze.py -a red --avoid_goal False
```

Note: calculating the VOE takes a long time.
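The repo computes the VOE from true mesh-intersection volumes via Trimesh with a Blender/OpenSCAD boolean engine (see Installation). As a rough intuition only, a volumetric occupancy error can be sketched on discretized voxel sets; the `1 - overlap/union` formulation below is an assumption for illustration, not the paper's definition.

```python
# Toy voxel-based sketch of a volumetric occupancy error. The repo uses
# Trimesh mesh intersections instead; the 1 - overlap/union formulation
# here is an illustrative assumption, not the paper's exact metric.
def voe(pred_voxels: set, true_voxels: set) -> float:
    """Error between two sets of occupied voxel coordinates."""
    union = len(pred_voxels | true_voxels)
    if union == 0:
        return 0.0
    overlap = len(pred_voxels & true_voxels)
    return 1.0 - overlap / union

pred = {(0, 0, 0), (0, 0, 1), (0, 1, 0)}  # predicted occupancy
true = {(0, 0, 0), (0, 0, 1), (1, 0, 0)}  # ground-truth occupancy
print(voe(pred, true))  # overlap=2, union=4 -> 0.5
```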
## Dependencies

This repo uses/includes:
- a modified version of robosuite, used to create the workplaces in MuJoCo,
- sample data (`p1_1` and `p2_1`) downloaded from Mogaze, and
- code from RED, adapted for evaluation against PredNet.
- The HRI human model (`.mjcf`) is a modified version of the one from https://github.com/mingfeisun/DeepMimic_mujoco.
- The Mogaze human model (`.mjcf`) is created from the official human model (`.urdf`) provided by Mogaze.
- For the self-developed HRI synthetic dataset, code by @Jiacheng Yang, developed during his master's thesis at the University of Stuttgart, was used. However, only the generated data is provided in this repo.
We do not intend to violate any license of the aforementioned libraries or dependencies used in this project, some of which are also listed under `env_<OS>.yml`. If we violate your license(s), please let us know!
## Cite

If you use our code, please cite us:

```bibtex
@INPROCEEDINGS{elshamouty_and_pratheepkumar_2021,
  author    = {El-Shamouty, Mohamed and Pratheepkumar, Anish},
  title     = {PredNet: a simple Human Motion Prediction Network for Human-Robot Interaction},
  booktitle = {2021 26th {IEEE} International Conference on Emerging Technologies and Factory Automation (ETFA)},
  year      = {2021}
}
```
## License

MIT
## Acknowledgement

We thank @Danilo Brajovic for checking the code before publishing it.