Robot Person Following Under Partial Occlusion
Prerequisites
- ROS, verified on Melodic and Noetic
- OpenCV 3.4.12
- Ceres
- Create a conda environment and install PyTorch:

```bash
conda create -n mono_following python=3.8
conda activate mono_following
# Pick the CUDA toolkit version that matches your GPU driver; adjust as needed
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
```
- Install Python-related packages:

```bash
pip install -r requirements.txt
git clone https://github.com/eric-wieser/ros_numpy
cd ros_numpy
python setup.py install
```
- Install C++-related packages:
  - OpenCV == 3.4.12
  - Eigen >= 3.0
- Download the bounding-box detection models from Google Drive or [YOLOX_deepsort_tracker], then create the directory

  mono_tracking/scripts/AlphaPose/YOLOX/weights

  and put the checkpoints in it.
- Download the 2D joint detection models from Google Drive, then create the directory

  mono_tracking/scripts/AlphaPose/Models

  and put the checkpoints in it.
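Before launching, it can help to confirm the checkpoint directories are in place. The snippet below is a minimal sanity check, not part of the repo; the directory names are taken from the steps above, and the helper name is hypothetical.

```python
# Hypothetical helper (not part of the repo): verify that the checkpoint
# directories described above exist and are non-empty before launching.
import os

EXPECTED_DIRS = [
    "mono_tracking/scripts/AlphaPose/YOLOX/weights",  # bounding-box detector checkpoints
    "mono_tracking/scripts/AlphaPose/Models",         # 2D joint detector checkpoints
]

def missing_checkpoints(root="."):
    """Return the expected checkpoint directories that are absent or empty."""
    missing = []
    for rel in EXPECTED_DIRS:
        path = os.path.join(root, rel)
        if not os.path.isdir(path) or not os.listdir(path):
            missing.append(rel)
    return missing

if __name__ == "__main__":
    problems = missing_checkpoints()
    if problems:
        print("Missing or empty:", problems)
    else:
        print("All checkpoint directories are in place.")
```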
Run with our self-built dataset as a ROSBAG:

```bash
roslaunch mono_tracking all_mono_tracking.launch sim:=true
# play the bag with the simulated clock, at 0.2x speed
rosbag play --clock -r 0.2 2022-07-15-17-09-34.bag
```
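Note that `-r 0.2` slows playback to 0.2x real time, so replaying a bag takes proportionally longer on the wall clock. A small illustrative helper (not part of the repo) for estimating that:

```python
# Illustrative helper (not part of the repo): estimate wall-clock replay time.
# `rosbag play -r RATE` replays at RATE x real time, so a bag recorded over
# T seconds takes T / RATE seconds of wall-clock time to play back.

def wall_clock_playback_seconds(bag_duration_s: float, rate: float) -> float:
    """Wall-clock seconds needed to replay a bag of `bag_duration_s` at `rate`."""
    if rate <= 0:
        raise ValueError("rate must be positive")
    return bag_duration_s / rate

# e.g. a 60 s bag at -r 0.2 takes 300 s to replay
```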
Run with our DINGO robot:

```bash
roslaunch mono_tracking all_mono_tracking.launch sim:=false
```
Run with the ICVS datasets as ROSBAGs, and evaluate:

```bash
# Example: the "corridor_corners" scene
roslaunch mono_tracking evaluate_MPF_in_icvs.launch scene:=corridor_corners
```
If you find this work useful, please cite:

```bibtex
@inproceedings{ye2023robot,
  title={Robot Person Following Under Partial Occlusion},
  author={Ye, Hanjing and Zhao, Jieting and Pan, Yaling and Chen, Weinan and He, Li and Zhang, Hong},
  booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
  pages={7591--7597},
  year={2023},
  organization={IEEE}
}
```