
Commit

update README.md
Uio96 committed Jun 8, 2022
1 parent 6183ef9 commit b640548
Showing 3 changed files with 5 additions and 3 deletions.
4 changes: 2 additions & 2 deletions README.md
Original file line number Diff line number Diff line change
@@ -58,7 +58,7 @@ For hardware-accelerated ROS2 inference support, please visit [Isaac ROS CenterP
./make.sh
~~~

4. Download our [pre-trained models](https://drive.google.com/drive/folders/16HbCnUlCaPcTg4opHP_wQNPsWouUlVZe?usp=sharing) for CenterPose and move all the `.pth` files to `$CenterPose_ROOT/models/CenterPose/`. Download our [pre-trained models](https://drive.google.com/drive/folders/1zOryfHI7ab2Qsyg3rs-zP3ViblknfzGy?usp=sharing) for CenterPoseTrack and move all the `.pth` files to `$CenterPose_ROOT/models/CenterPoseTrack/`. We currently provide models for 9 categories: bike, book, bottle, camera, cereal_box, chair, cup, laptop, and shoe.
4. Download our [CenterPose pre-trained models](https://drive.google.com/drive/folders/16HbCnUlCaPcTg4opHP_wQNPsWouUlVZe?usp=sharing) and move all the `.pth` files to `$CenterPose_ROOT/models/CenterPose/`. Similarly, download our [CenterPoseTrack pre-trained models](https://drive.google.com/drive/folders/1zOryfHI7ab2Qsyg3rs-zP3ViblknfzGy?usp=sharing) and move all the `.pth` files to `$CenterPose_ROOT/models/CenterPoseTrack/`. We currently provide models for 9 categories: bike, book, bottle, camera, cereal_box, chair, cup, laptop, and shoe.
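A minimal sketch of the move in step 4, assuming the checkpoints have already been downloaded to a local folder (the function name and example file names below are hypothetical; any `.pth` files work):

```python
import shutil
from pathlib import Path

def collect_checkpoints(download_dir, model_dir):
    """Move every downloaded .pth checkpoint into the expected models folder."""
    model_dir = Path(model_dir)
    model_dir.mkdir(parents=True, exist_ok=True)  # create models/... if missing
    moved = []
    for ckpt in sorted(Path(download_dir).glob("*.pth")):  # only checkpoint files
        shutil.move(str(ckpt), str(model_dir / ckpt.name))
        moved.append(ckpt.name)
    return moved
```

In practice you would call this once per model set, e.g. with `model_dir` pointing at `$CenterPose_ROOT/models/CenterPose/` and again at `$CenterPose_ROOT/models/CenterPoseTrack/`.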

5. Prepare training/testing data

@@ -99,7 +99,7 @@ python demo.py --demo webcam --arch dla_34 --load_model ../path/to/model --track

## Training

We follow the approach of [CenterNet](https://github.com/xingyizhou/CenterNet/blob/master/experiments/ctdet_coco_dla_1x.sh) for training the DLA network, reducing the learning rate by 10x after epochs 90 and 120, and stopping after 140 epochs.
We follow the approach of [CenterNet](https://github.com/xingyizhou/CenterNet/blob/master/experiments/ctdet_coco_dla_1x.sh) for training the DLA network, reducing the learning rate by 10x after epochs 90 and 120, and stopping after 140 epochs. Similarly, for CenterPoseTrack, we train the DLA network, reducing the learning rate by 10x after epochs 6 and 10, and stopping after 15 epochs.
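The step schedule above can be sketched as a plain function (the base learning rate of `1.25e-4` follows CenterNet's default and is an assumption here, as is the `epoch > milestone` reading of "after epoch N"):

```python
def step_lr(base_lr, epoch, milestones=(90, 120), gamma=0.1):
    """Step schedule: multiply the learning rate by gamma after each milestone epoch."""
    lr = base_lr
    for m in milestones:
        if epoch > m:  # the drop takes effect on the epoch *after* the milestone
            lr *= gamma
    return lr
```

For CenterPoseTrack the same shape applies with `milestones=(6, 10)` over 15 epochs.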

For debugging purposes, you can set all the local training parameters in the `$CenterPose_ROOT/src/main_CenterPose.py` script; for CenterPoseTrack, use the `$CenterPose_ROOT/src/main_CenterPoseTrack.py` script instead. You can also pass the same options on the command line; the full list is in `$CenterPose_ROOT/src/lib/opts.py`.

2 changes: 2 additions & 0 deletions data/README.md
@@ -15,3 +15,5 @@ Example code:
`python download.py --c chair`

`python preprocess.py --c chair --outf outf_all --frame_rate 1`

Note that we set `frame_rate` to 15 for CenterPose, while for CenterPoseTrack we set it to 1.
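One way to picture what the `frame_rate` flag does (a sketch, assuming roughly 30 fps source videos and evenly spaced subsampling; the actual `preprocess.py` logic may differ):

```python
def frames_to_keep(n_frames, source_fps=30, frame_rate=1):
    """Indices of the frames kept when subsampling a video to ~frame_rate fps."""
    step = max(1, round(source_fps / frame_rate))  # keep every step-th frame
    return list(range(0, n_frames, step))
```

With `frame_rate=15` every other frame of a 30 fps clip survives; with `frame_rate=1` only one frame in thirty does.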
2 changes: 1 addition & 1 deletion src/tools/objectron_eval/README.md
@@ -31,5 +31,5 @@ To evaluate on multiple categories, we wrap the evaluation code into two scripts
`shell_eval_image_CenterPose.py` runs on the original officially released preprocessed dataset (image).
(We do not use it for our paper.)

`shell_eval_video_CenterPose.py` runs on the re-sorted officially released preprocessed dataset (video).
`shell_eval_video_CenterPose.py` and `shell_eval_video_CenterPoseTrack.py` run on the re-sorted officially released preprocessed dataset (video).
