Support Demoing HigherHRNet #29
Comments
Could you post the full error message? BTW, I edited the original post to fence the code blocks.
Traceback (most recent call last):
File "demo/video_demo_with_mmdet.py", line 110, in <module>
main()
File "demo/video_demo_with_mmdet.py", line 90, in main
format='xyxy')
File "/mmpose/mmpose/apis/inference.py", line 237, in inference_pose_model
pose = _inference_single_pose_model(model, img_or_path, bbox)
File "/mmpose/mmpose/apis/inference.py", line 184, in _inference_single_pose_model
data = test_pipeline(data)
File "/mmpose/mmpose/datasets/pipelines/shared_transform.py", line 70, in __call__
data = t(data)
File "/mmpose/mmpose/datasets/pipelines/shared_transform.py", line 119, in __call__
meta[key] = results[key]
KeyError: 'flip_index'

This is what I get if I just run the command. When I add flip_index to the dictionary that you are parsing and set it to [0, 1], I get the error mentioned above.
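For context, the 'flip_index' that the pipeline's Collect step expects is a per-keypoint left/right mirror mapping used during flip testing, so it has to be a permutation over all keypoints rather than a two-element list. A minimal sketch of what it typically holds, assuming the standard 17-keypoint COCO ordering used by the COCO configs in this repo:

```python
# Sketch only: flip_index[i] gives the index of keypoint i's left/right
# mirror, so each joint can be swapped with its counterpart when the
# input is horizontally flipped at test time.
COCO_KEYPOINTS = [
    'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear',
    'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow',
    'left_wrist', 'right_wrist', 'left_hip', 'right_hip',
    'left_knee', 'right_knee', 'left_ankle', 'right_ankle'
]
flip_index = [0, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15]

# e.g. left_shoulder (index 5) maps to right_shoulder (index 6) and back
assert flip_index[5] == 6 and flip_index[6] == 5
```

This is why setting it to 0 or [0, 1] fails: the pipeline indexes it once per joint, so it must have one entry per keypoint.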
I have also verified that the example configuration you give in the docs works on my machine. It's only when I try to modify that command to use higher_hrnet that things go wrong.
Okay, it's because the demo was written for top-down models. Demos for bottom-up models will be added soon.
Then maybe rename the demo file to point this out?
Good idea.
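For context on the difference: the top-down demo runs the detector first and then calls the pose model once per bounding box, whereas a bottom-up model such as HigherHRNet consumes the whole frame and needs no detector. A rough sketch of what a bottom-up video loop could look like, assuming an mmpose version that exposes `inference_bottom_up_pose_model` (exact names, signatures and return formats vary across versions, and the video path is a placeholder):

```python
# Sketch only, not the repository's demo script.
import cv2
from mmpose.apis import (init_pose_model, inference_bottom_up_pose_model,
                         vis_pose_result)

pose_model = init_pose_model(
    'configs/bottom_up/higherhrnet/coco/higher_hrnet48_coco_512x512.py',
    'checkpoints/higher_hrnet48_coco_512x512-60fedcbc_20200712.pth',
    device='cuda:0')

cap = cv2.VideoCapture('sample_video.mp4')  # placeholder video path
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # No detector step: the bottom-up model takes the whole frame.
    out = inference_bottom_up_pose_model(pose_model, frame)
    # Depending on the mmpose version, this returns either the pose list
    # directly or a (pose_results, returned_outputs) tuple.
    pose_results = out[0] if isinstance(out, tuple) else out
    vis = vis_pose_result(pose_model, frame, pose_results, show=False)
cap.release()
```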
Hi, thanks for this great tool! I'm looking for a demo for bottom-up inference. Is it available now?
Hi,
Thank you for your interesting work. I am trying to test HigherHRNet on a collection of videos I've found very challenging for existing human pose estimation models. However, when I try to execute the following command:
python demo/video_demo_with_mmdet.py $SCRATCH/mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py $SCRATCH/mmdetection/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth configs/bottom_up/higherhrnet/coco/higher_hrnet48_coco_512x512.py checkpoints/higher_hrnet48_coco_512x512-60fedcbc_20200712.pth --video-path $SCRATCH/VIBE/sample_video.mp4 --show --out-video-root ./
I get a number of config-related issues. The first is that the image size seems to be treated as both an integer and a list; I made a temporary fix with a try/except statement. The second issue I'm dealing with now is the lack of a flip_index.
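For illustration, the kind of temporary fix described above amounts to normalizing the config value up front; this is only a sketch, and `normalize_image_size` is a hypothetical helper rather than anything in the repo:

```python
def normalize_image_size(image_size):
    """Return a [w, h] pair whether the config stores an int or a list.

    Sketch of the workaround described above, not the demo's actual code.
    """
    if isinstance(image_size, int):
        return [image_size, image_size]
    return list(image_size)

assert normalize_image_size(512) == [512, 512]          # bottom-up style config
assert normalize_image_size([192, 256]) == [192, 256]   # top-down style config
```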
As for flip_index, I am not sure what purpose it serves, but I've tried setting it to 0 or [0, 1], as I saw in another example, and neither works.
For a detailed traceback, see the KeyError trace above.
I hope you can help me; I'm really excited about your work and hope to be able to contribute in the future.