
performance for in the wild #21

Closed
slava-smirnov opened this issue Nov 26, 2020 · 4 comments

@slava-smirnov

Hey!

Great work! Do you have any rough in-the-wild performance benchmarks you could share? I realise it depends on resolution and duration, but even a rough breakdown would help a lot.

@Shimingyi
Owner

Hi @slava-smirnov ,

Very good question. In my experience, there are roughly three points:

  1. 2D detection: Our method takes 2D detections from OpenPose as input, so good 2D detection is the most important factor in this setting. I suggest the bounding box of the detected person be no smaller than 256px, so the 2D detection error stays in an acceptable range. You can also smooth the 2D results or use a better 2D detector like [STAF](https://github.com/soulslicer/STAF/tree/staf), which should help.
  2. Camera view: This also needs attention. Our method is trained on the Human3.6M dataset, which includes only 4 static cameras; that is not enough to train the model to estimate arbitrary camera parameters. I suggest keeping the camera in front of the person, like in the videos we selected for our demonstration.
  3. Video format: The method doesn't require a specific format or video parameters; any sequence longer than 101 frames (around 5 s at 25 fps) can be fed into our pretrained network. In general, fps is not a problem, but if you capture at 120 fps or higher, you should downsample so that the frame-to-frame motion distribution is similar to the training data.
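The pre-processing in points 1 and 3 (smoothing the 2D detections, downsampling high-fps captures, and checking the ≥101-frame requirement) could be sketched like this. This is only an illustrative helper, not code from the repository; the function name, the 5-frame smoothing window, and the keypoint array layout are all assumptions:

```python
import numpy as np

def prepare_sequence(keypoints_2d, src_fps, target_fps=25,
                     min_frames=101, smooth_window=5):
    """Illustrative pre-processing of 2D keypoints before 3D inference.

    keypoints_2d: array of shape (frames, joints, 2), e.g. stacked
    per-frame OpenPose detections (layout assumed for this sketch).
    """
    # 1. Downsample high-fps captures (e.g. 120 fps) toward the ~25 fps
    #    frame spacing of the training data, by keeping every k-th frame.
    if src_fps > target_fps:
        step = int(round(src_fps / target_fps))
        keypoints_2d = keypoints_2d[::step]

    # 2. Temporal smoothing: a simple moving average over each joint
    #    coordinate to reduce per-frame 2D detection jitter.
    kernel = np.ones(smooth_window) / smooth_window
    smoothed = np.empty(keypoints_2d.shape, dtype=float)
    for j in range(keypoints_2d.shape[1]):
        for c in range(keypoints_2d.shape[2]):
            smoothed[:, j, c] = np.convolve(
                keypoints_2d[:, j, c], kernel, mode="same")

    # 3. Length check: the pretrained network expects at least 101
    #    frames (around 5 s at 25 fps).
    if smoothed.shape[0] < min_frames:
        raise ValueError(
            f"sequence too short: {smoothed.shape[0]} < {min_frames} frames")
    return smoothed
```

For example, a 5-second capture at 120 fps (600 frames) would be subsampled with step 5 down to 120 frames, which still clears the 101-frame minimum.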

@slava-smirnov
Author

Hey! Thanks for the quick response. Points 1 and 2 are quite clear. I was hoping for some per-frame inference times for the 3D component (everything other than the 2D predictions).

@Shimingyi
Owner

It's a good idea, but the key contribution of this paper is using kinematic knowledge at the architecture level, so I haven't profiled per-frame inference in detail. There is a paper called SPIN that runs a per-frame optimization after the first-stage prediction; you can follow it for more inspiration. I will keep working on this task, so I believe it will become stronger :)

@slava-smirnov
Author

Got you. I'll share numbers if/whenever I have them. In any case, bringing in learned kinematics is a significant contribution! Great job.
