Big thanks from RoboCup@Home #175

Closed
LoyVanBeek opened this Issue Jul 30, 2017 · 4 comments


LoyVanBeek commented Jul 30, 2017

Hi @gineshidalgo99 and other contributors,

On behalf of the RoboCup@Home Technical Committee, I just want to give a big thanks to OpenPose. Many teams in RoboCup@Home have used OpenPose this year, and it really raised the level of the teams and the competition.

For example, in our Restaurant task, customers can wave to the robots, and the robots then take their order. This is just one of several use cases. In previous years this did not work very well for the 1-2 teams that tried it, but now it seems like a solved problem.
You can now also point at an object and have the robot do something with it.

Again, thanks from @RoboCupAtHome

P.S. sorry for not sticking to the template, but this really didn't fit.

Contributor

tsimk commented Jul 30, 2017

Very cool stuff :) Glad your community is finding it useful; this is precisely the kind of thing we were hoping for when we released it.

Out of curiosity, how did you compute 3D locations for the skeleton joints?

Member

gineshidalgo99 commented Jul 31, 2017

@LoyVanBeek Thank you so much! We are really glad our technology can help and be used in several research areas and challenges such as RoboCup!

Contributor

shivenmian commented Aug 1, 2017

@LoyVanBeek Great stuff! Could you explain how you obtained the OpenPose keypoints in 3D? Did you use 3D reconstruction or something similar?

LoyVanBeek commented Aug 2, 2017

You can run OpenPose on the RGB images from a Kinect. The correspondence between the RGB and depth sensors is good enough that getting a keypoint's 3D position is essentially a per-pixel depth lookup.
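A minimal sketch of that lookup, assuming a depth frame already registered to the RGB frame with metric depth in meters; the intrinsics (FX, FY, CX, CY) and the helper name keypoint_to_3d are hypothetical placeholders, not values from this thread:

```python
import numpy as np

# Hypothetical Kinect color-camera intrinsics; real values come from calibration.
FX, FY = 525.0, 525.0   # focal lengths in pixels (assumed)
CX, CY = 319.5, 239.5   # principal point (assumed)

def keypoint_to_3d(u, v, depth_image):
    """Back-project a 2D OpenPose keypoint (u, v) into a 3D camera-frame point.

    Assumes depth_image is registered to the RGB image (as on the Kinect)
    and stores metric depth in meters. Returns None if depth is missing.
    """
    z = float(depth_image[int(round(v)), int(round(u))])
    if z <= 0.0:  # invalid or missing depth reading
        return None
    # Standard pinhole-camera back-projection.
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])
```

In practice you would also guard against keypoints falling on depth holes, e.g. by sampling a small neighborhood around (u, v) and taking the median depth.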

A short video of this in action: https://youtu.be/WGYDB-6KSx8?t=2m5s (with some extra fancy stuff added, like faces rendered on top of the 3D skeletons). This particular bit is by @tue-robotics.
