
How to do position retargeting like AnyTeleop #14

Open
xbkaishui opened this issue May 27, 2024 · 6 comments

Comments

@xbkaishui
Contributor

Hi,

Great work! I saw the hand detection retargeting logic; how about wrist pose retargeting using an RGB-D camera, as mentioned in the AnyTeleop paper? Can you give some code examples?

How do I convert the wrist frame to the camera frame?

Thanks

@yzqin
Member

yzqin commented May 27, 2024

Hi @xbkaishui

I'm not quite sure I understand your question. Could you clarify what you mean by "wrist pose retarget" in the context of AnyTeleop?

Also, I'm not sure why one would need to convert the wrist frame to the camera frame for teleoperation. Could you provide more details or context about your question?

@xbkaishui
Contributor Author

Hi @yzqin

Thanks for your quick response. I want to use the wrist pose detection result to teleoperate the robot arm.

Below is the relevant passage from the original paper ("Wrist Pose Detection from RGB-D"):

We use the pixel positions of the detected keypoints to retrieve the corresponding depth values from the depth image. Then, utilizing known intrinsic camera parameters, we compute the 3D positions of the keypoints in the camera frame. The alignment of the RGB and depth images is handled by the camera driver. With the 3D keypoint positions in both the local wrist frame and global camera frame, we can estimate the wrist pose using the Perspective-n-Point (PnP) algorithm.

Here, the 3D camera captures depth information. The keypoint positions are known in the local wrist coordinate system, and the same keypoints can also be observed in the camera coordinate system. Should we estimate a transformation matrix between the wrist coordinate system and the camera coordinate system here? The wrist pose from the camera's perspective would be exactly this transformation matrix, right?
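
To make my interpretation concrete, here is a rough sketch of the two steps the paper describes. The intrinsic values are placeholders and the function names are my own, so please treat it as a guess rather than the actual AnyTeleop code:

```python
import numpy as np
import cv2

# Calibrated intrinsics -- the values here are placeholders for illustration.
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def deproject(kp_pixels, depth_image):
    """Pixels + aligned depth -> 3D keypoints in the camera frame (paper step 1)."""
    u = kp_pixels[:, 0].astype(int)
    v = kp_pixels[:, 1].astype(int)
    z = depth_image[v, u]          # depth in meters, aligned to the RGB image
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def wrist_pose(kp_wrist, kp_pixels):
    """camera_T_wrist from 2D-3D correspondences via PnP (paper step 2).

    kp_wrist:  (N, 3) keypoints expressed in the local wrist frame, N >= 4
    kp_pixels: (N, 2) the same keypoints detected in the image
    """
    ok, rvec, tvec = cv2.solvePnP(
        kp_wrist.astype(np.float64), kp_pixels.astype(np.float64),
        K, None, flags=cv2.SOLVEPNP_EPNP)
    R, _ = cv2.Rodrigues(rvec)     # rotation: wrist frame -> camera frame
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec.ravel()
    return T                       # 4x4 transform = wrist pose in the camera frame
```

If this is right, the estimated rigid transform (rotation plus translation) is itself the wrist pose expressed in the camera frame.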

@yzqin
Member

yzqin commented May 31, 2024

Hi @xbkaishui

Apologies for the delayed response; I was traveling.

To clarify, we use the FrankMocap model for wrist pose estimation in this project. However, due to licensing restrictions, we cannot directly release that part of the code within AnyTeleop, but you can download FrankMocap yourself for free.

If you're interested in exploring wrist pose detection from RGB-D data, you can find our previous implementation here:

Code: https://github.com/yzqin/dex-hand-teleop/blob/3f7b56deed878052ec733a32b503aceee4ca8c8c/hand_detector/hand_monitor.py#L102

Let me know if you have any other questions!

@xbkaishui
Contributor Author

Hi Yuzhe

I still don't fully understand the entire data collection process. I'm not clear on how to control the robotic arm using my hand. How are the hand coordinates captured by the 3D camera mapped to the robotic arm? Can you explain more?

Thanks

@yzqin
Member

yzqin commented Jun 4, 2024

Hi @xbkaishui

Arm motion control is quite a bit more complicated than hand retargeting. I am working on some paper submission deadlines right now, and I will try to write up better documentation for the arm later. It requires many more dependencies on the software side and more effort on the tutorial.
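
In the meantime, here is a generic sketch of one common relative-pose teleoperation pattern, not necessarily exactly what AnyTeleop does: record the wrist pose at the start, express subsequent wrist motion in the robot base frame via a fixed camera-to-base extrinsic, and apply that motion to the arm's initial end-effector pose before solving IK. `T_base_cam` and `solve_ik` are placeholders for your own calibration and IK solver.

```python
import numpy as np

# Fixed extrinsic: camera frame expressed in the robot base frame.
# Comes from your own hand-eye calibration; identity here is a placeholder.
T_base_cam = np.eye(4)

def wrist_in_base(T_cam_wrist):
    """Re-express a wrist pose estimated in the camera frame in the robot base frame."""
    return T_base_cam @ T_cam_wrist

class RelativeTeleop:
    """Map incremental wrist motion to an end-effector target pose."""

    def __init__(self, T_base_wrist_init, T_base_ee_init):
        self.T_wrist0_inv = np.linalg.inv(T_base_wrist_init)
        self.T_ee0 = T_base_ee_init

    def ee_target(self, T_base_wrist):
        # Wrist motion since the start of teleoperation, in the base frame...
        delta = T_base_wrist @ self.T_wrist0_inv
        # ...applied to the arm's initial end-effector pose.
        return delta @ self.T_ee0

# Per frame (pseudocode): q = solve_ik(teleop.ee_target(wrist_in_base(T_cam_wrist)))
# where solve_ik is whatever inverse-kinematics solver your arm stack provides.
```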

@xbkaishui
Contributor Author

OK, got it.
