
Question about Eye-to-hand Scenario Usage #42

Closed
zyaqcl opened this issue Apr 23, 2024 · 6 comments

zyaqcl commented Apr 23, 2024

Hi

I have a depth camera mounted on the base and an object mounted on the end effector of a robot arm. After scanning, hand_T_cam was calculated, used as the pose, and sent along with the scan result for fusion. I expected the result to contain a clear model of the object and a messy background; on the contrary, I got a clear background and a messy foreground. Why is that?
[Screenshot 2024-04-23 15-58-28]

Thanks


benemer commented Apr 29, 2024

Note that the points that are passed to the integrate function need to be in a common frame. Is that the case on your side?
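To make "common frame" concrete, here is a minimal sketch, assuming hypothetical names points_local (an (N, 3) array in the scan's local frame) and base_T_local (the 4x4 homogeneous transform from that local frame into the shared base frame):

```python
import numpy as np

def to_common_frame(points_local: np.ndarray, base_T_local: np.ndarray) -> np.ndarray:
    """Map an (N, 3) point cloud from its local frame into the common base frame."""
    # Append a column of ones to obtain homogeneous coordinates (N, 4).
    homogeneous = np.hstack([points_local, np.ones((points_local.shape[0], 1))])
    # Apply the 4x4 transform and drop the homogeneous coordinate again.
    return (base_T_local @ homogeneous.T).T[:, :3]
```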


zyaqcl commented Apr 30, 2024

@benemer Thanks for your reply. I have mapped all the point clouds to the same base frame, performed point cloud alignment, and obtained a transformation for each point cloud. What should I choose as the input of integrate? Should that be the aligned point clouds with identical poses?


benemer commented Apr 30, 2024

These should be the transformed points and the poses in the same coordinate frame.

So, if you have transformations from each local frame to the base frame, pass those poses together with the points transformed to the base frame.
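As a rough sketch (assuming an integrate(points, pose) signature like the one discussed here, and hypothetical names volume, scans_local, and base_T_locals), the integration loop could look like this:

```python
import numpy as np

def integrate_scans(volume, scans_local, base_T_locals):
    """Integrate local-frame scans into `volume`, with everything expressed in the base frame.

    scans_local   -- list of (N, 3) arrays, one per scan, in their local frames
    base_T_locals -- list of 4x4 transforms mapping each local frame to the base frame
    """
    for points_local, base_T_local in zip(scans_local, base_T_locals):
        # Transform the points into the common base frame ...
        homogeneous = np.hstack([points_local, np.ones((points_local.shape[0], 1))])
        points_base = (base_T_local @ homogeneous.T).T[:, :3]
        # ... and pass them together with the pose expressed in that same frame.
        volume.integrate(points_base, base_T_local)
```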


zyaqcl commented Apr 30, 2024

@benemer Thank you for your patience, and apologies for not being specific.
I have a set of point clouds in the base frame, denoted BaseP, which are transformed from their own local frames with BaseTLocal.
After performing point cloud alignment, I got an AlignT for each point cloud. With BaseP.transform(AlignT) I get AlignedP.
I have tried four setups for the input (see the sketch below):
If the input is (BaseP, BaseTLocal), I get correct but unaligned points.
With (BaseP, AlignT @ BaseTLocal), I get a result similar to the one above.
If the input is (AlignedP, AlignT @ BaseTLocal) or (AlignedP, BaseTLocal), the results do not seem correct.
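Just to make the four combinations concrete, here is a small sketch (hypothetical helper; base_pcd is BaseP as an Open3D point cloud, base_T_local is BaseTLocal, align_T is AlignT):

```python
import copy
import numpy as np
import open3d as o3d

def build_setups(base_pcd: o3d.geometry.PointCloud,
                 base_T_local: np.ndarray,
                 align_T: np.ndarray):
    """Return the four (points, pose) pairs tried above for a single scan."""
    # AlignedP: apply the ICP correction to the cloud already expressed in the base frame.
    aligned_pcd = copy.deepcopy(base_pcd).transform(align_T)
    return {
        "BaseP, BaseTLocal": (base_pcd, base_T_local),
        "BaseP, AlignT @ BaseTLocal": (base_pcd, align_T @ base_T_local),
        "AlignedP, AlignT @ BaseTLocal": (aligned_pcd, align_T @ base_T_local),
        "AlignedP, BaseTLocal": (aligned_pcd, base_T_local),
    }
```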


benemer commented May 1, 2024

What exactly is your base frame representing? How are you aligning the points?

Can you maybe share the point clouds with us for testing?

At first glance, (AlignedP, BaseTLocal) seems to be correct.


zyaqcl commented May 7, 2024

@benemer Thank you. Your suggestion seems correct.
The base frame is the frame of the robot arm's end effector, which the object is mounted on.
I did the alignment with open3d.registration.registration_icp; AlignT is the resulting transformation.
Previously I had used AlignT incorrectly, so the results were wrong.
It seems there is little difference between using (AlignedP, BaseTLocal) and (AlignedP, AlignT @ BaseTLocal).
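For reference, a minimal sketch of how AlignT is estimated (parameter values are placeholders; recent Open3D releases expose the registration module as open3d.pipelines.registration instead of open3d.registration):

```python
import numpy as np
import open3d as o3d

def estimate_align_T(source: o3d.geometry.PointCloud,
                     target: o3d.geometry.PointCloud,
                     max_corr_dist: float = 0.01) -> np.ndarray:
    """Estimate AlignT such that source.transform(AlignT) lines up with target.

    Both clouds are assumed to be expressed in the base frame already, so
    AlignT is only a small residual correction on top of BaseTLocal.
    """
    result = o3d.pipelines.registration.registration_icp(
        source,
        target,
        max_corr_dist,  # maximum correspondence distance
        np.eye(4),      # initial guess: identity, since the clouds are roughly aligned
        o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    return result.transformation
```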

@benemer benemer closed this as completed May 7, 2024