Question about Eye-to-hand Scenario Usage #42
Comments
Note that the points that are passed to the […]
@benemer Thanks for your reply. I have all the point clouds mapped to the same Base frame, performed point cloud alignment, and got a transformation for each point cloud. What should I choose as the input of […]?
These should be the transformed points and the poses in the same coordinate frame. So, if you have transformations from each local frame to the base frame, pass those poses together with the points transformed to the base frame.
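For illustration, here is a minimal sketch of that pairing, assuming a hypothetical `volume.integrate(points, pose)` fusion call (the project's actual API is not shown in this thread) and a 4x4 pose `base_T_local` for each scan:

```python
import numpy as np

def to_base_frame(points_local: np.ndarray, base_T_local: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) point cloud from its local frame into the base frame."""
    homogeneous = np.hstack([points_local, np.ones((len(points_local), 1))])
    return (base_T_local @ homogeneous.T).T[:, :3]

# Toy data: one scan of 100 points and its (assumed) pose in the base frame.
rng = np.random.default_rng(0)
points_local = rng.uniform(-1.0, 1.0, size=(100, 3))
base_T_local = np.eye(4)
base_T_local[:3, 3] = [0.5, 0.0, 0.2]  # local frame sits 0.5 m in front of the base

points_base = to_base_frame(points_local, base_T_local)

# Both arguments live in the same (base) coordinate frame:
# volume.integrate(points_base, base_T_local)  # hypothetical fusion call
```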
@benemer Thank you for your patience, and apologies for not being specific. […]
What exactly is your base frame representing? How are you aligning the points? Can you maybe share the point clouds with us for testing? At first glance, (AlignedP, BaseTLocal) seems to be correct.
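One quick way to sanity-check that pairing is to verify that applying `BaseTLocal` to the raw local points actually reproduces the aligned cloud. The names below are placeholders for illustration, not the project's API:

```python
import numpy as np

def pair_is_consistent(points_local, aligned_points, base_T_local, tol=1e-6):
    """True if base_T_local maps the raw local cloud onto the aligned cloud."""
    homogeneous = np.hstack([points_local, np.ones((len(points_local), 1))])
    transformed = (base_T_local @ homogeneous.T).T[:, :3]
    return np.allclose(transformed, aligned_points, atol=tol)

# Usage: pair_is_consistent(points_local, aligned_points, base_T_local)
# should hold for every scan before fusing.
```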
@benemer
Hi
I have a depth camera mounted on the base and an object mounted on the end effector of a robot arm. After scanning, hand_T_cam was calculated, used as the pose, and sent along with the scan result for fusion. I thought I would get a result with a clear model of the object and a messy background. However, on the contrary, I got a clear background and a messy foreground. Why is that?
Thanks
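For what it's worth, one possible explanation (an assumption here, not something confirmed in this thread) is that the fusion assumes a static scene: with a fixed camera, the static background integrates consistently while the object moving with the end effector gets smeared. A minimal sketch of expressing each scan in the moving hand frame instead, where `base_T_cam` (static, from eye-to-hand calibration) and `base_T_hand` (per scan, from the robot) are assumed names:

```python
import numpy as np

def transform(points: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to an (N, 3) point cloud."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (T @ homogeneous.T).T[:, :3]

# Static camera pose in the base frame (eye-to-hand calibration result)
# and the end-effector pose reported by the robot for this scan.
base_T_cam = np.eye(4)
base_T_hand = np.eye(4)
base_T_hand[:3, 3] = [0.4, 0.1, 0.3]

# Per-scan camera pose in the *moving* hand frame: since the object is
# rigidly attached to the hand, it stays static in this frame while the
# background moves -- the opposite of fusing in the base frame.
hand_T_cam = np.linalg.inv(base_T_hand) @ base_T_cam

points_cam = np.zeros((1, 3))  # stand-in for one depth scan in the camera frame
points_hand = transform(points_cam, hand_T_cam)
# volume.integrate(points_hand, hand_T_cam)  # hypothetical fusion call
```

Note that `hand_T_cam` must be recomputed for every scan from the current `base_T_hand`; a single fixed value would again smear the moving object.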