First of all, thanks for open-sourcing this project; it is easy to use!
In the training and later inference experiments, are the poses of the peg and the hole known (in the world coordinate system)? Have you considered whether a large deviation in the peg's pose would significantly affect validation of the algorithm (when using the real UR robot)?
Hi @Mickeyyyang! Glad that you found our project useful!
The poses of the peg and the hole are only used during training to compute the loss. During inference, the robot is given only the image. As you can see, we randomize only the initial hole pose. We could of course randomize the initial peg pose as well, but since the robot's motion is relative to the image, this is not expected to affect robustness.
This holds only if we don't use the robot state as input; if we do, it would make a real difference. In our experiments, however, feeding the robot state (e.g. joint angles) into the model showed no advantage.
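The observation design described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the function names (`randomize_hole_pose`, `make_observation`) and the randomization radius are hypothetical, and the key point is only that the pose appears in the training loss while the policy's observation contains just the image, with robot state as an optional (and, per the experiments, unhelpful) extra.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize_hole_pose(nominal_xy, radius=0.01):
    """Sample an initial hole position within `radius` metres of nominal.
    (Hypothetical helper; the real randomization scheme may differ.)"""
    offset = rng.uniform(-radius, radius, size=2)
    return np.asarray(nominal_xy) + offset

def make_observation(image, robot_state=None, use_robot_state=False):
    """Build the policy input. The peg/hole poses are deliberately NOT
    included; they are only used to compute the training loss. The robot
    state (e.g. joint angles) can optionally be appended, though the
    discussion above notes it showed no advantage."""
    obs = {"image": np.asarray(image)}
    if use_robot_state and robot_state is not None:
        obs["joints"] = np.asarray(robot_state)
    return obs

# During inference the policy sees only the image:
obs = make_observation(np.zeros((64, 64, 3)))
assert set(obs) == {"image"}

# Ground-truth poses exist only on the training side, e.g. in the loss:
hole_xy = randomize_hole_pose([0.4, 0.1])
assert hole_xy.shape == (2,)
```

Because actions are expressed relative to the image, shifting the peg's initial pose shifts the image content and the relative commands together, which is why only the hole pose needs explicit randomization in this setup.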