
Re-implementation in real world #34

Closed
mousecpn opened this issue Feb 27, 2024 · 6 comments

@mousecpn

Hello! Thanks for your brilliant work.
I am implementing GIGA in a real-world setting based on the VGN code. However, the process hasn't gone smoothly. I ran into the following problems:
(1) Can the checkpoints provided in the repo be used directly in the real-world setting, or do we need to retrain the model with a different setting?
(2) I found that the generated grasps were not of high quality; many of them caused collisions with the object. I checked the point cloud collected by the camera and it is noisy. Does that matter? Do I need to do some post-processing? I am using a RealSense camera to re-implement it.
Depth map from simulation: [image]
Depth map from RealSense camera: [image]

(3) For some grasps, the robot collides with the object (or the table) before it reaches the pre-grasp pose. What planner did you use to avoid this?

If you can give me some suggestions, I would really appreciate it.

@Steve-Tod (Collaborator)

  1. GIGA focuses on generating grasps from a fixed single view. Although we applied some randomization to the camera view, GIGA has limited generalizability to arbitrary views. So I would suggest first aligning your camera view with the one we used in the simulation as accurately as possible, and then testing the pretrained checkpoint.

  2. As mentioned above, aligning the camera view should help improve the results. Our real-world depth is also very noisy; I think it should be fine as long as the objects are correctly captured.

  3. We didn't use any planner. Collision avoidance is actually learned implicitly from the data prior: during data generation, a grasp is labeled as a failure if the gripper collides with other objects before reaching the target grasp pose, so the grasp predictor prioritizes grasps that do not cause collisions.
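For point 1, a minimal sketch of what "aligning the camera view" amounts to in code: placing the real camera at the same pose the simulation uses relative to the workspace. The spherical-coordinate numbers below (workspace size, radius, elevation, azimuth) are hypothetical placeholders, not GIGA's actual values; the look-at construction itself is standard.

```python
import numpy as np

def look_at_extrinsic(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """Build a 4x4 world-from-camera pose looking from `eye` toward `target`.
    Camera convention here: +z looks forward, +x right, +y down (OpenCV-style)."""
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    pose = np.eye(4)
    pose[:3, 0] = right
    pose[:3, 1] = -true_up
    pose[:3, 2] = forward
    pose[:3, 3] = eye
    return pose

# Hypothetical numbers: a 30 cm cubic workspace viewed from a fixed side view.
size = 0.3
center = np.array([size / 2, size / 2, 0.0])
r, theta, phi = 2.0 * size, np.pi / 4, 0.0   # radius, elevation, azimuth
eye = center + r * np.array([np.sin(theta) * np.cos(phi),
                             np.sin(theta) * np.sin(phi),
                             np.cos(theta)])
extrinsic = look_at_extrinsic(eye, center)
```

In practice you would read the simulated camera pose out of the data-generation script and move (or calibrate) the real camera until its extrinsic matches this matrix as closely as possible.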

@mousecpn (Author)

mousecpn commented Mar 8, 2024

Following your advice, we set the camera pose to match the one in the simulation.
[image]

But the quality of the generated grasps is still low.
[image]

With the packed checkpoint you provide, I only achieve a 20% grasp success rate.

Can you give me some advice on what to do next?

@Steve-Tod (Collaborator)

Hi, I think I know the issue. During the real-robot experiments of GIGA, we found we were not able to align the camera view perfectly, so we additionally trained a model with a randomized side view (basically adding noise to the camera view when generating data). The resulting model is more robust to misalignment of the camera view. I have also uploaded those models here: https://utexas.box.com/s/47po84j62g7zgwogpr453sl3ch64vm6q

Actually, we reproduced GIGA on the real robot about a year ago, and these models worked fine. The only thing we needed to fix was tuning an offset and adding it to the predicted grasp, because the predicted grasp is defined with respect to the root of the gripper, and our real gripper is a bit different from the simulated one. You might also need to go through this.
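The offset fix-up described above can be sketched as a translation of the predicted grasp along its approach axis. The 2 cm value and the assumption that the approach axis is the z-column of the grasp rotation are illustrative only; the actual offset has to be tuned on the real gripper.

```python
import numpy as np

TCP_OFFSET = 0.02  # meters; hypothetical value, tune on the real robot

def apply_gripper_offset(grasp_pose, offset=TCP_OFFSET):
    """Shift a grasp to compensate for the real gripper's root position.

    grasp_pose: 4x4 homogeneous transform of the predicted grasp.
    Assumes the approach direction is the z-axis of the rotation part;
    a positive offset retracts the grasp toward the wrist.
    """
    corrected = grasp_pose.copy()
    approach = grasp_pose[:3, 2]                              # unit approach axis
    corrected[:3, 3] = grasp_pose[:3, 3] - offset * approach  # translate along it
    return corrected
```

For example, a grasp pointing straight down at height 10 cm would be moved to 8 cm above the table with the default offset, leaving its orientation unchanged.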

Please let me know if this helps!

@mousecpn (Author)

Hi, I used your checkpoint but the accuracy is still low.
May I ask what camera you are using, and what configuration you set (resolution, etc.)?

@mousecpn (Author)

I think I found the bug. In the simulation, there is always a 5 cm table in the workspace.
To generate collision-free grasps, the model tends to predict grasps well above 5 cm, and grasps below 5.5 cm are filtered out.
So I lowered the task frame, and everything works fine now.
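The fix above can be sketched in a few lines: express the real point cloud in a task frame whose origin sits below the real table surface (so the surface lands at z = 5 cm, as the network saw in training), and drop grasps below the filter height. The 5 cm and 5.5 cm values come from this thread; the function and variable names are hypothetical, not from the GIGA codebase.

```python
import numpy as np

TABLE_HEIGHT = 0.05   # table height in the simulated workspace (from this thread)
MIN_GRASP_Z = 0.055   # grasp filter threshold (from this thread)

def to_task_frame(points_world, task_origin_world):
    """Express a world-frame point cloud in a lowered task frame, so the
    real support surface appears at the height the model was trained on."""
    return np.asarray(points_world) - np.asarray(task_origin_world)

def filter_grasps(grasp_positions, min_z=MIN_GRASP_Z):
    """Keep only grasps whose task-frame height clears the simulated table."""
    grasp_positions = np.asarray(grasp_positions)
    return grasp_positions[grasp_positions[:, 2] >= min_z]
```

So if the real table surface is at z = 0 in the world frame, placing the task-frame origin 5 cm below it makes the surface appear at z = 0.05, matching the simulated setup.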

@Steve-Tod (Collaborator)

Sorry, I was planning to look into this today, but I'm glad you found the issue and that GIGA works now!
