Inaccurate grasp predictions #4
Hello, firstly I want to say this is some great work!
I was using the trained model to generate grasps for my own object point clouds generated in simulation. Surprisingly, the generated hand vertices were very distant from the object. I am not sure of the reason for this; is there any requirement on the input object point cloud's origin and axis orientation before using the network that I may have missed?
I am attaching an image of the predicted grasp for one of the input object point clouds I used:
Comments
Hi, thanks for the question. In training we do not apply transformation augmentation, and the ObMan dataset only covers a small range of object translations in 3D, so the model will not work on objects at out-of-distribution positions. This can be solved by translating the input object point cloud to a position within the ObMan translation distribution.
BTW, the model may not be able to generate hands for incomplete point clouds.
Thanks for your quick reply. Could you help me understand the object distribution you are using? Or, if it depends on the ObMan dataset used for training, how can I find out the distribution used in that dataset? Sorry if this is a naive question; I am a little new to this area.
The translation you suggested did help! Thanks for that, but I would still love to understand how you came up with the distribution and translation for the network.
Actually, I also ran into this when I wanted to make use of this model.
Maybe a straightforward method is to check the range and mean of the ObMan hand translations to understand the data distribution.
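For example, a minimal sketch of that check; the directory layout and the `trans` key are assumptions about the ObMan meta files, not values confirmed by this repo, so check the ObMan loaders for the actual field names:

```python
import glob
import pickle

import numpy as np

# Gather the hand translation from every ObMan meta file.
# NOTE: the path pattern and the "trans" key are assumptions here;
# adjust them to the actual ObMan annotation format.
translations = []
for meta_path in glob.glob("obman/train/meta/*.pkl"):
    with open(meta_path, "rb") as f:
        meta = pickle.load(f)
    translations.append(np.asarray(meta["trans"]))

translations = np.stack(translations)       # shape (N, 3)
print("mean:", translations.mean(axis=0))   # per-axis mean
print("min: ", translations.min(axis=0))    # per-axis lower bound
print("max: ", translations.max(axis=0))    # per-axis upper bound
```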
Thank you so much! After following your suggestion, I tried using the model to predict grasps for complete point clouds, and to my surprise the results were somewhat unexpected and interesting (see the attached images).
Yes, you should scale the input object point cloud to roughly match the size of the hand.
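For concreteness, one way such a rescaling might look; the helper name and the 0.15 m target extent (roughly a hand span) are illustrative assumptions, not values from this repository:

```python
import numpy as np

def rescale_to_hand(points: np.ndarray, target_extent: float = 0.15) -> np.ndarray:
    """Scale an (N, 3) object point cloud so its largest bounding-box
    side equals target_extent (metres), roughly the span of a hand."""
    centered = points - points.mean(axis=0)  # centre the cloud at the origin
    extent = (centered.max(axis=0) - centered.min(axis=0)).max()
    return centered * (target_extent / extent)
```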
I want to know what the code at the link you gave in this answer means. I know it is used for translation, but I'm curious about what you mean by the initial value np.array([-0.0793, 0.0208, -0.6924]).
It is a random value sampled from the ObMan dataset translation distribution. |
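To make that concrete, here is a hypothetical sketch of applying that offset to a custom object point cloud before inference; the centering step and the helper name are assumptions, not code from this repository:

```python
import numpy as np

# A sample from the ObMan translation distribution (the value quoted above).
OBMAN_TRANSLATION = np.array([-0.0793, 0.0208, -0.6924])

def to_obman_frame(points: np.ndarray) -> np.ndarray:
    """Centre an (N, 3) object point cloud at the origin, then shift it
    to a position that lies inside the ObMan training distribution."""
    return points - points.mean(axis=0) + OBMAN_TRANSLATION
```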