Generate some strange grasps #59
I don't know if it helps, but your qhd cloud should be linked to kinect2_rgb_optical_frame. |
@nevermore0127 Thanks for the reply. I have changed the fixed frame to kinect2_rgb_optical_frame, but the previous problem still exists. |
Tutorial 2 offers a possible solution where grasps are not sampled on the table plane. |
@atenpas Thanks for the reply. I have tried to create the CloudSamples based on the point cloud ('/kinect2/qhd/points') and the detection range, and then send it to GPD, but the terminal displays an error and I don't know how to fix it. (I run tutorial1.launch under ROS with a Kinect.) Terminal:
code:
tutorial1.launch
|
The error is in this line:
The … |
@atenpas Thanks a lot!! I have changed the code to:
It works now, and GPD can generate some grasps, but I am still confused about CloudSamples.
Does it mean the GPD detection range is x [-1, 1], y [-1, 1], z [-1, 1]? |
No. If you read the documentation for CloudSamples, you can see that |
@atenpas Thanks for your patience. I have filtered the point cloud from the Kinect using PCL (cloud_filter.cpp), then created a CloudIndexed message based on the filtered point cloud, and finally sent it to GPD (grasp.py), but the terminal always outputs
and I cannot see any grasps in rviz.
cloud_filter.cpp
grasp.py
tutorial1.launch
|
Does the plane fitting work? I think you just copied the code from Tutorial 2, right? That won't work because it looks like your point cloud is in a different frame than in the tutorial. I would suggest you use a more general plane fitting method, like the one in PCL. |
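To illustrate the suggestion above, here is a minimal, numpy-only sketch of the RANSAC idea behind PCL's plane segmentation (SACSegmentation with SACMODEL_PLANE). The function name and parameters are hypothetical; for real clouds you would use PCL itself rather than this loop.

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.01, rng=None):
    """Fit a plane to an (N, 3) point array with a simple RANSAC loop.

    Returns (normal, d, inlier_mask) for the plane n.p + d = 0. This only
    illustrates the idea behind PCL's SACMODEL_PLANE segmentation; it is
    not a replacement for it.
    """
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        # Pick 3 distinct points and form a candidate plane from them.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Count points within dist_thresh of the candidate plane.
        inliers = np.abs(points @ normal + d) < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers
```

The returned inlier mask can then be inverted to remove the table points before building the CloudIndexed message.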
@atenpas The plane fitting doesn't work. I just copied the code from Tutorial 2, but I have changed the point cloud frame to "kinect2_link" (I'm using a Kinect v2), and it still doesn't work. As for plane model segmentation, I have used it to successfully cut out the table plane, but once I do, I get grasps that approach an object from below the table. So I use CloudIndexed together with plane model segmentation. Below is the code. I still get grasps that approach an object from below the table, and I don't know where the problem is.
|
Sounds like you're on the right track! GPD generates grasps from all kinds of directions towards an object (for more details, see our paper). Your next step should be to filter out the grasps that approach an object from below the table (you would need to write the code for this). |
@atenpas Thanks for your encouragement. I found a param called "filter_grasps" in tutorial1.launch. Is it used to filter grasps? Or can you give me some suggestions for filtering out the grasps that approach an object from below the table? |
The … For your own filter, you can compare the grasp approach axis to some other axis, e.g., one that is orthogonal to the table plane. You could look at the dot product between the vectors corresponding to these axes to determine which grasps approach objects from below the table. |
@atenpas Hey! I did some similar work to what @onepre did, and also tried point-cloud segmentation. But I found that the grasps are not so good: the grasp positions are a little strange, and the grasps generated for the segmented object seem to collide with other things and with the table. So, any ideas to solve this? Thanks a lot. |
You can filter those that collide with the table.
What does that mean? Do you have a screenshot? |
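A rough sketch of the table-collision filter mentioned above, assuming you only check the grasp position against the table height (a full check would also consider the hand geometry). The function name, the `clearance` parameter, and the idea of reading positions from GraspConfig's position field are assumptions for illustration.

```python
import numpy as np

def drop_table_collisions(positions, table_z, clearance=0.02):
    """Discard grasps whose hand position lies at or below the table surface.

    `positions` is an (N, 3) array of grasp positions, `table_z` the table
    height in the same frame, and `clearance` a safety margin in metres.
    Returns a boolean mask of grasps to keep. This only checks the grasp
    position, not the full hand geometry, so it is a coarse filter.
    """
    return np.asarray(positions, dtype=float)[:, 2] > table_z + clearance
```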
@atenpas Thanks a lot. Now the program can generate some correct grasps, but there are still two problems. |
What does that mean?
This depends mainly on the density of the point cloud and the number of samples. The fewer the samples, the faster.
We use a structure.io depth camera (no RGB): https://structure.io/structure-sensor |
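On the density point above: reducing the cloud density before detection is typically done with a voxel-grid filter. Below is a numpy-only sketch of that idea, one centroid per voxel; in practice you would use PCL's VoxelGrid filter instead. The function name and `leaf_size` parameter are assumptions.

```python
import numpy as np

def voxel_downsample(points, leaf_size=0.01):
    """Reduce point-cloud density by keeping one centroid per voxel.

    Mimics the idea of PCL's VoxelGrid filter: fewer points means GPD has
    fewer candidates to evaluate, so detection runs faster. `points` is an
    (N, 3) array; `leaf_size` is the voxel edge length in metres.
    """
    keys = np.floor(np.asarray(points, dtype=float) / leaf_size).astype(np.int64)
    # Group points by voxel key and average each group into a centroid.
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]
```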
Sorry, my question was unclear. I meant splitting the point cloud from the Kinect in real time. A warning appeared today, and I don't know if it will affect the grasps.
|
@onepre |
This has already been explained in a comment above:
So you would be comparing the grasp approach vector with a vertical, upward-pointing vector. If both vectors point in about the same direction (corresponding to a dot product close to 1), this means that the grasp approaches an object from below, and should be filtered out. |
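The dot-product filter described above can be sketched in a few lines of numpy. This assumes the approach vectors are expressed in a frame whose z-axis points up; the function name and the `cos_thresh` parameter are hypothetical.

```python
import numpy as np

def filter_grasps(approaches, table_normal=(0.0, 0.0, 1.0), cos_thresh=0.0):
    """Keep grasps whose approach direction does not come from below the table.

    `approaches` is an (N, 3) array of unit approach vectors (the grasp
    approach axis from GPD, in a frame whose z-axis points up). A dot
    product with the upward table normal close to +1 means the grasp
    approaches the object from below, so it is discarded.
    """
    normal = np.asarray(table_normal, dtype=float)
    normal /= np.linalg.norm(normal)
    dots = np.asarray(approaches, dtype=float) @ normal
    # Keep grasps approaching from above or from the side (dot <= cos_thresh).
    return dots <= cos_thresh
```

Tightening `cos_thresh` toward -1 keeps only grasps that approach from nearly straight above.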
@atenpas Here is my code:
I got this error message:
Do I need to convert the R matrix from GraspConfig.msg to some special data type?
It looks like … By the way, your issue is unrelated to GPD. Please use Google or Stack Overflow to solve such issues. |
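For the orientation question: a sketch of building a rotation matrix from the three GraspConfig axis vectors and converting it to a quaternion, using only numpy. The assumption that the matrix columns are the approach, binormal, and hand axis vectors should be checked against your GPD version; in a ROS node you would normally use tf.transformations instead of hand-rolling this.

```python
import numpy as np

def grasp_rotation_to_quaternion(approach, binormal, axis):
    """Build a rotation matrix from the three GraspConfig axis vectors and
    convert it to an (x, y, z, w) quaternion.

    Assumes the matrix columns are the approach, binormal, and hand axis
    unit vectors, and that the rotation is not a 180-degree flip (w != 0);
    a production conversion would handle that branch as well.
    """
    R = np.column_stack([approach, binormal, axis]).astype(float)
    # Standard matrix-to-quaternion conversion, taking w positive.
    w = np.sqrt(max(0.0, 1.0 + R[0, 0] + R[1, 1] + R[2, 2])) / 2.0
    x = (R[2, 1] - R[1, 2]) / (4.0 * w)
    y = (R[0, 2] - R[2, 0]) / (4.0 * w)
    z = (R[1, 0] - R[0, 1]) / (4.0 * w)
    return np.array([x, y, z, w])
```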
@atenpas |
@atenpas Hi, I run tutorial1.launch with ROS and a Kinect. I placed some objects on a desktop, and GPD can generate grasps, but sometimes it generates some strange grasps (going directly through the desktop). I am confused by this phenomenon and do not know how to resolve the problem.