
leaf node generation for real-world experiments #19

Closed
gokul-gokz opened this issue Dec 28, 2022 · 3 comments

Comments

@gokul-gokz

Hello,
Great work.
I am curious about the following two aspects of the work in real-world use cases.

  1. Validity of the tree and leaf nodes after placing an item:
    In real-world scenarios, the actual pose at which the robot places the object and the pose given by the policy might differ slightly because of manipulation errors and noise. Additionally, there is a chance that the object currently being placed moves or disturbs previously placed objects. In these scenarios, some of the existing nodes in the tree might no longer be valid (as corner points might have shifted by a small amount), so the whole tree structure needs to be regenerated to represent the current state of the placement bin accurately. How do you address this? As far as I understood from the paper (correct me if I am wrong), only invalid leaf nodes are removed, but what about the internal and leaf nodes whose values are slightly off?

  2. Orientation representation in the state:
    When considering a new incoming item (Sx, Sy, Sz), if we want to consider placements with more than one orientation for the item, how are these orientation values encoded into the state representation?

@alexfrom0815
Owner

Hello, thanks for your attention!

  1. If the positions of the placed objects are prone to shifting, we can regenerate the leaf nodes from scratch each time instead of generating them incrementally. This is feasible and does not add much time overhead (see the sketch below).
  2. The current version of the code supports orientation representation; it is encoded in the descriptor of each leaf node.
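
To make the from-scratch regeneration concrete, here is a minimal sketch of enumerating leaf-node candidates directly from the current set of placed boxes, with an orientation index carried in each leaf descriptor. It assumes a simple corner-point heuristic for candidate positions; the names (`PlacedBox`, `generate_leaf_nodes`) and the omission of overlap/support checks are illustrative, not this repository's actual API.

```python
from dataclasses import dataclass
from itertools import permutations

@dataclass
class PlacedBox:
    # min-corner position and dimensions of an already-placed box
    x: float
    y: float
    z: float
    sx: float
    sy: float
    sz: float

def generate_leaf_nodes(placed, item_size, bin_size):
    """Regenerate candidate placements (leaf nodes) from scratch.

    Each leaf descriptor holds the candidate position, the rotated item
    size, and an orientation index, so orientation is part of the state
    seen by the policy. Overlap and support checks are omitted for brevity.
    """
    # all distinct axis-aligned orientations of the incoming item
    orientations = sorted(set(permutations(item_size)))
    # candidate positions: bin origin plus corner points of placed boxes
    candidates = [(0.0, 0.0, 0.0)]
    for b in placed:
        candidates += [(b.x + b.sx, b.y, b.z),
                       (b.x, b.y + b.sy, b.z),
                       (b.x, b.y, b.z + b.sz)]

    leaves = []
    for o_idx, (ox, oy, oz) in enumerate(orientations):
        for (px, py, pz) in candidates:
            if (px + ox <= bin_size[0] and
                    py + oy <= bin_size[1] and
                    pz + oz <= bin_size[2]):
                leaves.append((px, py, pz, ox, oy, oz, o_idx))
    return leaves
```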

@gokul-gokz
Author

Let's take the scenario where the recently placed object has moved previously placed objects. Because of this, the internal node set 'Bt' and the leaf node set 'Lt' are no longer valid. If we want to regenerate the tree from scratch, we need to know exactly where each object has moved in order to build a correct spatial-relation tree. To do this, we would need to map each and every object in the camera image to its internal node in 'Bt' (and to the leaf nodes), which is not possible because the camera image alone does not give us per-object information to establish that mapping.

So, how do you create the tree structure from scratch every time while maintaining the correct spatial relations?

@alexfrom0815
Copy link
Owner

Hello, this is indeed a very interesting question. However, I think it is not impossible to map each and every object in the camera image. In fact, based on our real-robot experience, we can use plane segmentation, or a learning-based detection method, to roughly determine the positions of the observable boxes, and then correct the information in the internal node set 'Bt' and the leaf node set 'Lt'. I hope this helps.
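
As one possible reading of this, here is a minimal sketch of correcting 'Bt' from perception: detected box positions (from plane segmentation or a learned detector) are matched to the existing internal nodes by nearest centre and used to overwrite their stored poses. The matching rule and the `max_shift` threshold are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def correct_internal_nodes(Bt, observed_positions, max_shift=0.05):
    """Snap each internal node (placed box) to its closest detected position.

    Bt:                 list of dicts {'pos': np.ndarray(3), 'size': np.ndarray(3)}
    observed_positions: list of np.ndarray(3) detected box positions (same frame)
    max_shift:          reject matches farther than this (likely a mis-detection)
    """
    corrected = []
    remaining = list(observed_positions)
    for node in Bt:
        if not remaining:
            corrected.append(node)
            continue
        dists = [np.linalg.norm(node['pos'] - p) for p in remaining]
        j = int(np.argmin(dists))
        if dists[j] <= max_shift:
            # overwrite the stored pose with the observed one
            node = {**node, 'pos': remaining.pop(j)}
        corrected.append(node)
    return corrected

# After 'Bt' is corrected, 'Lt' can simply be regenerated from the
# corrected boxes (e.g. with generate_leaf_nodes from the earlier sketch).
```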
