Objects texture #11

Closed
omarirfa opened this issue Aug 7, 2022 · 6 comments
@omarirfa
Contributor

omarirfa commented Aug 7, 2022

Hi,

I was just playing around with the code a bit more to see which objects pose estimation can be run on. It seems that objects that are either too small (like lego pieces) or too big lose their texture for some reason.

I have two questions:

  1. You mentioned earlier to use aruco markers for better texture; however, I am unable to get enough texture in a 1-minute video for the object shown below. Any suggestions on how to tackle this?
  2. Thoughts on detecting lego pieces, i.e. can I do motion reconstruction on a single lego piece?

Object for reference:
[image]

@liuyuan-pal
Owner

Hi,

  1. I think Gen6D is able to work on the given reference object. The textures in the reference images are used for COLMAP's camera pose tracking (which defines the reference coordinate system of the object); we do not actually require the object itself to contain many textures (e.g. the mouse object does not contain many textures). A rough sketch of this COLMAP step is included after this list.
  2. I'm not sure about the problem. Maybe you could upload one of your custom datasets and I'll process it as an example for you.
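
For reference, a rough sketch of the kind of COLMAP run that recovers the reference camera poses (this is not the exact Gen6D preprocessing script; the directory layout is an assumption, and only the standard COLMAP CLI commands are used):

    # Sketch: estimate camera poses for the extracted reference frames with COLMAP.
    # All paths below are placeholders; adjust them to your own dataset layout.
    import subprocess
    from pathlib import Path

    ref_dir = Path("data/custom/lego/images")    # extracted reference frames (assumed layout)
    work_dir = Path("data/custom/lego/colmap")   # COLMAP workspace (assumed layout)
    work_dir.mkdir(parents=True, exist_ok=True)
    db_path = work_dir / "database.db"
    sparse_dir = work_dir / "sparse"
    sparse_dir.mkdir(exist_ok=True)

    # 1. Detect SIFT features on the reference frames (this is where image texture matters).
    subprocess.run(["colmap", "feature_extractor",
                    "--database_path", str(db_path),
                    "--image_path", str(ref_dir)], check=True)

    # 2. Match features across frames.
    subprocess.run(["colmap", "exhaustive_matcher",
                    "--database_path", str(db_path)], check=True)

    # 3. Incremental SfM: camera poses + a sparse point cloud in the reference coordinate system.
    subprocess.run(["colmap", "mapper",
                    "--database_path", str(db_path),
                    "--image_path", str(ref_dir),
                    "--output_path", str(sparse_dir)], check=True)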

@omarirfa
Contributor Author

omarirfa commented Aug 8, 2022

I appreciate the help! I have added a link to a .rar file (since GitHub won't allow uploading files over 25 MB) containing the reference video, along with the test video and the COLMAP output if needed.

It's strange, though: for the lego piece the reconstruction seems very noisy near its edges compared to other objects.

Lego colmap:
[image]

Link to .rar file:
https://drive.google.com/file/d/164w0kkQUsC-W8fExJXuNPCgDveU2ReRW/view?usp=sharing

@liuyuan-pal
Owner

Hi, I have processed the meta information for you at
https://drive.google.com/file/d/101IFEjrk_c7xHoCS08vU0Sexfr2V2XYL/view?usp=sharing

The initialization is OK, looks like this:
[image]

However, when you flip the object or occlude the object, the pose tracking is lost because the input images do not contain bottom-up viewpoints and Gen6D is not very robust to occlusions.

@liuyuan-pal
Owner

BTW, for this lego object you do not need to transpose the input video, so you do NOT need the --transpose flag when using predict.py.
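
If it helps to check whether a given video needs the flag, here is a minimal sketch (not part of Gen6D; the video path is a placeholder) that dumps the first frame exactly as OpenCV reads it, so you can see whether it comes out rotated:

    # Quick orientation check: save the first frame as OpenCV decodes it.
    # If it looks rotated 90 degrees, flipped, or upside-down, the video needs
    # transposing (e.g. via the --transpose flag mentioned above).
    import cv2

    cap = cv2.VideoCapture("data/custom/lego/test.mp4")  # placeholder path
    ok, frame = cap.read()
    cap.release()
    assert ok, "could not read the first frame"

    print("frame shape (h, w, c):", frame.shape)
    cv2.imwrite("first_frame_as_read.jpg", frame)

    # A 90-degree rotation like this restores normal orientation for a sideways video
    # (the exact direction applied by --transpose may differ).
    rotated = cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE)
    cv2.imwrite("first_frame_rotated.jpg", rotated)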

@omarirfa
Contributor Author

Thanks for the files! I had a few questions about some of your comments:

  1. I was curious why the lego object does not need transposing, i.e. is there a criterion to decide which objects need --transpose when using predict.py and which don't?
  2. Also, in the object_point_cloud created for the lego object, the segmentation is quite different from the mouse one, i.e. the aruco board is still visible. I have produced segmentations like both the lego one and the mouse one for different objects; how do you know which segmentation is correct, i.e. should it look like the lego one or the mouse one? Is it just a matter of looking at image_inter every time and checking by trial and error whether the object is being tracked correctly?

Lego object:
[image]

Mouse object:
[image]

@liuyuan-pal
Owner

Hi,

  1. The video I captured is not oriented correctly when read directly with OpenCV in Python. The transpose just ensures the image is oriented normally (not rotated 90 degrees, not flipped, not upside-down).
  2. It doesn't matter whether object_point_cloud.ply contains the board or not. object_point_cloud.ply is used to compute the object center and the object scale, as in
    self.object_point_cloud = load_point_cloud(f'{self.root}/object_point_cloud.ply')

    If this point cloud contains points far away from the object, the scale computation will be incorrect and the object in the reference images will appear very small. If the center of the point cloud is not the object center, the reference images will not be centered on the object. So what really matters is the scale and the center of the object point cloud. (A rough sketch of this center/scale computation follows after this list.)
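
For intuition, here is a minimal sketch of the kind of center/scale computation described above. It is not the exact formula Gen6D uses; Open3D and the file path are assumptions (Gen6D's own load_point_cloud could be used instead), and the outlier filter is only there to show how stray points, such as leftover board points, can be kept from skewing the result:

    import numpy as np
    import open3d as o3d

    # Load the object point cloud produced during preprocessing (placeholder path).
    pcd = o3d.io.read_point_cloud("data/custom/lego/object_point_cloud.ply")

    # Optional: drop isolated/noisy points so they do not shift the center or inflate the scale.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    pts = np.asarray(pcd.points)

    # Object center: mean of the points (one common convention).
    center = pts.mean(axis=0)

    # Object scale: maximum distance from the center, so that reference crops
    # sized by this value contain the whole object.
    scale = np.linalg.norm(pts - center, axis=1).max()

    print("center:", center)
    print("scale:", scale)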
