
How to get the ground-truth point cloud with normal vectors from the ShapeNet dataset? #27

Closed
happyday521 opened this issue Nov 9, 2018 · 8 comments


@happyday521

Given the 3D models in the ShapeNet dataset (.obj files), how can I get the ground-truth point cloud with normal vectors the same way you did, so I can train my own model?
Besides, you said that you transformed it to the corresponding coordinates in the camera coordinate system based on camera parameters from the Rendering Dataset. I don't understand this clearly. Can you share how to preprocess the ground-truth points to prepare the data for training your model? I am not familiar with this. Thanks very much!
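For reference, one simple way to sample a ground-truth point cloud with per-point normals from a .obj mesh is the third-party trimesh library; this is only a minimal sketch, not necessarily the repository's own data_generation pipeline, and the filename and sample count are placeholders.

```python
# Minimal sketch: sample points + normals from a ShapeNet .obj with trimesh.
import numpy as np
import trimesh

mesh = trimesh.load("model.obj", force="mesh")   # placeholder path
points, face_idx = trimesh.sample.sample_surface(mesh, count=10000)
normals = mesh.face_normals[face_idx]            # per-point normals from the sampled faces
ground_truth = np.hstack([points, normals])      # shape (N, 6): xyz + normal
```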

@nywang16
Owner

@tangjilin see data_generation folder

@happyday521
Author

@nywang16 Thanks very much!

@KnightOfTheMoonlight

Hi @nywang16, after checking https://github.com/nywang16/Pixel2Mesh/blob/master/GenerateData/4_make_auxiliary_dat_file.ipynb, I have the following questions about the data generation part.

About the original data from ShapeNetCore.v1: are all the models resized to fit in a unit cube of size (-0.5, 0.5)^3? And why did you choose an ellipsoid with radii of (0.2, 0.2, 0.4)?

And do you use your rendering_metacamera.txt annotations to transform the 3D object into camera coordinates, as sketched below?
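For what it's worth, here is a rough sketch of how such a world-to-camera transform could look, assuming the rendering metadata provides azimuth, elevation, and camera distance (as in the 3D-R2N2 style rendering setup); the axis conventions here are an assumption and must be matched to the actual renderer.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def camera_frame(azimuth_deg, elevation_deg, distance):
    # Camera position on a sphere around the origin (assumed convention).
    theta = np.deg2rad(azimuth_deg)
    phi = np.deg2rad(elevation_deg)
    cam_pos = distance * np.array([np.cos(phi) * np.cos(theta),
                                   np.sin(phi),
                                   np.cos(phi) * np.sin(theta)])
    # Orthonormal camera frame looking at the origin, +z pointing at the object.
    forward = unit(-cam_pos)
    right = unit(np.cross(forward, np.array([0.0, 1.0, 0.0])))
    up = np.cross(right, forward)
    R = np.stack([right, up, forward])           # world -> camera rotation
    return R, cam_pos

def world_to_camera(points, normals, R, cam_pos):
    # Positions get rotation + translation; normals only need the rotation.
    pts_cam = (points - cam_pos) @ R.T
    nrm_cam = normals @ R.T
    return pts_cam, nrm_cam
```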

@walsvid
Collaborator

walsvid commented Feb 25, 2020

Hi @KnightOfTheMoonlight, our method is not sensitive to the initial shape; you can also use a sphere. The reason for using the ellipsoid as the initial mesh may be that the average shape of the objects is closer to an ellipsoid.

@EdwardSmith1884

It is kind of sensitive though, right? You want the initial mesh's vertices to project "nicely" over the input image, so that the sampled image features placed on those vertices properly capture the information in the image. If the range of the vertices, when projected into image space, is too small in either direction, information about the object will be lost, and similarly if the range is too large. I would think any shape with uniformly distributed vertices in the range of (-0.5, 0.5)^3 would be fine.
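As a quick sanity check of that intuition, one could project the initial vertices with a simple pinhole model and measure how much of the image they span; the focal length and image size below are placeholders, not the values used in the paper.

```python
import numpy as np

def project_to_image(verts_cam, focal=250.0, img_size=224):
    # Simple pinhole projection, assuming the camera looks down the +z axis.
    z = verts_cam[:, 2]
    u = focal * verts_cam[:, 0] / z + img_size / 2.0
    v = focal * verts_cam[:, 1] / z + img_size / 2.0
    return np.stack([u, v], axis=1)

def projected_coverage(verts_cam, img_size=224):
    # Fraction of the image spanned by the projected vertices along each axis;
    # a very small (or very large) value hints the initial mesh is badly scaled.
    uv = project_to_image(verts_cam, img_size=img_size)
    span = uv.max(axis=0) - uv.min(axis=0)
    return span / img_size
```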

@walsvid
Collaborator

walsvid commented Feb 25, 2020

Hi @EdwardSmith1884, I don't think the result is very sensitive to the initial shape. Judging from the behavior of the graph convolution during training, no matter what the initial shape is, the graph convolution seems to learn the shape almost from scratch. The 3D meshes output in the first few epochs are basically unrelated to the initial mesh's shape; they are mainly affected by its topology.
At the same time, I also agree that uniformly sampling vertices in three dimensions, as you suggested, may be a more natural approach.

Specifically, our experiments in the ECCV supp are as follows.
[Two screenshots of result tables from the ECCV supplementary material]

@EdwardSmith1884

I think these all do well because they basically have uniformly distributed vertices in the range of (-0.5, 0.5)^3. I would think that if the ellipsoid were very thin in one dimension, or the sphere were 1/10 of that scale, it would do worse. I guess I mean the shape doesn't really matter but the scale does; at least, I observed much worse accuracy when I didn't scale the initial mesh properly.
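Along those lines, a small helper like the one below (a hypothetical snippet, not code from this repository) can recenter and rescale an initial mesh so its vertices span roughly (-0.5, 0.5)^3.

```python
import numpy as np

def rescale_to_unit_cube(verts, half_extent=0.5):
    # Recenter the (N, 3) vertex array and scale it so the largest axis
    # spans (-half_extent, half_extent).
    center = (verts.max(axis=0) + verts.min(axis=0)) / 2.0
    verts = verts - center
    return verts * (half_extent / np.abs(verts).max())
```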

@zshyang

zshyang commented Sep 15, 2020

Hi,
Do you know how to download and unzip the dataset from the link below?
https://drive.google.com/open?id=131dH36qXCabym1JjSmEpSQZg4dmZVQid
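One possible way to fetch it programmatically is the third-party gdown package (pip install gdown), assuming the link points to a single zip archive; the output filename below is just a placeholder.

```python
import zipfile
import gdown  # third-party: pip install gdown

url = "https://drive.google.com/uc?id=131dH36qXCabym1JjSmEpSQZg4dmZVQid"
out = "pixel2mesh_data.zip"        # placeholder local filename
gdown.download(url, out, quiet=False)

with zipfile.ZipFile(out) as zf:   # assumes the download is a zip archive
    zf.extractall("data/")
```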
