How to get the ground-truth point cloud with normal vectors from the ShapeNet dataset? #27
Comments
@tangjilin see the data_generation folder
@nywang16 Thanks very much!
Hi @nywang16, after checking https://github.com/nywang16/Pixel2Mesh/blob/master/GenerateData/4_make_auxiliary_dat_file.ipynb I have a few questions about the data generation part. Are the original ShapeNetCore.v1 models all rescaled to fit inside a unit cube, i.e. (-0.5, 0.5)^3? Why did you choose an ellipsoid with radii (0.2, 0.2, 0.4)? And do you use your rendering_metacamera.txt annotations to transform the 3D object into camera coordinates?
Hi @KnightOfTheMoonlight, our method is not sensitive to the initial shape; you could also use a sphere. The reason for using an ellipsoid as the initial shape is probably that the average shape of the objects is closer to an ellipsoid.
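For reference, the geometry of such an initial ellipsoid can be sketched in a few lines of NumPy. Note this only illustrates the (0.2, 0.2, 0.4) radii mentioned above; the repo ships its actual initial mesh as a pre-built .dat file, and the function name here is made up:

```python
import numpy as np

def ellipsoid_points(n, radii=(0.2, 0.2, 0.4), rng=None):
    """Points on the surface of an axis-aligned ellipsoid: draw
    directions on the unit sphere, then stretch each axis by its
    radius. (Stretching is not area-uniform on the ellipsoid; it
    only illustrates the geometry.)"""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # unit sphere
    return v * np.asarray(radii)
```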
It is somewhat sensitive though, right? You want the initial mesh's vertices to project "nicely" onto the input image so that the image features sampled at those vertices properly capture the information in the image. If the range of the vertices projected into image space is too small in either direction, information about the object will be lost, and similarly if the range is too large. I would think any shape with uniformly distributed vertices in the range (-0.5, 0.5)^3 would be fine.
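The "projects nicely" criterion above can be checked numerically. Here is a minimal pinhole-projection sketch that measures what fraction of the image the initial vertices span; the focal length, principal point, and image size are placeholders, not the repo's actual rendering intrinsics:

```python
import numpy as np

def pixel_span(verts_cam, f=248.0, cx=112.0, cy=112.0, img_size=224):
    """Project camera-frame vertices (N, 3), positive z in front of
    the camera, with a pinhole model and return the fraction of the
    image width/height that the projections span."""
    u = f * verts_cam[:, 0] / verts_cam[:, 2] + cx
    v = f * verts_cam[:, 1] / verts_cam[:, 2] + cy
    return (u.max() - u.min()) / img_size, (v.max() - v.min()) / img_size
```

A span far below 1.0 means most image features are never sampled; far above 1.0 means many vertices project outside the image.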
Hi @EdwardSmith1884, I don't think the result is very sensitive to the initial shape. Judging by the behavior of the graph convolutions during training, no matter what the initial shape is, they seem to learn the shape almost entirely from scratch. The meshes output in the first few epochs are basically unrelated to the initial mesh and are mainly affected by its topology. Specifically, our experiments in the ECCV supplementary material are as follows.
I think these all do well because they all have roughly uniformly distributed vertices in the range (-0.5, 0.5)^3. I would expect that if the ellipsoid were very thin in one dimension, or the sphere were 1/10 the scale, it would do worse. So shape doesn't really matter, but scale does; at least I observed much worse accuracy when I didn't scale the initial mesh properly.
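Rescaling an arbitrary initial mesh to fill the (-0.5, 0.5)^3 cube, as suggested above, might look like the following plain-NumPy sketch (the function name is made up):

```python
import numpy as np

def rescale_to_cube(verts, half_extent=0.5):
    """Center the vertices on their bounding-box midpoint and scale
    them so the largest half-extent equals half_extent, i.e. the
    mesh exactly fits (-0.5, 0.5)^3 by default."""
    verts = np.asarray(verts, dtype=np.float64)
    center = (verts.max(axis=0) + verts.min(axis=0)) / 2.0
    verts = verts - center
    return verts * (half_extent / np.abs(verts).max())
```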
Hi, |
Given a 3D model from the ShapeNet dataset (an .obj file), how can I get the ground-truth point cloud with normal vectors, the way you did, to train my own model?
Besides, you said that you transformed it into camera coordinates based on the camera parameters from the Rendering Dataset, but I don't understand this clearly. Can you share how to preprocess the ground-truth points to prepare my data for training your model? I'm not familiar with it, to be honest. Thanks very much!
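For what it's worth, one way to build such ground truth can be sketched with plain NumPy. This is not the repo's exact pipeline; `R` and `t` stand for the per-view rotation and translation you would derive from the rendering metadata. The idea: sample surface points proportionally to triangle area, attach each sample's face normal, then apply the extrinsics, rotating the normals but not translating them:

```python
import numpy as np

def sample_surface(vertices, faces, n_points, rng=None):
    """Uniformly sample points on a triangle mesh and return each
    point together with the normal of the face it was drawn from."""
    rng = np.random.default_rng() if rng is None else rng
    tri = vertices[faces]                        # (F, 3, 3) triangle corners
    e1, e2 = tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]
    cross = np.cross(e1, e2)
    area = 0.5 * np.linalg.norm(cross, axis=1)
    normals = cross / np.linalg.norm(cross, axis=1, keepdims=True)
    # choose faces with probability proportional to their area
    idx = rng.choice(len(faces), size=n_points, p=area / area.sum())
    # uniform barycentric coordinates via the sqrt trick
    s = np.sqrt(rng.random(n_points))
    r = rng.random(n_points)
    pts = ((1 - s)[:, None] * tri[idx, 0]
           + (s * (1 - r))[:, None] * tri[idx, 1]
           + (s * r)[:, None] * tri[idx, 2])
    return pts, normals[idx]

def to_camera_frame(points, normals, R, t):
    """Apply extrinsics: points are rotated and translated,
    normals are only rotated."""
    return points @ R.T + t, normals @ R.T
```

Vertices and faces can be read from the .obj file with any mesh loader; the two functions above only assume triangle faces.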