The preprocessed data doesn't have attribute [pos_normals] or [neg_normals] #5
Hi,
are you using the version of src/PreprocessMesh.cpp that is included in
this repo or the one from DeepSDF? This repo contains a modified version
that also writes out normals:
https://github.com/edgar-tr/patchnets/blob/1d20bfd1ec74f8c6687939c6092642685fdeee94/src/PreprocessMesh.cpp#L258
On the other hand, DeepSDF only writes out pos and neg (as in your case,
apparently):
https://github.com/facebookresearch/DeepSDF/blob/main/src/PreprocessMesh.cpp#L220
It is best to first install DeepSDF and then to overwrite all files in
that installation with the files included in this (PatchNets) repo.
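If you want to check which version produced a given npz file, you can inspect its keys directly; a minimal sketch in Python (the file path is hypothetical):

```python
import numpy as np

# Hypothetical path to one preprocessed sample
data = np.load("preprocessed_shape.npz")
print(sorted(data.files))
# DeepSDF's PreprocessMesh.cpp writes only "pos" and "neg";
# the PatchNets version additionally writes "pos_normals" and "neg_normals".
```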
|
Thank you so much! Now I can successfully generate correct data. However, I'm confused about this check: if (wrong_ratio > rejection_criteria_obs || bad_tri_ratio > rejection_criteria_tri) { and I want to know whether the npz files were generated correctly. |
I believe I added that merely as a marker for debugging (I did not
really clean up the code before releasing it), but you can use the marked
shapes like normal shapes everywhere. The original DeepSDF code similarly does not do
anything with the information that some point clouds are not perfectly
suitable:
https://github.com/facebookresearch/DeepSDF/blob/main/src/PreprocessMesh.cpp#L509
In the folder that I can still find, there appear to be only five shapes
marked as "rejected" across the 13 ShapeNet categories that we use.
So either way it won't make any noticeable difference.
|
Let me correct that: when loading the shapes, this line skips npz
files that don't fit the standard (non-rejected) naming scheme:
https://github.com/edgar-tr/patchnets/blob/master/code/deep_sdf/data.py#L20
Note that the train/test/val splits are generated using the .obj files,
before PreprocessMesh.cpp is ever called:
https://github.com/edgar-tr/patchnets/blob/master/code/preprocess_data.py#L238
So the splits assume that the rejected shapes exist (although the
data.py code mentioned in the previous paragraph will skip them during
training when loading the shapes in the split). That's important because
it means that the .obj's of the rejected npz's should not be deleted. If
they are deleted, the splits will get messed up.
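Roughly, the skipping behaves like this; a minimal sketch with assumed file layout and helper name, not the repo's exact code:

```python
import os

def collect_npz_filenames(data_source, split):
    """Collect npz paths for a split, skipping rejected/missing shapes."""
    filenames = []
    for instance_name in split:
        npz_path = os.path.join(data_source, instance_name + ".npz")
        if not os.path.isfile(npz_path):
            # Rejected shapes were written under a different name, so the
            # standard path does not exist and the split entry is skipped.
            continue
        filenames.append(npz_path)
    return filenames
```

The split itself still lists the rejected shapes; only the loader tolerates their absence, which is why the .obj files should stay in place.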
|
Thanks a lot! So far I have generated 1110 npz files using sv2_lamps_json, and there are 6 rejected meshes. I also generated part of the sofa meshes (about 643, with 13 rejected). I think this will not make a big difference? Besides, I ran into a problem when I tried to train with some specific npz files (not rejected meshes). There is a RuntimeWarning: invalid value encountered in arcsin. I guess the value of rotation_matrix[2,0] is outside the range [-1, 1], and it causes an error: |
I'm not following. The _get_euler_angles_from_rotation_matrix can return
NaNs, yes, but they are removed immediately afterwards:
https://github.com/edgar-tr/patchnets/blob/1d20bfd1ec74f8c6687939c6092642685fdeee94/code/train_deep_sdf.py#L374
remove_nans only gets called with input "tensor"s that contain positions
or normals, but not with rotation matrices:
https://github.com/edgar-tr/patchnets/blob/1d20bfd1ec74f8c6687939c6092642685fdeee94/code/deep_sdf/data.py#L58
You could replace the ipdb line with tensor_nan =
torch.logical_or(tensor_nan, tensor_nan_fixed), I suppose. I don't
remember this part well.
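To illustrate the failure mode and the masking, a minimal sketch with assumed shapes (not the repo's exact code):

```python
import torch

# Floating-point drift can push a rotation-matrix entry slightly outside
# [-1, 1], and arcsin of such a value is NaN:
entry = torch.tensor([1.0 + 1e-6])
print(torch.asin(entry))  # tensor([nan])

# Downstream, rows containing NaNs are masked out instead of crashing:
def drop_nan_rows(angles):
    nan_rows = torch.isnan(angles).any(dim=-1)
    return angles[~nan_rows]
```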
|
Thank you so much! I replaced the ipdb line and trained the model successfully. I trained on about 900 ShapeNetV2 .obj files for 300 epochs, but after reconstructing and evaluating, the result was not so good (the average Chamfer distance is more than 1.0). I am not sure whether that change could have caused an error. |
reconstruct.py is not compatible with PatchNets. It's a bit hidden
in the readme, but "|useful_scripts.py| also contains code to extract
meshes." See here:
https://github.com/edgar-tr/patchnets/blob/master/useful_scripts.py#L1705
|
For fitting to test data, see
https://github.com/edgar-tr/patchnets#evaluation
|
Thank you so much! I managed to convert latent codes into meshes using the function visualize_parts_individually() (patches), which generated .obj files following train_split.json. The function evaluate_patch_network_metrics() includes visualize_mixture() (full object). In visualize_mixture (line 105), it seems that the variable latent_code is not declared where latent_init=latent_code is used in the else branch. How can I fix it? |
There should never be a case where the code lands in the else branch. If
you follow the readme for test-set reconstruction (see my earlier
comment), the test set will technically be treated as a training set by
the code (which is why it's important to keep the network weights fixed,
for example). Therefore, the if branch should always get chosen by the
code.
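For intuition, a minimal sketch of that setup (all names assumed, not the repo's exact code): the decoder weights are frozen and only the per-shape latent codes receive gradients, so "training" on the test set cannot change the network:

```python
import torch

decoder = torch.nn.Linear(128, 1)  # stand-in for the actual decoder network
for param in decoder.parameters():
    param.requires_grad = False    # keep the network weights fixed

# One latent code per test shape; these are the only optimized variables.
latent_codes = torch.zeros(10, 128, requires_grad=True)
optimizer = torch.optim.Adam([latent_codes], lr=1e-3)
```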
|
Thanks a lot! |
I tried to set [use_precomputed]=False, but the error still happened because of: