Very sparse voxels for input. #4

Open
LR32768 opened this issue Sep 2, 2019 · 1 comment

Comments

@LR32768

LR32768 commented Sep 2, 2019

Hi, I am very interested in your work and I'm trying to retrain the chair parsing network. But after following all the instructions in the README and using the same arguments, my converged model performs much worse on ShapeNet v2. The visualization of the input X (voxels) fed into the model looks like this:

[image: sparse voxel input]

while the ground truth mesh looks like this:

[image: Screenshot from 2019-09-02 15-28-13]

Is this input sparsity intended, or is there a versioning problem with the release? I read the dataloader and found that this comes from the occupancy grid voxelizer, which only voxelizes the mesh vertices. Is this by design?
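
To illustrate the behavior described above, here is a minimal sketch (not the repository's actual dataloader; it assumes the `trimesh` library and uses hypothetical names such as `voxelize_points` and `chair.obj`) contrasting vertex-only voxelization with surface-sampled voxelization:

```python
# Sketch only: shows why marking just the voxels that contain mesh vertices
# yields a very sparse occupancy grid, while sampling points on the faces
# fills in the surface. Names and paths here are hypothetical.
import numpy as np
import trimesh  # assumed available; any loader exposing .vertices/.faces works

def voxelize_points(points, grid_size=32):
    """Mark every voxel of a grid_size^3 grid that contains at least one point."""
    mins, maxs = points.min(0), points.max(0)
    scaled = (points - mins) / (maxs - mins + 1e-8)          # normalize to [0, 1)
    idx = np.clip((scaled * grid_size).astype(int), 0, grid_size - 1)
    grid = np.zeros((grid_size,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

mesh = trimesh.load("chair.obj")  # hypothetical ShapeNet chair mesh

# Vertex-only voxelization: occupancy is limited to cells that happen to
# contain a vertex, so a low-poly chair covers very few voxels.
sparse_grid = voxelize_points(np.asarray(mesh.vertices))

# Surface-sampled voxelization: sampling many points on the faces marks
# every cell the surface passes through, giving a much denser grid.
surface_points, _ = trimesh.sample.sample_surface(mesh, 100000)
dense_grid = voxelize_points(np.asarray(surface_points))

print(sparse_grid.sum(), "occupied voxels from vertices alone")
print(dense_grid.sum(), "occupied voxels from surface samples")
```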

@shubham-goel

I'm also facing this issue. Any update on this?
