Hi, I'm very interested in your work and I'm trying to retrain the chair parsing network. But after following all the instructions in the README and using the same arguments, my converged model performs much worse on ShapeNet v2. The visualization of the input X (voxels) fed into the model looks like this:
![sparse](https://user-images.githubusercontent.com/22662429/64134796-ca0aba80-cd96-11e9-94cf-f36084881f26.png)
while the ground truth mesh looks like this:
![Screenshot from 2019-09-02 15-28-13](https://user-images.githubusercontent.com/22662429/64134802-dabb3080-cd96-11e9-89ab-f7621381dbad.png)
Is this input sparsity intended, or is there a version problem in the release? Reading the dataloader, I found that this comes from the occupancy-grid voxelizer, which only marks cells containing mesh vertices rather than voxelizing the full surface. Is that by design?
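For reference, here is a minimal sketch (not the repository's actual code; function and variable names are my own) of what a vertex-only occupancy voxelizer does, and why it produces sparse grids for meshes with large, low-poly faces:

```python
import numpy as np

def voxelize_vertices(vertices, grid_size=32):
    # Hypothetical sketch of a vertex-only voxelizer: map each mesh
    # vertex into a [0, grid_size)^3 occupancy grid. Only cells that
    # contain a vertex become occupied, so a large flat face with few
    # vertices leaves almost all of its surface area empty in the grid.
    vmin = vertices.min(axis=0)
    vmax = vertices.max(axis=0)
    scale = (grid_size - 1) / (vmax - vmin).max()
    idx = np.floor((vertices - vmin) * scale).astype(int)
    idx = np.clip(idx, 0, grid_size - 1)
    grid = np.zeros((grid_size,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

# A unit square face with only 4 vertices occupies just 4 voxels,
# even though the face itself spans the whole grid.
square = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
grid = voxelize_vertices(square, grid_size=32)
print(grid.sum())  # 4 occupied cells out of 32768
```

A surface voxelizer would instead sample points on each triangle (or test triangle/voxel intersection), which fills in the faces and yields the dense-looking grids one might expect from the ground truth mesh.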