This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

Improving reconstruction quality #94

Closed
demacdo opened this issue Mar 19, 2022 · 2 comments

Comments

@demacdo

demacdo commented Mar 19, 2022

Hi there, I had a few questions about using this repo. I'm trying to overfit the network to a single mesh, training and testing on the same .npz file, but I'm getting poor reconstruction quality. The network is learning something, but only very low-frequency information. I'm not sure whether the issue is my input data, my training strategy, or the reconstruction step.

Are there any tips for fitting this network on a single mesh?

Things I've done:

  • Tried various preprocessing strategies, including higher and lower surface variance, more points, etc., with little to no improvement.
  • Based my specs.json file on the example folders, pointing it at the appropriate split files and setting ScenesPerBatch to 1 (a sketch of such a file follows this list).
  • Tried changing ClampingDistance, CodeRegularizationLambda, CodeLength, and various LearningRateSchedule parameters, with slight improvements here and there, but still poor reconstruction.
  • Tried increasing the number of epochs up to 50,000.
  • Tried changing various reconstruction parameters (reconstruction.py lines 254–262), with little improvement.
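For concreteness, here is a sketch of the kind of single-mesh specs.json I mean, written out from Python. The key names come from the example folders; the paths and values are illustrative assumptions rather than tested recommendations, and the network keys are just copied from the examples.

```python
# Sketch of a single-mesh specs.json (illustrative values, not tested
# recommendations). Key names follow the example folders; NetworkArch and
# NetworkSpecs are omitted here and copied unchanged from the examples.
import json

specs = {
    "Description": "Overfit a single mesh",
    "DataSource": "data/",  # hypothetical path; adjust to your layout
    "TrainSplit": "examples/splits/single_mesh_train.json",  # hypothetical
    "TestSplit": "examples/splits/single_mesh_test.json",    # same mesh as train
    "CodeLength": 256,
    "NumEpochs": 2000,
    "ScenesPerBatch": 1,       # only one scene in the split
    "SamplesPerScene": 16384,
    "ClampingDistance": 0.1,
    "CodeRegularizationLambda": 1e-4,
    # One schedule for the network weights, one for the latent codes,
    # as in the example specs.
    "LearningRateSchedule": [
        {"Type": "Step", "Initial": 0.0005, "Interval": 500, "Factor": 0.5},
        {"Type": "Step", "Initial": 0.001, "Interval": 500, "Factor": 0.5},
    ],
}

with open("examples/single_mesh/specs.json", "w") as f:
    json.dump(specs, f, indent=2)
```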

Questions:

  • When using the plot_log.py script, what magnitude of loss typically indicates a decent fit? My losses typically plateau below 0.005, sometimes around 0.0025, but I'm not sure whether this is "good enough". Beyond that point, losses don't seem to decrease significantly with longer training.
  • During reconstruction, the 0-contour is sometimes outside the range of the data (especially as I train longer), but usually not during the first few hundred epochs (see the diagnostic sketch after this list).
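On the second point, a quick way to check how well the samples actually constrain the zero level set, assuming the standard preprocessing layout where the .npz holds "pos" and "neg" arrays of shape (N, 4) = (x, y, z, sdf); the path below is hypothetical:

```python
# Sanity check on a preprocessed .npz: the positive and negative SDF
# samples should straddle zero, otherwise little data constrains the
# 0-contour. Assumes the standard "pos"/"neg" layout; path is hypothetical.
import numpy as np

samples = np.load("data/SdfSamples/my_dataset/my_class/my_mesh.npz")
pos, neg = samples["pos"], samples["neg"]  # each (N, 4): x, y, z, sdf

print("pos sdf range:", pos[:, 3].min(), "to", pos[:, 3].max())
print("neg sdf range:", neg[:, 3].min(), "to", neg[:, 3].max())

# Fraction of samples very close to the surface; if this is tiny, the
# network sees little direct evidence about where the 0-contour sits.
sdf = np.concatenate([pos[:, 3], neg[:, 3]])
print("fraction with |sdf| < 0.01:", (np.abs(sdf) < 0.01).mean())
```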

I've attached a plot of the loss and screenshots of the training data (sampled using standard parameters) and the zero surface (at epochs 500, 1000, and 2000). Note: my learning rates are an order of magnitude smaller than in the examples, but training reaches the same place with the standard learning rates.

[Attached images: zero surface at epochs 500, 1000, and 2000; training loss plot]

@demacdo
Author

demacdo commented Mar 21, 2022

To follow up: I got the network learning a single mesh using the default parameters, but I had to create a number of resampled point clouds and train on that larger batch of point clouds instead of just one. I'm not sure whether this was a batch-size issue or whether there was simply not enough information in one resampled point cloud to properly learn and reconstruct the mesh. After training on the larger batch, my losses were in the ballpark of 0.0025–0.005, with decent reconstruction.
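For reference, a sketch of what that split setup can look like, with illustrative names (the dataset → class → instance-list structure follows the example split files):

```python
# Sketch of the "many resampled copies" idea: list N resampled copies of
# the same mesh in the train split so each batch draws from more samples.
# All names are illustrative; the split structure follows the examples.
import json

n_copies = 16  # illustrative; ScenesPerBatch in specs.json is raised to match

split = {
    "my_dataset": {
        "my_class": [f"my_mesh_resampled_{i:02d}" for i in range(n_copies)],
    }
}

with open("examples/splits/single_mesh_train.json", "w") as f:
    json.dump(split, f, indent=2)

# Each my_mesh_resampled_XX entry points to its own .npz, produced by
# rerunning the preprocessing on the same mesh (different random samples).
```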

@demacdo demacdo closed this as completed Mar 21, 2022
@submagr

submagr commented Nov 22, 2022

Hello @demacdo,
Did you write your own code for training on a single mesh without the latent vector z? Could you please elaborate on your procedure?

Thanks!
