This repository has been archived by the owner on Oct 31, 2023. It is now read-only.
Hi there, I had a few questions about using this repo. I'm trying to overfit the network to a single mesh, training and testing on the same .npz file, but I'm getting poor reconstruction quality. The network is learning something, but only very low-frequency information. I'm not sure if the issue is my input data, training strategy, or reconstruction.
Are there any tips for fitting this network on a single mesh?
Things I've done:
Tried various preprocessing strategies (higher and lower surface sampling variance, more points, etc.), with little to no improvement.
Based my specs.json on the example folders, pointing it at the appropriate split files and setting ScenesPerBatch to 1.
Tried changing ClampingDistance, CodeRegularizationLambda, CodeLength, and various LearningRateSchedule parameters, with slight improvements here and there but still poor reconstruction.
Tried increasing the number of epochs, up to 50,000.
Tried changing various reconstruction parameters (reconstruction.py, lines 254–262), with little help.
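For concreteness, the kind of specs.json overrides I mean look roughly like this (illustrative values only, not the exact example-folder settings; the split filenames are placeholders for my own):

```json
{
  "Description": "Single-mesh overfit (illustrative values)",
  "TrainSplit": "examples/splits/single_mesh_train.json",
  "TestSplit": "examples/splits/single_mesh_test.json",
  "ScenesPerBatch": 1,
  "CodeLength": 256,
  "ClampingDistance": 0.1,
  "CodeRegularizationLambda": 1e-4,
  "NumEpochs": 2000
}
```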
Questions:
When using the plot_log.py script, what magnitude of loss typically indicates a decent fit? My losses typically plateau somewhere below 0.005, sometimes around 0.0025, but I'm not sure whether that is "good enough". Beyond that point, the loss doesn't decrease significantly with longer training.
During reconstruction, the 0-contour is sometimes outside the range of the data (especially as I train longer), though this doesn't usually happen during the first few hundred epochs.
I've attached a plot of the loss, plus screenshots of the training data (sampled using standard params) and the zero-level surface (at epochs 500, 1000, and 2000). Note that my learning rates are an order of magnitude smaller than in the examples, but training ends up in the same place with the standard learning rates.
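For context on the magnitudes I'm quoting: as far as I can tell, the logged training loss is an L1 on SDF values with both prediction and target clamped to the ClampingDistance, so a NumPy sketch of it (my reading of the code, not an exact copy) would be:

```python
import numpy as np

def clamped_l1(pred, gt, delta=0.1):
    """Clamped L1 loss on SDF samples, as I understand DeepSDF's training
    objective: both predicted and ground-truth signed distances are clipped
    to [-delta, delta] (delta = ClampingDistance) before taking the L1 mean."""
    clamp = lambda x: np.clip(x, -delta, delta)
    return float(np.mean(np.abs(clamp(pred) - clamp(gt))))
```

Since each per-sample term is bounded by 2 * delta = 0.2 with the default ClampingDistance of 0.1, a plateau around 0.0025–0.005 is only 1–2% of the worst case, which is part of why I can't tell whether it's "good enough".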
To follow up: I got the network learning a single mesh with the default parameters, but I had to create a number of resampled point clouds and train on that larger batch of point clouds instead of just one. I'm not sure whether this was a batch-size issue or whether a single resampled point cloud simply didn't contain enough information to properly learn and reconstruct the mesh. After training on the larger batch, my losses were in the ballpark of 0.0025–0.005, with decent reconstruction.
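In case it helps anyone trying the same thing, the resampling step looked roughly like this. It's a toy sketch: a unit-radius-0.5 sphere with an analytic SDF stands in for the real mesh and preprocessing binary, and the function names are mine. The "pos"/"neg" .npz layout ((N, 4) arrays of xyz + signed distance) is the format the data loader reads; everything else is illustrative.

```python
import numpy as np

def resample_sphere_sdf(n_points=50000, variance=0.005, seed=0):
    # Toy stand-in for the preprocessing step: sample points near the
    # surface of a sphere of radius 0.5 and compute their exact SDFs.
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_points, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    surface = 0.5 * dirs
    samples = surface + rng.normal(scale=np.sqrt(variance), size=surface.shape)
    sdf = np.linalg.norm(samples, axis=1) - 0.5
    return samples, sdf

def save_resamples(out_prefix, k=10):
    # Write k independently resampled .npz files in the "pos"/"neg"
    # layout: each key holds an (N, 4) float32 array of xyz + SDF.
    paths = []
    for i in range(k):
        pts, sdf = resample_sphere_sdf(seed=i)
        data = np.concatenate([pts, sdf[:, None]], axis=1).astype(np.float32)
        path = f"{out_prefix}_{i:02d}.npz"
        np.savez(path, pos=data[sdf >= 0], neg=data[sdf < 0])
        paths.append(path)
    return paths
```

Each of the k files then gets its own entry in the train split, so the network sees k "scenes" that are all resamples of the same mesh.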