Training Time and Dense Point Cloud Quality #26

Open · jonstephens85 opened this issue Feb 20, 2024 · 4 comments

Comments

@jonstephens85 commented Feb 20, 2024

Great project everyone!

I had two questions regarding training and the dense point cloud.

Training Time
I noticed that after 100 epochs, the training speed dropped to about 25% of what it was before the 100th epoch. I was seeing around 60 e/s and now it's around 15 e/s. I also noticed my VRAM usage went from 17 GB to 24 GB at that point. Should it be maxing out my VRAM?

Also, since this is an outdoor scene, should I have used --PipelineParams.enable_environment_map true? I'm not sure what "extensive" means when referring to a scene.

Point Cloud Quality
The dense point cloud I got from COLMAP has 9.7 million points. There isn't too much noise in the output; however, it did project a lot of points below ground. For reference, I filmed 3 loops around a statue. There are a lot of points under the statue that are underground. Would it speed up processing if I cleaned up the noisy data below the surface?

@jonstephens85 (Author) commented Feb 20, 2024

Update: I got to epoch 263 and ran out of VRAM. I am running an RTX 3090 Ti.

I took a look at the sample datasets and noticed the images were 1920x1080 JPGs. I used images of the same resolution, but mine were PNGs with roughly 5x larger file sizes. I'm not sure if that had something to do with it.

@iFimo commented Feb 21, 2024

I noticed that after 100 epochs, the training speed dropped to about 25% of what it was before the 100th epoch. I was seeing around 60 e/s and now it's around 15 e/s. I also noticed my VRAM usage went from 17 GB to 24 GB at that point. Should it be maxing out my VRAM?

I can confirm it is exactly the same for me. I did a small training run yesterday evening with 14 pictures; it took about 40 minutes on my 3080. Here, too, VRAM usage went up to 24 GB after the first few epochs, just like yours. The images I tested were 2155x1094 JPGs, each about 1.3 MB.

You have probably already tried a smaller dataset. Also try JPG; it won't hurt, and it could save you the crucial resources.
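If it helps, here is a minimal sketch of how one could batch-convert the PNGs into downscaled JPGs with Pillow. This is just my own suggestion, not anything TRIPS requires; the folder names and the 1920x1080 target are placeholders you would adapt:

```python
# Minimal sketch: convert large PNGs to downscaled JPGs before training.
# Assumes Pillow is installed; "images_png"/"images_jpg" are placeholder paths.
from pathlib import Path
from PIL import Image

src, dst = Path("images_png"), Path("images_jpg")
dst.mkdir(exist_ok=True)

for png in sorted(src.glob("*.png")):
    img = Image.open(png).convert("RGB")             # JPEG has no alpha channel
    img.thumbnail((1920, 1080))                      # downscale in place, keeping aspect ratio
    img.save(dst / (png.stem + ".jpg"), quality=92)  # much smaller files on disk
```

That roughly matches the 1920x1080 JPGs from the sample datasets mentioned above.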

Also, since this is an outdoor scene, should I have used --PipelineParams.enable_environment_map true? I'm not sure what "extensive" means when referring to a scene.

Unfortunately I can't say anything about the command itself.


I'm now installing TRIPS on a second PC with 2x 4090 and will then start my 400-image set there. I hope I don't run into the same problems. I'll keep you updated.

@lfranke (Owner) commented Feb 21, 2024

Hi, thanks!

I noticed that after 100 epochs, the training speed dropped to about 25% of what it was before the 100th epoch. I was seeing around 60 e/s and now it's around 15 e/s. I also noticed my VRAM usage went from 17 GB to 24 GB at that point. Should it be maxing out my VRAM?

This is by design. After 100 epochs, we add the VGG loss to the mix. This loss is quite computationally expensive and also requires significant VRAM. We achieved our best results with it; however, you can change when it kicks in by using --TrainParams.only_start_vgg_after_epochs 100 with an epoch later or earlier than 100.

The dense point cloud I got from COLMAP has 9.7 million points. There isn't too much noise in the output; however, it did project a lot of points below ground. For reference, I filmed 3 loops around a statue. There are a lot of points under the statue that are underground. Would it speed up processing if I cleaned up the noisy data below the surface?

The point cloud size is similar to our examples, and if there are no large areas missing it should be OK. Outlier points are automatically made transparent, so that should not be a big issue. As for removing the underground points: I would expect it to speed up training and rendering, but most likely not by very much, so I'm not sure it is worth the effort.
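If you do want to try cleaning the cloud anyway, here is a minimal sketch of one possible way to do it with Open3D. This is not part of TRIPS; the file name, the choice of up-axis, and the -0.2 ground threshold are placeholders you would need to adapt to your reconstruction:

```python
# Minimal sketch: drop stray outliers and points below an assumed ground level
# from a COLMAP dense cloud before preprocessing. Assumes Open3D is installed;
# "dense.ply", the z-axis, and the -0.2 threshold are placeholder values.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("dense.ply")

# Remove statistical outliers (points far from their neighbours).
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Keep only points above the assumed ground height (z > -0.2 in scene units);
# pick the axis and threshold that match your scene's orientation.
pts = np.asarray(pcd.points)
keep = np.where(pts[:, 2] > -0.2)[0]
pcd = pcd.select_by_index(keep)

o3d.io.write_point_cloud("dense_cleaned.ply", pcd)
```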

Also, since this is an outdoor scene, should I have used --PipelineParams.enable_environment_map true? I'm not sure what "extensive" means when referring to a scene.

In most cases, COLMAP point clouds do very well in outdoor scenes. For far-away objects, some points are created which can be used by TRIPS. I would start off by not using the environment map and add it only if the rendering result for backgrounds is bad. In general, background reconstruction in our method is not perfect, and the environment map is more of a band-aid fix.

@jonstephens85 (Author)

@lfranke thank you for the update and confirming that I didn't run into a bug!
