
Encoder + MLP combo #2063

Merged: 8 commits, Dec 1, 2023

Conversation

@ethanweber (Collaborator, Author) commented Jun 12, 2023

In PR #1936, I removed the use of tcnn.NetworkWithInputEncoding in favor of torch.nn.Sequential. Here I revert that with a new class called EncoderAndMLP. This should be faster when using tcnn with nerfacto. See the "combo" runs below, though they don't show a significant (if any) improvement.

(Screenshots: two training runs comparing the implementations, Jun 11, 2023.)
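To illustrate the two call patterns under discussion, here is a minimal stdlib-Python sketch. The class names are stand-ins, not the real API: the actual code uses torch modules and tcnn.NetworkWithInputEncoding, where fusing the encoding and network into one object lets the tcnn backend run them in a single fused kernel instead of two separate launches.

```python
class Encoder:
    """Stand-in for a hash-grid encoding: maps inputs to features."""
    def __call__(self, x):
        return [v * 2.0 for v in x]  # placeholder "encoding"

class MLP:
    """Stand-in for a small MLP head."""
    def __call__(self, feats):
        return sum(feats)  # placeholder "network"

class EncoderAndMLP:
    """Fused module, analogous to tcnn.NetworkWithInputEncoding:
    one object owns both stages, so a fused backend could execute
    them together rather than as two separate modules."""
    def __init__(self, encoder, mlp):
        self.encoder = encoder
        self.mlp = mlp
    def __call__(self, x):
        return self.mlp(self.encoder(x))

encoder, mlp = Encoder(), MLP()

# Sequential pattern (what PR #1936 switched to):
sequential = lambda x: mlp(encoder(x))

# Fused pattern (what this PR restores):
combo = EncoderAndMLP(encoder, mlp)

# Both produce identical outputs; only the execution structure differs.
assert sequential([1.0, 2.0]) == combo([1.0, 2.0])
```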

@@ -102,38 +103,41 @@ def __init__(
self.build_nn_modules()
elif implementation == "tcnn" and not TCNN_EXISTS:
print_tcnn_speed_warning("MLP")
self.build_nn_modules()
Collaborator Author (inline comment):
I fixed this in this PR too!

@maturk (Collaborator) commented Jun 12, 2023

@ethanweber How come it's faster this way?

"num_layers": num_layers,
"layer_width": hidden_dim,
"out_dim": 1 + self.geo_feat_dim,
"activation": nn.ReLU(),
Collaborator (inline comment):

It seems like this should be passing nn.ReLU instead of an instance; is there a reason it needs to call the constructor here?
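The reviewer's distinction can be sketched with plain Python: a config can carry either the activation *class* (which the consumer constructs) or an already-built *instance* (used as-is). The names below are stand-ins for torch.nn.ReLU; if the consuming MLP expects a module instance for its `activation` argument, passing `nn.ReLU()` is the pattern that fits.

```python
class ReLU:
    """Stand-in for torch.nn.ReLU (a stateless callable module)."""
    def __call__(self, x):
        return max(x, 0.0)

def build_with_class(activation_cls):
    # Consumer receives the class and constructs the activation itself.
    return activation_cls()

def build_with_instance(activation):
    # Consumer receives a ready instance (what the config in this diff does).
    return activation

act_a = build_with_class(ReLU)       # pass the class
act_b = build_with_instance(ReLU())  # pass an instance

# Either way, the resulting callable behaves identically.
assert act_a(-1.0) == 0.0 and act_b(2.0) == 2.0
```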

@ethanweber (Collaborator, Author) commented Aug 19, 2023

I cleaned up the code after seeing issue #2348. I tested with the dozer scene from the Nerfstudio Dataset. Running nerfacto, I see the following memory usage with the combined MLP and hash encoder vs. the sequential version:

  • 2901MB with combo
  • 2937MB with torch sequential

@tancik, I went with your idea of combining the encoder and MLP into a Python class with typed arguments.
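The "Python class with typed arguments" idea might look roughly like the sketch below. All field and method names here are hypothetical, not the actual nerfstudio API; the `n_neurons`/`n_hidden_layers` keys follow tiny-cuda-nn's network-config convention, and the mapping from `num_layers` to hidden layers is an assumption.

```python
from dataclasses import dataclass

@dataclass
class EncoderAndMLPConfig:
    """Hypothetical typed config replacing a loosely-typed dict."""
    num_layers: int = 2          # total layers, including the output layer
    layer_width: int = 64        # hidden-layer width
    out_dim: int = 16            # output feature dimension
    num_levels: int = 16         # hash-grid levels (assumed parameter)
    features_per_level: int = 2  # assumed parameter

    def as_tcnn_network_config(self) -> dict:
        # Translate typed fields into the dict a tcnn backend consumes.
        # Assumes num_layers counts the output layer, hence the -1.
        return {
            "n_hidden_layers": self.num_layers - 1,
            "n_neurons": self.layer_width,
        }

cfg = EncoderAndMLPConfig(num_layers=3, layer_width=128)
assert cfg.as_tcnn_network_config() == {"n_hidden_layers": 2, "n_neurons": 128}
```

Typed arguments let mistakes (a misspelled key, a wrong type) surface at construction time instead of deep inside the backend.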

@brentyi brentyi enabled auto-merge (squash) December 1, 2023 03:10
@brentyi brentyi merged commit 4c627ed into main Dec 1, 2023
4 checks passed
@brentyi brentyi deleted the ethan/encoder_and_mlp_combo branch December 1, 2023 03:16
@yurkovak (Contributor) commented Dec 3, 2023

Nice that it's supposed to be faster! Just note that it breaks older checkpoints: the default non-strict loading silently skips parts of the model, leading to poor renders that are hard to explain.
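The silent-skip failure mode can be reproduced with plain dicts. This mimics the behaviour of torch's `load_state_dict(strict=False)`: keys that no longer match after a module rename are simply dropped, so the model "loads" successfully but those parameters keep their random initialization. The parameter names below are hypothetical.

```python
def load_state_dict(model_params, checkpoint, strict=True):
    """Dict-based mimic of torch's strict/non-strict state-dict loading."""
    missing = [k for k in model_params if k not in checkpoint]
    unexpected = [k for k in checkpoint if k not in model_params]
    if strict and (missing or unexpected):
        raise KeyError(f"missing={missing}, unexpected={unexpected}")
    for k in model_params:
        if k in checkpoint:
            model_params[k] = checkpoint[k]
    return missing, unexpected

# Hypothetical rename: the old checkpoint stored one key name, the
# refactored model expects another, so nothing matches.
new_model = {"mlp_base.fused.weight": "random-init"}
old_ckpt = {"mlp_base.0.weight": "trained"}

missing, unexpected = load_state_dict(new_model, old_ckpt, strict=False)

# Non-strict loading reports the mismatch but does not fail, and the
# trained weights are never copied in.
assert new_model["mlp_base.fused.weight"] == "random-init"
assert missing == ["mlp_base.fused.weight"]
```

With `strict=True` the mismatch would raise immediately, which is why non-strict defaults make this kind of breakage hard to notice.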


6 participants