Optimizer.step fails with RuntimeError: The size of tensor a (163) must match the size of tensor b (256) #4

Open
vishnukool opened this issue Aug 27, 2021 · 1 comment

vishnukool commented Aug 27, 2021

Hi,
Thank you for the great work; it was an interesting read.

I am trying to run the real image fitting example with the command below:

python run_nerf.py --config configs/dosovitskiy_chairs/config.txt --real_image_dir data/real_chairs/shape00001_charlton --N_rand 512 --n_iters_real 10000 --n_iters_code_only 1000 --style_optimizer lbfgs --i_testset 1000 --i_weights 1000 --savedir real_chairs/shape00001_charlton --testskip 1

The code works while it is optimizing the code only, but when it switches to "jointly optimize weights with code", it fails at optimizer.step() with the error below:

Starting to jointly optimize weights with code
Traceback (most recent call last):
  File "run_nerf.py", line 210, in <module>
    train()
  File "run_nerf.py", line 125, in train
    optimizer.step()
  File "/usr/local/lib/python3.7/dist-packages/torch/optim/optimizer.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/optim/adam.py", line 118, in step
    eps=group['eps'])
  File "/usr/local/lib/python3.7/dist-packages/torch/optim/_functional.py", line 86, in adam
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
RuntimeError: The size of tensor a (163) must match the size of tensor b (256) at non-singleton dimension 1
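
If I am reading the trace correctly, Adam's exp_avg moment buffer for one of the parameters has 163 columns while the incoming gradient has 256. That is exactly what happens when an optimizer state_dict saved for differently-shaped parameters gets restored: load_state_dict does not validate buffer shapes, and the mismatch only surfaces at the next step(). A minimal sketch of that failure mode (the shapes are made up to mirror the error message, not taken from this repo):

import torch

# Parameter shape at checkpoint time: 163 columns.
old_param = torch.nn.Parameter(torch.zeros(4, 163))
old_opt = torch.optim.Adam([old_param])
old_param.sum().backward()
old_opt.step()  # creates exp_avg / exp_avg_sq buffers of shape (4, 163)

# Parameter shape after the phase switch: 256 columns.
new_param = torch.nn.Parameter(torch.zeros(4, 256))
new_opt = torch.optim.Adam([new_param])
new_opt.load_state_dict(old_opt.state_dict())  # stale (4, 163) buffers slip in silently
new_param.sum().backward()
new_opt.step()  # RuntimeError: The size of tensor a (163) must match the size of tensor b (256)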

Furthermore, the above command works fine when we skip loading the pre-trained model via the --skip_loading flag.
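
That makes me suspect the saved optimizer state is the culprit rather than the network weights themselves. If so, one possible workaround (untested, and the checkpoint key names below are assumptions based on typical nerf-pytorch-style checkpoints, not verified against this repo) would be to restore only the model weights and build a fresh optimizer instead of restoring the saved Adam state:

import torch

def load_weights_only(model, ckpt_path, lr=5e-4):
    # Restore the pretrained network weights from the checkpoint...
    ckpt = torch.load(ckpt_path, map_location='cpu')
    model.load_state_dict(ckpt['network_fn_state_dict'])  # key name is an assumption
    # ...but deliberately do NOT call optimizer.load_state_dict(ckpt['optimizer_state_dict']),
    # so Adam starts with fresh moment buffers shaped like the current parameters.
    return torch.optim.Adam(model.parameters(), lr=lr)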

Any suggestions on how to go about solving this?

YZsZY commented Nov 3, 2022

Hello, I am running into the same problem and would like to ask how you solved it. Thanks a lot!
