The code works while it is optimizing the code only, but when it switches to "jointly optimize weights with code", it fails at optimizer.step() with the error below.
Starting to jointly optimize weights with code
Traceback (most recent call last):
File "run_nerf.py", line 210, in <module>
train()
File "run_nerf.py", line 125, in train
optimizer.step()
File "/usr/local/lib/python3.7/dist-packages/torch/optim/optimizer.py", line 88, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/optim/adam.py", line 118, in step
eps=group['eps'])
File "/usr/local/lib/python3.7/dist-packages/torch/optim/_functional.py", line 86, in adam
exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
RuntimeError: The size of tensor a (163) must match the size of tensor b (256) at non-singleton dimension 1
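This failure mode can be reproduced outside the repo: Adam keeps per-parameter state tensors (exp_avg, exp_avg_sq) shaped like each parameter, and Optimizer.load_state_dict matches saved state to parameters by position, not by shape, so a checkpointed state of one shape can silently end up attached to a parameter of another shape and only blow up inside step(). A minimal sketch (the sizes 163 and 256 are taken from the traceback above; everything else is illustrative, not this repo's code):

```python
import torch

# Adam populates exp_avg/exp_avg_sq with the parameter's shape on the
# first step(), and those tensors are what get checkpointed.
old_param = torch.nn.Parameter(torch.zeros(4, 163))
opt = torch.optim.Adam([old_param])
old_param.sum().backward()
opt.step()                       # exp_avg now has shape (4, 163)
state = opt.state_dict()         # "checkpoint" of the optimizer state

# A parameter whose second dimension has changed, e.g. a wider latent code.
new_param = torch.nn.Parameter(torch.zeros(4, 256))
opt2 = torch.optim.Adam([new_param])
opt2.load_state_dict(state)      # loads without complaint: state is matched
                                 # to parameters by position, not shape
new_param.sum().backward()
try:
    opt2.step()                  # stale (4, 163) exp_avg meets (4, 256) grad
except RuntimeError as e:
    print(e)                     # shape-mismatch error, as in the traceback
```

The takeaway is that the mismatch is created at load time but only surfaces on the first joint-optimization step, which matches the behavior reported here.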
Furthermore, the above command works fine when we skip loading the pre-trained model using the --skip_loading flag.
Any suggestions on how to go about solving this?
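If the mismatch really is between the optimizer state stored in the checkpoint and the current parameter shapes (which the --skip_loading observation suggests), one common workaround is to restore only the network weights and let the optimizer start with fresh state. A hedged sketch, not this repo's actual loading code; `model`, `ckpt`, and the dict keys are illustrative stand-ins:

```python
import torch

# Stand-ins: in the real script these would come from the repo's model
# class and from torch.load(checkpoint_path).
model = torch.nn.Linear(163, 4)
ckpt = {"model_state_dict": model.state_dict()}

# Restore the weights...
model.load_state_dict(ckpt["model_state_dict"])

# ...but build a brand-new optimizer instead of calling
# optimizer.load_state_dict(...), so no stale exp_avg/exp_avg_sq tensors
# with the old shapes get attached to the current parameters. The state
# dict starts empty and is repopulated on the first step().
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
print(len(optimizer.state))  # 0: fresh per-parameter state
```

The trade-off is losing the saved Adam momentum/variance estimates, which usually only costs a brief warm-up at the start of joint optimization.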
Hi,
Thank you for the great work; it was interesting to read.

For reference, the error occurs when running the example real-image fitting as follows:

python run_nerf.py --config configs/dosovitskiy_chairs/config.txt --real_image_dir data/real_chairs/shape00001_charlton --N_rand 512 --n_iters_real 10000 --n_iters_code_only 1000 --style_optimizer lbfgs --i_testset 1000 --i_weights 1000 --savedir real_chairs/shape00001_charlton --testskip 1