
Tensor shape mismatch on "space" trained models - test_neural_texture.py #3

Closed
madhawav opened this issue Jun 30, 2020 · 1 comment

@madhawav (Contributor)

Hi,

Whenever I run the script "test_neural_texture.py", it fails on trained models whose directory names contain the term "space" (such as ../trained_models/neural_texture/version_468753_neuraltexture_rust_paint_2d_space/). I had to remove those directories from the trained_models directory to run the script to completion. I believe these "space" models synthesize textures that encode style statistics from multiple input images.

Log output

Use pytorch 1.4.0
Load config: ../trained_models/neural_texture/version_468753_neuraltexture_rust_paint_2d_space/logs/config.txt
INFO:lightning:GPU available: True, used: True
INFO:lightning:CUDA_VISIBLE_DEVICES: [0]
checkpoint loaded ../trained_models/neural_texture/version_468753_neuraltexture_rust_paint_2d_space/checkpoints/neural_texture_ckpt_epoch_1.ckpt
[PATH TO CONDA ENV]/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:23: UserWarning: Checkpoint directory ../trained_models/neural_texture/version_468753_neuraltexture_rust_paint_2d_space/checkpoints exists and is not empty with save_top_k != 0.All files in this directory will be deleted when a checkpoint is saved!
  warnings.warn(*args, **kwargs)
Testing:  33%|████████████                        | 1/3 [00:01<00:02,  1.49s/it]Traceback (most recent call last):
  File "[PROJECT ROOT]/code/systems/s_neural_texture.py", line 261, in test_step
    image_out_inter = self.forward(z_texture_interpolated, position, seed)
  File "[PROJECT ROOT]/code/systems/s_neural_texture.py", line 66, in forward
    transform_coeff, z_encoding = torch.split(weights, [self.p.texture.t, self.p.texture.e], dim=1)
  File "[PATH TO CONDA ENV]/lib/python3.8/site-packages/torch/functional.py", line 77, in split
    return tensor.split(split_size_or_sections, dim)
  File "[PATH TO CONDA ENV]/lib/python3.8/site-packages/torch/tensor.py", line 377, in split
    return super(Tensor, self).split_with_sizes(split_size, dim)
RuntimeError: start (32) + length (64) exceeds dimension size (94).
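
For reference, the failing torch.split call can be reproduced in isolation. A minimal sketch, with the sizes 32, 64, and 94 read off the traceback above (the real values come from self.p.texture.t and self.p.texture.e in the config, and the real weights tensor is produced by the network):

```python
import torch

# torch.split with a list of sizes requires those sizes to fit within the
# dimension being split. Here 32 + 64 = 96 exceeds the 94 channels the
# "space" model appears to produce, which raises a RuntimeError like the
# one in the log above.
weights = torch.zeros(1, 94)
try:
    transform_coeff, z_encoding = torch.split(weights, [32, 64], dim=1)
except RuntimeError as err:
    print("split failed:", err)

# With a tensor whose split dimension matches the requested sizes,
# the same call succeeds and yields (1, 32) and (1, 64) chunks:
weights_ok = torch.zeros(1, 96)
transform_coeff, z_encoding = torch.split(weights_ok, [32, 64], dim=1)
print(transform_coeff.shape, z_encoding.shape)
```

This suggests the checkpoint's output channel count and the t/e values in the loaded config disagree for the "space" models.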

What should I do to overcome this issue?

P.S.: I am using the pre-trained models and test images that you provided; no training images are in place.

@madhawav (Contributor, Author)

Sorry, this looks like a duplicate of #1.
I have implemented the solution suggested in #1 and opened pull request #4.
