
RuntimeError: got 4 channels instead #9

Closed
galoisgroupcn opened this issue Jun 28, 2019 · 3 comments
Hi!

I followed your instructions step by step, and after running

!python3 transfer.py --option_unpool cat5 -a --content ./examples/content --style ./examples/style --content_segment ./examples/content_segment --style_segment ./examples/style_segment/ --output ./outputs/ --verbose --image_size 512

I got

Namespace(alpha=1, content='./examples/content', content_segment='./examples/content_segment', cpu=False, image_size=512, option_unpool='cat5', output='./outputs/', style='./examples/style', style_segment='./examples/style_segment/', transfer_all=True, transfer_at_decoder=False, transfer_at_encoder=False, transfer_at_skip=False, verbose=True)
0% 0/1 [00:00<?, ?it/s]------ transfer: 1.png
Elapsed time in whole WCT: 0:00:02.926612

Traceback (most recent call last):
  File "transfer.py", line 205, in <module>
    run_bulk(config)
  File "transfer.py", line 175, in run_bulk
    img = wct2.transfer(content, style, content_segment, style_segment, alpha=config.alpha)
  File "transfer.py", line 79, in transfer
    style_feats, style_skips = self.get_all_feature(style)
  File "transfer.py", line 64, in get_all_feature
    x = self.encode(x, skips, level)
  File "transfer.py", line 55, in encode
    return self.encoder.encode(x, skips, level)
  File "/content/drive/WCT2/model.py", line 163, in encode
    out = self.conv0(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 338, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size 3 3 1 1, expected input[1, 4, 512, 512] to have 3 channels, but got 4 channels instead

Could you help me please? Thank you!

Sincerely,

Amber
jaejun-yoo (Collaborator) commented:

Hi, based on the error message, your input image is most likely a PNG with an alpha channel in addition to RGB, so the input has 4 channels. Because the model was trained on 3-channel RGB images, the dimension mismatch error is raised. Please try an RGB-only input such as a JPG. (You can convert the image with any commonly used image-processing package.)
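As a concrete illustration of the suggested fix, here is a minimal sketch using Pillow that drops the alpha channel before running transfer.py (the demo file names are hypothetical):

```python
from PIL import Image

def to_rgb(src_path, dst_path):
    """Save an RGB-only copy of an image, dropping any alpha channel."""
    img = Image.open(src_path).convert("RGB")
    img.save(dst_path)
    return img.mode

# Demo: create a 4-channel RGBA PNG, then strip the alpha channel.
Image.new("RGBA", (8, 8), (255, 0, 0, 128)).save("rgba_demo.png")
print(to_rgb("rgba_demo.png", "rgb_demo.png"))  # RGB
```

Running this over every image in the content/style folders (and their segment folders) before invoking transfer.py should avoid the 4-channel input entirely.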

jaejun-yoo commented Jul 1, 2019 via email

amberjxd commented Jul 2, 2019

@jaejun-yoo
Hi Jaejun,

So far, the problem is solved. Thank you!

All the best,

Amber

This issue was closed.