Has colorization been removed or only disabled? #36

Closed · Ghee36 opened this issue Aug 8, 2021 · 10 comments

Comments

Ghee36 commented Aug 8, 2021

Generative facial prior is genius. Wonderful research!

Was colorization removed, or just disabled as a feature? I was unable to locate the color jitter or grayscale settings.

xinntao commented Aug 8, 2021

  1. For inference, you can find the original model with colorization in README.md.
  2. For training, the option file train_gfpgan_v1.yml contains the colorization settings.
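
If you want to see exactly which options those are, you can list them from the repo root (assuming the default repo layout; run in a notebook cell):

# list the colorization-related degradation options in the v1 training config
!grep -nE "color_jitter|gray" options/train_gfpgan_v1.yml
# expect keys such as color_jitter_prob, color_jitter_shift, color_jitter_pt_prob and gray_prob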

Ghee36 commented Aug 8, 2021

I am not experienced, but I made what attempt I could: I changed the model references in the current file to "GFPGANv1.pth". I have no knowledge of training; I am only trying to output both color and B&W images to test the accuracy.

Connecting to github-releases.githubusercontent.com (github-releases.githubusercontent.com)|185.199.108.154|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 615378983 (587M) [application/octet-stream]
Saving to: ‘experiments/pretrained_models/GFPGANv1.pth’

GFPGANv1.pth 100%[===================>] 586.87M 24.3MB/s in 26s

2021-08-08 09:40:07 (22.1 MB/s) - ‘experiments/pretrained_models/GFPGANv1.pth’ saved [615378983/615378983]

Then inference fails:

Traceback (most recent call last):
  File "inference_gfpgan_full.py", line 128, in <module>
    gfpgan.load_state_dict(torch.load(args.model_path, map_location=lambda storage, loc: storage)['params_ema'])
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1407, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for GFPGANv1Clean:
Missing key(s) in state_dict: "conv_body_first.weight", "conv_body_first.bias", "conv_body_down.0.conv1.weight", "conv_body_down.0.conv1.bias", "conv_body_down.0.conv2.weight", "conv_body_down.0.conv2.bias", "conv_body_down.0.skip.weight", "conv_body_down.1.conv1.weight", "conv_body_down.1.conv1.bias", "conv_body_down.1.conv2.weight", "conv_body_down.1.conv2.bias", "conv_body_down.1.skip.weight", "conv_body_down.2.conv1.weight", "conv_body_down.2.conv1.bias", "conv_body_down.2.conv2.weight", "conv_body_down.2.conv2.bias", "conv_body_down.2.skip.weight", "conv_body_down.3.conv1.weight", "conv_body_down.3.conv1.bias", "conv_body_down.3.conv2.weight", "conv_body_down.3.conv2.bias", "conv_body_down.3.skip.weight", "conv_body_down.4.conv1.weight", "conv_body_down.4.conv1.bias", "conv_body_down.4.conv2.weight", "conv_body_down.4.conv2.bias", "conv_body_down.4.skip.weight", "conv_body_down.5.conv1.weight", "conv_body_down.5.conv1.bias", "conv_body_down.5.conv2.weight", "conv_body_down.5.conv2.bias", "conv_body_down.5.skip.weight", "conv_body_down.6.conv1.weight", "conv_body_down.6.conv1.bias", "conv_body_down.6.conv2.weight", "conv_body_down.6.conv2.bias", "conv_body_down.6.skip.weight", "final_conv.weight", "final_conv.bias", "conv_body_up.0.conv1.weight", "conv_body_up.0.conv1.bias", "conv_body_up.0.conv2.bias", "conv_body_up.1.conv1.weight", "conv_body_up.1.conv1.bias", "conv_body_up.1.conv2.bias", "conv_body_up.2.conv1.weight", "conv_body_up.2.conv1.bias", "conv_body_up.2.conv2.bias", "conv_body_up.3.conv1.weight", "conv_body_up.3.conv1.bias", "conv_body_up.3.conv2.bias", "conv_body_up.4.conv1.weight", "conv_body_up.4.conv1.bias", "conv_body_up.4.conv2.bias", "conv_body_up.5.conv1.weight", "conv_body_up.5.conv1.bias", "conv_body_up.5.conv2.bias", "conv_body_up.6.conv1.weight", "conv_body_up.6.conv1.bias", "conv_body_up.6.conv2.bias", "stylegan_decoder.style_mlp.9.weight", "stylegan_decoder.style_mlp.9.bias", "stylegan_decoder.style_mlp.11.weight", "stylegan_decoder.style_mlp.11.bias", "stylegan_decoder.style_mlp.13.weight", "stylegan_decoder.style_mlp.13.bias", "stylegan_decoder.style_mlp.15.weight", "stylegan_decoder.style_mlp.15.bias", "stylegan_decoder.style_conv1.bias", "stylegan_decoder.style_convs.0.bias", "stylegan_decoder.style_convs.1.bias", "stylegan_decoder.style_convs.2.bias", "stylegan_decoder.style_convs.3.bias", "stylegan_decoder.style_convs.4.bias", "stylegan_decoder.style_convs.5.bias", "stylegan_decoder.style_convs.6.bias", "stylegan_decoder.style_convs.7.bias", "stylegan_decoder.style_convs.8.bias", "stylegan_decoder.style_convs.9.bias", "stylegan_decoder.style_convs.10.bias", "stylegan_decoder.style_convs.11.bias", "stylegan_decoder.style_convs.12.bias", "stylegan_decoder.style_convs.13.bias".
Unexpected key(s) in state_dict: "conv_body_first.0.weight", "conv_body_first.1.bias", "conv_body_down.0.conv1.0.weight", "conv_body_down.0.conv1.1.bias", "conv_body_down.0.conv2.1.weight", "conv_body_down.0.conv2.2.bias", "conv_body_down.0.skip.1.weight", "conv_body_down.1.conv1.0.weight", "conv_body_down.1.conv1.1.bias", "conv_body_down.1.conv2.1.weight", "conv_body_down.1.conv2.2.bias", "conv_body_down.1.skip.1.weight", "conv_body_down.2.conv1.0.weight", "conv_body_down.2.conv1.1.bias", "conv_body_down.2.conv2.1.weight", "conv_body_down.2.conv2.2.bias", "conv_body_down.2.skip.1.weight", "conv_body_down.3.conv1.0.weight", "conv_body_down.3.conv1.1.bias", "conv_body_down.3.conv2.1.weight", "conv_body_down.3.conv2.2.bias", "conv_body_down.3.skip.1.weight", "conv_body_down.4.conv1.0.weight", "conv_body_down.4.conv1.1.bias", "conv_body_down.4.conv2.1.weight", "conv_body_down.4.conv2.2.bias", "conv_body_down.4.skip.1.weight", "conv_body_down.5.conv1.0.weight", "conv_body_down.5.conv1.1.bias", "conv_body_down.5.conv2.1.weight", "conv_body_down.5.conv2.2.bias", "conv_body_down.5.skip.1.weight", "conv_body_down.6.conv1.0.weight", "conv_body_down.6.conv1.1.bias", "conv_body_down.6.conv2.1.weight", "conv_body_down.6.conv2.2.bias", "conv_body_down.6.skip.1.weight", "final_conv.0.weight", "final_conv.1.bias", "conv_body_up.0.conv1.0.weight", "conv_body_up.0.conv1.1.bias", "conv_body_up.0.conv2.activation.bias", "conv_body_up.1.conv1.0.weight", "conv_body_up.1.conv1.1.bias", "conv_body_up.1.conv2.activation.bias", "conv_body_up.2.conv1.0.weight", "conv_body_up.2.conv1.1.bias", "conv_body_up.2.conv2.activation.bias", "conv_body_up.3.conv1.0.weight", "conv_body_up.3.conv1.1.bias", "conv_body_up.3.conv2.activation.bias", "conv_body_up.4.conv1.0.weight", "conv_body_up.4.conv1.1.bias", "conv_body_up.4.conv2.activation.bias", "conv_body_up.5.conv1.0.weight", "conv_body_up.5.conv1.1.bias", "conv_body_up.5.conv2.activation.bias", "conv_body_up.6.conv1.0.weight", "conv_body_up.6.conv1.1.bias", "conv_body_up.6.conv2.activation.bias", "stylegan_decoder.style_mlp.2.weight", "stylegan_decoder.style_mlp.2.bias", "stylegan_decoder.style_mlp.4.weight", "stylegan_decoder.style_mlp.4.bias", "stylegan_decoder.style_mlp.6.weight", "stylegan_decoder.style_mlp.6.bias", "stylegan_decoder.style_mlp.8.weight", "stylegan_decoder.style_mlp.8.bias", "stylegan_decoder.style_conv1.activate.bias", "stylegan_decoder.style_convs.0.activate.bias", "stylegan_decoder.style_convs.1.activate.bias", "stylegan_decoder.style_convs.2.activate.bias", "stylegan_decoder.style_convs.3.activate.bias", "stylegan_decoder.style_convs.4.activate.bias", "stylegan_decoder.style_convs.5.activate.bias", "stylegan_decoder.style_convs.6.activate.bias", "stylegan_decoder.style_convs.7.activate.bias", "stylegan_decoder.style_convs.8.activate.bias", "stylegan_decoder.style_convs.9.activate.bias", "stylegan_decoder.style_convs.10.activate.bias", "stylegan_decoder.style_convs.11.activate.bias", "stylegan_decoder.style_convs.12.activate.bias", "stylegan_decoder.style_convs.13.activate.bias".
size mismatch for conv_body_up.3.conv2.weight: copying a param with shape torch.Size([128, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for conv_body_up.3.skip.weight: copying a param with shape torch.Size([128, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for conv_body_up.4.conv2.weight: copying a param with shape torch.Size([64, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 256, 3, 3]).
size mismatch for conv_body_up.4.skip.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 256, 1, 1]).
size mismatch for conv_body_up.5.conv2.weight: copying a param with shape torch.Size([32, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 128, 3, 3]).
size mismatch for conv_body_up.5.skip.weight: copying a param with shape torch.Size([32, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 128, 1, 1]).
size mismatch for conv_body_up.6.conv2.weight: copying a param with shape torch.Size([16, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 64, 3, 3]).
size mismatch for conv_body_up.6.skip.weight: copying a param with shape torch.Size([16, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 64, 1, 1]).
size mismatch for toRGB.3.weight: copying a param with shape torch.Size([3, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 256, 1, 1]).
size mismatch for toRGB.4.weight: copying a param with shape torch.Size([3, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 128, 1, 1]).
size mismatch for toRGB.5.weight: copying a param with shape torch.Size([3, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 64, 1, 1]).
size mismatch for toRGB.6.weight: copying a param with shape torch.Size([3, 16, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 32, 1, 1]).
size mismatch for stylegan_decoder.style_convs.6.modulated_conv.weight: copying a param with shape torch.Size([1, 256, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 512, 512, 3, 3]).
size mismatch for stylegan_decoder.style_convs.7.modulated_conv.weight: copying a param with shape torch.Size([1, 256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 512, 512, 3, 3]).
size mismatch for stylegan_decoder.style_convs.7.modulated_conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for stylegan_decoder.style_convs.7.modulated_conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for stylegan_decoder.style_convs.8.modulated_conv.weight: copying a param with shape torch.Size([1, 128, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 256, 512, 3, 3]).
size mismatch for stylegan_decoder.style_convs.8.modulated_conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for stylegan_decoder.style_convs.8.modulated_conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for stylegan_decoder.style_convs.9.modulated_conv.weight: copying a param with shape torch.Size([1, 128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 256, 256, 3, 3]).
size mismatch for stylegan_decoder.style_convs.9.modulated_conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for stylegan_decoder.style_convs.9.modulated_conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stylegan_decoder.style_convs.10.modulated_conv.weight: copying a param with shape torch.Size([1, 64, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 128, 256, 3, 3]).
size mismatch for stylegan_decoder.style_convs.10.modulated_conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for stylegan_decoder.style_convs.10.modulated_conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stylegan_decoder.style_convs.11.modulated_conv.weight: copying a param with shape torch.Size([1, 64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 128, 128, 3, 3]).
size mismatch for stylegan_decoder.style_convs.11.modulated_conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
size mismatch for stylegan_decoder.style_convs.11.modulated_conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stylegan_decoder.style_convs.12.modulated_conv.weight: copying a param with shape torch.Size([1, 32, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 64, 128, 3, 3]).
size mismatch for stylegan_decoder.style_convs.12.modulated_conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
size mismatch for stylegan_decoder.style_convs.12.modulated_conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stylegan_decoder.style_convs.13.modulated_conv.weight: copying a param with shape torch.Size([1, 32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 64, 64, 3, 3]).
size mismatch for stylegan_decoder.style_convs.13.modulated_conv.modulation.weight: copying a param with shape torch.Size([32, 512]) from checkpoint, the shape in current model is torch.Size([64, 512]).
size mismatch for stylegan_decoder.style_convs.13.modulated_conv.modulation.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stylegan_decoder.to_rgbs.3.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 512, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.3.modulated_conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for stylegan_decoder.to_rgbs.3.modulated_conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for stylegan_decoder.to_rgbs.4.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 256, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.4.modulated_conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for stylegan_decoder.to_rgbs.4.modulated_conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stylegan_decoder.to_rgbs.5.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 128, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.5.modulated_conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
size mismatch for stylegan_decoder.to_rgbs.5.modulated_conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stylegan_decoder.to_rgbs.6.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 64, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.6.modulated_conv.modulation.weight: copying a param with shape torch.Size([32, 512]) from checkpoint, the shape in current model is torch.Size([64, 512]).
size mismatch for stylegan_decoder.to_rgbs.6.modulated_conv.modulation.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_scale.3.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_scale.3.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_scale.3.2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_scale.3.2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_scale.4.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_scale.4.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_scale.4.2.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_scale.4.2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_scale.5.0.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_scale.5.0.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_scale.5.2.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_scale.5.2.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_scale.6.0.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_scale.6.0.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for condition_scale.6.2.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_scale.6.2.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for condition_shift.3.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_shift.3.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_shift.3.2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_shift.3.2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_shift.4.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_shift.4.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_shift.4.2.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_shift.4.2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_shift.5.0.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_shift.5.0.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_shift.5.2.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_shift.5.2.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_shift.6.0.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_shift.6.0.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for condition_shift.6.2.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_shift.6.2.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
ls: cannot access 'results/cmp': No such file or directory

xinntao commented Aug 8, 2021

If you want to use the original model with colorization, please see: https://github.com/TencentARC/GFPGAN/blob/master/PaperModel.md
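
A minimal sketch of those steps, following option 2 in PaperModel.md (pre-compiling the cuda extensions; adjust paths to your setup):

# install BasicSR with its cuda extensions compiled at install time
!BASICSR_EXT=True pip install basicsr
# run the original (paper) model, which includes colorization
!python inference_gfpgan_full.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/whole_imgs --save_root results --arch original --channel 1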

Ghee36 commented Aug 8, 2021

I chose option 2 on both and added the code to the first cell.

File "", line 12
BASICSR_EXT=True pip install basicsr -vvv
^
SyntaxError: invalid syntax
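
(Note: this is the SyntaxError Python raises when a shell command is pasted into a Python cell without the ! prefix; in Colab the cell would need to be:)

!BASICSR_EXT=True pip install basicsr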

Ghee36 commented Aug 8, 2021

Inference:

File "", line 3
BASICSR_JIT=True python inference_gfpgan_full.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/whole_imgs --save_root results --arch original --channel 1
^
SyntaxError: invalid syntax
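
(Same pattern here; as a notebook cell the command needs the ! prefix:)

!BASICSR_JIT=True python inference_gfpgan_full.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/whole_imgs --save_root results --arch original --channel 1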

woctezuma commented Aug 8, 2021

Please create separate GitHub issues for other problems.

As for colorization, the reply above covers it: there are two models now, one with and one without colorization.
cf. https://github.com/TencentARC/GFPGAN/releases/tag/v0.2.0
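
For reference, the corresponding downloads (asset names as I remember them from the release pages; double-check there):

# original paper model, with colorization (v0.1.0 release)
!wget https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/GFPGANv1.pth -P experiments/pretrained_models
# clean model, without colorization (v0.2.0 release)
!wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P experiments/pretrained_models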

Ghee36 commented Aug 8, 2021

Apologies; those follow-up issues are related and result from an inexperienced attempt to follow the maintainer's instructions. I have no Python experience, so I don't know where to copy/paste things correctly.

xinntao commented Aug 9, 2021

@Ghee36 If you want to use the original model, please see the instructions: https://github.com/TencentARC/GFPGAN/blob/master/PaperModel.md for installation and inference.

As for the SyntaxError: invalid syntax, it seems the pasted command contains some unsupported (invisible) characters.
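
One way to check for such characters (assuming you first paste the suspect line into a file, e.g. cmd.txt):

# show non-printing and non-ASCII characters explicitly (GNU cat)
!cat -A cmd.txt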

Ghee36 commented Aug 10, 2021

I mirrored the changes from the original model to the current notebook.


# Clone GFPGAN and enter the GFPGAN folder
%cd /content
!rm -rf GFPGAN
!git clone https://github.com/TencentARC/GFPGAN.git
%cd GFPGAN

# Set up the environment
# Install basicsr (https://github.com/xinntao/BasicSR); BASICSR_EXT=True compiles
# the cuda extensions in BasicSR - it may take several minutes, please be patient
!BASICSR_EXT=True pip install basicsr
# Install facexlib (https://github.com/xinntao/facexlib); we use its face
# detection and face restoration helpers
!pip install facexlib
# Install other dependencies
!pip install -r requirements.txt
!python setup.py develop
!pip install realesrgan  # used for enhancing the background (non-face) regions

# Download the pre-trained model
!wget https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/GFPGANv1.pth -P experiments/pretrained_models

Works well, no problems.
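
A quick check that the download landed where inference expects it:

!ls -lh experiments/pretrained_models/GFPGANv1.pth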

# Now we use GFPGAN to restore the above low-quality images,
# with Real-ESRGAN enhancing the background (non-face) regions
!rm -rf results
!python inference_gfpgan.py --upscale 2 --test_path inputs/upload --save_root results --model_path experiments/pretrained_models/GFPGANv1.pth --bg_upsampler realesrgan

!ls results/cmp

This still produces errors, even though my command no longer references GFPGANv1Clean (is this the colorization model?).

Downloading: "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth" to /usr/local/lib/python3.7/dist-packages/realesrgan/weights/RealESRGAN_x2plus.pth

100% 64.0M/64.0M [00:01<00:00, 66.7MB/s]
Downloading: "https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth" to /usr/local/lib/python3.7/dist-packages/facexlib/weights/detection_Resnet50_Final.pth

100% 104M/104M [00:01<00:00, 65.1MB/s]
Traceback (most recent call last):
  File "inference_gfpgan.py", line 98, in <module>
    main()
  File "inference_gfpgan.py", line 57, in main
    bg_upsampler=bg_upsampler)
  File "/content/GFPGAN/gfpgan/utils.py", line 65, in __init__
    self.gfpgan.load_state_dict(loadnet[keyname], strict=True)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1407, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for GFPGANv1Clean:
(same missing/unexpected key and size-mismatch errors for GFPGANv1Clean as in the traceback above)
ls: cannot access 'results/cmp': No such file or directory

xinntao commented Aug 10, 2021

Use the latest colab:

https://colab.research.google.com/drive/1Oa1WwKB4M4l1GmR7CtswDVgOCOeSLChA?usp=sharing

For the GFPGANv1 model, you should use the following cmd:
!python inference_gfpgan.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/cropped_faces --save_root results --arch original --channel 1
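
For context: without --arch original, inference_gfpgan.py builds the GFPGANv1Clean architecture (with a larger channel multiplier), so the GFPGANv1.pth weights cannot load; that is why you saw the missing/unexpected keys and size mismatches above. If in doubt, you can inspect what a checkpoint stores with a one-liner:

# list the top-level keys stored in the checkpoint (e.g. 'params_ema')
!python -c "import torch; d = torch.load('experiments/pretrained_models/GFPGANv1.pth', map_location='cpu'); print(list(d.keys()))"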

Ghee36 closed this as completed Aug 10, 2021