About inference #12

Open
dongdongdong233 opened this issue Oct 10, 2022 · 13 comments
@dongdongdong233

Hi,
Thanks for the excellent code. When I run inference on pictures, my program outputs:

KeyError: 'relative_path'

Then I found that "relative_path" is supposed to be a key of "data_i", but when I print data_i there is no key named "relative_path". So what is "relative_path"?
Please reply!

@liuqk3
Owner

liuqk3 commented Oct 11, 2022

Hi @dongdongdong233 ,

Thanks for your interest.

Sorry for this mistake. relative_path is only used when saving the completed results. You can simply modify the config file as in the following example:

[screenshot of the modified config]
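In text form, the change is roughly the following. This is only a hypothetical sketch: the key names (e.g. return_data_keys) are illustrative and should be checked against the real structure of your OUTPUT/<model_name>/configs/config.yaml.

# Hypothetical sketch -- key names are illustrative, not copied from the repository.
dataloader:
  validation_datasets:
    - target: image_synthesis.data.image_list_dataset.ImageListDataset
      params:
        name: your_test_set                              # placeholder
        return_data_keys: [image, mask, relative_path]   # the important part: also return relative_path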

@dongdongdong233
Author

Hi, thanks so much for your reply!
After I added relative_path to the config file, that problem was solved!
But now there is another error:
AttributeError: 'PatchVQGAN' object has no attribute 'generate_content'
I found there is no function named 'generate_content' in PatchVQGAN; 'generate_content' is defined in masked_image_inpainting_transformer_in_feature.py, so I copied the generate_content method to the end of the PatchVQGAN class in patch_vqgan.py, but the program still reported the same error.
Could you please help me solve this problem?
Looking forward to your reply!

@dongdongdong233
Author

dongdongdong233 commented Oct 13, 2022 via email

@liuqk3
Owner

liuqk3 commented Oct 14, 2022

Hi @dongdongdong233

P-VQVAE is just an auto-encoder, which cannot inpaint images but only reconstruct them. If you want to inpaint an image, you need to further train a UQ-Transformer (or use our pretrained models). After that, you can use the command:

python scripts/inference.py --name path/to/transformer/last.pth --func inference_complet_sample_in_feature_for_evaluation --gpu 0 --batch_size 1

@zborger

zborger commented Nov 1, 2022

Hello, and thank you for your excellent work. I downloaded your pretrained model for inference testing and ran into some problems I would like to ask you about.
[screenshot of the logs]
Before running inference, I first put the 6 FFHQ test images and masks I prepared into newly created folders. Then I modified data_root, provided_mask_name and name in OUTPUT/model_name/configs/config.yaml to point to the paths of my test images. But after running inference there were no results for my test images, and the log showed messages such as "Missing keys in created model: []", "Unexpected keys in state dict: []" and "Found masks length 0, set to None". I think the program did not pick up the test images I specified. If I want to test a small number of my own images, do I need to change anything else in the .yaml? Looking forward to your answer.

@liuqk3
Owner

liuqk3 commented Nov 2, 2022

From the logs you provided, it seems that the images are not located correctly, which means 0 images were found to inpaint. You can debug the class ImageListDataset in PUT/image_synthesis/data/image_list_dataset.py to see how the images are located in the code, and then provide the correct paths to your images and masks.
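
For example, a quick stand-alone check (not part of the repository) that the folders you wrote into config.yaml actually contain images and masks could look like this; the two directory paths are placeholders:

import os
from glob import glob

# Placeholders: use the same paths you wrote into config.yaml.
data_root = "path/to/your/test_images"
mask_root = "path/to/your/test_masks"

images = sorted(glob(os.path.join(data_root, "*.png")) + glob(os.path.join(data_root, "*.jpg")))
masks = sorted(glob(os.path.join(mask_root, "*.png")))

# If either count is 0, the dataset will also find nothing ("Found masks length 0").
print("found {} images and {} masks".format(len(images), len(masks)))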

Thanks.

@zborger

zborger commented Nov 2, 2022

Thanks for your prompt reply, I'll try it.

@zhangbaijin

Hello, during testing I found that the masks being used do not seem to be the masks I specified. How should I set this? Also, the mask coverage during testing is generally above 30%?

@liuqk3
Owner

liuqk3 commented Nov 22, 2022

@zhangbaijin

You can have a look at the earlier comment in this issue (#12). Or you can modify inference.py to load masks from another directory.
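
If you take the second route, a stand-alone sketch of loading your own masks from a directory might look like this. The helper below is not from the repository, and the convention of 1 for kept pixels and 0 for holes is an assumption; check how masks are used in the code:

import os
import numpy as np
from PIL import Image

def load_masks_from_dir(mask_dir):
    # Read every mask image in the chosen directory and binarize it.
    masks = {}
    for name in sorted(os.listdir(mask_dir)):
        if not name.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        m = np.array(Image.open(os.path.join(mask_dir, name)).convert("L"))
        masks[name] = (m > 127).astype(np.uint8)  # assumed convention: 1 = keep, 0 = hole
    return masks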

@Lecxxx

Lecxxx commented Nov 24, 2022

@liuqk3
Hi,
Thanks for the excellent work. When I run inference.py, my program outputs:

Traceback (most recent call last):
  File "scripts/inference.py", line 859, in <module>
    launch(inference_func_map[args.func], args.ngpus_per_node, args.num_node, args.node_rank, args.dist_url, args=(args,))
  File "/data/zixuan/PUT/image_synthesis/distributed/launch.py", line 52, in launch
    fn(local_rank, *args)
  File "scripts/inference.py", line 204, in inference_reconstruction
    rec = model.decode(token['token'], combine_rec_and_gt=False, token_shape=token.get('token_shape', None))
  File "/data/zixuan/anaconda3/envs/ImgSyn/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/data/zixuan/PUT/image_synthesis/modeling/codecs/image_codec/patch_vqgan.py", line 1501, in decode
    rec = self.decoder(quant, self.mask_im_tmp, mask=self.mask_tmp)
  File "/data/zixuan/anaconda3/envs/ImgSyn/lib/python3.7/site-packages/torch/nn/modules/module.py", line 948, in __getattr__
    type(self).__name__, name))
AttributeError: 'PatchVQGAN' object has no attribute 'mask_im_tmp'

Could you please help me to solve this problem?
Looking forward to your reply!

@Hgy12345

I also encountered this problem.

@liuqk3
Owner

liuqk3 commented Nov 28, 2022

@Lecxxx ,

I think the error is thrown by the reconstruction function. Currently, the pretrained P-VQVAE is not separately provided. You can try the inpainting function.

@liuqk3
Owner

liuqk3 commented Dec 15, 2023

@Lecxxx @Hgy12345 While inpainting with inference.py or inference_inpainting.py, the argument --func inference_inpainting is required. Please read readme.md carefully.
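
For example, combining this flag with the arguments shown earlier in this thread, an inpainting run would look roughly like the following; the exact arguments for your checkpoint may differ, see readme.md:

python scripts/inference.py --func inference_inpainting --name path/to/transformer/last.pth --gpu 0 --batch_size 1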
