
Inquiry Regarding Configuration and Download Issues in Reproducing Your Research #2

Open
Sourabs-kms opened this issue Jan 14, 2024 · 4 comments


@Sourabs-kms

I am currently working on reproducing your research results and have run into some challenges during setup. I would greatly appreciate your guidance on the two issues below; thank you very much in advance for your time.

Configuration Issue in main_afhq_train.py:
When running python main_afhq_train.py, I encountered the following error: main_afhq_train.py: error: the following arguments are required: --config. Could you please provide guidance on how to set up the configuration file (--config)? I have already downloaded the weights for the two models as you mentioned earlier, but I couldn't locate where to configure these weights in the code. Could you kindly direct me on how to set up this file and associate the downloaded weights with their corresponding folders?

Download Issue in main.py:
Upon executing the command python main.py --attack --config celeba.yml --exp experimental_log_path --t_0 500 --n_inv_step 40 --n_test_step 40 --n_precomp_img 100 --mask 9 --diff 9 --tune 0 --black 0, I encountered a download request for a file at the URL "https://image-editing-test-12345.s3-us-west-2.amazonaws.com/checkpoints/celeba_hq.ckpt." Unfortunately, I received a 403 Forbidden error (urllib.error.HTTPError: HTTP Error 403: Forbidden). I attempted to open the link in a browser, but it also led to access issues. Is there an alternative download location for these files?

@steven202
Owner

Hi,

For the downloading issue, maybe the model weights here can help:
"https://onedrive.live.com/?authkey=%21AOIJGI8FUQXvFf8&id=72419B431C262344%21103807&cid=72419B431C262344"

For the configuration issue, you can safely ignore the --config argument; I don't think it is used anywhere.
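Since the error says --config is required but the maintainer reports it is unused, one practical workaround (a sketch, assuming the script builds its parser with argparse; the actual code in main_afhq_train.py may differ) is to give the argument a default instead of marking it required:

```python
import argparse

def build_parser():
    """Hypothetical parser mirroring main_afhq_train.py's --config flag."""
    parser = argparse.ArgumentParser(description="AFHQ training (sketch)")
    # Assumed original: parser.add_argument('--config', required=True)
    # Dropping required=True lets the script start without a config file,
    # consistent with the maintainer's note that the flag is unused.
    parser.add_argument('--config', type=str, default=None,
                        help='optional; reportedly unused by the script')
    return parser

args = build_parser().parse_args([])  # invoked with no --config
print(args.config)  # None
```

Alternatively, passing any placeholder such as `--config dummy.yml` on the command line would also satisfy the parser if it truly never reads the file.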

Thank you.

@Sourabs-kms
Author

"Regarding the download issue, perhaps the model weights available at this link could be of assistance: https://onedrive.live.com/?authkey=%21AOIJGI8FUQXvFf8&id=72419B431C262344%21103807&cid=72419B431C262344"

Hello, I would like to understand how the weights at this link correspond to the weight-loading section of your code. The names seem different, and the file extensions do not match either, so using them directly would likely cause an error. Could you advise how to match them up and use them correctly in your code?

@Sourabs-kms
Author

Hello, the model weights in this cloud drive seem problematic: the files are all the same size, and when I set one as the pretrained weights, the code raises a size-mismatch error. The model architecture must match the output layer of the pretrained weights for them to load correctly, but when I set the AFHQ pretrained weights, it reports a mismatch, which suggests these weights do not fit the model. May I ask whether you still have the weight files trained in your original experiments? I want to reproduce your results, but right now some pretrained weights cannot be downloaded, and those that can be downloaded do not fit the model. If you have time, could you guide me, or tell me where to download compatible model weights?

File "/root/anaconda3/envs/kms/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1497, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for DataParallel:
size mismatch for module.fc.weight: copying a param with shape torch.Size([307, 512]) from checkpoint, the shape in current model is torch.Size([3, 512]).
size mismatch for module.fc.bias: copying a param with shape torch.Size([307]) from checkpoint, the shape in current model is torch.Size([3]).
I sincerely ask for your help here. Thank you very much, and happy Xiaonian (Little New Year)! Best wishes!
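The traceback above indicates the checkpoint's final fc layer was trained with 307 output units while the current model expects 3. A common workaround (a sketch of the general technique, not this repository's own code) is to drop the shape-mismatched keys and load the rest with `model.load_state_dict(filtered, strict=False)`. The filtering logic is shown below with plain dicts of shape tuples standing in for PyTorch tensors, so it runs without torch installed:

```python
# Sketch: keep only checkpoint entries whose name and shape match the model.
# With real PyTorch objects, the values would be tensors and the comparison
# would be tensor.shape; the filtered dict would then be passed to
# model.load_state_dict(filtered, strict=False).

def filter_state_dict(checkpoint, model_state):
    """Split a checkpoint into shape-compatible and incompatible entries."""
    filtered, skipped = {}, []
    for name, shape in checkpoint.items():
        if name in model_state and model_state[name] == shape:
            filtered[name] = shape
        else:
            skipped.append(name)
    return filtered, skipped

# Shapes taken from the traceback above; the conv entry is illustrative.
checkpoint = {
    "module.fc.weight": (307, 512),
    "module.fc.bias": (307,),
    "module.conv1.weight": (64, 3, 7, 7),
}
model_state = {
    "module.fc.weight": (3, 512),
    "module.fc.bias": (3,),
    "module.conv1.weight": (64, 3, 7, 7),
}

filtered, skipped = filter_state_dict(checkpoint, model_state)
print(sorted(filtered))  # ['module.conv1.weight']
print(sorted(skipped))   # ['module.fc.bias', 'module.fc.weight']
```

Note that skipping the fc layer leaves it randomly initialized, so this only helps if the final layer is retrained or unused; a checkpoint trained for the right number of classes is still the proper fix.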

@steven202
Owner

Hi,

For all the model weights, I basically use the model weights from DiffusionCLIP:
https://github.com/gwang-kim/DiffusionCLIP

Hope this helps.
