Questions about results with my own dataset #75
Hi, first thanks for your interest in our work and sharing some of your results. Here are some answers that may help:
OK, I will try to create more training samples. Also, to clarify: the pre-trained model I mentioned is one I trained on my own dataset with your default hyper-parameter settings, not the one you provided. Should I change the hyper-parameters if I want to refine it?
OK. Thanks for your help. I'll try to train a new model on more training samples ASAP, and I will share my latest results in a few days.
@JiahuiYu, sorry to bother you again. I want to re-implement DeepFill v2 based on your DeepFill v1, since DeepFill v1 cannot handle irregularly masked images. I want to confirm: are the gated convolution layers used only in the coarse network, or do I also have to replace the vanilla convolution layers with gated convolution layers in the refinement network? Thanks for your help.
Gated convolutions are used in both networks. I think it is important to use gated convolution in the refinement network as well.
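For readers following along: the gated convolution from the paper replaces each vanilla convolution with two parallel convolutions, one producing features and one producing a soft gate in (0, 1). A minimal NumPy sketch of the idea (naive loops, 'valid' padding; the function names here are my own, not from the repo):

```python
import numpy as np

def _conv2d(x, w):
    """Naive 'valid' cross-correlation. x: (H, W, Cin); w: (k, k, Cin, Cout)."""
    k = w.shape[0]
    H, W, _ = x.shape
    out = np.empty((H - k + 1, W - k + 1, w.shape[3]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k, :]              # (k, k, Cin)
            out[i, j, :] = np.tensordot(patch, w, axes=3)
    return out

def gated_conv2d(x, w_feat, w_gate):
    """Gated convolution: activation(conv(x, W_f)) * sigmoid(conv(x, W_g))."""
    feat = _conv2d(x, w_feat)
    gate = 1.0 / (1.0 + np.exp(-_conv2d(x, w_gate)))    # soft gate in (0, 1)
    return np.where(feat > 0, feat, 0.1 * feat) * gate  # LeakyReLU(feat) * gate

# Example: a 3x3 gated conv over an 8x8 input with 4 channels (e.g. RGB + mask)
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 4))
w_feat = rng.standard_normal((3, 3, 4, 16)) * 0.1
w_gate = rng.standard_normal((3, 3, 4, 16)) * 0.1
y = gated_conv2d(x, w_feat, w_gate)
print(y.shape)  # (6, 6, 16)
```

Because the gate is learned per spatial location and per channel, the network can suppress features coming from masked (invalid) regions, which is why it helps in both the coarse and refinement stages.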
@JiahuiYu Thanks for your help. I see you mentioned it in your paper. Sorry to bother you again; I still need to confirm some changes for the DeepFill v2 implementation. Is the following right? If so, I do not need to concatenate the ones and the mask with the input like this:
3. In your paper, you said the contextual attention layer is the same as in v1. Should the input to the contextual attention layer therefore include the binary mask? (In my opinion, the mask should be fed into that layer.) 4. The GAN loss of DeepFill v1 is based on neuralgym, and you use the setting from https://github.com/pfnet-research/sngan_projection/blob/master/updater.py to calculate the GAN loss. Can I define this kind of loss in neuralgym?
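For reference, the sngan_projection updater linked above computes the standard hinge adversarial loss. Independent of which framework you define it in, the formulation can be sketched as follows (the function and variable names are my own):

```python
import numpy as np

def d_hinge_loss(d_real, d_fake):
    """Discriminator hinge loss: push real scores above +1, fake scores below -1."""
    return (np.mean(np.maximum(0.0, 1.0 - d_real))
            + np.mean(np.maximum(0.0, 1.0 + d_fake)))

def g_hinge_loss(d_fake):
    """Generator hinge loss: raise the discriminator's score on generated samples."""
    return -np.mean(d_fake)

# A perfectly separated batch incurs zero discriminator loss:
print(d_hinge_loss(np.array([2.0]), np.array([-2.0])))  # 0.0
print(g_hinge_loss(np.array([3.0])))                    # -3.0
```

Since the loss is just elementwise `max(0, ...)` and means over discriminator outputs, it should be straightforward to express with the primitives of any training framework, including neuralgym.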
@xhh232018 Hi, first thanks for your interest; I can see you have already carefully read the paper and code, which I appreciate. For your questions:
Hi, Jiahui! After one week of training on a GTX 1080 Ti, I found some interesting results on my own dataset.
There are two kinds of images in my dataset. One kind has clear textures, like this:
The inpainting results for this kind of image are semantically plausible:
There are also images like this one, which contain more information and structure:
However, the result for this image from my pre-trained model is quite blurry and poor:
Here are my hypotheses:
Here are the screenshots from TensorBoard: