
White-Box model results are not good #2

Open
yxt132 opened this issue Nov 10, 2020 · 14 comments

Comments

@yxt132

yxt132 commented Nov 10, 2020

Great work re-implementing the white-box model in PyTorch. However, after testing, I found the results are not as good as the official version by the authors; there is still a large gap. What do you think the reason could be? Do we need to train the model longer, or is it something else?

@zhen8838
Owner

Yes, I also think the white-box model is not as good as the official version.

  1. I found the re-implemented model's predictions have wild colors (you can see the top-right area). The original repo has the same problem.
    animegan_test2_out

  2. The re-implemented model's predicted anime style is not obvious enough.

When re-implementing, I spent a lot of time ensuring the color-shift guided_filter has the same behavior. I think the training steps are the same as in the official version.

Now I am trying to train both steps with the same hyperparameters, in both the official version and my version. Training this model spends a lot of time on the superpixel step, so I need some time to test.

If you can help find which part of the code has a problem, I would be very grateful.

@yxt132
Author

yxt132 commented Nov 10, 2020

Thanks for your quick response! I have not started training yet; I will let you know if I figure something out. By the way, which superpixel method did you use in your training? I wonder how much impact the superpixel method has on the results.

@zhen8838
Owner

My default superpixel method during training is consistent with the one mentioned in the author's paper. He uses an adaptive-brightness superpixel method to increase the brightness of the output image, and the parameters I use are consistent with the official code (sigma=1.2, seg_num=200); you can check my config file.
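For anyone curious what that step does, here is a minimal numpy sketch of superpixel flattening with brightness-weighted region colors. The grid segmentation and the `power` exponent are illustrative assumptions only; the real code feeds sigma=1.2 / seg_num=200 into an actual segmenter (felzenszwalb/SLIC style), and this is not the official selective_adacolor implementation.

```python
import numpy as np

def adaptive_region_color(image, labels, power=1.2):
    """Replace each superpixel region with a brightness-weighted mean color.

    Weighting the per-region average by brightness**power biases each region
    toward its brighter pixels, which brightens the flattened output.
    The `power` value is an assumption for illustration.
    """
    out = np.zeros_like(image, dtype=np.float64)
    brightness = image.mean(axis=-1)              # per-pixel luminance proxy
    weights = np.power(brightness + 1e-6, power)  # brighter pixels count more
    for lab in np.unique(labels):
        mask = labels == lab
        w = weights[mask][:, None]
        out[mask] = (image[mask] * w).sum(axis=0) / w.sum()
    return out

def grid_superpixels(h, w, seg_num=200):
    """Toy segmentation: a regular grid with roughly seg_num cells, used only
    so the example is self-contained (real code uses a proper segmenter)."""
    side = max(1, int((h * w / seg_num) ** 0.5))
    rows = np.arange(h) // side
    cols = np.arange(w) // side
    return rows[:, None] * (w // side + 1) + cols[None, :]

img = np.random.rand(32, 32, 3)
labels = grid_superpixels(32, 32, seg_num=200)
flat = adaptive_region_color(img, labels, power=1.2)
```

Each output region ends up a single constant color, which is the "flattened" look the surface/structure loss is trained against.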

@zhen8838
Owner

Results after training 15999 steps with the official code's selective_adacolor superpixel method:
15999_face_photo
15999_face_result
15999_scenery_photo
15999_scenery_result

@yxt132
Author

yxt132 commented Nov 13, 2020

Not bad. I noticed some strange colors in the generated pictures, though.


How is the PyTorch version's training going? Any progress?

@zhen8838
Owner

| test image | official code | PyTorch version |
| --- | --- | --- |
| actress2 | actress2 | actress2_out |
| china6 | china6 | china6_out |
| food6 | food6 | food6_out |
| food16 | food16 | food16_out |
| liuyifei4 | liuyifei4 | liuyifei4_out |
| london1 | london1 | london1_out |
| mountain4 | mountain4 | mountain4_out |
| mountain5 | mountain5 | mountain5_out |
| national_park1 | national_park1 | national_park1_out |
| party5 | party5 | party5_out |
| party7 | party7 | party7_out |

@yxt132
Author

yxt132 commented Nov 13, 2020

Well done! It seems the PyTorch version's results are smoother than the official TF version's. What changes did you make in the PyTorch version? I actually like the PyTorch version's results better. Can you update your repo and release the updated trained weights? Again, great work!

@zhen8838
Owner

Hi. I added new weights to Google Drive; you can find them in the README. I also uploaded the TensorFlow-version weights, named whitebox-tf.zip.

@zhen8838
Owner

I found the strange colors are caused by the guided filter, but I haven't found a better method to solve it yet.

@yxt132
Author

yxt132 commented Nov 14, 2020

The author said you could train without the guided filter and add the guided filter during inference.

@zhen8838
Owner

OK, I will try it if time permits.

@GustavoStahl

One thing you could try for the colors is a color transfer algorithm, like this one from PyImageSearch.
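The idea is statistics matching between a reference image and the stylized output. A minimal sketch, assuming the transfer is done per RGB channel (the PyImageSearch/Reinhard version actually converts to L*a*b* first, which gives better results):

```python
import numpy as np

def color_transfer(source, target):
    """Shift `target`'s per-channel mean/std to match `source`'s.

    Simplified RGB-space variant of Reinhard color transfer; the canonical
    algorithm does the same standardize/rescale step in L*a*b* space.
    """
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = (tgt - tgt.mean(axis=(0, 1))) / (tgt.std(axis=(0, 1)) + 1e-6)
    out = out * src.std(axis=(0, 1)) + src.mean(axis=(0, 1))
    return np.clip(out, 0, 255).astype(np.uint8)
```

Here the original photo would be the `source` and the cartoonized output the `target`, pulling the stylized colors back toward the input's palette.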

Also, regarding the cartoon noise, the guided filter should help in post-processing when using lower values of epsilon (ε), as in WhiteBox's cartoonize.py.
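To make the role of ε concrete, here is a minimal single-channel guided filter sketch (after He et al.'s formulation); the radius and ε values below are illustrative, not copied from cartoonize.py. Smaller ε keeps the output closer to the input around edges, while larger ε smooths more aggressively.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=1, eps=5e-3):
    """Single-channel guided filter: `I` is the guide image, `p` the input
    to be filtered, both float arrays in [0, 1].

    Per window: q = a*I + b, where a = cov(I,p) / (var(I) + eps).
    eps -> 0 gives a ~ 1 (edge-preserving); large eps pushes a -> 0
    (plain box blur), which is the knob the post-process turns down.
    """
    size = 2 * radius + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    var_I = uniform_filter(I * I, size) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    # average the per-window coefficients, then reconstruct
    return uniform_filter(a, size) * I + uniform_filter(b, size)
```

With `I = p` (self-guided, as in the cartoonization post-process), lowering ε visibly sharpens the result because `a` stays near 1.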

@zhen8838
Owner

@GustavoStahl thanks, I had missed that the test_code ε value is not equal to the train_code ε. But I don't think the color transfer algorithm is needed: this model needs to keep the original color as much as possible and only increase the brightness. Regarding the degree of texture, I think it can be adjusted via g_gray_weight, like this.

@huangfuyang

> I found the strange colors are caused by the guided filter, but I haven't found a better method to solve it yet.

Adding np.clip() after the guided filter would fix the artifacts.
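A sketch of that fix, assuming the network outputs tanh-range values in [-1, 1] (the exact range in this repo is an assumption): the guided filter can overshoot slightly outside the valid range, and casting overshot values to uint8 wraps around, producing saturated color speckles.

```python
import numpy as np

def postprocess(output):
    """Clip guided-filter output back into [-1, 1] before casting to uint8.

    Without the clip, a value like 1.01 maps past 255 and wraps to a dark
    pixel on uint8 conversion, which shows up as the color artifacts.
    """
    output = np.clip(output, -1.0, 1.0)               # clamp overshoot
    return ((output + 1.0) * 127.5).astype(np.uint8)  # map [-1, 1] -> [0, 255]
```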
