
About training with width-3 #5

Closed

MikeXuQ opened this issue Mar 4, 2019 · 2 comments

Comments

MikeXuQ commented Mar 4, 2019

Hello, thanks for providing the code. I am really interested in this work. I want to make the lines generated by the model thinner, so I trained the model as you instructed, changing the aug_folder from width-5 to width-3. The hyper-parameters I used are listed at the end. But after training, the model only generates blank images. Can you help me solve this problem?
python train.py \
  --name width3 \
  --dataroot ${dataDir}/ContourDrawing/ \
  --checkpoints_dir ${dataDir}/Photosketch/Checkpoints/ \
  --model pix2pix \
  --which_direction AtoB \
  --dataset_mode 1_to_n \
  --no_lsgan \
  --norm batch \
  --pool_size 0 \
  --output_nc 1 \
  --which_model_netG resnet_9blocks \
  --which_model_netD global_np \
  --batchSize 2 \
  --lambda_A 200 \
  --lr 0.0002 \
  --aug_folder width-3 \
  --crop --rotate --color_jitter \
  --niter 400 \
  --niter_decay 400


mtli (Owner) commented Mar 10, 2019

TL;DR: you can try to thin the output in post-processing instead of re-training the model.

Hi, thanks for your interest. The width of the sketch is a tricky problem. First, training with width-3 is feasible, but it might require a different lambda_A; I suggest you first train with the default width-5 as a sanity check. Second, we train the model with width-5 because it provides a stronger supervision signal and thus better performance. If you want to match the performance of width-5, you might want to consider the sample-balancing techniques used in boundary detection, e.g., weighting the loss by the inverse of the ratio between foreground and background pixels. Coming back to your task of getting thinner output, I would suggest applying a thinning operation in a post-processing step instead of re-training the model; for example, check out bwmorph.
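For the loss-weighting idea, here is a minimal sketch of class-balanced BCE as used in boundary-detection work such as HED. This is not code from this repo: it assumes a PyTorch setup with raw logits `pred` and a binary edge map `target`, and the function name `balanced_bce_loss` is made up for illustration.

```python
import torch
import torch.nn.functional as F

def balanced_bce_loss(pred, target):
    # pred: raw logits; target: {0,1} edge map as a float tensor.
    # Each pixel is weighted by the inverse prevalence of its class,
    # so the scarce foreground (edge) pixels are not drowned out by
    # the overwhelming background.
    pos = target.sum()
    total = float(target.numel())
    neg = total - pos
    weights = torch.where(target > 0.5, neg / total, pos / total)
    return F.binary_cross_entropy_with_logits(pred, target, weight=weights)
```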
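bwmorph is a MATLAB function; if you are working in Python, a rough equivalent using scikit-image's morphological thinning might look like the following (the file names are placeholders, and the binarization threshold assumes dark strokes on a white background):

```python
import numpy as np
from skimage import io
from skimage.morphology import thin

# Load a generated sketch (dark strokes on a white background).
img = io.imread('output_sketch.png', as_gray=True)  # placeholder path
strokes = img < 0.5              # binarize: strokes become foreground
thinned = thin(strokes)          # akin to MATLAB's bwmorph(I, 'thin', Inf)
out = np.where(thinned, 0, 255).astype(np.uint8)    # back to dark-on-white
io.imsave('output_sketch_thin.png', out)
```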

mtli closed this as completed Mar 10, 2019
MikeXuQ (Author) commented Mar 11, 2019

I get it! Thanks for your reply!
