
What do you think of the "trapped-ball segmentation" in this paper? #127

lllyasviel opened this issue Jun 23, 2017 · 7 comments


lllyasviel commented Jun 23, 2017

[screenshot]

cGAN-based Manga Colorization Using a Single Training Image
https://arxiv.org/abs/1706.06918

@powion

[screenshot]

The author uses "trapped-ball segmentation" to divide the painting into several parts as a hint to the generator. Though it may only handle closed figures, it could, to some extent, help make the final output sharp and clear.
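For intuition, the core trick can be sketched roughly like this (a minimal sketch with scipy; `trapped_ball_fill` is a hypothetical helper for illustration, not the paper's code):

```python
import numpy as np
from scipy import ndimage

def trapped_ball_fill(line_art, radius=2):
    """Rough sketch of the trapped-ball idea (hypothetical helper, not
    the paper's implementation): thicken the lines so a "ball" of the
    given radius cannot leak through small gaps, flood-fill the
    remaining open regions, then grow each region back over the
    thickened lines."""
    ball = ndimage.generate_binary_structure(2, 1)
    # Dilate the line mask: this closes gaps narrower than ~2*radius
    walls = ndimage.binary_dilation(line_art, ball, iterations=radius)
    # Label the connected open regions between the thickened walls
    labels, n_regions = ndimage.label(~walls)
    # Assign every unlabeled (wall) pixel the label of its nearest region
    nearest = ndimage.distance_transform_edt(
        labels == 0, return_distances=False, return_indices=True)
    return labels[tuple(nearest)], n_regions
```

With a one-pixel gap in a line, a plain flood fill would leak between the two sides, but the dilation step closes the gap first, which is exactly why small breaks in "almost closed" figures still get separated.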

However, the example inputs in this paper are not sketches but grayscale images with full texture and full shading, such as:

[screenshot]

Because it is much easier to colorize such informative grayscale images than real sketches, the final result may not be as good when "trapped-ball segmentation" is attached to a sketch-colorization AI.

But are there any methods to take advantage of such "segmentation" when we colorize a real sketch?

taizan (Contributor) commented Jun 24, 2017

Thanks for sharing this information. Personally, I think using segmentation information is useful for colorization:
even a human cannot know the "actual" colors of a sketch, but a human can solve the segmentation problem even from a rough sketch. That difference can be a significant one.

I'm not so confident about the actual method, but one idea is to build a sketch-to-segment dataset using photo segmentation data and a line-extraction network.
A segmentation network pre-trained on sketches could then be used to add a segmentation channel to the input, or its middle layers could be used as encoding features.
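The "segmentation channel into input" idea could be wired up like this (the shapes and the 8-segment map are assumptions for illustration, not PaintsChainer's actual pipeline):

```python
import numpy as np

# Hypothetical inputs: a grayscale sketch plus a per-pixel segment-id
# map coming from some pre-trained segmentation network.
n_segments = 8
sketch = np.random.rand(1, 1, 128, 128).astype(np.float32)      # (N, C, H, W)
seg_ids = np.random.randint(0, n_segments, size=(1, 128, 128))  # segment map

# One-hot encode the segment ids into extra input channels
seg_ch = (seg_ids[:, None, :, :] == np.arange(n_segments)[None, :, None, None])
seg_ch = seg_ch.astype(np.float32)                              # (1, 8, 128, 128)

# Concatenate along the channel axis: the generator now sees 1 + 8 channels
model_input = np.concatenate([sketch, seg_ch], axis=1)
print(model_input.shape)  # (1, 9, 128, 128)
```

One-hot channels keep segment ids unordered, so the network never treats segment 7 as "closer" to segment 6 than to segment 1.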

lllyasviel (Author)

Do you think PaintsChainer itself is already, to some extent, a good segmentation network?
You know, when a sketch is fed to PaintsChainer, it always has a tendency to paint the same color on the same semantic area.
For example, PaintsChainer always paints hair yellow.

[example images]

Because the AI does not know the hair color in each image, it marks the hair yellow.
Maybe the network wants to tell us that the hair is "marked in yellow", not that the hair "is yellow".
So maybe we can use PaintsChainer as an AI that solves the segmentation problem before actually drawing on sketches?
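That could be prototyped by quantizing the colorized output back into a coarse segment map; `colors_to_segments` below is a hypothetical post-processing helper, not part of PaintsChainer:

```python
import numpy as np

def colors_to_segments(colored, n_bins=4):
    """Hypothetical post-processing: if the colorizer paints each
    semantic area a flat color (e.g. all hair yellow), quantizing the
    RGB output into coarse bins recovers a rough segment map."""
    # colored: (H, W, 3) float image with values in [0, 1]
    bins = np.clip((colored * n_bins).astype(int), 0, n_bins - 1)
    # Collapse the three per-channel bin ids into one segment id per pixel
    return bins[..., 0] * n_bins**2 + bins[..., 1] * n_bins + bins[..., 2]
```

This only works to the extent that the "marked color" is consistent per region; bleeding or gradients in the output would fragment the segments.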


taizan commented Jun 24, 2017

Hmm.
I think Tampopo tends to use yellow because it is near skin color and a low-risk color to use on hair.
PaintsChainer can extract some features from a sketch, and it is partially solving the segmentation problem, as you said.
So it is also possible to use PaintsChainer itself for better colorization; actual humans use the same iterative approach.


lllyasviel commented Jun 24, 2017

https://arxiv.org/abs/1706.06759
Comicolorization : Semi-automatic Manga Colorization

[example images]

Hmm, so many papers related to style/color-reference colorization recently.
Like "cGAN-based Manga Colorization Using a Single Training Image", all of these are A+B=C solutions.
All of these papers choose manga instead of sketches, to avoid texture and shadow reconstruction.
(But I use sketches.)
I feel so lucky that I submitted my paper on Jun 11 and got to be the first to try. 😃


taizan commented Jun 25, 2017

Yes, the papers in this field are very competitive...
Congratulations on your paper.
I also want to make more applications and services.

lllyasviel (Author)

By the way, how did you set the learning rate for your very large 600,000-image dataset?
In your code, the lr for Adam is 1e-5, but I think that is too high: with a 1e-5 lr, the network tends to overfit to the last 10,000 images and thus forget all the images before them.
Do you have a learning rate schedule or some tricks for tuning your lr?
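For reference, a simple step-decay schedule (an assumption for illustration, not what the repo actually ships) would look like:

```python
# Hypothetical step decay: shrink the learning rate every few epochs so
# late batches stop overwriting what early batches taught.
def step_decay(base_lr=1e-4, drop=0.5, epochs_per_drop=2, epoch=0):
    return base_lr * (drop ** (epoch // epochs_per_drop))

for epoch in range(6):
    lr = step_decay(epoch=epoch)
    # e.g. Chainer's Adam exposes its learning rate as `optimizer.alpha`
    print(epoch, lr)
```

The starting value, drop factor, and drop interval here are placeholders; they would have to be tuned against validation loss.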


taizan commented Jun 29, 2017

Actually, I changed the learning rate manually... for fine-tuning.
But I don't know whether it was effective or not.
