
Question about multiple style transfer #21

Closed
gogolgrind opened this issue Nov 14, 2016 · 1 comment

gogolgrind commented Nov 14, 2016

Hello.

I am working on a project that requires multiple style transfer, but for now I have started from the Lasagne recipe implementation.

Can you describe or share some information on how to implement multiple style transfer in an efficient manner?

Thanks and best regards

titu1994 (Owner) commented Nov 14, 2016

Multiple style transfer is an extension of Masked Style Transfer. It utilizes masks to determine which area is affected by which style.

Consider ordinary masked style transfer. White in the mask is 255, which is normalized to 1.0, whereas black is near 0 and normalized to 0.0. These values are multiplied with each feature map of the VGG layers used for style (say Conv 1-1, 2-2, 3-2, 4-2 and 5-2). Taking the last layer (5-2) as an example: it has 512 channels, all of which are multiplied by the mask image. Wherever the mask was white, the channel values remain the same. However, wherever the mask was black, the channel values are forced to 0 as well. Therefore the gradients in that blackened region are also zero, which means the style transfer in the blackened region is 0% as well.
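To make that concrete, here is a minimal NumPy sketch (not the repository's exact code) of the masking operation, assuming the feature maps are stored channels-first and the mask has already been resized to the layer's spatial dimensions by some hypothetical preprocessing step:

```python
import numpy as np

def apply_mask_to_features(features, mask):
    """Zero out style features wherever the binary mask is 0.

    features: ndarray of shape (channels, height, width), e.g. the 512
              channels of the Conv 5-2 activations.
    mask:     binary ndarray of shape (height, width) with values in {0.0, 1.0},
              already resized to the layer's spatial dimensions.
    """
    # Broadcasting multiplies every channel by the same 2D mask:
    # white (1.0) regions keep their activations, black (0.0) regions
    # become zero, so their style gradients are zero as well.
    return features * mask[np.newaxis, :, :]

# Toy example: 512 channels at 14x14 spatial resolution.
feats = np.random.rand(512, 14, 14).astype(np.float32)
mask = np.zeros((14, 14), dtype=np.float32)
mask[:, :7] = 1.0          # style only the left half of the image
masked_feats = apply_mask_to_features(feats, mask)
```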

Now we extend masked style transfer to multiple style transfer. Consider only 2 styles for the moment (Style A and Style B). Then we can use two masks which are inverted versions of each other (Mask A and Mask A'). The region which is white in Mask A is black in Mask A' and vice versa. Now we multiply Mask A with Style A and Mask A' with Style B. This means that Style A will only transfer in those regions where Mask A is white (and Mask A' is black). Inversely, Style B will only transfer in those regions where Mask A is black (and Mask A' is white).

It is pretty simple to implement; the only things you need are binary masks (masks with values normalized to 1.0 and 0.0 and no intermediate values, otherwise the styles will "bleed" into each other). Multiply the value of each mask with the appropriate style for every channel of every style layer that you are using.
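A rough NumPy sketch of such a two-style, two-mask style loss is below. Note this is only an illustration of the idea: both the generated image's features and each style target's features are masked before the Gram matrices are compared, which is one common way to region-restrict each style, not necessarily the exact loss used in this repository.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of (channels, height, width) feature maps."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def two_style_masked_loss(gen_feats, style_a_feats, style_b_feats, mask_a):
    """Style loss for two styles restricted to complementary regions.

    mask_a is the binary (height, width) mask for Style A; its inverse
    (Mask A' above) selects Style B's region.
    """
    mask_b = 1.0 - mask_a          # Mask A'
    loss = 0.0
    for style_feats, mask in ((style_a_feats, mask_a), (style_b_feats, mask_b)):
        m = mask[np.newaxis, :, :]
        g_gen = gram_matrix(gen_feats * m)     # generated features, masked
        g_sty = gram_matrix(style_feats * m)   # style target, same region
        loss += np.sum((g_gen - g_sty) ** 2)
    return loss
```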

Note that this comes at a heavy cost in execution time. With 2 masks and 2 styles, this masked multi-style transfer takes roughly 80 seconds per iteration, compared to 14 seconds per iteration for single style transfer (on a 980M GPU).

There is, however, a post-processing method by which two styles can be transferred very quickly. There are a few steps to this approach, and it is restricted to only 2 styles due to the binary masks. Steps:

  1. Perform single style transfer using Style A on the content image and save the result as IMG1 (without Mask A; we will apply the masks entirely in post-processing)
  2. Perform single style transfer using Style B on the content image and save the result as IMG2 (without Mask A')
  3. Now use the masked_transfer.py script with IMG1 as the content image, IMG2 as the generated image, and Mask A as the mask image.

This will create a final image with two different styles in two different regions (a minimal sketch of the blending idea follows the list of drawbacks below). The drawbacks of such a method are:

  • Very sharp borders between the two styles (looks unnatural)
  • Color mismatch between the two styles
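Conceptually, the blending step boils down to a per-pixel combination of the two stylized results. The sketch below uses Pillow and hypothetical file names (it ignores masked_transfer.py's actual interface, which may differ):

```python
import numpy as np
from PIL import Image

# Hypothetical file names: the two single-style results and the binary mask
# (white = Style A's region, black = Style B's region).
img1 = np.asarray(Image.open("img1_style_a.png"), dtype=np.float32)
img2 = np.asarray(Image.open("img2_style_b.png"), dtype=np.float32)
mask = np.asarray(Image.open("mask_a.png").convert("L"), dtype=np.float32) / 255.0
mask = mask[:, :, np.newaxis]  # broadcast the mask over the RGB channels

# Style A where the mask is white, Style B where it is black.
blended = mask * img1 + (1.0 - mask) * img2
Image.fromarray(np.clip(blended, 0, 255).astype(np.uint8)).save("blended.png")
```

This hard 0/1 blend is exactly where the two drawbacks above come from; feathering the mask (e.g. with a slight Gaussian blur) softens the border, but the color mismatch between the two independently generated images remains.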

On the topic of multiple style transfer, there is a very good paper by a few Google researchers on training a network to learn multiple styles at once using conditional instance normalization: "A Learned Representation For Artistic Style". This allows you to train a single network to learn various styles; however, the implementation is quite complicated and I haven't been able to implement it in Keras yet. There is a TensorFlow implementation from the Magenta team which you can look into.
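For reference, the core idea of conditional instance normalization is small even though the full training pipeline is not: all convolutional weights are shared across styles, and each style only gets its own scale/shift pair per channel. A minimal NumPy sketch (not the Magenta implementation) of the normalization itself:

```python
import numpy as np

def conditional_instance_norm(x, gammas, betas, style_idx, eps=1e-5):
    """Conditional instance normalization, sketched.

    x:         activations of shape (channels, height, width)
    gammas:    (num_styles, channels) scale parameters, one row per style
    betas:     (num_styles, channels) shift parameters, one row per style
    style_idx: index of the style whose parameters should be applied
    """
    # Normalize each channel over its own spatial dimensions (instance norm).
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    x_norm = (x - mean) / (std + eps)
    # Re-scale and shift with the selected style's learned parameters.
    gamma = gammas[style_idx][:, np.newaxis, np.newaxis]
    beta = betas[style_idx][:, np.newaxis, np.newaxis]
    return gamma * x_norm + beta
```

Switching `style_idx` (or interpolating between rows of `gammas`/`betas`) is what lets a single trained network render many styles.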
