
aspect ratio #22

Closed
ink1 opened this issue Nov 23, 2016 · 8 comments

Comments


ink1 commented Nov 23, 2016

Hi, nice code and thank you for sharing it!
Why do you say (and require) that the Gram matrix is square?
In fact, your code seems to work even when this requirement is dropped (thanks for keeping widths and heights separate).

@titu1994 (Owner)

The requirement is not dropped. The Gram matrix is always a square matrix.

Therefore, to compute the Gram matrix I resize the image to a square shape. Then I perform the VGG loss, style loss and so on, optimising with L-BFGS. The final output is a square image of the same size as the Gram matrix.

I then resize this output to preserve the aspect ratio of the original image. If you wish to see a square image, set --maintain_aspect_ratio="False"
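Roughly, the flow described above looks like the following sketch (these helper names are hypothetical, not the script's actual functions):

```python
from PIL import Image

def to_square(path, img_size=400):
    """Resize an image to the square shape used during the optimisation."""
    img = Image.open(path)
    return img.resize((img_size, img_size)), img.size  # also return the original (width, height)

def restore_aspect_ratio(square_result, original_size, maintain_aspect_ratio=True):
    """Stretch the square output back to the content image's original shape."""
    return square_result.resize(original_size) if maintain_aspect_ratio else square_result
```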


ink1 commented Nov 23, 2016

Sorry, that was not quite what I wanted to ask. I'm asking about the image's aspect ratio, which I think can be arbitrary throughout all processing steps.

Of course the Gram matrix is square, because it is a cross-correlation matrix. What I don't understand is why you require the width and height of the processed image to be equal. Line 130:
assert img_height == img_width, 'Due to the use of the Gram matrix, width and height must match.'

When I remove this requirement, the code still works (as it should!).
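For illustration, here is a minimal NumPy sketch (variable names are hypothetical) showing that the Gram matrix's shape depends only on the number of channels, never on the image's width or height:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map with shape (channels, height, width)."""
    channels = features.shape[0]
    flat = features.reshape(channels, -1)  # (channels, height * width)
    return flat @ flat.T                   # (channels, channels) -- always square

# A non-square feature map still gives a square Gram matrix:
fmap = np.random.rand(64, 50, 80)          # 64 channels over a 50 x 80 spatial grid
print(gram_matrix(fmap).shape)             # (64, 64)
```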


titu1994 commented Nov 23, 2016

That's a redundant check. I resize the image to the Gram matrix size (400x400) by default and then perform this check. The check can safely be removed, since the image is rescaled to the Gram matrix size just above it.


ink1 commented Nov 23, 2016

Yes, I can see that. What I'm asking is why you are doing
img_width = img_height = args.img_size
instead of, for example, something like
img_width = args.img_width
img_height = args.img_height
(I also realise that you do not currently have the two options above.)

What does the Gram matrix have to do with the aspect ratio of your image?
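For reference, a minimal argparse sketch of the separate-dimension options being suggested (these flag names are hypothetical; the script currently exposes only args.img_size):

```python
import argparse

# Hypothetical flags illustrating the suggestion; the script itself currently
# exposes only a single square --img_size.
parser = argparse.ArgumentParser()
parser.add_argument("--img_width", type=int, default=400, help="width of the processed images")
parser.add_argument("--img_height", type=int, default=400, help="height of the processed images")
args = parser.parse_args()

img_width = args.img_width
img_height = args.img_height
# No `assert img_height == img_width` is needed: the Gram matrix is square either way.
```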


titu1994 commented Nov 23, 2016

You are correct that the Gram matrix has nothing to do with the aspect ratio of the image.

In the original Keras script, which this script is based on, the author made sure to assert that the width and height of the imported content and style images were exactly the same (the comment about the image needing to match the Gram matrix size has since been removed, so I will do the same).

Therefore I removed that check and performed style transfer with a 400 x 640 image as both content and style. The result was worse, even after several hundred iterations.

For comparison, the first image uses a 400 x 400 content image and style image:

[image: moon lake - 400x400]

Whereas the second image uses a 400 x 640 image as input:

[image: moon lake - 400x640]

Compare the upper image, with sharp features similar to the turbulence pattern from The Starry Night, to the bottom image, with less distinct patterns and patches of poor style transfer, especially in the lower-left portion of the image.

All things considered, I don't think I will preserve the aspect ratio of the loaded content and style images.


ink1 commented Nov 24, 2016

I don't know how you are getting these results. I see much less colour difference between these two resolutions using the default settings over 10 iterations (VGG16). I manually rescaled both the content and style images to 640x400 and to 400x400 for the two tests, to avoid any rescaling inside the code. Can you try the same?

640x400:
[image: out night_at_iteration_10]

400x400, upscaled to 640x400:
[image: out 400_at_iteration_10 640]
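The manual pre-resizing described above could look roughly like this (assuming PIL; the file names are placeholders):

```python
from PIL import Image

# Pre-resize the content and style images outside the script so that no
# rescaling happens inside the code; the file names here are placeholders.
for path in ("content.jpg", "style.jpg"):
    img = Image.open(path)
    img.resize((640, 400), Image.BICUBIC).save(path.replace(".jpg", "_640x400.jpg"))
    img.resize((400, 400), Image.BICUBIC).save(path.replace(".jpg", "_400x400.jpg"))
```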

@titu1994 (Owner)

The results seem close now. I'm travelling for a few days and won't have access to my laptop.

The 640x400 result seems to preserve the text at the top right far better, with similar style transfer in the other regions. My earlier results must have been due to some other error.

Feel free to add a PR


titu1994 commented Nov 28, 2016

@ink1, I found the mistake. I used the older Network.py to test my image (since it is faster), but it sacrifices quality for speed. When I switched to INetwork.py, I was able to replicate your results.

As of commit 6a08eaa, the content and style images are scaled to the content image's aspect ratio before being passed to the VGG network. This drastically increases execution time (INetwork used to take 14 seconds per epoch; it now takes 23), but delivers more precise results.
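As a rough sketch of that behaviour (this is not the actual code from commit 6a08eaa; the helper name is hypothetical):

```python
from PIL import Image

def load_images(content_path, style_path, base_width=400):
    """Scale both images to the content image's aspect ratio before the VGG pass."""
    content = Image.open(content_path)
    aspect_ratio = content.height / content.width
    target_size = (base_width, int(base_width * aspect_ratio))  # (width, height) follows the content image

    content = content.resize(target_size, Image.BICUBIC)
    style = Image.open(style_path).resize(target_size, Image.BICUBIC)  # style forced to the same shape
    return content, style
```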

Thanks for raising the issue. The results now seem closer to the DeepArt.io results.
