
support specifying the image scaling algorithm for image resizing/caching #6986

Merged
6 commits merged into xbmc:master on Jul 13, 2015

Conversation

Montellese
Member

These commits add the ability to specify the image scaling algorithm used for image resizing/caching through CPicture (no support for the RPi's OMXImage). The scaling algorithm can be specified in two ways:

  • through <imagescalingalgorithm> in advancedsettings.xml (which automatically applies to all images being cached, but only if they are resized)
  • as a scaling_algorithm option in the image:// URL (this overrides whatever has been set in advancedsettings.xml); see the example below
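
For illustration, the two variants look roughly like this (the value string "bicubic" and the exact query form are assumptions for the sake of the example, not taken from the code):

<advancedsettings>
  <imagescalingalgorithm>bicubic</imagescalingalgorithm>
</advancedsettings>

image://<url-encoded path to the original image>?scaling_algorithm=bicubic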

I wasn't sure whether to put the option in advancedsettings.xml or expose it as a GUI setting. I put it in advancedsettings.xml because the maximum resolution settings are there as well, but having it as a GUI setting would have the advantage that we could specify a different default depending on the platform.

Due to the support for a scaling_algorithm option in the image:// URL, this can also be used when retrieving resized images through the webserver.

I've noticed that there is also some resizing logic in CJpegIO which doesn't benefit from this change. I also don't know how we resize images/textures in the GUI, so this only affects the quality of the images we cache.

This is something that has bothered me for a while now, and manually increasing the maximum image size/resolution of cached images through advancedsettings.xml hasn't been good enough. I don't seem to be the only one either, as there are several threads in the forum about this. One of them is http://forum.kodi.tv/showthread.php?tid=200401.

Needs Xcode sync

@Montellese added the "Type: Improvement - non-breaking change which improves existing functionality" label on Apr 20, 2015
@ace20022
Member

First, nice addition.

I also don't know how we resize images/textures in the GUI so this only affects the quality of images we cache.

That is the real problem imo. When I played around with sws_scale I could not see any differences in quality, at least with posters.

@Montellese
Member Author

It's part of the problem. A while ago I noticed that when the scanner picked up a new movie in my library and I immediately opened its info view to see the poster almost fullscreen, it looked nice; but when I waited a bit, closed the dialog and re-opened it, the poster looked horrible. I then realized that when I opened the info view the first time, the poster hadn't been cached yet, so what I saw was the original poster from the metadata site. The caching was then done in the background, and when I re-opened the dialog I got to see the horrible cached version.
Increasing the maximum resolution of cached images improved this a bit, but it wasn't good enough. The fact that I was initially seeing a nice quality poster made it obvious that our caching sucks.

I haven't been able to test this PR with my library yet (I didn't have any uncached images), but I tried all the scaling algorithms through the webserver's image transformation functionality and there you can see very obvious differences (as expected). So IMO being able to store the cached images in a better quality is one step towards a more beautiful UI experience.

@ace20022
Member

In case I wasn't clear enough, I really appreciate this addition!

@da-anda
Member

da-anda commented Apr 20, 2015

What happened to the image cache improvements by JM, where he allowed requesting different sizes for cached images (powers of two, with the closest match to the requested size)? If we made use of this in skins, we could cache the unaltered original and create thumbs in the desired sizes, which should also improve picture quality and probably lower memory usage.

@Montellese
Member Author

@ace20022: I got that the first time, thanks :-)

@da-anda: We only ever talked about it. Nobody wrote any code, and IMO our cache is already getting way out of hand with a single cached version per image. That's why the webserver's current image transformation always does the resizing on the fly and leaves the caching to the browser/client.

Either way, caching multiple versions of the same image at different sizes doesn't help with quality if you do the scaling with a bad algorithm.

@Hitcher
Contributor

Hitcher commented Apr 22, 2015

It's more a problem of images not being scaled nicely in the GUI than when they're cached. Even a direct link to an image (i.e. not cached) will look 'nasty' when scaled down in Kodi. This also applies to textures used in skins - if they're not scaled to the sizes used in the skin, they'll suffer.

@Montellese
Member Author

@HitcherUK: Yeah, that's a problem too, but I don't really know where that scaling happens.

But I also looked at the cached images (media item artwork) and they looked like crap, and if they are then scaled again they look even worse ;-)

@popcornmix
Member

GUI scaling will happen in the renderer (e.g. OpenGL). It will use bilinear scaling. This will be pretty poor for large scale factors, so if the skin displays small posters or covers (e.g. in a wall), you would be better off setting imageres/fanartres lower, closer to the actual viewed size.

In theory, enabling mipmaps (trilinear filtering) might improve the image when downscaling by a large factor, but I wouldn't recommend it - it is expensive in both memory (texture memory is roughly doubled) and CPU/GPU (when a texture is uploaded, it gets downscaled several times). Might be worth trying in a test build.
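
To illustrate the two filter modes, here is a generic OpenGL sketch (not Kodi's actual texture upload code; glGenerateMipmap needs GL 3.0+/GLES 2.0 or a loader such as GLEW):

#include <GL/gl.h>

// Illustrative only: select bilinear vs. trilinear (mipmapped) filtering
// for an already-created 2D texture.
void ConfigureFiltering(GLuint texture, bool useMipmaps,
                        int width, int height, const void* pixels)
{
  glBindTexture(GL_TEXTURE_2D, texture);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
               GL_RGBA, GL_UNSIGNED_BYTE, pixels);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

  if (useMipmaps)
  {
    // Trilinear: build the mipmap chain and interpolate between levels.
    // Costs extra texture memory and downscaling work on every upload.
    glGenerateMipmap(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
  }
  else
  {
    // Plain bilinear: cheap, but poor when minifying by a large factor.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  }
}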

@da-anda
Member

da-anda commented Apr 22, 2015

So wouldn't it help then if the image cache delivered already-downscaled versions of the images? Basically the stuff I wrote above, and what JM was up to?

@Hitcher
Contributor

Hitcher commented Apr 22, 2015

The problem is deciding what sizes to cache them at, as different views/skins use dozens of different sizes.

@da-anda
Member

da-anda commented Apr 22, 2015

JM's approach was to use "powers of two": beginning with the original resolution, you create versions with half the resolution and repeat this step until you reach a minimum width/height of XX px. The image cache would then return the cached image with the size that comes closest to the requested one (a rough sketch of this follows below).

On the other hand, we could of course cache the requested sizes themselves, but this would require solid garbage collection (delete anything whose last access was X days/weeks ago and recreate on demand).

edit: the "powers of two" approach was preferred by him because these seem to be the fastest to generate. It probably depends on the algorithm used, though. @jmarshallnz in case you're still reading GIT spam, please correct me if I'm wrong about your approach.
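
Roughly, the ladder could look like this (the names and the 64 px minimum are made up for illustration; this is not JM's actual code):

#include <algorithm>
#include <cstdlib>
#include <utility>
#include <vector>

// Build the "powers of two" ladder: start at the original size and halve
// width/height until either edge would drop below a minimum.
std::vector<std::pair<int, int>> BuildSizeLadder(int width, int height, int minEdge = 64)
{
  std::vector<std::pair<int, int>> sizes;
  while (width >= minEdge && height >= minEdge)
  {
    sizes.emplace_back(width, height);
    width /= 2;
    height /= 2;
  }
  return sizes;
}

// Pick the cached size whose width is closest to the requested width
// (assumes a non-empty ladder).
std::pair<int, int> ClosestCachedSize(const std::vector<std::pair<int, int>>& sizes,
                                      int requestedWidth)
{
  return *std::min_element(sizes.begin(), sizes.end(),
      [requestedWidth](const std::pair<int, int>& a, const std::pair<int, int>& b) {
        return std::abs(a.first - requestedWidth) < std::abs(b.first - requestedWidth);
      });
}

// Example: a 2000x3000 poster yields 2000x3000, 1000x1500, 500x750, 250x375 and
// 125x187; a view asking for ~500 px wide art would be served the 500x750 copy.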

@Montellese
Member Author

@da-anda: You'll still need a better scaling algorithm than the one we use right now. Otherwise you create a lot of smaller images which are already of crap quality, which amounts to the same thing as not creating them and letting the renderer produce the crap quality.

But that's basically what mipmapping does anyway, as proposed by @popcornmix. Although I always thought that mipmapping only works for square images, and there you go all the way down to 2x2 or even 1x1.

Furthermore, you'd probably have to change quite a few things, because the texture cache probably doesn't know at which size the renderer would like its images.

@da-anda
Member

da-anda commented Apr 22, 2015

@Montellese sure, the cached images need to look good, of course. As a side note - the "mipmap cache" would also benefit all web interfaces and remote clients (like smartphone remotes) if we additionally extended JsonRPC to accept image dimensions as a parameter. But let's not get too deep into this here.

@Montellese force-pushed the image_scaling_algo branch 2 times, most recently from 3fbdfda to 29e64d0 on April 27, 2015 07:56
@Seantimez

I agree this needs fixing. The application as a whole is excellent, and this minor tweak will improve the UX tremendously.

The pixelation of the images is the main problem.

From the looks of this and the issues that arise when changes are made, I strongly suspect that the image isn't just being resized for the cache - it's being (overly) compressed. Compression varies by level or percent and is usually a parameter passed to a resizing routine. So this isn't a resolution issue - it's a compression issue. And it's likely not a matter of replacing the algorithm, just editing the parameter that tells the algorithm how much compression to use.

Someone with know-how should check to see what parameters are being passed to the resizing algorithm when it's asked to resize images for the cache, and change the one related to compression to something much less drastic (or none at all).

Thanks - and I hope that helps.

@Montellese
Member Author

@Seantimez: If you read the code you will see that CPicture::CacheTexture is basically just a resize (plus an optional orientation/mirroring step which must be manually triggered by the user) using a specific scaling algorithm. I'm no swscale expert and I don't know all the options it supports, but I couldn't find anything that would control compression. Obviously that could still happen in the IImage implementation that is used to write the actual file (JPG, PNG, ...).
Either way, changing the scaling algorithm already makes a huge difference. So the problem is not the resolution but the scaling algorithm being used to resize an image to a different resolution (see the swscale sketch below).
A quick Google search will turn up multiple results saying that SWS_FAST_BILINEAR as a scaling algorithm (which is what we use right now) gives horrible results.
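
For reference, a minimal swscale sketch showing where the algorithm flag goes (generic FFmpeg API usage with made-up buffer handling, not the actual CPicture code):

#include <cstdint>

extern "C" {
#include <libavutil/pixfmt.h>
#include <libswscale/swscale.h>
}

// Scale a packed BGRA buffer from srcW x srcH to dstW x dstH.
// The quality difference discussed in this thread comes down to "flags":
// SWS_FAST_BILINEAR (old default) vs. SWS_BILINEAR, SWS_BICUBIC, SWS_LANCZOS.
bool ScaleBGRA(const uint8_t* src, int srcW, int srcH, int srcStride,
               uint8_t* dst, int dstW, int dstH, int dstStride,
               int flags /* e.g. SWS_BICUBIC */)
{
  SwsContext* context = sws_getContext(srcW, srcH, AV_PIX_FMT_BGRA,
                                       dstW, dstH, AV_PIX_FMT_BGRA,
                                       flags, nullptr, nullptr, nullptr);
  if (context == nullptr)
    return false;

  const uint8_t* srcPlanes[] = { src };
  const int srcStrides[] = { srcStride };
  uint8_t* dstPlanes[] = { dst };
  const int dstStrides[] = { dstStride };

  sws_scale(context, srcPlanes, srcStrides, 0, srcH, dstPlanes, dstStrides);
  sws_freeContext(context);
  return true;
}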

@afedchin
Member

@Seantimez If you look at the code you can see that we use 90% JPEG compression quality for images. Do you think that isn't enough?

@MartijnKaijser
Member

I think an additional issue is that we cache at 90% while the original images may already have been saved at 90%.

@Seantimez

Sascha - you're correct - if the code uses swscale with SWS_FAST_BILINEAR, it should be changed to SWS_BILINEAR. This will likely resolve the issue.

Many users will be pulling images from the web. When the images come from an online database, many will have high compression (but by high I mean pushed to the limit). If the 90% mentioned above means that the images will be saved at 90% of the quality of the original, that would likely be imperceptible with an uncompressed image. It might alter an already lossy image perceptibly, but that would depend on how lossy the original is. I can run some tests to see, but I'm guessing 90% should be a safe bet.

Does anyone know off the top of their head where the default images are scraped from?

@Montellese
Member Author

@Seantimez: AFAIK they mostly come from https://www.themoviedb.org/ and https://fanart.tv/ (@MartijnKaijser correct me if I'm wrong). You can get the actual URLs of the images from your Kodi database.

@MartijnKaijser
Member

Correct, 99% of the images come from those websites, and additionally from http://thetvdb.com.

@zag2me
Contributor

zag2me commented Jun 2, 2015

The metadata source sites vary... but 90% compression is about right for JPEGs and that makes them look fine (in my opinion). If you are compressing them again by another 10% then quality loss will become an issue, but as always it depends on what methods are used.

@Seantimez

I have limited time to look at this, so I haven't been in the code. I did try to locate the db so I could have a look. Unfortunately, it wasn't where the wiki said I should find it.

There are a few things that can cause pixelation. We've identified the first big one.

  1. SWS_FAST_BILINEAR should be changed to SWS_BILINEAR - SWS_FAST_BILINEAR should not be used; it makes a mess of images.
  2. Compressing an already compressed image - but this won't always result in a pixelated image (especially if there is substantial resizing), and it's easy to test if a baseline can be established.

For example, if Kodi pulls a full-scale 2000x3000 image from themoviedb and resizes it to 500x750 and compresses it @50%, we aren't likely to see any artifacts. Same if Kodi pulls the 600x400 image and rescales it by half (even if it's compressed by half). It's a different story if the image isn't resized, but even then, from the few test images I pulled from the site, the pixelation is nowhere near as bad as what we see on the stable Helix release.

I don't see 90% causing an issue.

  3. Upsizing - if resizing happens, a larger image must be made smaller. Going the other direction makes a mess. Also easy to check - well, easy for someone to check.

Additionally, if Kodi is attempting to display a smaller image at a higher resolution than the actual image (i.e. using a 100x150 image to fill a 200x300 space), this would be a problem.

If someone can tell me the database links to 3 images, what those three images are resized to in the default skin of the Helix standard release, and what size they are displayed at, we'll know what's causing the pixelation.

@Montellese
Member Author

So what's the deal here (apart from the fact that it needs a rebase)? Are the current changes ok? Should we also change the default scaling algorithm from SWS_FAST_BILINEAR (which is horrible) to SWS_BILINEAR?

@MartijnKaijser
Member

Not commenting on the code, but better image quality would be very much welcomed. So I'd say switch to the better one.

@Seantimez

@Montellese Yes - please change the default scaling algorithm.

@MartijnKaijser
Member

jenkins build this please

@MartijnKaijser
Member

Failed on Linux

@Montellese
Member Author

jenkins build this please

@Montellese
Member Author

Linux should be fine now. jenkins build this please

@MilhouseVH
Contributor

This is now failing to build on Linux (Ubuntu, building for OpenELEC/RPi):

Picture.cpp: In static member function 'static bool CPicture::CreateThumbnailFromSurface(const unsigned char*, int, int, int, const string&)':
Picture.cpp:95:87: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
   const bool ret = file.OpenForWrite(thumbFile, true) && file.Write(thumb, thumbsize) == thumbsize;
                                                                                       ^
Picture.cpp: In static member function 'static bool CPicture::CacheTexture(uint8_t*, uint32_t, uint32_t, uint32_t, int, uint32_t&, uint32_t&, const string&, CPictureScalingAlgorithm::Algorithm)':
Picture.cpp:222:27: error: 'None' is not a member of 'CPictureScalingAlgorithm'
   if (scalingAlgorithm == CPictureScalingAlgorithm::None)
                           ^
../../Makefile.include:93: recipe for target 'Picture.o' failed
make[2]: *** [Picture.o] Error 1
make[2]: Leaving directory '/home/neil/projects/OpenELEC.tv/build.OpenELEC-RPi.arm-6.0-devel/kodi-15.0-rc1-9ff25f8/xbmc/pictures'
Makefile:466: recipe for target 'xbmc/pictures/pictures.a' failed
make[1]: *** [xbmc/pictures/pictures.a] Error 2

It had been building fine until the change of NoAlgorithm to None.

Edit: Same build error when building for Ubuntu x64 (ie. not cross-compiling etc. as before):

Picture.cpp: In static member function ‘static bool CPicture::CacheTexture(uint8_t*, uint32_t, uint32_t, uint32_t, int, uint32_t&, uint32_t&, const string&, CPictureScalingAlgorithm::Algorithm)’:
Picture.cpp:222:27: error: ‘None’ is not a member of ‘CPictureScalingAlgorithm’
   if (scalingAlgorithm == CPictureScalingAlgorithm::None)
                           ^
../../Makefile.include:93: recipe for target 'Picture.o' failed
make[1]: *** [Picture.o] Error 1
make[1]: *** Waiting for unfinished jobs....

@Montellese
Member Author

I had to change it from None to NoAlgorithm because Linux didn't like me using None as an enum name. But it looks like I missed a spot (although it compiled fine on Linux when I tried it).
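
For context, a sketch of the clash; my assumption is that the conflict comes from X11's headers, which define None as a macro - the build log above only shows the missed call site:

// X11/X.h contains:  #define None 0L
// With that macro in scope, an enumerator literally named "None" (or any
// reference to CPictureScalingAlgorithm::None) gets rewritten by the
// preprocessor and no longer compiles.
class CPictureScalingAlgorithm
{
public:
  enum Algorithm
  {
    NoAlgorithm,   // renamed from "None" to dodge the macro
    FastBilinear,  // SWS_FAST_BILINEAR
    Bilinear,      // SWS_BILINEAR
    Bicubic        // SWS_BICUBIC
    // (the real enum may contain more values; this is only a sketch)
  };
};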

@Montellese
Member Author

One more time. jenkins build this please

@MilhouseVH
Contributor

Thanks - this now builds for both Ubuntu and OpenELEC/RPi. Although I haven't had time right now to test what has been built... :)

@MartijnKaijser
Member

jenkins build this please

@Montellese
Member Author

jenkins build and merge

@jenkins4kodi merged commit 1a8753d into xbmc:master on Jul 13, 2015
@Montellese deleted the image_scaling_algo branch on July 13, 2015 13:35
@Seantimez

Quick question - did this make it into Kodi 15.1, and if so, how does one change the algorithm to SWS_BILINEAR?

Thanks!


@razzeee
Member

razzeee commented Aug 18, 2015

It's tagged for Jarvis, so it will be in Kodi 16.
If you do not want the default one, you can change it via advancedsettings.xml - but you will have to wait for Kodi 16.

@MartijnKaijser
Member

It's already bilinear by default

@fritsch
Member

fritsch commented Sep 12, 2015

@Montellese I don't understand the default after reading the code; it still seems to be

CPictureScalingAlgorithm::Algorithm CPictureScalingAlgorithm::Default = CPictureScalingAlgorithm::FastBilinear;

And most of the time when ScaleImage is used, it goes through ToSwscale and returns

CPictureScalingAlgorithm::Algorithm CPictureScalingAlgorithm::Default = CPictureScalingAlgorithm::FastBilinear;

which does not take the advanced setting into account at all?

Could you clarify this a bit?

Edit: And additionally, is this algorithm then also used when I watch my holiday pictures? :-)
Edit2: To be more clear, the algo is only ever used when calling CPicture::CacheTexture ... all the rest is Fast_Bilinear?

@fritsch
Member

fritsch commented Sep 12, 2015

To answer my own question (at least the picture viewer one) after reading the code: this is completely done in OpenGL and shaders; the code here does not influence that behaviour at all.

So only two questions are left: FastBilinear is still the default, right? And the advanced setting is only used when we actually cache the picture, right?

@Montellese
Member Author

@fritsch: This is only used for caching images/artwork into our texture cache. And the default is Bicubic: https://github.com/xbmc/xbmc/blob/master/xbmc/pictures/PictureScalingAlgorithm.cpp#L30

@fritsch
Member

fritsch commented Sep 12, 2015

@Montellese Thx for the answer. Just seen it was changed later on: dd92043
