get_best_level_for_downsample too strict, wasting performance? #203
Actually, it seems to me that DeepZoom is picking the wrong dimensions. If you start with […] Of course, in this case you lose half a pixel of data on the first downscale, and effectively two pixels' worth of data on the next level, and so on until you hit an evenly divisible number.
I guess the loss of pixels is actually the bigger deal. The […] For the […] Continuing this line of reasoning to the higher levels, […]
jetic83 commented Apr 21, 2017
Thanks, @jaharkes, for your thoughts. I tried to feed DeepZoom your expected dimensions by changing […]. But then the simulated DeepZoom pyramid no longer has 17 levels, only 16. This leads to an index-out-of-bounds exception, since OpenSeadragon still assumes 17 levels for this image. So my conclusion is that OpenSlide and OpenSeadragon apparently both follow the convention of calculating the pyramid with […]. Still, it would be nice if OpenSlide could handle this by tolerating the one-pixel gap (which disappears anyway when we zoom in, since the downsample is then 1).
jetic83 commented Apr 20, 2017
edited
Context
Issue type (bug report or feature request): Feature request
Operating system (e.g. Fedora 24, Mac OS 10.11, Windows 10): Win7
Platform (e.g. 64-bit x86, 32-bit ARM): 64-bit
OpenSlide version: 3.4.1
Slide format (e.g. SVS, NDPI, MRXS): svs
Details
I have run into non-intuitive and slow behavior of OpenSlide several times. Consider a large slide with these 3 l_dimensions stored in the file:
level 0: 55776 x 42423
level 1: 13944 x 10605
level 2: 3486 x 2651
From these, DeepZoom simulates 17 z_dimensions in total:
55776 x 42423
27888 x 21212
13944 x 10606
6972 x 5303
3486 x 2652
1743 x 1326
.... x ....
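The z_dimensions above follow directly from DeepZoom's halving convention: each level is the ceiling of half the previous level's width and height, down to 1 x 1. A minimal sketch (the helper name `deepzoom_dimensions` is mine, not an OpenSlide API) reproduces the 17 levels, including the off-by-one heights 10606 and 2652:

```python
import math

def deepzoom_dimensions(width, height):
    """Simulate the DeepZoom pyramid: each successive level is the
    ceiling of half the previous level's dimensions, down to 1 x 1."""
    dims = [(width, height)]
    while dims[-1] != (1, 1):
        w, h = dims[-1]
        dims.append((max(1, math.ceil(w / 2)), max(1, math.ceil(h / 2))))
    return dims  # largest level first

levels = deepzoom_dimensions(55776, 42423)
print(len(levels))  # -> 17
print(levels[2])    # -> (13944, 10606): one pixel taller than file level 1
print(levels[4])    # -> (3486, 2652):  one pixel taller than file level 2
```

The ceiling rounding is exactly where the one-pixel mismatch against the file's l_dimensions (10605 and 2651) comes from.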
Interestingly, OpenSlide's `get_best_level_for_downsample` returns the following `slide_from_dz_level` for those z_dimensions:
0
0
0
1
1
2
2
2
... 2
This means:
1. 55776 x 42423 is sampled from file level 0
2. 27888 x 21212 from file level 0
3. 13944 x 10606 from file level 0
4. 6972 x 5303 from file level 1
5. 3486 x 2652 from file level 1
6. 1743 x 1326 from file level 2
...
I understand the algorithm and why it does this: the constraint is that it must sample from the next larger or equally sized level in the file.
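That constraint can be sketched as follows (my reading of the selection logic, not OpenSlide's actual source; the averaged width/height ratio per level is an assumption about how level downsamples are computed):

```python
def best_level_for_downsample(level_downsamples, downsample):
    # Keep the deepest level whose downsample does not exceed the
    # requested one; level_downsamples is assumed sorted ascending.
    best = 0
    for level, ds in enumerate(level_downsamples):
        if ds <= downsample:
            best = level
    return best

# Per-level downsamples for the slide in this issue, taken as the
# average of the width and height ratios against level 0 (assumption).
ds = [
    1.0,
    (55776 / 13944 + 42423 / 10605) / 2,  # ~4.00014
    (55776 / 3486 + 42423 / 2651) / 2,    # ~16.0013
]

# DeepZoom requests an exact power-of-two downsample of 4, which is
# just below level 1's ~4.00014, so the search falls back to level 0.
print(best_level_for_downsample(ds, 4.0))   # -> 0
print(best_level_for_downsample(ds, 16.0))  # -> 1
```

The one-pixel-short height pushes level 1's downsample to just above 4.0, which is enough to disqualify it for the 13944 x 10606 request.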
But (3.) seems counter-intuitive:
If we loosened the constraint and did not require a strict downsample, we could sample DZ level 13944 x 10606 from file level 1 (13944 x 10605), which greatly reduces the extent of resizing.
Analogously for (5.): sample 3486 x 2652 from file level 2 (3486 x 2651).
Thus, the whole function `get_tile(...)` could be much faster, since most of the resizing cost vanishes. Of course, (3.) and (5.) end up on the larger level only because the height is one pixel too small to sample from the closer level. But falling back to a level sixteen times larger seems too heavy a punishment for this one pixel.
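One way to express the loosened constraint (a hypothetical sketch, not an existing OpenSlide API) is to accept a level whose downsample overshoots the request by a small relative tolerance, so a one-pixel rounding difference no longer forces the fallback:

```python
def best_level_tolerant(level_downsamples, downsample, rel_tol=1e-3):
    # Accept levels whose downsample exceeds the request by at most
    # rel_tol; a one-pixel rounding gap stays well within 0.1%.
    best = 0
    for level, ds in enumerate(level_downsamples):
        if ds <= downsample * (1 + rel_tol):
            best = level
    return best

# Same per-level downsamples as above (averaged w/h ratios, assumed).
ds = [
    1.0,
    (55776 / 13944 + 42423 / 10605) / 2,  # ~4.00014
    (55776 / 3486 + 42423 / 2651) / 2,    # ~16.0013
]

print(best_level_tolerant(ds, 4.0))   # -> 1 (level 1 instead of level 0)
print(best_level_tolerant(ds, 16.0))  # -> 2 (level 2 instead of level 1)
```

With this rule the deep-zoom tiles at (3.) and (5.) would be cut from the nearly-matching file levels and merely stretched by one pixel, instead of being resized down from a level 4x larger in each dimension.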
This problem recurs in many other slides that have such rounding issues with odd widths or heights.
What do you think?
Best,
Peter