
Tile performance improvements for VMS and NDPI #528

Closed
wants to merge 2 commits

Conversation

@melissalinkert
Member

See http://trac.openmicroscopy.org.uk/ome/ticket/10682

The smallest tile that can be read from these files is usually 4096x4096, so reading smaller tiles still means decoding one or more of those large tiles and cropping out the desired area. This PR adds caching of the last 4096x4096 tile read, which means that small tiles beyond the first (from the same 4096x4096 region) take < 5 ms instead of 300-400 ms. The first small tile from any 4096x4096 region still takes 300-400 ms, though, as the larger tile must still be decoded.

The optimal sizes for VMS and NDPI have been changed to 256x256, as this seemed better to me (0.25 ms per tile vs. 4.25 ms per tile with 1024x1024 tiles).
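To illustrate the caching described above, here is a minimal sketch (this is not the code from this PR); the class and method names (CachedTileSource, openSubTile, decodeBlock) are hypothetical, and it assumes interleaved pixel data and sub-tiles that fall entirely within a single 4096x4096 block.

```java
/**
 * Minimal sketch of last-tile caching: only the most recently decoded
 * 4096x4096 block is kept, so repeated small-tile reads from the same
 * region avoid repeated JPEG decodes. Hypothetical names, not PR code.
 */
public class CachedTileSource {

  private static final int BLOCK = 4096;   // smallest tile the format can decode

  private byte[] lastBlock;                // pixels of the most recently decoded block
  private int lastBlockX = -1, lastBlockY = -1;
  private final int bpp;                   // bytes per pixel, including channels

  public CachedTileSource(int bytesPerPixel) {
    this.bpp = bytesPerPixel;
  }

  /** Return a w x h sub-tile at (x, y); decode the enclosing block only on a cache miss. */
  public byte[] openSubTile(int x, int y, int w, int h) {
    int blockX = (x / BLOCK) * BLOCK;
    int blockY = (y / BLOCK) * BLOCK;

    if (lastBlock == null || blockX != lastBlockX || blockY != lastBlockY) {
      lastBlock = decodeBlock(blockX, blockY);   // expensive: ~300-400 ms
      lastBlockX = blockX;
      lastBlockY = blockY;
    }

    // Block is now cached: cropping the requested region is just a memory copy.
    byte[] buf = new byte[w * h * bpp];
    int inX = x - blockX;
    int inY = y - blockY;
    for (int row = 0; row < h; row++) {
      System.arraycopy(lastBlock, ((inY + row) * BLOCK + inX) * bpp,
        buf, row * w * bpp, w * bpp);
    }
    return buf;
  }

  /** Placeholder for the real JPEG decode of one BLOCK x BLOCK region. */
  private byte[] decodeBlock(int blockX, int blockY) {
    return new byte[BLOCK * BLOCK * bpp];
  }
}
```

Only one block is kept, so the memory overhead stays at a single 4096x4096 region while sequential small-tile reads within that region skip the decode entirely.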

Many JPEG streams read using turbojpeg have large tile sizes (e.g.
4096x4096), so we are often requesting subtiles from the smallest tile
that can be decompressed.  Caching the last tile read improves the
performance by ~300-400 ms per tile when reading small tiles from the
same 4096x4096 region.

See #10682.
With tile caching introduced in d010c7f, we can now allow smaller
optimal tiles without a performance penalty.
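As a usage sketch of how a client sees the smaller optimal tile size (this is not part of the PR, and sample.ndpi is a placeholder filename), walking the image in optimal-sized tiles is exactly the access pattern the cache is meant to speed up:

```java
import loci.formats.ImageReader;

/**
 * Usage sketch: read an image in optimal-sized tiles. With this PR,
 * getOptimalTileWidth()/getOptimalTileHeight() return 256 for VMS and NDPI,
 * and consecutive tiles keep hitting the cached 4096x4096 block.
 */
public class TileWalk {
  public static void main(String[] args) throws Exception {
    ImageReader reader = new ImageReader();
    reader.setId("sample.ndpi");   // placeholder filename

    int tileW = reader.getOptimalTileWidth();   // 256 with this PR
    int tileH = reader.getOptimalTileHeight();

    for (int y = 0; y < reader.getSizeY(); y += tileH) {
      for (int x = 0; x < reader.getSizeX(); x += tileW) {
        int w = Math.min(tileW, reader.getSizeX() - x);
        int h = Math.min(tileH, reader.getSizeY() - y);
        byte[] tile = reader.openBytes(0, x, y, w, h);
        // ... hand the tile to the viewer or processing code ...
      }
    }
    reader.close();
  }
}
```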
@pwalczysko
Member

  • Tried to open .vms, .ndpi, and .ndpis files imported today to Gretzky. Generally it works: the tiles are smaller, compared with a local build without this PR (where the tiles are bigger).

Speed (opening of .vms file)

  • On Gretzky (with this PR in): 78 sec
  • Local build (without this PR in): 65 sec

This is possibly not saying much, but I am not sure how to test more precisely (and not sure whether the speed-up would be visible when opening the image in the full viewer).

@melissalinkert
Member Author

@pwalczysko, are those times from web or Insight?

@pwalczysko
Member

Insight.

@melissalinkert
Member Author

OK. I wouldn't have expected Insight to be slower overall with this PR. The tiles are smaller (in the hope of a better user experience), and I would expect the first tile loaded when the position is changed to be somewhat slow (half a second or so), but subsequent tiles should be much faster.

I would also expect that the timings reported here would be improved: http://trac.openmicroscopy.org.uk/ome/ticket/10682#comment:9

...so it would probably help if @will-moore could redo whichever steps were done to produce that log and see if the new log shows better performance.

@will-moore
Member

@melissalinkert I think Insight has several rendering engines open at once. It's possible that the same 4096 x 4096 region will be opened by multiple different rendering engines to get the many different 256 x 256 tiles, so the caching may not be used very effectively? (This is just a guess). cc @jburel

In web, we only ever get one tile per rendering engine (which is then closed), so this strategy may not give us any improvement. Smaller tiles might mean we need to move less data to the browser (they fit the viewport better), but it could be worse since we need more tiles / setIds. Tricky to estimate.

However, trying this on gretzky with test_images_good/hamamatsu-vms/openslide/CMU-1/CMU-1-40x - 2010-01-12 13.24.05.vms [CMU-1-40x - 2010-01-12 13.24.05.vms full resolution]: it is now possible to open it in web (this failed before: see http://trac.openmicroscopy.org.uk/ome/ticket/10682#comment:5), so that is an improvement.
Other images in the fileset seem to render quite quickly; I can't tell if they're any faster than before this PR - we would need develop and merge builds on comparable servers with the same network bandwidth, etc.

@melissalinkert
Member Author

OK. If the caching doesn't help in OMERO, then I can at least put the tile sizes back so that performance isn't any worse. I'm really not sure what else can be done at the Bio-Formats level to make this better, though.

@joshmoore
Member

@melissalinkert, if 1024 is more optimal outside of OMERO, then I would guess that you're fine to move it back. In general, we may want to think of the best way for OMERO to configure readers to a different default. Likely this will mean moving this logic into a TilingReader which grabs the 1024x1024 block, caches it to disk, and tells the clients that the optimal size is 256. This is listed (very vaguely) under http://trac.openmicroscopy.org.uk/ome/ticket/10812
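A minimal sketch of that TilingReader idea, assuming a ReaderWrapper that fetches 1024x1024 blocks from the wrapped reader, caches the raw pixels in a temporary directory, and advertises 256x256 to clients. This is only an illustration of the proposal, not code from this PR or from ticket 10812; the class name, block size, on-disk cache layout, and the assumption of interleaved pixel data with tile requests that stay within one block are all assumptions.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

import loci.formats.FormatException;
import loci.formats.FormatTools;
import loci.formats.IFormatReader;
import loci.formats.ReaderWrapper;

/**
 * Sketch only: wraps any reader, fetches 1024x1024 blocks from it, caches
 * the raw pixels on disk, and reports 256x256 as the optimal tile size.
 */
public class TilingReader extends ReaderWrapper {

  private static final int BLOCK = 1024;  // block size fetched from the wrapped reader
  private static final int TILE = 256;    // tile size advertised to clients

  private final Path cacheDir;

  public TilingReader(IFormatReader r) throws IOException {
    super(r);
    cacheDir = Files.createTempDirectory("tiling-reader-cache");
  }

  @Override
  public int getOptimalTileWidth() { return TILE; }

  @Override
  public int getOptimalTileHeight() { return TILE; }

  @Override
  public byte[] openBytes(int no, byte[] buf, int x, int y, int w, int h)
    throws FormatException, IOException
  {
    int blockX = (x / BLOCK) * BLOCK;
    int blockY = (y / BLOCK) * BLOCK;
    int blockW = Math.min(BLOCK, getSizeX() - blockX);
    int blockH = Math.min(BLOCK, getSizeY() - blockY);

    Path cached = cacheDir.resolve(no + "_" + blockX + "_" + blockY + ".raw");
    byte[] block;
    if (Files.exists(cached)) {
      // disk hit: skip the expensive decode in the wrapped reader
      block = Files.readAllBytes(cached);
    }
    else {
      block = super.openBytes(no, blockX, blockY, blockW, blockH);
      Files.write(cached, block);
    }

    // crop the requested tile out of the cached block
    int bpp = FormatTools.getBytesPerPixel(getPixelType()) * getRGBChannelCount();
    for (int row = 0; row < h; row++) {
      System.arraycopy(block, ((y - blockY + row) * blockW + (x - blockX)) * bpp,
        buf, row * w * bpp, w * bpp);
    }
    return buf;
  }
}
```

Whether such a cache lives on disk or in memory, and how it would be invalidated, would be OMERO-side design decisions; the point is only that the wrapper, rather than each client, pays the cost of decoding the large block.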

@pwalczysko
Member

@melissalinkert @joshmoore: Hopefully not adding to the confusion, but although the times might not be what we wished for, please note that this PR did bring a crucial improvement to the user experience by enabling the user to see an image AT ALL in Web - see the comment above from @will-moore. Fully rolling back everything here would be a loss, I guess (but possibly that is not what you mean anyway).

@pwalczysko
Member

Please also note the comment on https://trac.openmicroscopy.org.uk/ome/ticket/11027#comment:2.

@melissalinkert
Member Author

I definitely don't want to roll back the caching; regardless of whether or not that helps OMERO, it will help anyone else working with these files.

The only thing up for debate, I think, is how to set the optimal tile sizes. The true optimal size is 4096x4096, since we always have to decode that much; but since a 4096x4096 tile technically counts as a big image itself, the optimal size without this PR is 1024x1024 (i.e. the biggest tile we can have without it being a big image). With this PR, it is set to 256x256 in the hope of providing a nicer user experience (since caching was intended to make the difference in reading time negligible).

I would lean towards 4096x4096 if caching is implemented in OMERO, or 1024x1024 if no caching is implemented.

@sbesson added the exclude label on Mar 4, 2014
@sbesson
Member

sbesson commented Mar 4, 2014

Excluded to allow #944 to be merged.

@melissalinkert
Member Author

This is now over a year old. Closing, and we can reevaluate this issue later.
