scale-specific tile sizes #23

Closed · jpstroop opened this issue Feb 26, 2014 · 5 comments

@jpstroop (Member)

Need to get to the bottom of this, but there is at least one case, possibly two:

  • If a JP2 uses precincts, it likely has a different tile size for each decomposition level
  • If a JP2 uses tiles, the tiles may (I'm not 100% sure, but I think so) scale down at the same rate

Should this be reflected in info.json?
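
For reference, the current (Image API 1.1) info.json only has room for a single, flat tile size, which is why per-scale sizes can't be expressed today. A minimal illustration, assuming the 1.1 property names (all values below are made up):

```json
{
  "@context": "http://library.stanford.edu/iiif/image-api/1.1/context.json",
  "@id": "http://www.example.org/image-service/abcd1234",
  "width": 6000,
  "height": 4000,
  "scale_factors": [1, 2, 4, 8, 16],
  "tile_width": 1024,
  "tile_height": 1024,
  "profile": "http://library.stanford.edu/iiif/image-api/1.1/compliance.html#level1"
}
```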

@jpstroop jpstroop modified the milestone: Release 1.2 Feb 26, 2014
@jpstroop (Member, Author)

jpstroop commented Mar 1, 2014

This is a note I sent to Tony Calavano @ SUL:

Tony,
I have a funny feeling that you may be able to help me more than I can help you at the moment, and that the problems Stu alluded to yesterday may be related. I'm no JP2 expert by any stretch, as you'll soon see.

A couple of weeks ago Ben was having a problem with OpenSeadragon, and I think what we realized is that JP2s that have precincts often have a tile size equal to the full image (is this always true, or could one also specify Stiles={w,h}?), and when I parse the JP2 header in Loris I'm only looking at the tile size[1].

I think I can adjust Loris to read in the precinct sizes without much difficulty (if you have a copy of the JP2 spec, see Table A.15 on pg. 24). But is there a precinct size for each decomposition level? Or are the multiple arguments I see passed to Cprecincts in various recipes related to the quality layers? Or something else?

It comes down to this: if you have a JP2 with precincts, what do you need the server to report? The IIIF info.json data structure looks like this[2], and my concern is that we actually need to be reporting different tile sizes for each scale factor, which is impossible right now. Or is the structure OK, and would you just need the first precincts parameter?

Advice? Thanks,
Jon
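
A minimal sketch of the precinct-byte decoding discussed above, assuming the Table A.15 layout; this is illustrative, not Loris's actual parser:

```python
# Hypothetical sketch: decode per-resolution precinct sizes from the SPcod
# precinct bytes of a JP2 COD marker segment. Per JPEG 2000 Part 1
# Table A.15, each byte packs two exponents: PPx in the low four bits and
# PPy in the high four bits, so the precinct at that resolution level is
# 2^PPx by 2^PPy samples.
def decode_precinct_sizes(sp_bytes):
    sizes = []
    for b in sp_bytes:
        ppx = b & 0x0F         # low nibble: horizontal exponent
        ppy = (b >> 4) & 0x0F  # high nibble: vertical exponent
        sizes.append((1 << ppx, 1 << ppy))
    return sizes

# Two levels at 256x256 (0x88) followed by one at 128x128 (0x77):
print(decode_precinct_sizes(bytes([0x88, 0x88, 0x77])))
# [(256, 256), (256, 256), (128, 128)]
```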

@azaroth42 (Member)

Any progress on this? It would be a big change that we should make ASAP if necessary.

@jpstroop (Member, Author)

Here is Tony's response:

I just double checked our code, and we do not explicitly define a tile size. It looks like Kakadu defines the tile size as the size of the image in the jp2 when using precincts. You are able to have a jp2 with both tiles and precincts of different sizes, but I have not spent too much time with this beyond creating massive images.

There is a precinct size per resolution level. The size depends on how the jp2 was created (at least with Kakadu). We use Cprecincts={256,256},{256,256},{128,128}. This gives the two highest resolution levels a precinct size of 256, and the rest a size of 128. I'd have to dig through our documentation to find the reasoning; I can't recall it off the top of my head.

I believe that we would potentially only need the tile size in info.json to report the highest-resolution precinct size.

I think he's correct...if we're saying you need to get the region, then scale, then rotate...you only really care about the tile size at the highest resolution, right? Or am I thinking too much like an implementer?
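
To make Tony's mapping concrete, here is a sketch with a hypothetical helper, assuming Kakadu's convention of applying Cprecincts entries from the highest resolution downward and repeating the last entry for any remaining levels:

```python
# Hypothetical sketch: expand Kakadu-style Cprecincts arguments to one
# (width, height) per resolution level, highest resolution first. When
# fewer entries are given than there are levels, the last entry repeats.
def expand_cprecincts(entries, num_levels):
    return [entries[min(i, len(entries) - 1)] for i in range(num_levels)]

# Cprecincts={256,256},{256,256},{128,128} with six resolution levels:
print(expand_cprecincts([(256, 256), (256, 256), (128, 128)], 6))
# [(256, 256), (256, 256), (128, 128), (128, 128), (128, 128), (128, 128)]
```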

@azaroth42 (Member)

Well, the use case is to get the best results from the server. If the server defines 256x256 at one resolution and 128x128 at a lower resolution (as above), we can't specify that in info.json and the server will have to retrieve 4 128x128 tiles to build the 256x256 one requested.

My feeling is that with the scale_factor -> h/w math, we're going to end up in a LOT of pain trying to actually get this right. However, an implementation note saying that tile sizes at different resolutions should be the same for optimal performance of this API might not go amiss?
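
A quick back-of-the-envelope version of that cost, with illustrative numbers only:

```python
# Illustrative arithmetic: an advertised 256x256 tile backed by 128x128
# precincts at a given resolution spans a 2x2 block of precincts.
adv_w, adv_h = 256, 256      # tile size reported in info.json
pre_w, pre_h = 128, 128      # actual precinct size at this level
reads = (adv_w // pre_w) * (adv_h // pre_h)
print(reads)  # 4 precinct reads to serve one advertised tile
```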

@azaroth42 (Member)

Too low down the stack to fix.
