
Add a pixelSize parameter to the map endpoints #59

Closed
pomakis opened this issue Dec 18, 2020 · 9 comments

@pomakis commented Dec 18, 2020

Both the Styled Layer Descriptor Implementation Specification version 1.0.0 (OGC 02-070), section 10.2, and the Symbology Encoding Implementation Specification version 1.1.0 (OGC 05-077r4), section 10.2, specify that the scale denominators within the scale rules of a style should be interpreted with respect to a standardized rendering pixel size of 0.28 mm, and that the rendering engine should adjust for the actual pixel size of the display device if known. It also makes sense to interpret pixel-unit lengths (such as line thicknesses), font sizes, etc., with respect to the standardized rendering pixel size and adjust for the actual pixel size.

As an example of why this is necessary, consider two side-by-side display devices, one a 1080p (1920x1080) monitor and the other an 8K (7680x4320) monitor, both physically the same size and both displaying a map of the same physical size. If they're both showing the same spatial area of the same map layer, it would be expected that the two maps would look similar. However, if the actual resolutions of the display devices aren't taken into account, the maps may end up looking very different.

For example, the map requests issued by the two clients in this scenario might be:

https://test.cubewerx.com/cubewerx/cubeserv/demo/ogcapi/Daraa/collections/TransportationGroundCrv/styles/Topographic/map?bbox=36.098,32.617,36.120,32.631&crs=http://www.opengis.net/def/crs/OGC/1.3/CRS84&width=400

https://test.cubewerx.com/cubewerx/cubeserv/demo/ogcapi/Daraa/collections/TransportationGroundCrv/styles/Topographic/map?bbox=36.098,32.617,36.120,32.631&crs=http://www.opengis.net/def/crs/OGC/1.3/CRS84&width=1600

When the second image is viewed at the same physical size as the first (e.g., by scaling it down to 25% to simulate the higher resolution), you'll notice that the text is too small to read and the minor roads are so thin that they're almost invisible.

To resolve this issue, back in 2006 and 2007 the WMS working group discussed adding a pixelSize parameter to the WMS GetMap operation so that the client could communicate the actual pixel size of the display device to the server. This parameter was slated for inclusion in the next version of WMS. Unfortunately, there never was a next version. I figured that now is the right time to re-propose it for inclusion in "OGC API - Maps".

CubeWerx has implemented this parameter for testing and demonstration purposes, so we can show its power by example. When the two map requests above include a pixelSize parameter, the results are visually identical when viewed at the same physical size:

https://test.cubewerx.com/cubewerx/cubeserv/demo/ogcapi/Daraa/collections/TransportationGroundCrv/styles/Topographic/map?bbox=36.098,32.617,36.120,32.631&crs=http://www.opengis.net/def/crs/OGC/1.3/CRS84&width=400&pixelSize=0.28

https://test.cubewerx.com/cubewerx/cubeserv/demo/ogcapi/Daraa/collections/TransportationGroundCrv/styles/Topographic/map?bbox=36.098,32.617,36.120,32.631&crs=http://www.opengis.net/def/crs/OGC/1.3/CRS84&width=1600&pixelSize=0.07

The only difference is that the second user will enjoy a crisper image thanks to their higher-resolution monitor.

(As an aside, the CubeWerx server will also accept a parameter called "dpi" that has the same effect, but just using different units. I figured I'd throw that out there in case others feel that DPI is a better unit of measure for this than pixel size in millimetres.)
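To make the relationship between the two proposed units concrete, here is a minimal sketch (function names are my own, not from any spec): since 1 inch is 25.4 mm, the two are simple reciprocals, and SLD's standardized 0.28 mm pixel corresponds to roughly 90.7 dpi.

```python
# Conversions between the two proposed units for the same hint:
# pixel size in millimetres vs. dots per inch. 1 inch = 25.4 mm.

def dpi_to_pixel_size_mm(dpi):
    """Convert a pixel density in dpi to a pixel size in millimetres."""
    return 25.4 / dpi

def pixel_size_mm_to_dpi(pixel_size_mm):
    """Convert a pixel size in millimetres to a pixel density in dpi."""
    return 25.4 / pixel_size_mm
```

Under this conversion, `pixel_size_mm_to_dpi(0.28)` gives about 90.7, and the two functions round-trip exactly.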

@jeffharrison

Great post, and great examples.

I hope OGC API - Maps can include a pixelSize parameter.

@chris-little commented Jan 6, 2021

Agreed, they are good examples, but why are we still incorporating into modern geospatial API interoperability standards a concept that was made redundant by international scalable graphics standards in 1985 and is now handled in GPU hardware on my rather old mobile phone?
I agree that backward compatibility is desirable, but surely a new web-based map API is an opportunity to ditch this quaint constraint and terminology. And if not now, when?
The graphics standards used the term 'cell array' for a regular grid of 'virtual pixels' that could be transformed along with all the other vector based graphical objects using the same transformation matrices, including projection to allow embedding into a 3D scene.

I agree that a 'scale of interest' is a useful concept for a map service, but if someone wants very detailed control of the appearance of a map, a general web-based API is not appropriate. What are the use cases for OGC API - Maps?

@pomakis commented Jan 6, 2021

The scalable graphics approach is great when the client is receiving vectors from the server. But many existing (and presumably future) use cases for WMS and OGC API - Maps involve the client receiving fully-rendered images. Your comment implies that this is an antiquated approach, but it still has its place. If dense data sets with complex symbology are involved, the server may be able to perform and optimize rendering in ways that are beyond the client's capabilities. It also allows for very simple clients.

As long as returning fully-rendered image maps remains a focus in OGC API - Maps, we need to endeavour to resolve the problems that are inherent to that approach. One such problem is the inability of the server to properly cater to various display-pixel densities. Fortunately, the solution to this is as easy as adding a pixelSize parameter. It should be defined with "best-effort" semantics so that the client can always safely pass it to the server (if such information is available) and the server is allowed to ignore it if it's incapable of that flexibility. The addition of this parameter therefore places no extra implementation requirements on either the client or the server, yet solves one of the problems that are inherent to returning fully-rendered image maps.
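The "best-effort" semantics described above might look like this on the client side (a hypothetical sketch; the parameter name and endpoint shape are only the ones proposed in this issue, not a published standard): the client adds pixelSize only when the display's pixel size is actually known, and is otherwise unaffected.

```python
# Hypothetical client-side sketch: pass the pixelSize hint only when
# the display's physical pixel size is known; the server may ignore it.
from urllib.parse import urlencode

def build_map_url(base, bbox, width, pixel_size_mm=None):
    """Build a map request URL, adding pixelSize only when known."""
    params = {"bbox": bbox, "width": width}
    if pixel_size_mm is not None:
        params["pixelSize"] = pixel_size_mm
    return base + "/map?" + urlencode(params)
```

Because the parameter is optional on both sides, neither client nor server gains a hard implementation requirement.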

@pomakis commented Jan 6, 2021

One alternative to a pixelSize parameter that may be worth considering is the DPR (device pixel ratio) request header proposed by

https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/client-hints

One of the complications with this request header, though, is that it assumes a standardized rendering pixel size of 1/96" as defined by CSS rather than 0.28mm as defined by SLD. It's unfortunate that SLD didn't align itself with CSS in this way, but it is what it is.
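For what it's worth, a DPR hint can be translated into an approximate physical pixel size (a sketch under the assumption that the device's CSS pixel really is close to the 1/96" reference pixel, which DPR itself does not guarantee):

```python
# Approximate conversion from a DPR (device pixel ratio) client hint
# to a physical pixel size in millimetres. DPR is defined relative to
# the CSS reference pixel of 1/96 inch, so this is only approximate.
CSS_REFERENCE_PIXEL_MM = 25.4 / 96  # ~0.2646 mm, vs. SLD's 0.28 mm

def dpr_to_pixel_size_mm(dpr):
    """Physical pixel size in mm implied by a DPR hint."""
    return CSS_REFERENCE_PIXEL_MM / dpr
```

Note that even at DPR 1 this yields ~0.265 mm rather than SLD's 0.28 mm, which is exactly the mismatch described above.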

@chris-little commented Jan 6, 2021

@pomakis So really what is required is a hint, or two, from the client to the server. I am not sure that a simple pixel size or ratio will encompass enough use cases:
Firstly the client: my phone in front of my nose, a bigger screen on a desk or somewhere, a video wall composed of many HDR screens or IMAX display, or a head up display;
Secondly, the map content, perhaps in real physical units, or say, same as a satellite image resolution.
As a naive map user, I could relate to those.
As an expert (which I am not) on the server side, that should be enough to construct a map for a request.
Apologies if this is old ground.

@jerstlouis commented Jul 15, 2022

@pomakis We started discussing this again at the meeting this week, as we came across the cell-size parameter in the current draft.

However, I think that both cellSize and pixelSize are confusing (is it its display size, or the real-world size it represents?).
cellSize has a completely different meaning in the 2DTMS standard, where it is the CRS units / cell, either for a particular scale denominator (which currently assumes a standardized 0.28 mm/pixel theoretical display pixel size) or in TileSet metadata, the minimum such cellSize for a particular data collection at the most detailed available resolution. It corresponds to the resolution in CIS.

I think dpi (dots per inch) or ppi (pixels per inch), or ppcm (pixels per centimetre) would be a preferable parameter name (see Pixel density). It is interesting to note that dpi does not necessarily correspond to ppi, in cases where multiple dots (e.g., red, green and blue in LCDs, or ink dots in printers) are needed to represent a single pixel on some devices.

ppcm might be a good choice considering that the historical OGC convention has been metric, and there is an ongoing effort to adopt metric units. When omitted, this parameter would default to ~35.71 ppcm (0.028 cm/pixel), which corresponds to ~90.70 ppi, somewhere in between the 72 ppi of the Apple Macintosh default (also the basis of typographical points) and the Windows default of 96 ppi.

From a ppcm and the width and height dimensions of the map response, a scale denominator can be inferred allowing for properly evaluating scale selectors in styling rules, as well as to specify sizes of fonts or symbols in units depending on the displayed size (rather than the spatial/physical or pixel size), like em (1 em = 12 points = 1/6 inch = 0.42333 cm), or mm/cm clearly identified as target display size (as opposed to scaled physical units of the real world). The new symbology model should be able to accommodate those units in ParameterValues, for stroke widths and graphic symbols in particular.
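The inference described in the previous paragraph is straightforward arithmetic; here is a minimal sketch (function name is illustrative, not from the draft): the scale denominator is the ratio of the map's ground extent to the physical width it occupies on the display.

```python
# Sketch: infer the scale denominator from the map's ground width,
# its pixel width, and a display density in pixels per centimetre.

def scale_denominator(ground_width_m, width_px, ppcm):
    """Ratio of ground width to the physical width of the display."""
    display_width_m = width_px / ppcm / 100.0  # pixels -> cm -> m
    return ground_width_m / display_width_m
```

At the proposed ~35.71 ppcm default, a 400-pixel-wide map covering 1000 m of ground occupies 11.2 cm on screen, giving a scale denominator of about 1:8,929.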

It could also be mmpp (millimeters per pixel) to have the familiar nice round 0.28 mm/pixel as the default, though that might not be commonly used outside digital cartography. Or the longer mmPerPixel?

Note that this is somewhat related to opengeospatial/2D-Tile-Matrix-Set#29 which proposes to separate tiling grids from tile data resolution. But in this context, perhaps there are even 2 different resolutions to consider: resolution of the source data, and resolution of the target device where the tiles will be displayed.

@joanma747 commented Aug 12, 2022

Can we take a decision on this one? We should not confuse the scaleDenominator with this one. In that respect I find mmPerPixel very confusing. I'll go for "dpi", a term that is connected to printers in my brain (sorry for ignoring the metric-units effort).

@jerstlouis commented Aug 12, 2022

@joanma747 I think I have a strong preference for mmPerPixel.

Dot (dpi) is actually incorrect because, as the sources I cited mention, you can have multiple dots per pixel, so it is not accurate.
So we need to talk about pixels one way or another, and using metric units is probably more future-proof. One day perhaps even Americans will use the metric system ;)

mmPerPixel is also self-explanatory and very difficult to misunderstand.

The only way it could be confused with scale denominator (or cellSize) is if you were to somehow think that each pixel on the screen corresponds to just a few millimeters in the real world. That is extremely unlikely unless you have extremely high resolution images tracking ants or cellular biology. If this is still a concern, we could make it longer, something like mmPerDisplayPixel?

@joanma747

Applied by @jerstlouis and checked in a telco.
