Add <source media=""> support for responsive / art direction layouts #75
There are two things to consider here:
When considering visual quality, there is definitely value in mapping screen resolution to texture resolutions. Through the visual-quality lens, mapping polygon counts to screen resolution may be less impactful in most cases. For performance, I'm unsure screen resolution is a good proxy for this determination, so adding another attribute that hints at polygon counts or texture resolutions may be valuable. There is perhaps a parallel discussion to be had in terms of Level of Detail handling as the user zooms in on details of a model.
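To illustrate the point above, a resolution-based media query could be paired with a separate detail hint. In this sketch the `detail` attribute is invented purely for illustration (it exists in no spec), and the filenames are assumptions:

```html
<!-- Hypothetical sketch: (min-resolution: ...) is a real media feature,
     but the detail attribute below is invented to show the idea of a
     polygon/texture budget hint that is independent of screen resolution. -->
<model>
  <source src="assets/restaurant.hi.glb" type="model/gltf-binary"
          media="(min-resolution: 2dppx)" detail="high">
  <source src="assets/restaurant.lite.glb" type="model/gltf-binary"
          detail="medium">
</model>
```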
Great points here! In both glTF and USD, features exist that can dramatically reduce file size, but which not all implementations may support at once. For example, USD files can contain Draco compression (not implemented in most USDZ viewers yet?) and glTF files may contain Draco, Meshopt, or Basis/KTX2 compression, or basic quantization. In glTF those features are flagged as "extensions" (e.g. `KHR_draco_mesh_compression`). At least in the case of glTF, it may be helpful to have media queries detect features the browser can render. For example:

```html
<model>
  <source src="assets/example.usdz" type="model/vnd.usd+zip">
  <source src="assets/example.lite.glb" type="model/gltf-binary"
          media="extensions: KHR_draco_mesh_compression;">
  <source src="assets/example.full.glb" type="model/gltf-binary">
</model>
```

In this case the browser can download the compressed version (often ~95% smaller) if it supports compression, and can download the larger uncompressed file otherwise. I'm not completely sure media queries are the right mechanism for this, though.
While I do agree with the use case, I would suggest this is in the realm of the `type` attribute.
This is a better selection mechanism for the UA, since it can reason through the supported MIME types and the features therein to select the best source. Since that selection is not really about adapting to the presentation layer, I would argue that creating new media queries for browser-supported features is not a good path anyway. If this is of interest to you, I'd suggest we create an ID for the IETF that mirrors RFC 6381 but for model formats.
[Aside: this is the biggest failure of WebP, in that it failed to expose the variations of the format, so SaaS providers ultimately had to do UA sniffing to figure out whether a version of WebP could be supported by the UA. It is / was a mess, since there are 3 (arguably 5) major variations that don't have clear definitions.]
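For illustration, an RFC 6381-style parameter on the `type` attribute might look like the following sketch. The `extensions` parameter name and its value syntax are hypothetical; no such parameter is registered for glTF media types today:

```html
<!-- Hypothetical: the UA skips any <source> whose type parameters declare
     features it cannot decode, analogous to the codecs= parameter that
     RFC 6381 defines for audio/video MIME types. -->
<model>
  <source src="assets/example.lite.glb"
          type='model/gltf-binary; extensions="KHR_draco_mesh_compression"'>
  <source src="assets/example.full.glb" type="model/gltf-binary">
</model>
```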
I wasn't aware of that option for the `type` attribute.
@colinbendell I'm following up on your comment with this thread on the glTF repository. I'm not really familiar with the process for something like this, so comments would be welcome! KhronosGroup/glTF#2064
Sorry for the radio silence. I agree completely with this proposal. |
As a content creator, I want to create an immersive experience based on the viewport (or other unique characteristics) of the UA. Like art direction for images, I want to be able to allow the UA to select different content experiences based on CSS media queries, similar to the `<picture>` element's `<source media="...">`.

For example, on a desktop UA with a larger display (`media="(min-width: 1200px)"`), I might want to show a 3D model of the inside of my restaurant, and fall back to a portrait `<video>` sizzle reel for my mobile users. Or, I might have different sources based on the display: for a 4K display I would want to use a src with 1 billion polygons, and use the 1-million-polygon version as the default.

The `<source>` element for `<model>` should include a `media` attribute so that the creative author can have flexibility in the creative experience.

Aside: while `<video>` dropped `media` in the early days of HTML5, there is a renewed push to re-add this feature to better support portrait vs. landscape video content. `<model>` should mirror the creative patterns of `<picture>`
to allow for the most immersive experience.
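The proposal could look like the following sketch. The exact selection behavior is assumed here, since source selection for `<model>` is not yet specified, and the filenames are illustrative:

```html
<!-- Sketch of the proposed media attribute on <model>'s <source> elements,
     mirroring <picture>. Assumed behavior: the UA picks the first <source>
     whose media query and type it supports. -->
<model>
  <!-- Large desktop displays: full interior model -->
  <source src="assets/restaurant-interior.usdz" type="model/vnd.usd+zip"
          media="(min-width: 1200px)">
  <source src="assets/restaurant-interior.glb" type="model/gltf-binary"
          media="(min-width: 1200px)">
  <!-- Default for smaller viewports: a lighter model; an author could
       instead swap in a portrait <video> via script or server logic -->
  <source src="assets/restaurant-lite.glb" type="model/gltf-binary">
</model>
```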