feat(core): Add float64 VertexType and VertexFormat values #1976
Conversation
I am not opposed to adding this, but it would be good to document (at least in comments for now) what this really means in terms of semantics.
In spite of my personal hopes, I am not super optimistic about float64
support being prioritized by the WebGPU team.
My understanding is that most GPUs that are not specifically marketed for high-end server-side scientific calculations with $10K+ price tags have stripped down their 64-bit cores to manage their transistor budgets.
That said, even having it supported as software emulated by WebGPU would be great.
Some info in gpuweb/gpuweb#2805
Also I was very excited to find this WGSL port of luma.gl's fp64 library!
@clickingbuttons - I don't see a LICENSE file in your repo, would you mind adding one? Or at least a license header in that particular file?
We'd love to incorporate your port into the WebGPU module library in luma.gl v9, but we care a lot about properly observing licenses.
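For context, the software emulation mentioned above typically relies on a "two-float" split of each double. A rough sketch of that encoding (function name and usage are illustrative, not luma.gl's actual fp64 API):

```ts
// Sketch of the usual "two-float" (double-single) encoding that fp64 emulation
// libraries build on. A float64 is stored as a high float32 plus the residual.
function splitFloat64(value: number): [number, number] {
  const hi = Math.fround(value); // high part: value rounded to float32
  const lo = value - hi;         // low part: rounding residual
  return [hi, Math.fround(lo)];
}

// One emulated float64 component occupies two float32 lanes, so a logical
// float64x2 attribute would map onto a float32x4 hardware attribute.
const [hi, lo] = splitFloat64(1.0000000001);
console.log(hi, lo); // hi carries the bulk of the value, lo the extra precision
```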
```diff
-  | 'float32x4';
+  | 'float32x4'
+  | 'float64'
+  | 'float64x2'
```
I believe the register bank for vertex attributes is 16 bytes wide, i.e. there would only ever be float64x2 if this were actually implemented. And if we want to emulate it with two float32 values per float64, we would face the same limitation (i.e. there are no float32x6 or float32x8 types).
@ibgreen Agreed, let's proceed as if float64 is going to remain unsupported by the graphics APIs indefinitely. Currently deck.gl exposes APIs for float64-like behavior, so to express that API we'll either need to extend these enums here or create new enums somewhere else. I could certainly see an argument for leaving the enums here as-is (matching the graphics APIs) and introducing superset enums in deck.gl instead. Would you prefer that approach? In either case, I'll remove 'float64x3' and 'float64x4' from the enum, since that's probably not something we want to emulate in the deck.gl API anyway, so there would be no use for them.
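To illustrate the second option, here is a minimal sketch of what a deck.gl-side superset type might look like, assuming VertexFormat is exported from @luma.gl/core as in luma.gl v9; the DeckVertexFormat name is hypothetical:

```ts
// Hypothetical sketch (not an existing deck.gl type): a deck.gl-level superset
// of luma.gl's VertexFormat, where the float64 variants exist only at the API
// level and are lowered to pairs of float32 attributes before reaching the GPU.
import type {VertexFormat} from '@luma.gl/core';

type DeckVertexFormat = VertexFormat | 'float64' | 'float64x2';

// Example usage of the hypothetical type:
const positionFormat: DeckVertexFormat = 'float64x2';
```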
Sounds good, fine to merge.
Ah, looks like deck.gl does emulate this.
It would have to split it over multiple attributes. I believe that for matrix attributes (4x3 or 4x4 floats) there was code in luma.gl that emulated this by using multiple attributes under the hood, but it was rather complicated to understand and maintain. Maybe this was using similar logic.
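As a rough illustration of that splitting approach in WebGPU buffer-layout terms (shader locations are arbitrary example values; requires @webgpu/types), a 4x4 instance matrix could be declared as four float32x4 attributes:

```ts
// Sketch: an instance model matrix is larger than one 16-byte attribute slot,
// so it is exposed to the shader as several float32x4 attributes, one per column.
const instanceMatrixLayout: GPUVertexBufferLayout = {
  arrayStride: 16 * 4, // 16 floats * 4 bytes per float
  stepMode: 'instance',
  attributes: [
    {format: 'float32x4', offset: 0,  shaderLocation: 4}, // column 0
    {format: 'float32x4', offset: 16, shaderLocation: 5}, // column 1
    {format: 'float32x4', offset: 32, shaderLocation: 6}, // column 2
    {format: 'float32x4', offset: 48, shaderLocation: 7}  // column 3
  ]
};
```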
Can you help me understand why that is the case? I assume the proposal is to change deck.gl's attribute type declarations. There is no 1:1 mapping from deck's attribute definitions to luma.gl's vertex formats.
Just a few thoughts:
@Pessimistress @ibgreen my reason for this PR was concern that the remaining GL constants might be an obstacle to the v9.0 → v9.1 updates for WebGPU. If we'd prefer to ship v9.0 without more churn changing these constants, I'm certainly fine with that; I don't want to add more work if it's not necessary yet, especially if v10.0 would be better timing anyway. If we do go ahead with some form of this PR though, it sounds like changes to deck.gl's attribute declarations would look something like:

```js
// before
{
  type: GL.UNSIGNED_BYTE,
  normalized: true,
  size: 3,
}

// after
{
  type: 'unorm8',
  size: 3,
}
```
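For illustration, the mapping implied by that migration could look roughly like the sketch below; the helper and its name are hypothetical, but the numeric values are the standard WebGL enum values:

```ts
// Hypothetical helper (not luma.gl API) mapping GL constants + a normalized
// flag to the proposed string types.
const GL_BYTE = 0x1400;
const GL_UNSIGNED_BYTE = 0x1401;
const GL_FLOAT = 0x1406;

function glTypeToString(glType: number, normalized: boolean): string {
  switch (glType) {
    case GL_UNSIGNED_BYTE: return normalized ? 'unorm8' : 'uint8';
    case GL_BYTE:          return normalized ? 'snorm8' : 'sint8';
    case GL_FLOAT:         return 'float32';
    default: throw new Error(`Unsupported GL type: ${glType}`);
  }
}

// {type: GL.UNSIGNED_BYTE, normalized: true, size: 3} -> {type: 'unorm8', size: 3}
const migrated = {type: glTypeToString(GL_UNSIGNED_BYTE, true), size: 3};
```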
That will be removed (I thought I had already removed it).

Why not go all the way?

```js
// before
{
  type: GL.UNSIGNED_BYTE,
  normalized: true,
  size: 3,
}

// after
{
  type: 'unorm8x3',
}
```
Because deck.gl's Attribute size may not match the shader attribute size. E.g. ScenegraphLayer has an attribute for the instance model matrix with size 12.
Yes. It could be worth noting that in WebGPU we can use "storage buffers" that are not limited to reading just the current row (the way that vertex attributes are). In such buffers, we could support more than 16 bytes per row, and we could even support variable-length rows. It could be interesting to consider how we would design things if we adopted storage buffers.
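As a rough sketch of that idea (WebGPU API; device acquisition omitted and sizes arbitrary):

```ts
// Sketch: a storage buffer is just a GPUBuffer created with STORAGE usage.
// Unlike a vertex buffer, the shader can index any row of it, and rows are not
// limited to the 16-byte attribute width. Requires @webgpu/types.
declare const device: GPUDevice;

const ROW_COUNT = 10000;
const BYTES_PER_ROW = 48; // wider than a single vertex attribute slot

const storageBuffer = device.createBuffer({
  size: ROW_COUNT * BYTES_PER_ROW,
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST
});
```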
Looking only at the 8-bit VertexFormat values for a moment: here are luma.gl's values...

...but for the purposes of deck.gl, we'd probably want...

...if not more for sizes >4, as @Pessimistress mentions. I don't recall whether single-component 8-bit attributes are accepted in WebGL at all; certainly they are discouraged. I think I'm persuaded.
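For reference, a sketch of what those two lists might look like as TypeScript unions; the 2-/4-component list follows the WebGPU-style naming (not copied from luma.gl source), and the deck.gl superset type name is hypothetical:

```ts
// 2- and 4-component 8-bit vertex formats, WebGPU-style naming:
type EightBitFormat =
  | 'uint8x2' | 'uint8x4'
  | 'sint8x2' | 'sint8x4'
  | 'unorm8x2' | 'unorm8x4'
  | 'snorm8x2' | 'snorm8x4';

// Hypothetical deck.gl-side superset adding 1- and 3-component variants, which
// would need padding or emulation at the buffer level:
type DeckEightBitFormat =
  | EightBitFormat
  | 'uint8' | 'uint8x3'
  | 'sint8' | 'sint8x3'
  | 'unorm8' | 'unorm8x3'
  | 'snorm8' | 'snorm8x3';
```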
Closing for now, let's reopen if we come to a good conclusion.
Background
Currently in luma.gl and deck.gl there are many references to GL constants, particularly when declaring types for vertex attributes. In many cases the type is GL.DOUBLE, used to support double-precision floating point math. In v9 we want to minimize references to GL constants, so we may want some solution for getting these GL.DOUBLE references out of the public API.
See:
Changes
While WebGPU does not actually have any 'float64' type today, I think it might be worth exporting it from luma.gl's VertexType and VertexFormat enums anyway, so that we can use the types to replace GL references throughout the codebase without a lot of `VertexType | 'float64'` workarounds. Other suggestions welcome. :)
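For illustration, the workaround pattern versus the proposed addition (import path assumed):

```ts
// Assuming VertexType is the union type exported from @luma.gl/core:
import type {VertexType} from '@luma.gl/core';

// The workaround this PR wants to avoid: widening the type at every call site.
type AttributeDataType = VertexType | 'float64';

// With 'float64' included in the enum itself, call sites could simply write:
// const type: VertexType = 'float64';
```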