
feat(core): Add float64 VertexType and VertexFormat values #1976

Closed

Conversation

@donmccurdy (Collaborator) commented Feb 28, 2024

Background

Currently in luma.gl and deck.gl there are many references to GL constants, particularly when declaring types for vertex attributes. In many cases the type is GL.DOUBLE, used to support double-precision floating point math. In v9 we want to minimize references to GL constants, so we may want some solution for getting these GL.DOUBLE references out of the public API.

See:

Changes

While WebGPU does not actually have any 'float64' type today, I think it might be worth exporting it from luma.gl's VertexType and VertexFormat enums anyway, so that we can use the types to replace GL references throughout the codebase without a lot of VertexType | 'float64' workarounds. Other suggestions welcome. :)
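Roughly, the shape of the change would be something like this (an abridged sketch; the actual luma.gl unions have more members):

// Sketch only: abridged unions, not luma.gl's full definitions.
export type VertexType =
  | 'uint8' | 'sint8' | 'unorm8' | 'snorm8'
  | 'uint32' | 'sint32'
  | 'float16' | 'float32'
  | 'float64'; // proposed addition; no native support in WebGPU today, would be emulated

export type VertexFormat =
  | 'float32' | 'float32x2' | 'float32x3' | 'float32x4'
  | 'float64' | 'float64x2'; // proposed additions, likewise emulated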


@ibgreen (Collaborator) left a comment


I am not opposed to adding this, but it would be good to document (at least in comments for now) what this really means in terms of semantics.


In spite of my personal hopes, I am not super optimistic about float64 support being prioritized by the WebGPU team.

My understanding is that most GPUs (other than those specifically marketed for high-end server-side scientific computing, with $10K+ price tags) have stripped down their 64-bit cores to manage their transistor budgets.

That said, even having float64 supported via software emulation in WebGPU would be great.

Some info in gpuweb/gpuweb#2805

Also I was very excited to find this WGSL port of luma.gl's fp64 library!

https://github.com/clickingbuttons/jeditrader/blob/a921a0ea9dd70c9fb89186c26d7693c62f30a6cb/shaders/src/fp64.wgsl#L2

@clickingbuttons - I don't see a LICENSE file in your repo, would you mind adding one? Or at least a license header in that particular file?

We'd love to incorporate your port into the WebGPU module library in luma.gl v9, but we care a lot about properly observing licenses.

@ibgreen (Collaborator) commented on the following lines in the diff:

-  | 'float32x4';
+  | 'float32x4'
+  | 'float64'
+  | 'float64x2'

I believe the register bank for vertex attributes is 16 bytes wide, i.e. there would only ever be float64x2 if this were actually implemented. And if we want to emulate it with two float32 values per float64, we would also face this limitation (i.e. there are no float32x6 or float32x8 types).
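For reference, the common emulation approach (as in fp64 shader libraries) splits each double into a high float32 plus a low float32 carrying the residual, so a float64x2 attribute becomes a float32x4 attribute and still fits within 16 bytes. A minimal JS-side sketch of that split (illustrative only, not luma.gl code):

// Split each float64 into a high float32 and a low float32 with the residual precision.
function splitFloat64(values: Float64Array): {hi: Float32Array; lo: Float32Array} {
  const hi = new Float32Array(values.length);
  const lo = new Float32Array(values.length);
  for (let i = 0; i < values.length; i++) {
    hi[i] = Math.fround(values[i]); // nearest float32
    lo[i] = values[i] - hi[i];      // what the float32 dropped
  }
  return {hi, lo};
}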

@donmccurdy (Collaborator, Author) commented:

@ibgreen Agreed — let's proceed as if float64 is going to remain unsupported by the graphics APIs indefinitely.

Currently deck.gl exposes APIs for float64-like behavior, so to express that API we'll either need to extend these enums here, or create new enums somewhere else. I could certainly see an argument for leaving the enums here as-is (matching the graphics APIs) but introducing superset enums in deck.gl instead. Would you prefer that approach?
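For concreteness, the superset approach might look roughly like this (names and import path are illustrative, not an agreed API):

// Hypothetical deck.gl-side superset, keeping luma.gl's enum aligned with the graphics APIs.
import type {VertexType} from '@luma.gl/core';

export type AttributeDataType = VertexType | 'float64';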

In either case, I'll remove 'float64x3' and 'float64x4' from the enum, since those are probably not something we want to emulate in the deck.gl API anyway, and so there would be no use for them.

@ibgreen (Collaborator) commented Feb 28, 2024

Sounds good, fine to merge.

@ibgreen (Collaborator) commented Feb 28, 2024

> It would have to split it over multiple attributes.

I believe that for matrix attributes (4x3 or 4x4 floats) there was code in luma.gl to emulate this by using multiple attributes under the hood, but it was rather complicated to understand and maintain.

Maybe this was using similar logic.
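For context, a minimal sketch of that kind of splitting (illustrative names only, not the old luma.gl implementation): one buffer of 4x4 float32 matrices exposed to the shader as four float32x4 attributes at consecutive byte offsets.

// Sketch only: expose a mat4 instance attribute as four vec4 attributes.
const BYTES_PER_MATRIX = 16 * 4; // 16 float32 values

const instanceMatrixAttributes = [0, 1, 2, 3].map(column => ({
  name: `instanceModelMatrix${column}`, // one vec4 column per shader location
  format: 'float32x4' as const,
  byteOffset: column * 4 * 4,           // 4 floats * 4 bytes per column
  byteStride: BYTES_PER_MATRIX,
  stepMode: 'instance' as const
}));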

@Pessimistress (Collaborator) commented:

> so that we can use the types to replace GL references throughout the codebase without a lot of VertexType | 'float64' workarounds

Can you help me understand why that is the case? I assume the proposal is to change deck.gl Attribute.type and BufferAccessor.type from number (a GL constant) to VertexType | 'float64'.

There is no 1:1 mapping from deck's Attribute format to luma's VertexFormat, so the size of the attribute is irrelevant. Some key logic can be found at the links below (a simplified sketch of the mapping follows them):

https://github.com/visgl/deck.gl/blob/97334a8776c3965905373fed78f79578447a2986/modules/core/src/lib/attribute/data-column.ts#L263

https://github.com/visgl/deck.gl/blob/97334a8776c3965905373fed78f79578447a2986/modules/core/src/lib/attribute/gl-utils.ts#L49
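Roughly, the mapping has this shape (a simplified sketch, not the actual deck.gl code; float64 splitting and sizes greater than 4 are handled separately):

// Simplified sketch: map a deck.gl accessor (type + size + normalized)
// to a luma.gl-style vertex format string.
type BaseType = 'uint8' | 'sint8' | 'float32';

function getVertexFormat(type: BaseType, size: 1 | 2 | 3 | 4, normalized = false): string {
  const base =
    type === 'float32' ? 'float32' :
    normalized ? (type === 'uint8' ? 'unorm8' : 'snorm8') :
    type;
  return size === 1 ? base : `${base}x${size}`;
}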

@ibgreen (Collaborator) commented Feb 29, 2024

> Some key logic can be found at:

Just a few thoughts:

  • Similar column handling could potentially go into luma.gl as part of e.g. built-in support for Apache Arrow.
  • It may be worth having a discussion in the Binary Working Group on that.
  • If we designed that right, we might be able to remove some complexity from deck.gl.
  • One idea for deck.gl v10 is to refactor the deck core to be optimized for binary columns from the ground up, and a luma.gl column system could be part of that.

@donmccurdy (Collaborator, Author) commented Feb 29, 2024

@Pessimistress @ibgreen my reason for this PR was concern that the remaining GL constants might be an obstacle to the v9.0 → v9.1 updates for WebGPU. If we'd prefer to ship v9.0 without more churn changing these constants, I'm certainly fine with that — I don't want to add more work if it's not necessary yet. Especially if v10.0 would be better timing anyway.

If we do go ahead with some form of this PR though, it sounds like changes to VertexFormat are not really needed, as deck.gl would not expose that? Instead we could define attributes like:

// before
{
  type: GL.UNSIGNED_BYTE,
  normalized: true,
  size: 3,
}

// after
{
  type: 'unorm8',
  size: 3,
}

luma.gl's TextureProps definition contains a type prop, accepting a GL constant, as well.

@ibgreen (Collaborator) commented Feb 29, 2024

> luma.gl's TextureProps definition contains a type prop, accepting a GL constant, as well.

That will be removed (I thought I had already removed it).

> Instead we could define attributes like:

Why not go all the way?

// before
{
  type: GL.UNSIGNED_BYTE,
  normalized: true,
  size: 3,
}

// after
{
  type: 'unorm8x3',
}

By the way, unorm8x3 is not currently supported in WebGPU (it is not aligned to two bytes), though I believe I saw a discussion about relaxing this restriction.
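The usual workaround is to pad three-component 8-bit data out to four components so it can be uploaded as unorm8x4; a quick sketch (illustrative helper, not existing API):

// Pad 3-component 8-bit data (e.g. RGB colors) to 4 components for 'unorm8x4' upload.
function padRgbToRgba(rgb: Uint8Array): Uint8Array {
  const count = rgb.length / 3; // assumes length is a multiple of 3
  const rgba = new Uint8Array(count * 4);
  for (let i = 0; i < count; i++) {
    rgba[i * 4 + 0] = rgb[i * 3 + 0];
    rgba[i * 4 + 1] = rgb[i * 3 + 1];
    rgba[i * 4 + 2] = rgb[i * 3 + 2];
    rgba[i * 4 + 3] = 255; // opaque alpha
  }
  return rgba;
}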

@Pessimistress (Collaborator) commented:

> Why not go all the way?

Because deck Attribute size may not match shader attribute size. E.g. ScenegraphLayer has an attribute for instance model matrix of size 12.

@ibgreen (Collaborator) commented Feb 29, 2024

> Because deck Attribute size may not match shader attribute size. E.g. ScenegraphLayer has an attribute for instance model matrix of size 12.

Yes. It could be worth noting that in WebGPU we can use "storage buffers" that are not limited to reading just the current row (the way that vertex attributes are). In such buffers, we could support more than 16 bytes per row, and we could even support variable length rows.

It could be interesting to consider how we would design things if we adopted storage buffers.

@donmccurdy (Collaborator, Author) commented:

Looking only at the 8-bit VertexFormat values for a moment, here are luma.gl's values...

  | 'uint8x2'
  | 'uint8x4'
  | 'sint8x2'
  | 'sint8x4'
  | 'unorm8-webgl'
  | 'unorm8x2'
  | 'unorm8x3-webgl'
  | 'unorm8x4'
  | 'snorm8-webgl'
  | 'snorm8x2'
  | 'snorm8x3-webgl'
  | 'snorm8x4'

...but for the purposes of deck.gl, we'd probably want...

  | 'uint8'
  | 'uint8x2'
  | 'uint8x3'
  | 'uint8x4'
  | 'sint8'
  | 'sint8x2'
  | 'sint8x3'
  | 'sint8x4'
  | 'unorm8'
  | 'unorm8x2'
  | 'unorm8x3'
  | 'unorm8x4'
  | 'snorm8'
  | 'snorm8x2'
  | 'snorm8x3'
  | 'snorm8x4'

... if not more for sizes >4, as @Pessimistress mentions. I don't recall whether single-component 8-bit attributes are accepted in WebGL at all (certainly they are discouraged), so the -webgl suffix may get confusing in the higher-abstraction API of deck.gl.

I think I'm persuaded that VertexFormat should not be part of deck.gl's public API. Perhaps VertexType + size, or perhaps a larger enum like Arrow's types. But if the remaining GL constants in the v9 API are fine, that decision isn't pressing.
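For comparison, the "VertexType + size" shape might look something like this (hypothetical names, just to illustrate the option):

// Hypothetical deck.gl accessor shape: component type + component count,
// leaving packing into (possibly multiple) vertex formats to deck.gl internals.
import type {VertexType} from '@luma.gl/core';

interface AttributeAccessor {
  type: VertexType | 'float64'; // superset, as discussed above
  size: number;                 // may exceed 4, e.g. 12 for an instance model matrix
}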

@ibgreen (Collaborator) commented Mar 14, 2024

Closing for now, let's reopen if we come to a good conclusion.

@ibgreen closed this Mar 14, 2024