
Attrib.Value is int for webgl and uint for opengl? #26

Open
swiftcoder opened this issue Oct 19, 2017 · 5 comments

@swiftcoder

In attempting to cross-compile a program for both WebGL and Desktop OpenGL, I've noticed that the Attrib type is uint for the former, and int for the latter. This makes it a little hard to manipulate these in a portable way.

Is there a technical reason why these are different, or could they be brought into alignment?

@dmitshur
Member

dmitshur commented Oct 19, 2017

Hi, thanks for reporting this.

> This makes it a little hard to manipulate these in a portable way.

gl.Attrib should be an opaque type. You're likely to get one via gl.GetAttribLocation. You can check whether it's valid via its Valid method, and use it with gl.EnableVertexAttribArray, gl.VertexAttribPointer, etc. All of those steps use/manipulate gl.Attrib values directly, without needing to know their representation, and can be done in a portable way.
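
For example, the portable flow might look roughly like this (a sketch only; program and vertexBuffer stand in for values created elsewhere):

	position := gl.GetAttribLocation(program, "position")
	if !position.Valid() {
		// The attribute is inactive or missing from the linked program;
		// handle the error before using position.
	}
	gl.BindBuffer(gl.ARRAY_BUFFER, vertexBuffer)
	gl.EnableVertexAttribArray(position)
	gl.VertexAttribPointer(position, 3, gl.FLOAT, false, 0, 0)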

Can you please elaborate on how you're trying to manipulate them?

That would help me better understand if there's a problem with some use-case I'm not aware of, or perhaps what you're doing can be done differently.

Thanks!

@swiftcoder
Author

gl.BindAttribLocation is the unfortunate edge case here. It's idiomatic in modern OpenGL to explicitly bind attribute locations before program linking, to ensure that all shaders in your application use a consistent numbering for attributes.

For example, my current application does the following:

	gl.BindAttribLocation(program, gl.Attrib{0}, "position")
	gl.BindAttribLocation(program, gl.Attrib{1}, "normal")
	gl.BindAttribLocation(program, gl.Attrib{2}, "color")

This ensures that I can switch between multiple shaders without changing the vertex attribute pointers of my meshes.

This becomes especially important once we have support for Vertex Array Objects in WebGL 2.0, since then one bakes the vertex attribute pointers once at mesh creation and never changes them again. A sketch of what that would look like follows.
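
	// Sketch only: gl.CreateVertexArray and gl.BindVertexArray are
	// hypothetical bindings, since the package doesn't expose VAOs yet.
	// Assumes interleaved float32 position/normal/color vertex data,
	// i.e. a 36-byte stride.
	vao := gl.CreateVertexArray()
	gl.BindVertexArray(vao)
	gl.BindBuffer(gl.ARRAY_BUFFER, vertexBuffer)
	// Indices 0, 1, 2 match the BindAttribLocation calls above, so this
	// one-time setup works with every shader program in the application.
	gl.EnableVertexAttribArray(gl.Attrib{0})
	gl.VertexAttribPointer(gl.Attrib{0}, 3, gl.FLOAT, false, 36, 0)
	gl.EnableVertexAttribArray(gl.Attrib{1})
	gl.VertexAttribPointer(gl.Attrib{1}, 3, gl.FLOAT, false, 36, 12)
	gl.EnableVertexAttribArray(gl.Attrib{2})
	gl.VertexAttribPointer(gl.Attrib{2}, 3, gl.FLOAT, false, 36, 24)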

@dmitshur
Member

dmitshur commented Oct 19, 2017

Thanks, that's helpful. It sounds like you still need the ability to create ad-hoc gl.Attrib values and use them as indices.

The code you posted should work in a cross-platform compatible way. Can you show me some code where the different underlying types are causing problems? That way, I can see whether a potential change would resolve your problem.

I'm guessing you're trying to extract/save/load the gl.Attrib.Value into an int or uint. I wonder if that can be worked around by using gl.Attrib as your intermediate storage type, as in the sketch below. Seeing the problem code will be helpful.
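
Something along these lines (a sketch; standardAttribs is a made-up name for illustration):

	// Keep gl.Attrib values themselves as the canonical attribute table,
	// keyed by name, so no code ever touches the platform-dependent Value
	// field. The untyped constants work whether Value is int or uint.
	var standardAttribs = map[string]gl.Attrib{
		"position": {Value: 0},
		"normal":   {Value: 1},
		"color":    {Value: 2},
	}

	for name, attrib := range standardAttribs {
		gl.BindAttribLocation(program, attrib, name)
	}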

After looking a bit more into the specs for OpenGL and WebGL, it looks like GLuint is used when the attrib index is an input parameter, but GLint is used as the return value of glGetAttribLocation. I might experiment with changing the gl.Attrib.Value type to uint and factoring it out to be common across all 3 implementations... But, I'll wait to see your problem code before considering changes.

@swiftcoder
Author

I've moved to passing around the gl.Attrib values directly, which works, but I'd love to be able to pass around the indices by themselves, to avoid exposing gl.* types in my own API.

As for the int/uint difference, glGetAttribLocation only returns int because it uses negative values to indicate errors. It should be entirely safe to cast the return value to uint after checking for errors (i.e., negative values), as in the sketch below.
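
Concretely, something like this (a sketch, using the Valid method you mentioned):

	a := gl.GetAttribLocation(program, "position")
	if !a.Valid() {
		// glGetAttribLocation returned -1: the attribute is inactive or
		// doesn't exist, so bail out before converting.
	}
	// A non-negative GLint always fits in a GLuint, so this conversion is
	// safe; it also compiles whether Value's underlying type is int or uint.
	index := uint(a.Value)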

I'll try and get some code up over the weekend to illustrate my usage a little more clearly. Thanks!

@dmitshur
Member

> I've moved to passing around the gl.Attrib values directly, which works, but I'd love to be able to pass around the indices by themselves, to avoid exposing gl.* types in my own API.

I see.

> As for the int/uint difference, glGetAttribLocation only returns int because it uses negative values to indicate errors. It should be entirely safe to cast the return value to uint after checking for errors (i.e. negative values).

Yep, that was the conclusion I came to as well. 👍 Hence it should be possible to use uint consistently. I'm willing to give it a shot on a branch, but want to do more thinking before deciding to merge to master.

> I'll try and get some code up over the weekend to illustrate my usage a little more clearly.

All right. But no need to spend a lot of time on it; if you could just copy/paste the relevant lines, that'd be good enough. It doesn't need to be a fully standalone program, a relevant snippet would do. Thanks.
