gl_TRUE is not a GLboolean #15

Closed
thiago-negri opened this Issue Jan 15, 2013 · 5 comments


@thiago-negri

I can't use gl_TRUE where a GLboolean is expected.
Is that right?

@dagit
Haskell OpenGL member

Good point. gl_TRUE is currently a GLenum, while GLboolean is CUChar. A fromIntegral around gl_TRUE should fix it for your immediate needs, but I'll have to glance at the spec to figure out the long-term solution.
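The workaround can be sketched as follows; the type synonyms and gl_TRUE below are stand-ins mirroring OpenGLRaw's definitions at the time, not imports from the package:

```haskell
import Foreign.C.Types (CUChar, CUInt)

-- Stand-ins mirroring OpenGLRaw's definitions (assumed, not imported):
type GLenum    = CUInt   -- at least 32 bits wide
type GLboolean = CUChar  -- 8 bits wide

gl_TRUE :: GLenum
gl_TRUE = 1

-- The suggested workaround: fromIntegral converts the GLenum token
-- into the GLboolean that the API entry expects.
glTrueAsBoolean :: GLboolean
glTrueAsBoolean = fromIntegral gl_TRUE
```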

@Laar

The spec specifies that a GLboolean must be at least 1 bit wide, but it may be larger. A simple test shows (as far as my C knowledge goes) that sizeof(GLboolean) == 1.

@thiago-negri

The 3.2 spec says on page 16 that GLboolean must be at least 1 bit wide and GLenum at least 32 bits.

On page 213, the function glClampColor is presented as void ClampColor( enum target, enum clamp );. Just below, it says that clamp may be GL_TRUE or GL_FALSE. That's an example of the spec allowing a GLboolean where a GLenum is specified. After all, a 1-bit value fits into a 32-bit box.

On the other hand, using a GLenum where a GLboolean is specified should not be allowed.

@Laar

The 3.2 spec specifies (section 6.1.2 on page 248) that when a boolean variable is queried by one of the other numerical query functions (e.g. GetIntegerv) the value will be converted to 0 (GL_FALSE) or 1 (GL_TRUE). Therefore you could argue that it would be reasonable to let it range over all numbers, thus GL_FALSE, GL_TRUE :: Num a => a.
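That suggestion would look roughly like this (a sketch, not OpenGLRaw's actual definitions):

```haskell
-- Sketch: boolean tokens polymorphic over any numeric type, so they fit
-- wherever a GLboolean, GLenum, or GLint is expected.
gl_FALSE, gl_TRUE :: Num a => a
gl_FALSE = 0
gl_TRUE  = 1
```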

@svenpanne
Haskell OpenGL member

The underlying problem is that the OpenGL C API is not very well-typed. Sometimes GLboolean would be the right thing (e.g. for the return value of glIsShader), sometimes it would be GLenum (see e.g. glClampColor), sometimes it would be GLint (for integer queries), etc.

So whatever Haskell type we choose for GL_TRUE/GL_FALSE, it will be wrong for some use cases. This is true for lots of other OpenGL tokens, too, so I don't consider a heavy use of type classes a real solution, it would just be an ad-hoc mess with no real purpose. You have to look at the OpenGL spec when you use OpenGLRaw, anyway, so just use fromIntegral or some other (un-)marshaling function as appropriate.

The real value of OpenGLRaw is getting the API entries right and the values of the tokens, good typing is simply not possible here and should be handled in another layer.
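Such a higher layer could marshal booleans with helpers along these lines (a sketch; the names are hypothetical, though the typed OpenGL binding follows a similar pattern):

```haskell
-- Hypothetical helpers for a typed layer above OpenGLRaw.
-- Convert a Haskell Bool to whatever numeric GL type a call expects.
marshalGLboolean :: Num a => Bool -> a
marshalGLboolean b = if b then 1 else 0

-- Per the spec's query rules, any nonzero value counts as true.
unmarshalGLboolean :: (Eq a, Num a) => a -> Bool
unmarshalGLboolean = (/= 0)
```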

@svenpanne svenpanne closed this Aug 14, 2013