glfw: Denormalize Errors #96
Very interesting arguments. The one risk I see with error denormalization is that we don't catch such an issue until someone's application makes its way into production, and they begin to experience a slew of […] Looking at the current […]
I think I would feel comfortable with denormalizing all errors except those listed under "Generic errors" above. I am worried that basically any GLFW function can generate those two, and we won't really know when or how unless we did a full audit of GLFW's sources. But I can also see your argument about just denormalizing everything and improving the GLFW docs when we run into anything inconsistent. Overall, I am OK with either of these approaches. Having read what I wrote above, what do you think the best path forward is?
You make a strong point about denormalization being potentially quite harmful in production. Still, I think it comes down to a trade-off: take the risk, accepting some inconvenience and struggle, with the return being an overall net positive in the long run; or avoid the risk and simply continue with the status quo, where documented behaviour is, as you've described it, aspirational relative to actual behaviour. After thinking it through, I am still of the opinion that error denormalization is worth the benefits; whilst it does rock the boat, it's also a step in the same direction as the work that pointed out the undefined behaviour present in GLFW for 6+ years. In the extreme case where this change causes too much havoc, we could re-normalize errors and come back to where we are now, but with additional information about the weaknesses in GLFW's error handling model, so still a net positive.
Fair enough, I think you've convinced me. Denormalization of all errors as you originally described SGTM!
Closes #96 Co-authored-by: Stephen Gutekanst <stephen@hexops.com>
In the original draft of #92, I added an off-the-cuff change to denormalize error sets. As suggested by @slimsag, I'm opening an issue to discuss this change (and whether it's beneficial enough to consider adding).
Currently, we have normalized errors, adhering to the general possibility, imposed by GLFW, that (almost) any function can return (or rather "set") any of the errors specified by the library. It is important to note, however, that GLFW does specify a "possible" error set for each function, and according to the maintainer of GLFW, this documentation is "generally accurate" (paraphrased).
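To make the distinction concrete, here is a minimal sketch of the two approaches in Zig. All names (`GlfwError`, `swapBuffersNormalized`, etc.) are illustrative, not mach-glfw's actual API:

```zig
// Normalized: one library-wide error set shared by every function.
const GlfwError = error{
    NotInitialized,
    PlatformError,
    InvalidEnum,
    InvalidValue,
    // ... every other GLFW error code
};

// Every function claims the full set, even errors it can never produce.
fn swapBuffersNormalized() GlfwError!void {}

// Denormalized: each signature carries only the errors GLFW's docs
// list for that specific function.
fn swapBuffersDenormalized() error{ NotInitialized, PlatformError }!void {}
```

The normalized form is stable across GLFW versions; the denormalized form makes each function's signature its own documentation.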
A consideration here is ABI stability: with a normalized error set, that's (almost universally) a guarantee, whilst with denormalized error sets, that's essentially impossible unless we were to freeze the GLFW version.
The other consideration is how "comfortable" the API is: while a comment describing which errors you might encounter is alright, it is a departure from Zig's error handling philosophy and much more akin to C's (which is what GLFW caters to). Denormalized errors would slightly increase maintenance, but would vastly improve readability, since the errors the user has to handle would appear in the function's signature, and a compile error would be emitted if the user tried to catch an error the function cannot emit.
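That compile-time enforcement can be sketched as follows (function and error names are hypothetical, continuing the example above):

```zig
fn swapBuffersDenormalized() error{ NotInitialized, PlatformError }!void {}

fn handle() void {
    swapBuffersDenormalized() catch |err| switch (err) {
        error.NotInitialized => {},
        error.PlatformError => {},
        // Uncommenting the next prong is a compile error, because
        // InvalidEnum is not a member of the function's error set:
        // error.InvalidEnum => {},
    };
}
```

With a normalized set, the `error.InvalidEnum` prong would compile fine and silently become dead code instead.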
As well, another positive of denormalized errors would be behaviour-documentation parity. If GLFW changes the behaviour of one of its functions and adds a possible error to it, then we'll `catch unreachable` on that prong, and be able to make a PR upstream to update their docs. Net positive. I am in favour of denormalized errors.
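One way that parity check could look inside the binding, sketched under the assumption that the binding converts GLFW's C error codes itself (the function name and stand-in constants are illustrative; the constant values match GLFW's public header):

```zig
// Stand-ins for values from GLFW's C header (glfw3.h):
const GLFW_NO_ERROR: c_int = 0;
const GLFW_NOT_INITIALIZED: c_int = 0x00010001;
const GLFW_PLATFORM_ERROR: c_int = 0x00010008;

// Convert a C error code into this function's denormalized error set.
fn convertError(code: c_int) error{ NotInitialized, PlatformError }!void {
    switch (code) {
        GLFW_NO_ERROR => {},
        GLFW_NOT_INITIALIZED => return error.NotInitialized,
        GLFW_PLATFORM_ERROR => return error.PlatformError,
        // Any undocumented error code trips this in Debug/ReleaseSafe
        // builds, surfacing a behaviour/documentation mismatch that can
        // be reported upstream.
        else => unreachable,
    }
}
```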