glfw: Denormalize Errors #96

Closed
InKryption opened this issue Nov 23, 2021 · 3 comments · Fixed by #115

Comments

@InKryption
Contributor

In the original draft of #92, I added an off-the-cuff change to de-normalize error sets. As suggested by @slimsag, I'm opening an issue to discuss this change (and whether it's beneficial enough to consider adding).

Currently, we have normalized errors, reflecting the general rule imposed by GLFW that (almost) any function may return (or rather "set") any of the errors specified by the library. It is important to note, however, that GLFW does document, for each function, its "possible" error set, and according to the maintainer of GLFW, that documentation is "generally accurate" (paraphrased).

A consideration here is ABI stability: with a normalized error set, that's (almost universally) a guarantee, whilst with denormalized error sets, that's essentially impossible unless we were to freeze the GLFW version.

The other consideration here is how "comfortable" the API is: a comment describing which errors you might encounter is alright, but it is a departure from Zig's error handling philosophy and much more akin to C (which is what GLFW aims to cater to). Denormalized errors would slightly increase maintenance, but they would vastly improve readability, since the errors the user has to handle appear right in the function's signature, and the compiler emits an error if the user tries to handle an error the function can't return.
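To illustrate the difference, here's a rough sketch in Zig; the function names and shapes are hypothetical, not the actual mach-glfw bindings:

```zig
const std = @import("std");

// Normalized: one library-wide error set shared by every function; the
// signature says nothing about which errors this call can actually set.
const Error = error{
    InvalidEnum,
    InvalidValue,
    OutOfMemory,
    PlatformError,
    FormatUnavailable,
    // ...and the rest of GLFW's error codes
};

fn getClipboardStringNormalized() Error![]const u8 {
    return error.FormatUnavailable;
}

// Denormalized: the signature carries only the errors GLFW documents for
// this call, so handling an error it cannot return is a compile error.
fn getClipboardStringDenormalized() error{ FormatUnavailable, PlatformError }![]const u8 {
    return error.FormatUnavailable;
}

pub fn main() void {
    const text = getClipboardStringDenormalized() catch |err| switch (err) {
        error.FormatUnavailable => "clipboard held no text",
        error.PlatformError => "platform error",
        // error.InvalidEnum => ..., // rejected by the compiler: not in this error set
    };
    std.debug.print("{s}\n", .{text});
}
```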

As well, another positive of denormalized errors would be behaviour-documentation parity: if GLFW changes the behaviour of one of its functions and adds a possible error, we'll hit unreachable on the corresponding prong, and can then make a PR upstream to update their docs. Net positive.
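To make that concrete, a minimal sketch of how an undocumented error would surface (the convertError helper here is hypothetical, though the numeric codes are GLFW's):

```zig
// Hypothetical error conversion for a call that documents only
// FormatUnavailable and PlatformError. If GLFW starts setting an
// undocumented code here (say GLFW_INVALID_VALUE), execution hits the
// else => unreachable prong, flagging the documentation gap.
fn convertError(code: c_int) error{ FormatUnavailable, PlatformError }!void {
    switch (code) {
        0x00000000 => return, // GLFW_NO_ERROR
        0x00010009 => return error.FormatUnavailable, // GLFW_FORMAT_UNAVAILABLE
        0x00010008 => return error.PlatformError, // GLFW_PLATFORM_ERROR
        else => unreachable, // undocumented for this call: file a docs PR upstream
    }
}
```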

I am in favour of denormalized errors.

@slimsag
Member

slimsag commented Nov 23, 2021

Very interesting arguments.

The one risk I see with error de-normalization is that we don't catch such an issue until someone's application makes its way into production and they begin to experience a slew of unreachable panics on rarer system configurations (e.g. Wayland, obscure Linux distros/GPU drivers, etc.).

Looking at the current Error set and trying to group its members into logical sets, we have:

  • Can likely be eliminated entirely:
    • InvalidEnum
    • InvalidValue
  • Graphics errors:
    • NoCurrentContext
    • APIUnavailable
    • VersionUnavailable
    • NoWindowContext
  • Window creation and clipboard interaction:
    • FormatUnavailable
  • Generic errors:
    • OutOfMemory
    • PlatformError

I think I would feel comfortable with denormalizing all errors except those listed under "Generic errors" above. I am worried that basically any GLFW function can generate those two, and we won't really know when or how unless we did a full audit of GLFW's sources.
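For concreteness, that compromise might look something like this (the names are illustrative, not our actual API): the documented errors are denormalized per function, with the generic set merged into every signature.

```zig
// Hypothetical compromise: each function carries its documented, specific
// errors, while OutOfMemory and PlatformError remain in every set because
// we cannot be sure when GLFW may produce them.
const GenericError = error{ OutOfMemory, PlatformError };
const SetClipboardError = GenericError || error{FormatUnavailable};

fn setClipboardString(text: [:0]const u8) SetClipboardError!void {
    _ = text;
    // ...call into GLFW and convert its error code here
}
```

Callers would still see the specific errors in the signature, while the two generic errors stay handleable everywhere without an audit of GLFW's sources.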

But I can also see your argument here about just denormalizing everything and improving GLFW docs when we run into anything inconsistent. I think overall I am OK with either of these approaches. After reading what I wrote above, what do you think the best path forward is?

@InKryption
Contributor Author

You make a strong point about denormalization being potentially quite harmful in production. Though I think it then comes down to a trade-off: take the risk, accepting some inconvenience and struggle, with the return being an overall net positive in the long run; or avoid the risk and simply continue with the status quo, where documented behaviour remains, as you've described it, aspirational relative to actual behaviour.

After thinking it through for a bit, I would say I am still of the opinion that the benefits of error denormalization outweigh the risk; whilst it does rock the boat, it's also a step in the same direction as the work that surfaced the undefined behaviour that had been present in GLFW for 6+ years.

In an extreme case, where this change causes too much havoc, we could re-normalize errors, and we'd come back to where we are now, but with additional information about the weaknesses in GLFW's error handling model, so still a net-positive.

@slimsag
Member

slimsag commented Nov 24, 2021

Fair enough, I think you've convinced me. Denormalization of all errors as you described originally SGTM!

InKryption changed the title from "glfw: Normalized vs Denormalized Errors" to "glfw: Denormalize Errors" on Nov 26, 2021
slimsag added a commit that referenced this issue Dec 7, 2021
Closes #96

Co-authored-by: Stephen Gutekanst <stephen@hexops.com>
slimsag added a commit to slimsag/mach-glfw that referenced this issue Dec 7, 2021
Closes hexops/mach#96

Co-authored-by: Stephen Gutekanst <stephen@hexops.com>
slimsag added this to the Mach 0.1 milestone on Mar 19, 2022
slimsag added a commit to slimsag/mach-glfw that referenced this issue Aug 26, 2022
Closes hexops/mach#96

Co-authored-by: Stephen Gutekanst <stephen@hexops.com>