Arithmetic broken throughout the spec #1964
@waldemarhorwat I'd like to make sure these bugs are fixed. You've noted a number of changes that were the broad strokes of the spec change, which were on purpose. You say you've found hundreds of bugs; could you note some of them concretely so we can go about fixing them? We could always fix a particular error by using the R subscript when appropriate; the change specifically left that possibility open.
Also, if you have other ideas for how we should phrase things in the specification, I'd be interested to hear them, so we can fix all the errors while keeping BigInt in the specification.
@littledan The best way to fix these is to revert #1135 and then explicitly mark the places where Number, Decimal, etc. arithmetic should be used. |
@waldemarhorwat Thanks for filing #1960. It would be a big help if you could give a couple of other examples of bugs introduced, or advice for finding them. In the thread for #1135 we saw advice from @domenic that was part of what led to the current outcome; it would be helpful to hear why you disagree with his rationale. Something that makes me uncomfortable about referring to real numbers in the JS spec all the time is how broad/unrepresentable they are. There are just so many of them! It seems easier to work with integers, of which there are fewer. Especially with BigInt landing, we could now note that many things are integers. I think the great majority of the spec's mathematical values are integers, and would be accurately represented this way. What would you think of noting this more broadly in algorithms, maybe even using integers as the default numerical type? (Tangent: looks like we forgot to include BigInt in our list of numerical types in the notational conventions section; we'd better fix that, maybe even backported to ES2020.)
@littledan: The difference between countable infinity (of integers) and uncountable infinity (of real numbers) has no bearing on anything we currently or in the foreseeable future do in the spec. There is no way to generate or store an arbitrary real number, but they're very useful as intermediate results in various formulas such as computing 10^n given an integer n.
@littledan: I gave a number of examples in the initial post of this bug. Let me try to explain one of them again: the spec uses things like 10^n and 2^n in various places, where n is an integer (or untyped). When #1135 quietly redefined those to mean Number values, do you see how all of these got broken? For example, consider what happens when n is -1, 30, or 10000. Another sea of bugs is the unclear treatment of ±0. Another problem is that I can't tell whether the formulas in the various year and month calculations are supposed to be done using Number-style rounding or to work on true integers. I'm pretty sure they weren't designed to account for rounding.
Speaking only for myself, my preference is to use mathematical values as the default, and to perform arithmetic only on mathematical values. There's no avoiding them; arithmetic on IEEE 754 doubles is defined in terms of mathematical values anyway. And for spec-internal things like counters, I expect those to be about the actual number of times something happens, not about a bit pattern. I certainly do not expect "1 + 2 = 3" to be a statement about Numbers; I would be very surprised to see the claim "2^100 = 2^100 + 1" in a specification. It is not practical to |
@bakkot: I agree with you. By "revert" I don't literally mean |
Yes, though specifically I want to call out that, prior to #1135, the specification said
and my preference is to instead keep the current specification's requirement that all conversion between Numbers or BigInts and mathematical values be explicit. |
To recap: first, I think we all agree among the editors and with @waldemarhorwat that for correctness, mathematical operations should be applied only to mathematical values, and inputs must be converted to mathematical values, taking care of the IEEE floating-point edge cases. I have weak agreement with @bakkot that the conversions should be kept explicit, but the proof is in the pudding. The current hunch is that the places that require explicit conversions are not hard to enumerate and also not too many.
@bakkot: I agree about all conversions from real numbers or integers to Numbers being explicit. The other direction is much less harmful and we'd have to see how practical it is to make it explicit vs implicit. Prior to #1135 we had implicit conversions from Numbers to real numbers. After #1135 much of the spec was still written assuming such implicit conversions and became broken due to the semantics imposed by #1135. |
OK, I'm fine with changing the go-to/default numerical type in the spec to mathematical values. The thing that's really important to me is that we keep the conversions explicit. This convention will mean that we need a lot of explicit conversions, but I've long had a low-key fear that we were somehow "leaking" mathematical values out into the JS world, and explicit conversions prevent that. And, with multiple numerical types, there's no clear way that implicit conversions should work, I suspect. |
Yay!!! |
After doing more analysis of bugs #1960 and #1962, I've found hundreds of other bugs in the spec introduced by pull request #1135 and have come to the conclusion that we need to revert #1135. It's impractical to find and fix them all without a large effort, and approximately no one reading or writing the spec seems to realize the consequences of what this did:
The correct approach is to revert #1135 and restore the commonly understood mathematical meaning of formulas. Number, Decimal, etc. arithmetic should be done explicitly and only one operation at a time; in most cases it's just a single addition or subtraction. We should never use notation such as a^b on Numbers.