Warn about unrepresentable float literals in compile time #9915
Comments
Maybe a solution would be to have a decimal numbers module, as you suggest.
@penguindark decimal numbers module or not - this issue is about real number notation (i.e. number literals with a decimal point, like `123.456`). That being said, just having a decimal numbers module alone wouldn't change anything about this situation if the default stayed an implicit lossy conversion to binary float without a warning.
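To make the lossiness concrete, here is a minimal V snippet (reflecting current V behavior, to the best of my understanding) showing that the literal written in the source and the value the program actually computes with are not the same:

```v
fn main() {
	x := 0.1 // stored as the nearest binary f64, not as the decimal 0.1
	println('${x:.20f}') // prints 0.10000000000000000555
	println(0.1 + 0.2 == 0.3) // false, due to the representation error
}
```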
And how is V any worse than C or almost every other language that will "silently" convert such literals to binary floats? If you change the default way this works (even if just by adding a WARNING every time it happens), you are likely going to annoy a lot of people. You may also make it harder, if not impossible, to support C interop. A decimal numbers module would be there for those who wish for the extra precision, etc., but by default, I think it works as it should. Remember that C interop (and how easy it is) is one of V's main features.
Calm down. No need to jump to (very premature and most probably wrong) conclusions just because some dumblob is a long-time advocate for float safety 😉. There were long discussions about this, and apart from many ideas and proposals that never reached consensus, there was one particularly important point of agreement: namely, that it's not about technical compatibility but about education & a healthy mindset shift. I feel your pain shifting the mindset - I've been through the same process once. Now to your points.
What will an answer to this question change about the fact that it's a huge, unsafe lie to see one number in the source code while the program computes with a different one? Anyway, the answer is that the languages you're pointing at (e.g. C) have a long history, unlike V. Otherwise they'd do the same, but they can't any more. It's purely a historical decision from a time when decimal arithmetic didn't exist and the goals were very different. So V is strictly worse in this regard, because it's a new language not forced to repeat that decision - yet you seem to be advocating to repeat it anyway, with no objective reason (read further).
Both claims seem to be objectively void. V made a decision that all incompatibilities have to be made explicit by casting. It turns out this was a very good decision. The only place where this hasn't been implemented yet is the conversion from real number notation to binary float literals. I'm just pointing it out by suggesting a warning. And on top of that, I suggest warning only if the number is actually not losslessly castable to the underlying binary float (the idea being that if the claim of many debaters in #5180 - that programmers know binary floats very well - really holds, then they'll either use only losslessly representable literals or will want to know the places where they cannot). Note also that there are not many places in existing code where this'll be needed (many already have casts anyway) - of course, libs dealing with inherently approximate computations (e.g. vsl) will need more casts, but even then it's still not much compared to the other number casts in such libraries. C interop is not affected in any way. If you think it is, please provide an example.
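A sketch of what this could look like in practice (the warning text is hypothetical; only the `f64()` cast exists in today's V):

```v
fn main() {
	a := 0.5      // 0.5 = 2^-1 is losslessly representable -> no warning
	b := 0.1      // proposed: warning: literal `0.1` is not exactly representable as `f64`
	c := f64(0.1) // explicit cast acknowledges the loss -> warning suppressed
	println('${a} ${b} ${c}')
}
```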
For me it's kind of acceptable to have binary float literals by default. But then the warning in case of a missing cast of an unrepresentable number is a must. Note also that the warning can show the closest representable value, and the programmer then has the choice to use that lengthy number instead of casting if it fits the context better. The disadvantage I'm worried about here is that introducing decimal floats into V later would require new special syntax (because the plain decimal notation would already be taken by binary floats).

So, that's the first proposal. Now let's assume the alternative proposal, where IEEE 754 decimal floating point representation (not to be confused with big decimal) becomes the default for float literals (this would be my preference, obviously 😉). What would change in my answers to your points?

Point 1 - no change in my answer.

Point 2 - calling anything (e.g. C libs) from V is no issue (it's the same as now - sometimes explicit casting is needed, sometimes automatic promotion is enough). I fully agree, though, that calling V code from some obscure language (one which doesn't support decimal floats) might get more involved - but frankly, imagining someone calling V code which heavily uses decimal float literals and then not wanting decimal floats but e.g. binary floats sounds like a niche use case (because in such a case the V code would already be written using binary floats, or else the programmer calling the V code would be doing something utterly wrong with the numbers - in which case it's her problem, not V's).

Point 3 - very subjective. So it's just your vote against my vote 😉. Though don't forget: having decimal float literals by default would need no warnings in any case and would cause no trouble with future syntax for decimal floats.

At first I thought you were joking, but then I realized you might be afraid of some practical problems I'm not aware of. Please articulate them, as I'd very much like to dissolve all existing fears.

Btw. it's enormously difficult to explain this to children who just know real numbers intuitively (yes, I'm teaching programming to small children). And I think V is a great language for education due to its simplicity and clarity (especially the compiler feedback is good IMHO). But this fundamental problem strikes again and again and again... Sure, education is not the primary goal for V, but it's the best "amplifier" of existing issues (children's minds are not yet blinded by "civilized thinking").
Just to demonstrate my point in practice: in the currently merged binary floating point simulation of a pendulum over magnets - which is extremely heavy on binary floating point calculations and parameters, and can be considered the worst case - there are only 9 places overall where explicit casts would be needed (i.e. where the warning about precision would otherwise be raised). If decimal-float-by-default got accepted, it'd be 28 casts (which is still nothing, considering that nearly every single line deals with binary floats and that there are already 8 other number casts). What did we get in both cases? Safety and clarity. What did we lose? Nothing (the performance is identical, because the generated binary is identical).
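To illustrate what those casts would look like, a hypothetical fragment in the style of such simulation code (variable names invented for the example):

```v
fn main() {
	g := f64(9.81)  // 9.81 is not exactly representable -> explicit cast required
	dt := 0.03125   // exactly 2^-5 -> losslessly representable, no cast needed
	mass := 2.0     // exact -> no cast needed
	println(g * mass * dt)
}
```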
If the generated binary is identical, why do we need the complexity of adding the casts?
@JalonSolov safety and clarity (consistency).
I still don't follow. If the binary code is identical, then the cast had no effect. It is neither more nor less safe with the cast than without. As for consistency... it would be consistent within V, but not consistent with other languages where the cast is not required.
If this were true, V wouldn't have the safety policy of casting/promoting to the same type for operations (stuff like requiring both operands of an arithmetic operation to have the same type).
I don't follow - of course it's about consistency within V. It was never different. It'd be a really funny joke to imagine someone saying that V is in any way consistent with other languages. |
Umm...
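For instance, printing a float literal directly (an illustrative snippet along these lines):

```v
println(1.2345) // compiles and prints, no cast needed
```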
No casts required. Assigning to a variable instead of printing directly?
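Say (again, just illustrative):

```v
x := 1.2345
println(x) // still compiles, no cast needed
```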
Still no casts required.
Oh, sorry for not being clear - that example wasn't V code, but a shorthand for the idea behind it. Also I wouldn't use … And yay, C23 (with implementations of the proposals finished in 2021, and formally adopted in 2022 as planned) will natively support decimal floating point types (`_Decimal32`, `_Decimal64`, `_Decimal128`).
This issue was moved to a discussion. You can continue the conversation there.
Due to V silently auto-converting decimal real number notation (e.g. `123.456`) to IEEE 754 binary floating point representation (which is not suitable for decimal numbers), it's not clear that this conversion is in most cases lossy, which leads to serious errors (so profound that, for example, it's one of the most frequent techniques in plausible deniability competitions).

V should warn at compile time if the decimal number is not losslessly convertible to the underlying IEEE 754 floating point representation. The warning could be suppressed by explicitly casting the literal number to `f64` or `f32` (e.g. `f64(123.456)`). Note that no warning is necessary for hexadecimal notation.

Another alternative would be to treat decimal real number notation as what the user intended and thus convert it losslessly to IEEE 754 decimal floating point (all platforms V supports support it). This'd probably require adding `d128`, `d64`, and `d32` floating point formats to the language, which was already proposed some time ago by some but got buried in other issues. Thinking about it, this second option sounds actually cleaner to me than the warning.