inaccurate mathematical constants #17
Comments
There is nothing wrong. Unfortunately we can't use hex floating-point format because some silly compilers don't accept it. The closest single-precision representation of PI is 3.1415927410125732421875f, AKA 0x1.921fb6p+1f. The closest double-precision representation of PI is 3.141592653589793115997963468544185161590576171875, AKA 0x1.921fb54442d18p+1.
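Where a compiler does accept C99 hex float literals, it's easy to check that the two spellings of the single-precision value are the same number, bit for bit. A minimal sketch (my own test program, not something from this repository):

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    /* The two spellings of the float closest to pi quoted above. */
    float dec = 3.1415927410125732421875f;
    float hex = 0x1.921fb6p+1f;  /* requires C99 hex float support */

    uint32_t dec_bits, hex_bits;
    memcpy(&dec_bits, &dec, sizeof dec_bits);
    memcpy(&hex_bits, &hex, sizeof hex_bits);

    /* Both lines should print 0x40490fdb -- the same bit pattern. */
    printf("decimal literal: 0x%08x\n", (unsigned)dec_bits);
    printf("hex literal:     0x%08x\n", (unsigned)hex_bits);
    return 0;
}
```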
@b-sumner Your explanation is good cultural information that would be worth adding as a comment in the header.
It's still an odd way of writing it.
Observe that in no case do decimal digits beyond the 9th affect any of the 24 binary digits of the single-precision mantissa. The same can be said of decimal digits beyond the 17th and the 53 binary digits of a double-precision value. Specifying more than that has no effect, and just confuses the issue.
Well, if you want to write single-precision 0x1.000002p+0 accurately in decimal, you have to write it like this: 1.00000011920928955078125. There may be other decimal strings that result in the same value, but that one results in the exact same double-precision and extended-precision value.
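That expansion is exact because 0x1.000002p+0 is 1 + 2^-23, and 2^-23 = 0.00000011920928955078125. A quick check, again assuming a compiler with hex float support:

```c
#include <stdio.h>

int main(void) {
    /* 0x1.000002p+0 is 1 + 2^-23, the float immediately above 1.0f. */
    float  f = 1.00000011920928955078125f;
    double d = 1.00000011920928955078125;

    /* The decimal string is exact, so converting the float to double
       reproduces the same value with no further rounding. */
    printf("%d\n", (double)f == d);       /* prints 1 */
    printf("%d\n", f == 0x1.000002p+0f);  /* prints 1 */
    return 0;
}
```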
The two attached small programs demonstrate that decimal digits beyond the ninth and seventeenth, respectively, have no effect whatsoever on the binary representation of single- and double-precision floating-point numbers. The correct decimal expansion, rounded to nine or seventeen significant digits, produces a result just as accurate as any other possible specification. You could add digits at random, and the effect would be exactly the same: nil. This being so, specifying more digits serves no useful purpose; it is merely pointless and misleading.
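The attached programs themselves aren't reproduced above; a minimal sketch of that kind of demonstration for the single-precision case (the variable names and the junk digits are mine):

```c
#include <stdio.h>

int main(void) {
    /* Pi correctly rounded to 9 significant decimal digits... */
    float a = 3.14159265f;
    /* ...with further correct digits appended... */
    float b = 3.14159265358979323846f;
    /* ...and with junk appended beyond the 9th digit. */
    float c = 3.14159265999999999999f;

    /* All three round to the same float, 3.1415927410125732421875. */
    printf("%d %d\n", a == b, a == c);  /* prints 1 1 */
    printf("%.25f\n", a);
    return 0;
}
```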
The problem is that those decimal numbers you're writing are not exactly binary floating-point numbers. Don't you think the values specified in the headers should be exact floating-point numbers?
I don't understand the significance of "exactly binary floating-point numbers". From the two programs I provided:

3.14159265f

or

3.1415926535897932
Those are the correct decimal expansions of Pi, rounded to nine and seventeen significant digits. Those two programs demonstrate (try them with any compiler you like) that those definitions produce EXACTLY THE SAME BINARY VALUE, bit for bit, as any longer sequence. As I said before, after the ninth or seventeenth place you could add COMPLETELY RANDOM DIGITS and get exactly the same result. It's just misleading to specify more, as that implies the extra digits have some effect on the floating-point value, when in fact they have none, as demonstrated above.
I understand your point. But at least this kind of joke for computer scientists and nerds should be explained carefully in the headers... :-)
I suggest we keep the values as they are, but explain the cleverness of the approach in some comments. On the other hand, even if the parse time and storage cost for these exact values is negligible nowadays, it seems that they break some compilers that have limitations (#19).
IMHO, adding incorrect digits (for the constant) to match some binary representation is the wrong way to do it. There is nothing clever about it; it is just as inexact as using the correct digits (they are equally inexact). Just specify enough (correct) decimal places of the constant to get the best binary representation. When I saw pi out to many places but with incorrect digits, the pedant in me had a fit!
I think if you are going to have a floating-point number in a header, then you should have a floating-point number, not a string of digits that is hopefully rounded to the desired floating-point number. This wouldn't be a problem if all compilers supported hex floating-point literal syntax, which permits us to write floating-point numbers exactly. Unfortunately, due to the limitations of certain compilers, we are stuck using other, less useful syntax.
I think there are two ways of looking at this:
But I think the best approach might be, since this is not brand-new technology (i.e., mathematical constants):
Since floating-point precision is not part of the C standard, it's not good practice to make assumptions about the underlying floating-point format and to write an incorrect decimal expansion that happens to fit it. What I have usually seen instead is to give enough correct decimal digits, and let the compiler round correctly when it reads them.
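Concretely, the style being described would look something like this for pi. This is a sketch, not a quote from the header; the CL_M_PI names follow the header's convention, and the digit counts are the nine and seventeen discussed above:

```c
/* Correctly rounded decimal expansions of pi: 9 significant digits
   for single precision, 17 for double.  The compiler rounds each to
   the nearest representable value, which is the best possible one. */
#define CL_M_PI    3.1415926535897932
#define CL_M_PI_F  3.14159265f
```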
Quoting from the C99 standard, Annex F, IEC 60559 floating-point arithmetic, page 444:
http://www.open-std.org/jtc1/sc22/WG14/www/docs/n1256.pdf However, it's obviously the OpenCL specification that actually matters:
https://www.khronos.org/registry/OpenCL/specs/opencl-2.2-cplusplus.html#builtin-scalar-data-types As I pointed out before:
https://en.wikipedia.org/wiki/Single-precision_floating-point_format The ratio between the number of significant binary digits and the number of significant decimal digits is the ratio of the logarithms of 2 and 10:

log10(2) = ln(2) / ln(10) ≈ 0.30103

So 24 mantissa bits correspond to 24 × 0.30103 ≈ 7.22 decimal digits, and 53 bits to 53 × 0.30103 ≈ 15.95; rounding up and adding one digit to guarantee round-tripping gives the 9 and 17 significant digits mentioned above.
There is nothing indeterminate about this. It is a mathematical fact. Since the OpenCL specification requires IEEE 754 single- and double-precision floating point, defining constants with more than 9 and 17 significant decimal digits, respectively, is completely useless. It just confuses the issue. I don't think any other considerations are at all relevant.
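C11's <float.h> publishes exactly these round-trip digit counts, so the arithmetic is easy to confirm. A small sketch (requires a C11 toolchain; link with -lm on some systems):

```c
#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void) {
    /* Round-trip digit counts published by the implementation (C11). */
    printf("FLT_DECIMAL_DIG = %d\n", FLT_DECIMAL_DIG);  /* 9 under IEEE 754 */
    printf("DBL_DECIMAL_DIG = %d\n", DBL_DECIMAL_DIG);  /* 17 under IEEE 754 */

    /* The same counts derived from the ratio of logarithms. */
    printf("ceil(24*log10(2)) + 1 = %d\n", (int)ceil(24 * log10(2.0)) + 1);
    printf("ceil(53*log10(2)) + 1 = %d\n", (int)ceil(53 * log10(2.0)) + 1);
    return 0;
}
```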
The interesting part about this annex is note 356 (p. 507 in the C11 standard): "Implementations that do not define __STDC_IEC_559__ are not required to conform to these specifications." And of course, not all compilers completely follow the latest standard anyway; otherwise, one would have preferred hex floats in the first place... I agree that the OpenCL specification matters most. However, I maintain that it's not good practice to do that: code tends to be copy-pasted, and this approach is error-prone. It obfuscates the code for no reason.
Discussed at length in GitHub issue #17.
I've opened pull request #22, which I believe implements the change that has been proposed here, to give us something concrete to review. At this stage I believe @b-sumner is the only vocal opponent of this change. I'd appreciate it if anyone in favour of the change could quickly run their eye over the PR to make sure it accurately captures what has been proposed (as well as making sure I haven't screwed up any important mathematical constants).
PR #22 is now merged - closing issue.
<cl_platform.h> defines some standard mathematical constants, in both single- and double-precision formats:
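The definitions themselves aren't reproduced above. Judging from the values quoted earlier in this thread, the pi pair would have looked something like this (a reconstruction using the header's CL_M_* naming convention, not a verbatim copy):

```c
#define CL_M_PI    3.141592653589793115997963468544185161590576171875
#define CL_M_PI_F  3.1415927410125732421875f
```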
Bizarrely, the single- and double-precision versions DO NOT MATCH BEYOND THE SEVENTH DIGIT: 3.1415926... (double) versus 3.1415927... (single). THEY CAN'T BOTH BE CORRECT!
The situation may not be as bad as it looks -- the double-precision values may be entirely correct, and since eight decimal digits would be sufficient to represent the 24-bit mantissa of the single-precision values, their inaccuracy may be very small. But it looks quite odd, and is likely to cause problems if these definitions are ever copied and reused somewhere else.
Surely this should be investigated?