(dev2.0) strange behaviour of printf "%x" #1530
I get the following in FSI:
Above it says that the expectation is:
Note that the second value differs. Is there a reason that it wouldn't be the same as FSI? The actual output I get in REPL2 is:
Thanks Dave. The %d does not worry me so much: it could be interpreted as %d always printing signed and doing an unsigned-to-signed coercion when given an unsigned parameter; printf does do such things. OTOH you are right, the %d printf spec for F# says: "Formats any basic integer type formatted as a decimal integer, signed if the basic integer type is signed." I'll edit the OP to note the unsigned %d error.
I'm guessing someone needs to find the implementation of .NET's "x" format and implement it in JavaScript. Not sure if this is in mscorlib or inside the CLR though. The current Fable implementation is here: Fable/src/js/fable-core/String.ts, lines 123 to 127 in 44ed21c.
The %d problem is more difficult to fix. The %x problem could be fixed for %x on its own quickly by changing the hex converter function. A complete solution would be much more complex, because it would have to implement all the width characters for %x and %d (the two could, however, be done uniformly). Do you think a partial solution, with common width characters implemented, would be useful?
The code now seems much more difficult to break than 1.37 was! But I don't understand quite how you are internally implementing different widths and signedness of numbers. One inconsistency with dotnet F# is that conversion of negative numbers from signed to unsigned results in 0, whereas it should result in a large unsigned number: (-1 |> int16 |> uint16) = 65535. However I suspect this may be difficult to track properly, and other than this things seem pretty good. Suggestion for toHex: this is always supposed to be unsigned, so I suggest

```typescript
function toHex(value: number) {
  return value.toString(16);
}
```

That would, I think, always be better than the current version.
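For context, here is a quick standalone sketch (plain JS, not Fable code) comparing the two candidate behaviours for a negative 32-bit value: plain toString(16) versus the unsigned reinterpretation via `>>> 0`.

```javascript
// Plain JS semantics: a minus sign followed by the hex of the absolute value.
const plain = (-2147483648).toString(16);
console.log(plain); // "-80000000"

// Two's-complement semantics: ">>> 0" reinterprets the value as an unsigned
// 32-bit integer, matching what .NET's "x" format prints for an int32.
const twosComplement = ((-2147483648) >>> 0).toString(16);
console.log(twosComplement); // "80000000"
```

The whole disagreement in this thread is between these two outputs.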
Number conversions that were mainly implemented by @ncave generally work.
Which is not right yet (the first case).
I suppose it comes down to the question: do we want JS semantics or .NET semantics? I can see both sides...

Using the JS version: if I write and test in F#, I get different results at runtime.

Using the .NET version: lots of effort to port chunks of the BCL to JavaScript.

I guess it could be one of the caveats mentioned in the Docs?
In general, if it's easy to match the F#/.NET semantics we try to do it, but if it takes too much work or makes fable-core size explode we usually settle for a good approximation. String formatting is not matching the F#/.NET specs in full, so yes, we can add this caveat to the docs. Besides that, I'm open to modifying it.
Given that the current implementation matches neither the native JS nor the .NET implementation, it is its own thing, and reverting to native JS is easy enough, I'd suggest we do that and update the docs. I can submit a PR tonight if no one else does before then.
That'd be awesome @xdaDaveShaw, thank you! Could you please also update the tests accordingly? (StringTests.fs)
From my POV we should at least have something compatible with bitwise operations, so that a 32-bit two's-complement number will display correctly as hex digits. That is what .NET does and it is just common sense. The code for this is very easy for an unsigned 32-bit number:
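A hedged sketch of that code (toHex32 is an illustrative name, not a fable-core function): JS's unsigned right shift by zero reinterprets any 32-bit integer as unsigned before formatting.

```javascript
// Reinterpret a 32-bit value as unsigned, then format it in hex.
// ">>> 0" coerces to uint32, so two's-complement bit patterns survive.
function toHex32(value) {
  return (value >>> 0).toString(16);
}

console.log(toHex32(-1));          // "ffffffff"
console.log(toHex32(-2147483648)); // "80000000"
console.log(toHex32(255));         // "ff"
```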
I'd agree with Dave's solution, simply because beyond 32 bits there is not a nice way to solve this problem in general: we do not know whether the number is supposed to be int32 or int64, and hence how many leading '1's to print. So if JS native output is a minus sign (for negative numbers) followed by the hex representation of the absolute value, then that will be usable and also not confusing. We need a caveat in the docs. I'll update the documentation with a full description of the consequences of using JS FP numbers, and how conversions do/don't work (e.g. the fact that signed-to-unsigned conversion of larger size zeros negative numbers). Which branch of the repo should I patch?
Re the documentation, this definitely needs updating, since I still do not quite understand what is what: "All numeric types including decimal become JS number (64-bit floating type), except for int64, uint64 and bigint." Should these necessary details be documented under compatibility.md, in a link from compatibility.md, or somewhere else? PS: from my tests it certainly looks as though int64 and uint64 are encoded in number, since I can see the precision loss for larger numbers.
Thanks a lot for offering your help to update the documentation @tomcl. If you do it, it's probably better to use the dev2.0 branch. Actually:

```typescript
import Long, { toString as longToString } from "./Long";

function toHex(x) {
  return x instanceof Long ? longToString(x, 16) : (x >>> 0).toString(16);
}
```
OK, great! So in fact .NET compatibility is pretty good; the 64/32-bit thing can be determined dynamically from the types provided. When I've got straight in my mind what is supposed to be what, I'll check whether I can find any corner cases with semantics different from .NET. I think having working hex printout will make that a lot easier! I'd dearly like an automatic testbench that runs identical code fragments via Fable and .NET, comparing results... I'm very impressed with Fable 2.0; it looks like it has cleaner semantics than Fable 1 as well as being faster. I noted when porting 5K lines of code from 1.37 that a whole load of jsinterop stuff with dynamic types interfacing to browser/electron needed minor rewriting to pass the compiler, not in a bad way, but maybe systematic enough to give a guide. Is this what everyone finds, or just me?
Couldn't Fable tell us somehow, by emitting a warning, when using such APIs?
OK, so here are the expected (from fsi) .NET numeric conversions, and what fable2.0 does.
Basically, in .NET, -1 is preserved to and from numeric types, although it looks like 2^N-1 in an unsigned type of length N. This is pretty clean. Fable 2 gives answers that depend on BOTH source and result type. Notable from repl2: uint64 -> int64, which seems to go via uint.
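As a hedged illustration (the helper names here are mine, not Fable's), .NET-style narrowing conversions that preserve the bit pattern can be emulated in JS with a mask for unsigned targets and a shift pair for sign extension to signed targets:

```javascript
// Narrow to 16 bits unsigned: keep the low 16 bits.
const toUInt16 = (x) => x & 0xFFFF;
// Narrow to 16 bits signed: keep the low 16 bits, then sign-extend.
const toInt16 = (x) => (x << 16) >> 16;
// Narrow to 8 bits unsigned.
const toUInt8 = (x) => x & 0xFF;

console.log(toUInt16(-1));   // 65535, i.e. 2^16 - 1, as .NET gives
console.log(toInt16(65535)); // -1, the round trip preserves the bit pattern
console.log(toUInt8(-1));    // 255
```

This is the sense in which "-1 is preserved": the bit pattern round-trips even though the printed value changes with signedness.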
Conversions from char to int32 or uint32 go wrong in a way that is not easy to understand:
Any conversion between int64 or uint64 and some other integer type is not JS standard, since the 64-bit types are custom, so we should do these like .NET, I think? char does not seem to exist in JS. Characters can be converted to Unicode codes, which I think are normally 16-bit unsigned, so the F# conversions from char should result in positive Number values. Maybe -1 converted to char becomes an all-ones bit pattern, which gets converted to Number as +/- 2^N-1 rather than -1 as it should be? What I don't understand in Fable is: when are int32/uint32 held as Number, and when are they held as 32 bits (as they will be converted after any bitwise op)?
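A small sketch of that point: JS strings expose UTF-16 code units, which are always in the range 0..65535, so masking before the conversion reproduces the .NET behaviour (the mask here is my illustration, not Fable's actual code):

```javascript
// int-of-char: charCodeAt returns a UTF-16 code unit, always 0..65535.
const code = "é".charCodeAt(0);
console.log(code); // 233, never negative

// char-of-int for -1: mask to 16 bits first, as .NET's two's-complement
// narrowing effectively does, giving the all-ones code unit 0xFFFF.
const ch = String.fromCharCode(-1 & 0xFFFF);
console.log(ch.charCodeAt(0)); // 65535
```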
Back to @alfonsogarciacaro's proposal re %x. I agree that it makes sense and is understandable if: int32/uint32 is done via the native JS conversion, and int64/uint64 is done as in the already-written Long code.
That is very simple, and also would make it easier to work out what is going on with all the other conversions. :) |
I'm beginning to work on some tests for %x and will open a PR soon. For the Docs around the numeric conversions @tomcl is talking about, would it make sense to open a separate issue? |
Good idea, done that: #1532
Just a quick update to let you know that ...
@matthid We already tried to do that in most cases though it's tricky to find a right balance (e.g. it may not make sense to issue a warning every time you use a decimal to say it's being compiled to JS number so it may lose precision in some situations, although we do that when you explicitly convert a float into decimal). We're open to PRs for the places you think this can be improved :) |
I made some progress last night, but noticed some other issues with hex formatting.
@xdaDaveShaw Please check this comment for an example of how to make formatting work.
I saw that and kept it in mind; however, I had yet to write a failing test that needed it. I'll submit the PR tonight and we can see where it's at.
Let's close this, as it should be fixed by @xdaDaveShaw's PR #1535. Let's deal with the conversion issues in #1532. Thanks a lot you all for your help!
Description
printf "%x" sometimes prints 32-bit ints wrong
Repro code
Expected and actual results
Expected: %x is 0x80000000 for both signed and unsigned.
Actual: %x prints as ff-7f000000 for both signed and unsigned (wrong). %d is -2147483648 for both signed and unsigned (correct).
EDIT: %d for unsigned should be 2147483648 according to the spec, so this is also unexpected!
Related information
dotnet fable --version: 2.00-beta-001
NB: 2.00 is significantly better than 1.37 on this test, since the decimal printed values are correct now, and the signed and unsigned hex values are the same. However, they are still not quite right!