
Implement proper print_shortest for Float16 #5959

Closed
ivarne opened this issue Feb 25, 2014 · 6 comments · Fixed by #7291
Assignees
quinnj
Labels
status:help wanted

Comments

ivarne (Member) commented Feb 25, 2014

After #5948, Float16 prints 5 significant digits to ensure that every printed Float16 value parses back to the same value. This is not in line with how we print Float32 and Float64, nor with the documentation for print_shortest, and I think this would make a nice "up-for-grabs" issue.

julia> float16(0.1)
float16(0.099976)

julia> float32(0.1)
0.1f0

julia> float64(0.1)
0.1

There might be a reason to explicitly show the horrible precision for Float16 by default, but print_shortest should give the documented answer.
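
For reference, here is a minimal check of the round-trip property described above, written in current Julia syntax (string and parse stand in for whichever printing routine is ultimately used):

# Every finite Float16 must survive print -> parse unchanged,
# ideally using no more digits than that requires.
for bits in 0x0000:0x7bff              # all finite non-negative Float16 bit patterns
    x = reinterpret(Float16, bits)
    s = string(x)
    @assert parse(Float16, s) == x     # shortest output must still round-trip
end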

StefanKarpinski (Member) commented:

This is not so simple: the shortest number of digits needed to reconstruct a floating-point value depends not only on the value but also on the type it will be reconstructed as. We could easily print the shortest number of digits required to reproduce the same value as a Float32, but that might well be more than five digits, even though you never need more than five digits to reconstruct a particular Float16. There are only two reasonable solutions that I can think of:

  1. patch the double-conversion library, adding support for 16-bit floats.
  2. implement the Grisu algorithm in pure Julia, including support for 16-bit floats.

I kind of favor the latter as a project. It would be nice to have support for optimal printing in arbitrary bases and arbitrary precisions.
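
To make the type-dependence concrete, here is an editorial example in current Julia syntax (the specific value is just an illustration): Float16(0.1) is exactly 0.0999755859375, and how many digits are needed to recover it depends on the target type.

x16 = Float16(0.1)            # exactly 0.0999755859375
x32 = Float32(x16)            # the identical value, stored as a Float32

parse(Float16, "0.1") == x16          # true:  "0.1" is enough for the Float16
parse(Float32, "0.1") == x32          # false: it rounds to a different Float32
parse(Float32, "0.099975586") == x32  # true:  eight significant digits are needed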

quinnj self-assigned this Jun 9, 2014
quinnj (Member) commented Jun 9, 2014

I'm going to assign myself here. I've meandered through the double-conversion library before and even started a port at one point. I think it'd be great to have a more Julian implementation of the algorithm.

quinnj (Member) commented Jun 9, 2014

Just curious, are there any functions like float(significand, exponent) --> Float64? I can't seem to find anything, and I'd rather avoid rolling my own if something already exists.

JeffBezanson (Member) commented:

ldexp might work for you.
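
For reference, ldexp(x, n) computes x * 2^n, and frexp goes the other way, so a (significand, exponent) pair can be assembled or taken apart directly:

julia> ldexp(1.599609375, -4)      # significand * 2^exponent
0.0999755859375

julia> frexp(0.0999755859375)      # and back again
(0.7998046875, -3)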

quinnj (Member) commented Jun 14, 2014

Alright, I've made some progress on this. Feel free to check out https://github.com/quinnj/Grisu.jl. It's currently passing the couple hundred thousand tests for the individual fastfixeddtoa, fastdtoa, and bignumdtoa functions, though I still need to do the more general dtoa tests (the counterpart of the Base.Grisu.grisu function). I've hardly benchmarked at all, but a quick look shows it's about 2x the runtime of the C++ implementation, so I'm feeling pretty good about that, having not even tried to optimize yet! It's also a nice code reduction, going from ~6K lines of code to about ~1K.
I haven't added Float16 support yet, though it would be pretty trivial (2-3 methods), as everything is stored in a hand-rolled internal Float type. It also shouldn't be too bad to support a Float80 or Float128 type in the future, though I may have to tweak the internal Float type to handle the greater precision (currently I think it maxes out at Float80).
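
As a rough sketch of what the Float16-specific piece involves (this is not Grisu.jl's actual code; decompose16 is a hypothetical helper, and the constants are just the IEEE 754 half-precision layout: 10 fraction bits, 5 exponent bits, bias 15):

function decompose16(x::Float16)           # sign bit ignored for brevity
    bits = reinterpret(UInt16, x)
    frac = UInt64(bits & 0x03ff)           # 10 stored significand bits
    expo = Int((bits >> 10) & 0x1f)        # 5 exponent bits, bias 15
    if expo == 0                           # zero / subnormal: no implicit bit
        return (frac, -24)
    else                                   # normal: restore the implicit leading bit
        return (frac | 0x0400, expo - 15 - 10)
    end
end

# decompose16(Float16(0.1)) gives significand 0x666 and exponent -14,
# and ldexp(Float64(0x666), -14) == 0.0999755859375 == Float64(Float16(0.1))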

StefanKarpinski (Member) commented:

Sweet! That sounds great. I'll have to check it out tomorrow.

quinnj mentioned this issue Jun 17, 2014