Double serialization is inefficient #35
Comments
Originally, there was a reason that we decided to use […]

@acfoltzer: as the original author of […]

I think we wanted to be conservative about what a […]
I have a load of somewhat substantial changes to merge in that will likely require a major version bump. I say we use that as an opportunity to switch to using the IEEE754 module for […]
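A minimal sketch of what the raw-binary alternative looks like, assuming GHC's `castDoubleToWord64` (in `GHC.Float` since base-4.10); the helper name `doubleToBytesBE` is hypothetical and not part of cereal:

```haskell
import Data.Bits (shiftR)
import Data.Word (Word8)
import GHC.Float (castDoubleToWord64)

-- Encode a Double as its 8-byte big-endian IEEE 754 bit pattern,
-- rather than the ~25-byte (Integer, Int) pair from decodeFloat.
doubleToBytesBE :: Double -> [Word8]
doubleToBytesBE d =
  let w = castDoubleToWord64 d
  in [ fromIntegral (w `shiftR` s) | s <- [56, 48 .. 0] ]

main :: IO ()
main = print (length (doubleToBytesBE 1.0))  -- prints 8
```

Reinterpreting the bits is lossless (NaN payloads included), which is the main reason a bit-cast is safe where a numeric conversion would not be.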
👍
Second Adam's approval.

On Tue, Jun 30, 2015 at 10:28 AM, Adam C. Foltzer notifications@github.com […]
Sorry this took so long, but I've finally pushed this change. I'm going to start merging in some larger changes before doing a major release.
Currently, it takes 25 bytes to store a 64-bit `Double`. The current behavior is to use `GHC.Float.decodeFloat`, which decomposes a typical 64-bit `Double` into an `Integer` (typically, 17 bytes at the relevant size) and an `Int` (8 bytes) before serializing them. This leads to a 3.125x increase in size if you're storing, say, a large list or array of `Double`s. For a 32-bit `Float`, the footprint is 13 bytes, for a 3.25x increase.

I'm not aware of the history behind the decisions that were made. Is there a reason why `Double` and `Float` are stored (when the `Serialize` instance for them is used) as an `(Integer, Int)` pair rather than as raw binary? Is there a safety-related, corner-case reason for not making the more efficient alternative the default?
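For reference, a small sketch of the decomposition described above, using only the standard `decodeFloat`/`encodeFloat` methods of the `RealFloat` class (nothing here is cereal-specific):

```haskell
main :: IO ()
main = do
  -- decodeFloat splits a Double into a significand (Integer, ~53 bits,
  -- so an arbitrary-precision value on the wire) and an exponent (Int).
  let (m, e) = decodeFloat (pi :: Double)
  print (m, e)
  -- encodeFloat reverses the split, so the pair round-trips exactly;
  -- the cost is size, not precision.
  print (encodeFloat m e == (pi :: Double))  -- prints True
```

The round-trip shows why the `(Integer, Int)` encoding is correct but pays for an arbitrary-precision `Integer` where 8 fixed bytes would do.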