Yes, it's due to precision loss when converting from double to float. But it's actually worse than that: there can be not only precision loss, but also an annoying "precision gain" when serializing floats to strings and deserializing them back. That is, we don't want to save a font of size 9.1 and restore it as size 9.0999993 or something like that, as it will look ugly if the program shows the font size anywhere in the UI.
The sad thing is that I'm perfectly aware of this problem, but somehow failed to think about it when writing this code. The solution is to use scaling and serialize integers only, i.e. save 9.1 as 9100 (because nobody should need more than 3 digits of font size precision, right?). Unfortunately this means changing the version again and I'd like to avoid it, so I'm thinking about saving 9.1 as "9.100", i.e. inserting the dot artificially into what is basically int(size*1000). This should make things work correctly without the version change, AFAICS, so now I just need to find the time to do it...
Maybe this can be solved by enforcing a specific number of significant digits with "%.3f" or similar? The comparison could then be done on the two (converted) strings using the same format instead of on the two numbers, which would solve the precision issue. This method is probably used infrequently enough not to create performance issues, but I'm not sure what else may be affected.
Finally, I think that perhaps we don't need to do anything complicated here. The max difference between the double corresponding to the string representation of a float and the original float is of the order of 1e-38, i.e. not really significant. So I think the simple approach of this PR should be enough.
Does anybody have any problems with doing it like this?
Allow parsing all fractional sizes in wxFont descriptions
Remove the check that the size representation was the same as float and
as double, which was supposed to catch various edge cases (NaNs, huge
numbers etc) but actually caught plenty of perfectly valid font sizes
such as 13.8 that simply lost precision when converting from double to
float.

Just check that the size is positive and less than FLT_MAX to avoid
using values that really don't make sense as font sizes.
Also add a unit test checking that using fractional font sizes in a
description string works as expected.