Bug report
Bug description:
According to the docs for decimal.Decimal.canonical() in 3.11.11 (and up):
Return the canonical encoding of the argument. Currently, the encoding of a Decimal instance is always canonical, so this operation returns its argument unchanged.
However, this doesn't seem to be the case:
import decimal

for example in ["10",
                "10.2",
                "-3.2e2",   # missing the +
                "3.2e1",    # possibly redundant e-notation
                "3.2e2",    #
                "-9.2e-1",  # -ve exponent
                "-9.2e-6",  # -ve exponent
                "-9.2e-7",  # -ve exponent
                ]:
    print("{:>10} : {:>10}".format(example, decimal.Decimal(example).canonical()))
# 10 : 10
# 10.2 : 10.2
# -3.2e2 : -3.2E+2 # understandable token change to E+
# 3.2e1 : 32 # unexpected
# 3.2e2 : 3.2E+2 # as per argument, but what's special about e+2 and up?
# -9.2e-1 : -0.92 # really unexpected
# -9.2e-6 : -0.0000092 # really unexpected
# -9.2e-7 : -9.2E-7 # unsure why -7 and greater flips behaviour
The token change sort of makes sense, but the other cases seem somewhat at odds with the docstring.
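For comparison, here is a minimal check (same inputs as above) which suggests the E-notation comes from str()/format() on the Decimal rather than from canonical() itself: canonical() compares equal to its argument, and str() alone produces the same strings.

import decimal

# str() on the Decimal gives the same text as the formatted canonical() call
# above, and canonical() compares equal to the original value, so the notation
# change appears to happen in the string conversion, not in canonical().
for example in ["3.2e1", "3.2e2", "-9.2e-1", "-9.2e-7"]:
    d = decimal.Decimal(example)
    print(example, str(d), str(d.canonical()), d == d.canonical())
# 3.2e1 32 32 True
# 3.2e2 3.2E+2 3.2E+2 True
# -9.2e-1 -0.92 -0.92 True
# -9.2e-7 -9.2E-7 -9.2E-7 True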
Context
I'm doing some work on a third-party JSON dump where most of the numbers are in scientific notation. I need to read in the file, modify some non-numerical strings, and then dump everything back out. I know the values are numerically equivalent, but downstream processes are doing checks to ensure that the only changes in the JSON are literally the expected changes, which are text-string-only modifications.
I was writing up checks and was wondering why my re-encoder was throwing a fit, and saw this in my test suite.
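A minimal sketch of the kind of mismatch those checks flag (the payload and key names here are made up for illustration): once a number is parsed into a Decimal and re-emitted as text, the original scientific notation is lost even though the value is unchanged.

import decimal
import json

# Hypothetical payload; the real dump and its keys differ.
raw = '{"id": "abc", "value": 3.2e1}'

data = json.loads(raw, parse_float=decimal.Decimal)
print(str(data["value"]))                         # 32  (not "3.2e1")
print(data["value"] == decimal.Decimal("3.2e1"))  # True: numerically identical

The downstream diff then sees 32 where the original file had 3.2e1, even though nothing about the value changed.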
CPython versions tested on:
3.11
Operating systems tested on:
Windows