bpo-26256: Document algorithm speed for the Decimal module. #4808

skrah merged 4 commits into python:master
Conversation
First revision of the FAQ entry:

> Q. Is the CPython implementation fast for large numbers?
> A. Yes. A change was added to the Cpython specific implementation that uses

Second revision:

> Q. Is the CPython implementation fast for large numbers?
> A. Yes. In the CPython and PyPy3 implementations, the C version of
Thanks, that looks great! Small nitpick: C/CFFI versions. PyPy uses CFFI, I think that is a little clearer.
Thanks, Stefan! I've added that change.
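The FAQ claim under review can be sketched as follows. This is an editorial illustration, not code from the PR: with the context set for unrounded calculations (the pattern the entry describes, using `decimal.MAX_PREC`), exact integer arithmetic on `Decimal` values stays exact at arbitrary size.

```python
import decimal

# Context set for unrounded calculations, as the FAQ entry suggests.
ctx = decimal.Context(prec=decimal.MAX_PREC,
                      Emax=decimal.MAX_EMAX,
                      Emin=decimal.MIN_EMIN)

a = ctx.power(decimal.Decimal(3), 10000)   # a 4772-digit exact integer
b = ctx.multiply(a, a)                     # no rounding occurs at MAX_PREC

assert int(b) == 3 ** 20000
```

Context methods accept plain Python integers wherever a `Decimal` is accepted, which keeps the sketch short.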
@skrah: Please replace

Ok, thanks!

GH-11736 is a backport of this pull request to the 3.7 branch.

I'm sorry, but I don't understand the added FAQ entry. Does it mean that decimal is slow by default unless I enable unlimited precision? On my Fedora 29, this script displays "Peak memory usage: 864.0 kB". But if I replace "if 0:" with "if 1:", the script fails immediately with:
On Tue, Feb 12, 2019 at 06:17:15AM -0800, Victor Stinner wrote:
> Does it mean that decimal is slow by default unless I enable **unlimited** precision?

No, but I can see why you got that impression from the new FAQ entry. I hoped
people would infer the intended meaning.

"to realize this performance gain" here means: if you don't set the context for
unrounded precision, you'll simply get a result that's rounded to, say,
28 digits instead of the desired unrounded integer.

It could be replaced by "in order to get an unrounded exact integer result".
> But if I replace "if 0:" with "if 1:", the script fails immediately with:

```
Traceback (most recent call last):
  File "dec.py", line 9, in <module>
    num = decimal.Decimal(1) / decimal.Decimal(7)
MemoryError
```
Yes, because you don't have enough memory for decimal.MAX_PREC digits.
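The MemoryError can be demonstrated at a safe scale. A minimal editorial sketch (not from the thread): 1/7 is a non-terminating decimal, so the quotient fills however many digits the context allows, and `decimal.MAX_PREC` allows an enormous number of them.

```python
import decimal

# At a modest precision, 1/7 uses every digit the context permits.
ctx = decimal.Context(prec=1000)
q = ctx.divide(decimal.Decimal(1), decimal.Decimal(7))

assert len(q.as_tuple().digits) == 1000   # all 1000 digits are used
assert ctx.flags[decimal.Inexact]         # and the result is still rounded

# With ctx.prec = decimal.MAX_PREC (999999999999999999 on 64-bit builds),
# the same division would try to produce ~10**18 digits -> MemoryError.
```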
I understand: if you don't use "ctx.prec = decimal.MAX_PREC", decimal will be very slow.

I'm confused. The FAQ entry is about performance. I don't understand the relationship between performance and correctness.
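The distinction being argued over can be shown concretely. An editorial sketch, not from the thread: at the default precision decimal is not slow, it just rounds an exact integer result that is wider than the precision; raising the precision (with `MAX_PREC` as the extreme) restores exactness.

```python
import decimal

# At the default 28-digit precision, a 31-digit exact integer result
# gets rounded, silently.
ctx = decimal.Context(prec=28)
x = decimal.Decimal(10 ** 30 + 1)
rounded = ctx.multiply(x, 3)
assert rounded != 3 * (10 ** 30 + 1)      # the trailing "3" was rounded away

# Raising the precision keeps the result exact.
ctx.prec = 50
exact = ctx.multiply(x, 3)
assert exact == 3 * (10 ** 30 + 1)
```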
On Tue, Feb 12, 2019 at 06:48:57AM -0800, Victor Stinner wrote:
> > However, to realize this performance gain, the context needs to be set for unrounded calculations.
>
> I understand: if you don't use "ctx.prec = decimal.MAX_PREC", decimal will be very slow.
>
> > It could be replaced by "in order to get an unrounded exact integer result".
>
> I'm confused. The FAQ entry is about performance. I don't understand the relationship between performance and correctness.

Then why did you approve the original entry? This is just wasting time.
Sorry, I don't recall the history of this PR, which was created more than a year ago. It seems the PR evolved after I approved it, and I explicitly asked you to review it since I don't know the decimal internals well. I approved the idea of documenting decimal performance, as discussed in https://bugs.python.org/issue26256

I don't think it's useful to be rude here.

Maybe the FAQ entry can be rephrased to avoid confusion. But since I still don't understand the subtle details very well, I cannot suggest a change.