import MPFR_PREC_MAX from mpfr.h instead of hard-coding it to the 32-bit limit #2567
Comments
comment:1
As far as I can tell, there is still a limit, namely the maximum precision MPFR_PREC_MAX, which is 2^24 on my 32-bit machine. I guess this means that we can relax the current 2^23 limit a little bit. However, the MPFR manual says: "Warning! MPFR needs to increase the precision internally, in order to provide accurate results (and in particular, correct rounding). Do not attempt to set the precision to any value near MPFR_PREC_MAX, otherwise MPFR will abort due to an assertion failure."
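A defensive bound in the spirit of that warning can be sketched as follows (a minimal sketch; the constant and margin here are illustrative values taken from this discussion, not read from mpfr.h, and `checked_precision` is a hypothetical helper, not Sage's actual API):

```python
# Hypothetical guard: refuse precisions at or near MPFR_PREC_MAX, since MPFR
# needs internal headroom and would abort on an assertion failure otherwise.
MPFR_PREC_MAX = 2**31 - 1  # illustrative 32-bit value; the real fix reads mpfr.h
SAFETY_MARGIN = 256        # headroom, mirroring what newer MPFR reserves itself

def checked_precision(prec):
    """Return prec if it lies safely inside MPFR's supported range."""
    if not 2 <= prec <= MPFR_PREC_MAX - SAFETY_MARGIN:
        raise ValueError(f"precision {prec} outside safe range")
    return prec
```

For example, `checked_precision(53)` passes through unchanged, while asking for `2**31 - 1` raises an error instead of triggering an abort deep inside MPFR.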
comment:2
You could use the pi function in mpmath; as far as I know, it is limited only by available memory. I just verified that computing 100 million digits works on a 32-bit system. The last time someone compared, it was also about three times faster than MPFR (but probably less memory efficient). Or you could perhaps use this code, which is even faster: http://gmplib.org/pi-with-gmp.html
comment:3
The problem is not so much the computation of Pi itself, but that we limit the maximum precision one can ask for when using MPFR. The limit is right for 32 bits, but on 64 bits MPFR is basically limited only by memory. The solution to this ticket might be to import MPFR_PREC_MAX from mpfr.h (or wherever it is defined), and then the problem will magically go away. Cheers, Michael
comment:4
Right. It is defined there as 2^31-1 on a 32-bit machine and 2^63-1 on a 64-bit machine.
comment:5
I am curious. Can you give real timings? Here is what I get on sage.math:
This is without using the new FFT code we designed with Gaudry and Kruppa, which should give a
comment:6
In mpmath on an Athlon 3700+ 2.21 GHz, 1 GB RAM:

10**6 digits took 5.96 seconds (4.77 calc, 1.19 convert)
10**7 digits took 109.45 seconds (82.16 calc, 27.28 convert)
10**8 digits took 2184.68 seconds (1634.65 calc, 550.02 convert)

I can't compare with MPFR on the same computer at the moment, due to network problems. (With an old version of Sage, 3.0.2, %time str(pi.n(10**6*log(10.,2))) takes 43.06 s, but I don't trust that number.) This is the result reported by Ondrej a few months ago:
Mpmath relies directly on multiplication of GMP mpz's. If it is faster than MPFR, that is entirely due to using a better formula. Before using the Chudnovsky series, mpmath used the AGM, which has better theoretical complexity but was 3x slower up to at least 1M digits.
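The Chudnovsky approach discussed here can be sketched with integer binary splitting on top of Python's arbitrary-precision integers (a standalone sketch of the technique, not mpmath's actual code):

```python
from math import isqrt

def chudnovsky_pi(digits):
    """Return pi * 10**digits, via binary splitting of the Chudnovsky
    series; each term contributes about 14.18 decimal digits."""
    C3_OVER_24 = 640320**3 // 24

    def bs(a, b):
        # Returns (P, Q, T) for terms a..b-1 of the series.
        if b - a == 1:
            if a == 0:
                Pab = Qab = 1
            else:
                Pab = (6*a - 5) * (2*a - 1) * (6*a - 1)
                Qab = a * a * a * C3_OVER_24
            Tab = Pab * (13591409 + 545140134 * a)
            if a & 1:
                Tab = -Tab
            return Pab, Qab, Tab
        m = (a + b) // 2
        Pam, Qam, Tam = bs(a, m)
        Pmb, Qmb, Tmb = bs(m, b)
        return Pam * Pmb, Qam * Qmb, Qmb * Tam + Pam * Tmb

    n_terms = digits // 14 + 2
    _, Q, T = bs(0, n_terms)
    sqrt_c = isqrt(10005 * 10**(2 * digits))   # floor(sqrt(10005) * 10**digits)
    return (426880 * sqrt_c * Q) // T
```

For example, `chudnovsky_pi(10)` returns `31415926535`. All the heavy lifting is big-integer multiplication, which is why a fast mpz multiply (and, as noted above, a good formula) dominates the timings.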
comment:7
This discussion is very interesting, but it should not happen in trac but on a mailing list, where people actually tend to see it. Trac's audience is rather limited, and the comment section isn't meant for discussions :) I have changed the ticket to reflect Paul's pointer about pulling in MPFR_PREC_MAX from mpfr.h. Cheers, Michael
comment:8
Yes, in a previous version MPFR did use the Chudnovsky series, but it only gives a fixed number of digits per term, whereas the current AGM-based code doubles the accuracy at each iteration and is thus asymptotically better. Also, when the division in GMP is really O(M(n)), the current MPFR code should be much faster. However, we should use the Chudnovsky series at small precision and the AGM at large precision.
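The accuracy-doubling AGM scheme described here corresponds to the Gauss-Legendre iteration, which can be sketched with Python's decimal module (a toy sketch of the iteration, not MPFR's implementation):

```python
from decimal import Decimal, getcontext
from math import ceil, log2

def agm_pi(digits):
    """Approximate pi with the Gauss-Legendre (AGM) iteration; the number
    of correct digits roughly doubles each round, so O(log digits) rounds
    suffice -- unlike a series adding a fixed number of digits per term."""
    getcontext().prec = digits + 10          # guard digits for rounding noise
    a = Decimal(1)
    b = 1 / Decimal(2).sqrt()
    t = Decimal("0.25")
    p = 1
    for _ in range(ceil(log2(digits)) + 2):  # quadratic convergence
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2
        a = a_next
        p *= 2
    return (a + b) ** 2 / (4 * t)
```

The trade-off mentioned above is visible here: each round costs a full-precision square root, which is why a series like Chudnovsky's can still win at small precision despite its worse asymptotics.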
comment:9
I guess this ticket is now obsolete.
Author: Mike Hansen
comment:10
I've attached a patch which removes our hard-coded value.
Reviewer: Paul Zimmermann |
comment:12
Oops, I knew that I had seen that ticket title a few days ago, but wasn't able to find it again :-) I'll rebase this one. The warning in the comment seems not to apply now.
comment:13
Mike,
which warning do you mean?
comment:14
Sorry, I knew after posting that I should have been more specific. There's a comment in the source code that said that things totally break if we use the value from mpfr.h.
comment:15
I've rebased this and posted a new patch. This needs to be tested on a 32-bit machine.
comment:16
While trying to review this ticket, with the input in the description I get:
Is that normal? Paul |
comment:33
Paul, does MPFR_PREC_MAX now reflect the
comment:34
In 3.1.x, MPFR_PREC_MAX is 2^63-1 on 64-bit computers. In the development version of MPFR it is set to 2^31-257 in 32-bit mode and 2^63-257 in 64-bit mode. Paul
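Both generations of values follow from the width of the signed precision type, minus the safety margin newer MPFR reserves (a small arithmetic check; `mpfr_prec_max` is a hypothetical helper reproducing the figures above, not an MPFR function):

```python
def mpfr_prec_max(word_bits, margin=0):
    # Maximum of a signed word_bits-wide type, minus an optional safety margin.
    return 2**(word_bits - 1) - 1 - margin

assert mpfr_prec_max(32) == 2**31 - 1                # MPFR 3.1.x, 32-bit
assert mpfr_prec_max(64) == 2**63 - 1                # MPFR 3.1.x, 64-bit
assert mpfr_prec_max(32, margin=256) == 2**31 - 257  # development MPFR
assert mpfr_prec_max(64, margin=256) == 2**63 - 257
```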
comment:35
Any progress on this ticket? Paul
Branch: u/chapoton/2567
Commit: |
New commits:
|
comment:42
Well, I made a branch and this seems to work. Anybody interested?
Changed reviewer from Paul Zimmermann to Paul Zimmermann, François Bissey |
comment:43
Well, it looks good to me. I cannot see clear objections in the earlier ticket comments, so let's see what happens when we include it for real. The author field may need updating, though.
comment:44
Voilà, voilà.
Changed author from Mike Hansen to Mike Hansen, Frédéric Chapoton |
comment:45
Smooth as a letter through the post :)
Changed branch from u/chapoton/2567 to |
The discussion below is beside the point. The main issue is that we define MPFR_PREC_MAX in Sage's sources instead of pulling it in from mpfr.h. We hard-code the 32-bit value, so on 64-bit boxes we limit the user to a much lower precision than is actually technically feasible, as pointed out below.
Cheers,
Michael
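The point can be illustrated by deriving the bound from the platform word size rather than freezing the 32-bit constant (a sketch only; Sage's actual fix pulls MPFR_PREC_MAX from mpfr.h at build time, and the names here are hypothetical):

```python
import struct

WORD_BITS = struct.calcsize("P") * 8   # pointer width: 32 or 64 bits

HARDCODED_PREC_MAX = 2**31 - 1               # the frozen 32-bit limit
PLATFORM_PREC_MAX = 2**(WORD_BITS - 1) - 1   # what this platform supports

# On a 64-bit box the hard-coded constant undershoots the platform
# limit by a factor of 2**32, which is exactly the complaint above.
```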
CC: @zimmermann6 @kiwifb
Component: basic arithmetic
Author: Mike Hansen, Frédéric Chapoton
Branch/Commit:
e49e83e
Reviewer: Paul Zimmermann, François Bissey
Issue created by migration from https://trac.sagemath.org/ticket/2567