Helper functions for mpmath #33

Closed
GoogleCodeExporter opened this issue Mar 15, 2015 · 23 comments

Comments


I would like the following functions for scaled-integer (i.e. floating-point implementation) operations in gmpy:

man, exp = trim(man, exp, prec=0, rounding='f')
man, exp = add(xman, xexp, yman, yexp, prec=0, rounding='f')
man, exp = div(xman, xexp, yman, yexp, prec, rounding='f')
man, exp = sqrt(man, exp, prec, rounding='f')

These are essentially the same as the internal mpmath functions normalize
(or from_man_exp), mpf_add, mpf_div, mpf_sqrt. However, they don't require
signs as separate components, they don't require handling of infinities,
and they don't require data to be packaged in a specific way (numbers are
passed as direct arguments).

As such I think these functions would be useful even for non-mpmath uses
and could be exposed publicly. Particularly so for trim (which can be used
to round an integer to a given number of bits, or factor an integer into
its odd and power-of-two parts); and add can be used to compute x+y*2^n
without creating a temporary object.
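For illustration, the exact (prec=0) and rounded behavior of the proposed trim can be sketched in pure Python. This is a hypothetical reimplementation, not gmpy's code, and only the 'f' (floor) rounding mode is shown:

```python
def trim(man, exp, prec=0, rounding='f'):
    """Sketch of the proposed trim(): normalize man * 2**exp.

    prec=0 means exact: factor the mantissa into its odd part and
    power-of-two part.  prec > 0 rounds the mantissa to at most
    prec bits; only 'f' (floor) rounding is sketched here.
    """
    if man == 0:
        return 0, 0
    # Move trailing zero bits of the mantissa into the exponent.
    shift = (man & -man).bit_length() - 1
    man >>= shift
    exp += shift
    if prec and man.bit_length() > prec:
        cut = man.bit_length() - prec
        man >>= cut   # Python's >> floors, matching rounding 'f'
        exp += cut
    return man, exp
```

With prec=0 this doubles as the "factor an integer into its odd and power-of-two parts" use mentioned above: trim(40, 0) gives (5, 3), i.e. 40 = 5 * 2**3.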

These functions, with the signatures written as above, would be a little
less efficient for mpmath than the functions discussed in issue 22. They
would, however, be a much cleaner fit for gmpy (less dependent on mpmath
specifics). Further (and the reason I'm suggesting this), I'm experimenting
with a different representation in mpmath that should make the functions
with signatures exactly as above just as efficient, and possibly make
mpmath with the gmpy backend even a little bit faster.

Original issue reported on code.google.com by fredrik....@gmail.com on 28 Aug 2009 at 12:20


I will work on adding these functions. Just to confirm the behavior, add(xman, xexp, yman, yexp, prec, rounding) should return (xman*2^xexp + yman*2^yexp) rounded to prec bits with the specified rounding mode.
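Under that reading, add reduces to an exact aligned integer sum followed by rounding. A minimal sketch with floor ('f') rounding only; hypothetical code, not the eventual gmpy implementation:

```python
def add(xman, xexp, yman, yexp, prec=0, rounding='f'):
    """Sketch: (xman*2**xexp + yman*2**yexp) rounded to prec bits.

    prec=0 means exact; only 'f' (floor) rounding is shown.
    """
    # Align both mantissas to the smaller exponent and add exactly.
    offset = xexp - yexp
    if offset >= 0:
        man, exp = (xman << offset) + yman, yexp
    else:
        man, exp = xman + (yman << -offset), xexp
    if prec and man.bit_length() > prec:
        cut = man.bit_length() - prec
        man >>= cut   # floor rounding via arithmetic shift
        exp += cut
    return man, exp
```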

Should I also include mult?

Original comment by casevh on 31 Aug 2009 at 10:56

  • Changed state: Accepted
  • Added labels: Type-Enhancement
  • Removed labels: Type-Defect


Correct. I think mult is unnecessary because it can be written trim(xman*yman, xexp+yexp) without loss of either speed or clarity (but benchmarking could prove me wrong).
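The claimed equivalence is just exact normalization of the integer product; a quick self-contained sketch (hypothetical helper name):

```python
def mult_via_trim(xman, xexp, yman, yexp):
    """Equivalent to trim(xman*yman, xexp+yexp) with prec=0:
    take the exact product, then strip trailing zero bits
    from the mantissa into the exponent."""
    man, exp = xman * yman, xexp + yexp
    if man == 0:
        return 0, 0
    shift = (man & -man).bit_length() - 1
    return man >> shift, exp + shift
```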

Original comment by fredrik....@gmail.com on 1 Sep 2009 at 4:22


Can I make any assumptions about the maximum size of an exponent or should I allow them to be unbounded?

Should I raise a ZeroDivisionError when dividing by 0?

Should I raise a ValueError when taking the square root of a negative number?

Original comment by casevh on 2 Sep 2009 at 4:50


Should I require a value for prec?

Original comment by casevh on 2 Sep 2009 at 4:53


> Can I make any assumptions about the maximum size of an exponent or should I allow them to be unbounded?

Mpmath needs them to be unbounded (Python int and long).

> Should I raise a ZeroDivisionError when dividing by 0?
> Should I raise a ValueError when taking the square root of a negative number?

Yes and yes.

> Should I require a value for prec?

Mpmath requires that prec=0 be supported and perform an exact operation for trim and add. (The way to specify this could be modified.)

Original comment by fredrik....@gmail.com on 2 Sep 2009 at 7:44


I have a few more questions.

Can prec be assumed to be a C long? (That constraint is present in the existing _mpmath_normalize.)

Is there a default rounding mode that is most frequently used?

Does the requested prec value change frequently?

I'm trying different approaches to parsing the arguments in an attempt to minimize calling overhead. If I can decrease the number of objects that are passed, I can decrease the overhead. BTW, the overhead for calling the prototype for trim (just parsing and returning values) with Python 3.1 is significantly less than for Python 2.6: ~200 uSec vs. ~250 uSec.


Original comment by casevh on 4 Sep 2009 at 7:28


> Can prec be assumed to be a C long?

Yes, definitely.

> Is there a default rounding mode that is most frequently used?

I wrote 'f' above but I think it should probably be 'd'. While 'n' is probably the most common, it will always be passed explicitly, at least the way the code is written right now. This might be possible to change.


Original comment by fredrik....@gmail.com on 4 Sep 2009 at 7:45


> Does the requested prec value change frequently?

Yes; even when the user-set working precision is fixed, many functions change the temporary precision by all kinds of amounts.

Original comment by fredrik....@gmail.com on 4 Sep 2009 at 7:47


I've added _mpmath_trim and _mpmath_mult to svn. Using _mpmath_mult(...) is faster than _mpmath_trim(xman*yman, xexp+yexp), so I'll leave it in. I'm working on _mpmath_add next.


Original comment by casevh on 6 Sep 2009 at 5:05


Nice, thanks! The trim function recovers the performance lost to having to wrap _mpmath_normalize. There is a 'fprintf(stderr, "zero\n");' left in there, though, which I had to comment out.

I don't think _mpmath_mult will have any effect. In the route through mpf.__mul__, the difference is dwarfed by overheads. Besides trim, the helper function for addition is probably the most important one.

Obtaining a much more substantial speedup will involve implementing the number type for mpmath entirely in C or Cython. Overall, the changes I'm experimenting with right now are intended to facilitate such a switch by substantially reducing the amount of interfacing between code at different levels of abstraction. In the short term, the only significant speedup (even with the helper functions) will be for complex arithmetic.

Original comment by fredrik....@gmail.com on 6 Sep 2009 at 2:53


The trim function causes a segfault somewhere... I'll try to hunt it down.

Original comment by fredrik....@gmail.com on 6 Sep 2009 at 3:26


Ah, the answer is simple: it returns an mpz as exponent but the code in mpmath
expects it to be int or long.

Original comment by fredrik....@gmail.com on 6 Sep 2009 at 5:53


Although the exponent type should be fixed, the direct cause of the segfault is comparison of an mpz with a Python float infinity:

mpz(3) > 1e1000

Original comment by fredrik....@gmail.com on 6 Sep 2009 at 6:01


Thanks for finding the mpz(3) > 1e1000 bug. I'll get that fixed today.

I'll also convert the exponent back to a Python integer.

After I get gmpy 1.10 released, I want to replace the mpf layer with something better (i.e. correctly rounded, with basic transcendental functions). One option is MPFR, but that introduces additional dependencies and possible licensing issues. I'd prefer to provide better support to mpmath.

Original comment by casevh on 6 Sep 2009 at 6:27


With the upcoming changes in place, it should be feasible for mpmath to directly use a gmpy-based floating-point type, assuming that it implements compatible functionality.

Certainly the biggest source of friction is mpmath's use of precision that is global (or rather, bound to a context object) rather than instance-controlled. What would you think of using something similar in gmpy? One alternative that could work is to bind the precision to the mpf type. Then gmpy.mpf(3.14, 100) would be equivalent to gmpy._create_mpf_type(prec=100)(3.14). If the precision of the type objects were mutable, mpmath could subclass gmpy.mpf and modify the precision of that type in-place when changing precision.
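A pure-Python sketch of the idea. The factory name mirrors the _create_mpf_type spelled out above; everything else here is hypothetical:

```python
def create_mpf_type(prec):
    """Hypothetical factory in the spirit of the proposed
    gmpy._create_mpf_type(prec=...): the precision lives on the
    type object, so a subclass can change it in place."""
    class mpf:
        _prec = prec  # mutable class attribute

        def __init__(self, man, exp=0):
            # Round the mantissa down to the type's *current* precision.
            cut = max(man.bit_length() - type(self)._prec, 0)
            self.man, self.exp = man >> cut, exp + cut
    return mpf
```

A context object changing the working precision would then just assign to mpf._prec once, instead of threading a prec argument through every operation.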

Also, mpmath obviously needs both real and complex numbers, so gmpy mpfs should ideally support complex values natively. Ditto for infinities.

Original comment by fredrik....@gmail.com on 7 Sep 2009 at 10:38


A question on the mpf_add implementation in libmpf.py: what is the purpose of the "offset > 100" portion of the line "if offset > 100 and prec:"? (My best guess is that it keeps values close together in the fastest code path, or am I missing something?)

I'm still thinking about how best to control precision.

Original comment by casevh on 10 Sep 2009 at 6:13


Yes; it assumes that shifting a couple of extra bits is cheaper than comparing the exact sizes. This is not necessarily true in C. (Though maybe comparing limb sizes first can save a little bit of time.)

For reference, I'm attaching the naive Cython implementation I wrote for Sage (not finished). It uses mpz for exponents too, which I think is just as fast or faster in a pure C(ython) implementation. I'm not sure if it's correct; I haven't tested it extensively.
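The trick behind that guard can be shown in a simplified form: when the exponent gap far exceeds the working precision, the smaller operand can only influence the result through the rounding bits, so the shift can be clamped instead of performed in full. A hedged sketch with floor rounding only; the prec+4 guard-bit margin is an assumption of this sketch, not the real mpf_add:

```python
def add_with_cutoff(xman, xexp, yman, yexp, prec):
    """Sketch of the shift-clamping idea in mpf_add (floor rounding)."""
    # Ensure x is the operand with the larger exponent.
    if xexp < yexp:
        xman, xexp, yman, yexp = yman, yexp, xman, xexp
    offset = xexp - yexp
    if prec and offset > prec + 4 and yman:
        # The small term cannot touch the top prec bits directly;
        # replace it with a 1-ulp "sticky" contribution so floor
        # rounding still comes out right, avoiding a huge shift.
        offset = prec + 4
        yman = 1 if yman > 0 else -1
    man = (xman << offset) + yman
    exp = xexp - offset
    cut = max(man.bit_length() - prec, 0)
    return man >> cut, exp + cut
```

Here adding 5*2^1000 and 3 at 8-bit precision shifts by only 12 bits instead of 1000, yet yields the same floored result as the exact computation.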

Original comment by fredrik....@gmail.com on 10 Sep 2009 at 9:39


I've committed _mpmath_add and fixed a bug in _mpmath_trim. I've done limited testing. The functions still return the exponent as an mpz. I will get that fixed.

Original comment by casevh on 16 Sep 2009 at 4:54


I think the versions of _mpmath_add, _mpmath_div, and _mpmath_mult are correct. They agree with libmpf over a wide range of arguments. Can you check if there is a significant performance advantage, especially for _mpmath_add?

I still need to change _mpmath_trim to return a Python integer for the exponent, and then I'll do _mpmath_sqrt.

Original comment by casevh on 12 Oct 2009 at 7:23


Hmm, while there is a performance advantage, I now think it would be better not to bother changing the low-level function interface in mpmath (although we could still use these implementations to speed it up slightly), and instead aim for implementing mpf and mpc base classes in gmpy.

Do you think this sounds realistic? We'd need a context class (basically just holding the precision) and mpf and mpc classes implementing basic arithmetic. They'd need to be subclassable, and then coercions and everything else messy could fall back to Python code. (Though coercions mpf <-> mpc and with int/long and float/complex should be implemented directly in C for speed.)

(Side note: with classes done in C, it's probably better to just use mpz for both mantissa and exponent.)

The mp4 branch in mpmath trunk implements two separate backends -- one based on the proposed new format, and one based on Python floats/complexes. I should fix it so it also supports an old-format backend. That code should provide a basis for implementing a gmpy-based backend.

Unfortunately, I'm a little preoccupied at the moment (mostly due to school)... I'll see when I get time to work more on this. I greatly appreciate your work so far.

BTW, I've been experimenting some with fixed-point implementations of transcendental functions on top of GMP/MPIR, and it seems that it's possible to beat MPFR by a factor of 2-4 for elementary functions at low precisions. There will of course be a little additional overhead when adding rounding and floating-point conversions.

I might set up a separate project for this. If it goes anywhere, gmpy could include it (it'd be vanilla C depending only on GMP/MPIR, and hopefully relatively small) and use it as a base library for floating-point transcendental functions.

Original comment by fredrik....@gmail.com on 12 Oct 2009 at 3:01


I'm still trying to determine what I want to do next (besides getting 1.10 officially released). I've had a couple of requests to support MPFR. I also want to support mpmath. But I don't know how much time I'll have.

My primary goal for 1.20 is to replace the GMP/MPIR mpf layer with a better-behaved floating-point type. These routines are a good start and I'll look at using your new format. I want to maintain compatibility with prior gmpy releases if I can.

If I do support MPFR, I'll probably call it gmpy2 so I can break backwards compatibility. I've even thought of making it Python 3.x only. But I am worried about increasing the dependencies, and I don't know if the license for MPFR will change to LGPL 3 in the future.

I like the idea of your fixed-point library. What would be the maximum practical precision?
Original comment by casevh on 13 Oct 2009 at 5:16


> But I am worried about increasing the dependencies and I don't know if the license for MPFR will change to LGPL 3 in the future.

They already switched to LGPL 3 for the next release, afaik.

> I like the idea of your fixed-point library. What would be the maximum practical precision?

It should work fine for any precision as long as shift offsets fit in ints, like the current mpmath fixed-point code. Though I'd probably concentrate on low-precision optimization first.
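As an illustration of the fixed-point style being discussed, here is a toy exp for x in [0, 1), with every value scaled by 2**prec. This is a hypothetical Python sketch of the technique only; the proposed library would be C on top of GMP/MPIR:

```python
def fixed_exp(x, prec):
    """Toy fixed-point exp: x is an integer representing x / 2**prec.

    Sums the Taylor series x**k / k! with all intermediates kept as
    integers scaled by 2**prec; each step truncates, so the result
    is accurate to within a few ulps at scale 2**prec.
    """
    one = 1 << prec
    s = term = one   # k = 0 term
    k = 1
    while term:
        # Next term: previous term * x / k, in fixed point.
        term = (term * x >> prec) // k
        s += term
        k += 1
    return s
```

The loop terminates because the factorial in the denominator eventually drives the truncated term to zero; roughly prec / log2(prec) iterations are needed.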

Original comment by fredrik....@gmail.com on 13 Oct 2009 at 9:37


_mpmath_trim now converts the exponent back to a Python integer. I'm planning to release gmpy 1.10 in the next week or two. If you have time to test, that would be great.

Original comment by casevh on 18 Oct 2009 at 7:50

  • Changed state: Fixed
