proposal: math/big: add Float.Pow, Float.Exp, Float.Ln? #14102
Marking as 1.7 to make a decision--I have no opinion on what that decision should be.
I'll add computing Mandelbrot fractals with big.Float as motivation. Ideally a whole new big.Complex type :-), or at least big.Float.Sqrt, and possibly big.Float.Exp too. See for instance exercise 3.8, page 63, of The Go Programming Language book: "Rendering fractals at high zoom levels demands great arithmetic precision. Implement the same fractal using four different representations of numbers: complex64, complex128, big.Float, and big.Rat. How do they compare in performance and memory usage? At what zoom levels do rendering artifacts become visible?" (Disregard big.Rat, it's way too slow for this kind of use anyway.) Also, see this go-nuts discussion: math/big pow() and floor() help.
@teknico Note that using complex numbers for Mandelbrot may make the implementation simpler, but it will also run a lot slower because there are a lot of shortcuts that are possible when dealing with the real and imaginary parts explicitly. It does make sense to have Sqrt, and/or perhaps Exp instead. We have been playing with implementations, but high-quality implementations that scale to arbitrary precision (ivy has an upper internal limit) are hard. We will get to it when we get to it.
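To illustrate the shortcut mentioned here, a rough sketch of one escape-time test with the real and imaginary parts kept in separate big.Floats, so x*x and y*y can be reused for both the escape check and the next iterate. The function name, precision handling, and iteration cap are purely illustrative, not from ivy or the book.

```go
package sketch

import "math/big"

// escapes sketches one Mandelbrot escape-time test with the real and
// imaginary parts held in separate big.Floats. Working on the parts
// explicitly lets x*x and y*y be reused for both the escape check and
// the next iterate, which a complex-number wrapper would hide.
func escapes(cx, cy *big.Float, prec uint, maxIter int) bool {
	x := new(big.Float).SetPrec(prec)
	y := new(big.Float).SetPrec(prec)
	xx := new(big.Float).SetPrec(prec)
	yy := new(big.Float).SetPrec(prec)
	xy := new(big.Float).SetPrec(prec)
	four := big.NewFloat(4)
	for i := 0; i < maxIter; i++ {
		xx.Mul(x, x)
		yy.Mul(y, y)
		if new(big.Float).Add(xx, yy).Cmp(four) > 0 {
			return true // |z|^2 > 4: the point escapes
		}
		xy.Mul(x, y)
		x.Sub(xx, yy)
		x.Add(x, cx) // Re(z^2 + c) = x^2 - y^2 + cx
		y.Add(xy, xy)
		y.Add(y, cy) // Im(z^2 + c) = 2xy + cy
	}
	return false
}
```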
FWIW I have basic implementations of these functions. I haven't tested them extensively and obviously they're not as good as they would be if implemented by Robert or other Go devs, but, if needed, they could be used as a starting point.
@ALTree Thanks for that information. I played with implementations of these a while back but they were not satisfactory. I'll have a look at your code when I pick this up.
Any progress on this? |
@infogulch No, sorry. |
Not for 1.9 either. |
These are tricky because you need some kind of arbitrary-precision constant generator for at least 'e'. Maybe they could be somewhere else? Sqrt on the other hand is trivial and maybe we should support it. We did add Int.Sqrt not too long ago. |
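One cheap way to get such a constant generator for e, sketched below under the assumption that exact big.Rat evaluation is acceptable: e has the well-known simple continued fraction [2; 1, 2, 1, 1, 4, 1, 1, 6, ...], which can be evaluated bottom-up and then rounded to a big.Float. The term count used here is a crude over-estimate, not a proven bound.

```go
package sketch

import "math/big"

// eConst sketches an arbitrary-precision constant generator for e by
// evaluating its simple continued fraction [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]
// bottom-up in exact big.Rat arithmetic and rounding to prec bits.
func eConst(prec uint) *big.Float {
	a := func(i int) int64 { // i-th continued-fraction coefficient of e
		switch {
		case i == 0:
			return 2
		case i%3 == 2:
			return int64(2 * (i/3 + 1)) // 2, 4, 6, ... at positions 2, 5, 8, ...
		default:
			return 1
		}
	}
	terms := int(prec) // far more terms than prec bits actually require
	x := new(big.Rat).SetInt64(a(terms))
	for i := terms - 1; i >= 0; i-- {
		x.Inv(x)                              // x = 1/x
		x.Add(x, new(big.Rat).SetInt64(a(i))) // x = a_i + 1/x
	}
	return new(big.Float).SetPrec(prec).SetRat(x)
}
```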
If we want a Float.Sqrt, it probably deserves a separate issue.
@ALTree add a new issue for Float.Sqrt? |
Done, it's #20460 |
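For context on why Sqrt is considered trivial above: Newton's iteration z = (z + x/z)/2 converges quadratically from a float64 seed. A minimal sketch follows (not the code behind #20460; the seed handling and the 64 guard bits are guesses, not a worked-out error analysis).

```go
package sketch

import (
	"math"
	"math/big"
)

// sqrtNewton sketches Newton's iteration z = (z + x/z)/2 on big.Float.
// It assumes x > 0 with a finite, nonzero float64 approximation; the
// 64 guard bits are a guess, not a rigorous rounding bound.
func sqrtNewton(x *big.Float) *big.Float {
	prec := x.Prec() + 64
	f, _ := x.Float64()
	z := new(big.Float).SetPrec(prec).SetFloat64(math.Sqrt(f)) // ~53 correct bits
	t := new(big.Float).SetPrec(prec)
	half := big.NewFloat(0.5)
	for good := uint(50); good < prec; good *= 2 { // correct bits double per step
		t.Quo(x, z)
		t.Add(t, z)
		z.Mul(t, half)
	}
	return z.SetPrec(x.Prec())
}
```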
I made a naive implementation of this here. It's really short and it can technically produce results at arbitrary precision, but it's really slow.
@infogulch the problem with that approach is that it scales poorly: 44s for 10000 decimal digits on my machine, which is far too slow to be usable. The annoying part of this is implementing methods that do well (at least) in the 1000-100000 decimal digits range. For reference my proposed implementation (that I linked above) computes 10000 decimal digits in about 500 milliseconds.
The algorithm in ivy takes about 1.0s for that. I'm impressed it's so fast - it's just using a Taylor series and calling math/big from the outside.
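For context, the series approach is roughly the following sketch (not ivy's actual code): sum x^n/n! until the term drops below the working precision. The guard bits and stopping test are naive, and a real implementation would reduce the argument first, e.g. via exp(x) = exp(x/2)^2.

```go
package sketch

import "math/big"

// expSeries sketches the Taylor-series approach: sum x^n/n! until the
// term falls below the working precision. It behaves reasonably only for
// small |x|; a real implementation would reduce the argument first and
// bound the guard bits and truncation error properly.
func expSeries(x *big.Float, prec uint) *big.Float {
	work := prec + 64
	sum := big.NewFloat(1).SetPrec(work)
	term := big.NewFloat(1).SetPrec(work)
	n := big.NewFloat(0).SetPrec(work)
	one := big.NewFloat(1)
	eps := new(big.Float).SetMantExp(big.NewFloat(1), -int(work)) // 2^-work
	for {
		n.Add(n, one)
		term.Mul(term, x)
		term.Quo(term, n) // term is now x^n / n!
		sum.Add(sum, term)
		if new(big.Float).Abs(term).Cmp(eps) < 0 {
			break
		}
	}
	return sum.SetPrec(prec)
}
```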
That's quite fast. The best approach is likely to have the implementation switch after a given threshold. Series methods are quite good for small precisions (up to ~5000 decimal digits?), but for example I tried with 340000 bits (necessary for 100k decimal digits), and the above snippet gives the answer in about 7 seconds, which is much faster than what a Taylor approach could do, I suspect. I stopped ivy after 2 minutes.
It's probably too late for 1.10; also needs decision. Moving to 1.11. |
@ALTree FWIW, I've had good luck with continued fraction representations for large numbers of decimal digits.
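As one concrete example of the continued-fraction flavor, a sketch built on the classical Gauss expansion log(1+z) = z/(1 + z/(2 + z/(3 + 4z/(4 + 4z/(5 + 9z/(6 + ...)))))). Choosing a truncation depth that guarantees a target precision, and reducing z into a range where the fraction converges quickly, are exactly the hard parts this thread is about and are left to the caller here.

```go
package sketch

import "math/big"

// logCF sketches log(1+z) via the classical Gauss continued fraction,
// whose partial numerators are z, z, z, 4z, 4z, 9z, 9z, ... (floor(k/2)^2 * z)
// over denominators 1, 2, 3, ... It is evaluated bottom-up from a
// caller-chosen depth; no attempt is made to pick a depth that guarantees
// prec correct bits, nor to reduce z first.
func logCF(z *big.Float, prec uint, depth int) *big.Float {
	work := prec + 64
	a := func(k int) *big.Float { // k-th partial numerator
		c := int64(k / 2) // a_1 = z, a_k = floor(k/2)^2 * z for k >= 2
		if k == 1 {
			c = 1
		}
		t := new(big.Float).SetPrec(work).SetInt64(c * c)
		return t.Mul(t, z)
	}
	// Bottom-up: t_depth = depth, t_k = k + a_{k+1}/t_{k+1}, result = a_1/t_1.
	t := new(big.Float).SetPrec(work).SetInt64(int64(depth))
	for k := depth - 1; k >= 1; k-- {
		q := new(big.Float).Quo(a(k+1), t)
		t.SetInt64(int64(k)).Add(t, q)
	}
	return new(big.Float).SetPrec(prec).Quo(a(1), t)
}
```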
Sqrt was really easy; for the others it's unclear what we can say about how good the answer can or should be. Does anyone know more about proper algorithms for computing these? I am thinking of something at the level of detail of https://members.loria.fr/PZimmermann/mca/mca-cup-0.5.9.pdf (maybe they're in there but I don't see them in the table of contents). |
We can't really answer "should we add this?" without knowing how. |
From my experience with radix 10 big algorithms: I found https://link.springer.com/book/10.1007%2F978-1-4020-6949-9 to be beneficial for computing the continued fractions. This is pretty good for Sqrt, too: https://doi.org/10.1145/214408.214413.
The one thing I like about the above
@rsc The Brent and Zimmermann book you linked has a simple method for the natural logarithm in section 4.8.2, which I implemented in Go a few months ago. I can submit the code (just for discussion) if you wish. The book also suggests a way of implementing Exp from the Log implementation via Newton (section 4.2.5). I also already have this in Go, which I can submit. Once you have Exp and Log, you can do Pow using the standard x^y = exp(y*log x) identity.
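The reduction is short once Exp and Log exist; in the sketch below they are hypothetical helpers, not math/big API. For x > 0, x^y = exp(y*log x), and Newton on f(t) = log t - x gives the iteration t <- t*(1 + x - log t), which converges to exp(x).

```go
package sketch

import "math/big"

// Log and Exp stand in for the natural-log and exponential routines
// discussed above; they are hypothetical helpers, not math/big API.
var (
	Log func(x *big.Float, prec uint) *big.Float
	Exp func(x *big.Float, prec uint) *big.Float
)

// pow sketches the reduction x^y = exp(y * log x) for x > 0. The 64 guard
// bits are a guess and no attempt is made at correctly rounding the result.
func pow(x, y *big.Float, prec uint) *big.Float {
	work := prec + 64
	l := Log(x, work)
	yl := new(big.Float).SetPrec(work).Mul(y, l)
	return Exp(yl, work).SetPrec(prec)
}
```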
@ALTree is this something you'd wanna collaborate on? I wrote a quick and super ugly set of benchmarks comparing some of the implementations mentioned in this thread.
@ericlagergren thanks for writing the benchmarks. I have a few observations.
Note how decimal.Big and bigfloat agree, but the big.Float CF result is incorrect. I'm not sure if the big.Float CF implementation is limited to a single word, but in any case at the moment we can't benchmark it on precisions bigger than 64 : )
Switch to precision 10000 (3011 for the decimal) and:
Switch to precision 100000 (30103 for the decimal):
As I wrote above, series methods only pay off at small precisions, so a threshold-based switch seems best. The question is: how small is that threshold? Currently, for exp, it is 2048. Anyway:
sure. My suggestion (if this proposal is accepted) is to go simple first (a single method), and then evaluate the need for more complicated implementations that switch methods after a given threshold. If the win on low precisions is huge, then it may be worth it.
Oh, it’s probably not entirely correct. There usually needs to be some fiddling with precision and I just guesstimated the epsilon. (It literally took about 10 minutes to write, just wanted to test it out.) I wouldn’t compare the decimal version to bigfloat: decimal won’t scale as well because it uses a binary big.Int for its coefficient. I’m down for whatever. I think there’s enough ideas and actual implementations around to satisfy @rsc 🤞
(To clarify “binary” slowness for decimals: the coefficient lives in a binary big.Int, so scaling by powers of ten requires real multiplications and divisions rather than cheap digit shifts.)
FWIW, we talked to @robpike at the proposal meeting and even he's not sure that the Ivy implementations are last-bit correct. Knowing about Taylor series is OK but knowing when to stop is hard. It's also very slow. This is not really in Go's wheelhouse and we don't know how to do it right. We would love to see this functionality in the Go ecosystem, just not necessarily in the standard library, inside math/big. It seems like third-party packages could take care of this need for the time being. Note that it's fine to take the ivy code and pull out the routines you want as a separate package (preserving the license, of course). Note also that sometimes the last bit may not be right in that code. |
The main problem with the ivy code is that the trigonometric functions have trouble reaching perfect 0.
The other functions seem accurate. Log is slow. The real point here is that a numeric algorithms expert is required for getting the bits guaranteed right, and we don't have one. An external implementation is welcome.
There is already an implementation in @robpike's excellent ivy calculator. However, the code is quite complex and lacks test cases.
Would it make sense to move it to the math/big package?