
Have an exp for pure imaginary numbers #5625

Closed
jakirkham opened this issue Mar 2, 2015 · 17 comments

Comments

@jakirkham
Contributor

It would be nice to have a pure imaginary exp implemented like npy_cexp, but only for pure imaginary arguments (maybe npy_iexp or npy_ciexp). It would take a real value $\theta$ and evaluate $e^{i\cdot\theta}$. The hope is that this would be a little faster in cases where it is known that the complex number has no real part.
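For concreteness, the requested function would behave like the sketch below. `expj` is a hypothetical name taken from the proposal; in NumPy today the same result comes from `np.exp(1j * theta)`:

```python
import numpy as np

def expj(theta):
    # Hypothetical pure-imaginary exp (name from the proposal above):
    # evaluate e**(1j*theta) for real theta via Euler's formula,
    # cos(theta) + 1j*sin(theta), with no complex input required.
    theta = np.asarray(theta, dtype=float)
    return np.cos(theta) + 1j * np.sin(theta)

# Agrees with the general-purpose complex exp:
theta = np.linspace(0.0, 2.0 * np.pi, 9)
assert np.allclose(expj(theta), np.exp(1j * theta))
```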

@charris
Member

charris commented Mar 2, 2015

There have been previous proposals/discussions for a (cos, sin) function, and, IIRC, an expj function (this one). Might try a search of the mailing list archives.

@jakirkham
Contributor Author

Alright, thanks. I'll take a look. Do you know offhand if there is a reason previous proposals haven't been incorporated?

@argriffing
Contributor

@ewmoore
Contributor

ewmoore commented Mar 3, 2015

I'd bet part of it is that the gains aren't very large relative to just calling sin and cos, since the portable implementation is to call sin and cos anyway.


@jakirkham
Contributor Author

@argriffing Thanks. I'll take a look. Though I still see some value in having it in NumPy.

@ewmoore I'm not sure if you are referencing the implementation in C or Python. If you mean the latter, in a few simple examples I find using numpy.sin and numpy.cos to be about 33% slower than simply doing numpy.exp with 1j multiplying my value. My speculation is that the creation of temporaries hurts here. If you meant the former, I would hope skipping the steps that handle the real value would cut down on the cost of the function in C. If I have completely missed your point, feel free to clarify.
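A rough micro-benchmark of the comparison described above might look like the following. Timings vary with machine, array size, and the libm build, so the 33% figure should not be taken as universal:

```python
import timeit
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 100_000)

# e**(1j*x) via the complex exp; 1j * x allocates a complex temporary first
t_exp = timeit.timeit(lambda: np.exp(1j * x), number=10)

# Euler's formula with separate sin and cos calls (two real-valued passes
# plus a complex combine, i.e. more temporaries)
t_trig = timeit.timeit(lambda: np.cos(x) + 1j * np.sin(x), number=10)

print(f"exp(1j*x):        {t_exp:.4f} s for 10 calls")
print(f"cos(x)+1j*sin(x): {t_trig:.4f} s for 10 calls")
```

Both expressions compute the same values; only the amount of temporary allocation and the number of passes over the data differ.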

@charris
Member

charris commented Mar 3, 2015

There is a void sincos(double x, double *sinx, double *cosx) function in gcc; it is a GNU extension.

@jakirkham
Contributor Author

@charris Interesting. So, are you thinking that exposing a form of sincos in NumPy would be worthwhile? I think there is something similar in clang.

@charris
Member

charris commented Mar 3, 2015

Yes, I think it would be worthwhile; there are some common operations involved (argument reduction, etc.). Probably the easiest way to expose it would be as expj with a real argument, although one could also return two arrays. Should probably be discussed on the list, again ;)

@juliantaylor
Contributor

Note that glibc internally already uses sincos for the computation of cexp, so there is not much to gain.

@jakirkham
Contributor Author

@juliantaylor But the NumPy version does not seem to use cexp; it calls sin and cos separately (npy_math_complex.c.src#L230-L231). Unless this is something the compiler can optimize out, which I don't know, I would think using sincos or cexp would be better.
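For illustration, the arithmetic of that portable fallback (ignoring the inf/nan special-case handling present in the actual C source) amounts to this Python transliteration:

```python
import math

def cexp_fallback(z):
    # Same arithmetic as the portable npy_cexp fallback (special-value
    # handling omitted): r = exp(x), result = r*cos(y) + 1j*r*sin(y).
    # Note exp(z.real) is evaluated even when z.real == 0 (it returns 1.0),
    # which is the wasted work for pure-imaginary arguments.
    r = math.exp(z.real)
    return complex(r * math.cos(z.imag), r * math.sin(z.imag))
```

For z = 1j*theta this does strictly more work than a bare cos/sin pair, which is the overhead under discussion.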

@juliantaylor
Contributor

That is the fallback version for when libc does not provide the function; it should only be used on Windows, where sincos is also not available.
A decent compiler (like gcc) can optimize a call to sin and cos into a single sincos, but unfortunately that does not work through the npy_ indirection.

@jakirkham
Contributor Author

Ah ok, sorry, so is there some other code I should be looking at?

@juliantaylor
Contributor

should be

@jakirkham
Contributor Author

Alright, thanks for your help. I'm going to close this as using cexp is the best that could be hoped for here. If someone disagrees, they are welcome to reopen the issue.

@juliantaylor
Contributor

glibc does not do a real = 0 optimization, so there is still an exp(0) call in there that adds some overhead.
I wonder if they would accept this optimization. I think it makes sense, as getting the sincos usage right is probably not that simple; floating point has a lot of special cases.
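A minimal sketch of the shape of that optimization (an illustration only, not the actual glibc patch, and without the floating-point special cases mentioned above):

```python
import math

def cexp_shortcut(z):
    # Sketch of the special case under discussion: when the real part is
    # exactly 0, exp(0) == 1, so both the exp call and the trailing
    # multiplications can be skipped.
    c, s = math.cos(z.imag), math.sin(z.imag)
    if z.real == 0.0:
        return complex(c, s)
    r = math.exp(z.real)
    return complex(r * c, r * s)
```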

@juliantaylor
Contributor

I sent a patch; the gain is not huge but it is measurable:
https://sourceware.org/ml/libc-alpha/2015-03/msg00139.html

@jakirkham
Contributor Author

@juliantaylor, the cexp(0 + a*I) case was one of my original concerns as well. It seemed like the C99 spec didn't guarantee this optimization. Thanks for sending the patch.

I don't know if this sort of thing still has an effect on performance, but it probably could be shortened to a ternary expression.

On second thought, there should no longer be a need for the multiplication at the end when the real part is 0. Since we have added the branch anyway, we might as well use it. I bet this has nearly no impact, though.

Development

No branches or pull requests

5 participants