add sine and cosine integrals #32
Conversation
Very efficient double precision floating-point evaluation via MacLeod's rational minimax Chebyshev approximations.

```julia
julia> using SpecialFunctions

julia> x = linspace(0, 50, 1_000_000);

julia> @time sinint.(x);
  0.040521 seconds (30 allocations: 7.631 MiB)

julia> @time cosint.(x);
  0.042092 seconds (30 allocations: 7.631 MiB)
```
src/sincosint.jl
Outdated
0.10223981202236205703E-17)
elseif abs(x) ≤ 12.0
invt = inv(t)
return sign(x)*pidiv2 - cos(x)/x*@horner(invt, 0.99999999962173909991E0,
You can save a division by combining the `/x` with the polynomial denominator, i.e. do `cos(x) * @horner(...) / (x * @horner(...))`.
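The suggested transformation can be sketched with stand-in polynomials (the coefficients below are purely illustrative, not MacLeod's):

```julia
# Stand-in rational approximant; the real code evaluates MacLeod's minimax
# coefficients via @horner. The point is only the algebraic rearrangement.
x = 7.0
invt = 1 / (x * x)
p = 1.0 + 0.5 * invt + 0.25 * invt^2    # numerator polynomial
q = 1.0 + 0.75 * invt + 0.125 * invt^2  # denominator polynomial

two_divs = cos(x) / x * (p / q)  # original form: divides by x and by q
one_div  = cos(x) * p / (x * q)  # suggested form: a single division

@assert two_divs ≈ one_div
```

The rearrangement is exact in real arithmetic and agrees to rounding error in floating point, while removing one division from the hot path.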
test/runtests.jl
Outdated
sinintvals = [0.9460830703671830149, 1.605412976802694849, 1.848652527999468256, 1.75820313894905306, 1.54993124494467414, 1.4246875512805065, 1.4545966142480936, 1.5741868217069421, 1.665040075829602, 1.658347594218874, 1.578306806945727416, 1.504971241526373371, 1.499361722862824564, 1.556211050077665054, 1.618194443708368739, 1.631302268270032886, 1.590136415870701122, 1.536608096861185462, 1.518630031769363932, 1.548241701043439840]
cosintvals = [0.3374039229009681347, 0.4229808287748649957, 0.119629786008000328, -0.14098169788693041, -0.19002974965664388, -0.06805724389324713, 0.07669527848218452, 0.122433882532010, 0.0553475313331336, -0.045456433004455, -0.08956313549547997948, -0.04978000688411367560, 0.02676412556403455504, 0.06939635592758454727, 0.04627867767436043960, -0.01420019012019002240, -0.05524268226081385053, -0.04347510299950100478, 0.00515037100842612857, 0.04441982084535331654]
for x in 1:20
@test SF.sinint(x) ≈ sinintvals[x]
`≈` is not strict enough here; it only checks that about half the digits are correct. Use `≅` (which is defined in runtests.jl) to check that 13 digits are correct.
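A plausible definition of such an operator (assumed; the actual `≅` in runtests.jl may differ) is just `isapprox` with a tight relative tolerance:

```julia
# Hypothetical strict comparison requiring roughly 13 matching digits.
# Base's ≈ defaults to rtol = sqrt(eps(Float64)) ≈ 1.5e-8, i.e. about
# half of the ~16 significant digits of a Float64.
≅(a, b) = isapprox(a, b, rtol = 1e-13)

@assert 1.0 ≅ 1.0 + 1e-15     # agrees to 13+ digits: passes
@assert !(1.0 ≅ 1.0 + 1e-10)  # ≈ would accept this, ≅ rejects it
```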
test/runtests.jl
Outdated
@test_throws ErrorException SF.sinint(big(1.0))
@test_throws ErrorException SF.cosint(big(1.0))
@test_throws DomainError SF.cosint(-1.0)
Other things to test: a very small value, like `x = 1e-300`, and a very large value like `x = 1e300`. I'm a bit concerned that the latter will suffer from spurious overflow.
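The overflow concern is concrete: the code forms `t = x*x` up front, which overflows for such inputs. Whether that yields a wrong result depends on which branch `t` then selects; a sketch of the failure mode, not the PR's actual behavior:

```julia
# For x = 1e300, x^2 = 1e600 is far beyond floatmax(Float64) ≈ 1.8e308.
x = 1e300
t = x * x
@assert isinf(t)          # any branch test or polynomial in t sees Inf
@assert 1 / x == 1e-300   # but the asymptotic expansion in 1/x stays finite
```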
Also test `Inf` (should give π/2 or 0) and `NaN` (should give `NaN`).
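A minimal sketch of the special-case branch being requested (hypothetical helper, not the package's actual `sinint`):

```julia
# Handle NaN and ±Inf before any polynomial branch; Si(±Inf) = ±π/2.
function sinint_special(x::Float64)
    isnan(x) && return NaN
    isinf(x) && return copysign(π / 2, x)
    return nothing  # finite x: fall through to the rational approximations
end

@assert sinint_special(Inf) == π / 2
@assert sinint_special(-Inf) == -π / 2
@assert isnan(sinint_special(NaN))
```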
add and sharpen tests
Thanks for this. I added tests for large and small numbers and `Inf`s and `NaN`s. Sharpening the tests, I also found that MacLeod had a typo in the second root of Ci(x). It's fixed and is accurate to the `≅` test.
src/sincosint.jl
Outdated
@@ -5,7 +5,8 @@ import Base.Math: @horner
# This program computes the sine integral ∫_0^x sin(t)/t dt using the rational approximations of A.J. MacLeod, Numer. Algor., 12:259--272, 1996.
function sinint(x::Float64)
t = x*x
if abs(x) ≤ 6.0
absx = abs(x)
if absx ≤ 6.0
Seems like you could omit the `abs(x)` call and just compare `t ≤ 36` etcetera?
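Since `t = x*x` is already computed and is monotone in `|x|`, the two branch tests agree for every non-NaN input, including signed zeros and `Inf` (a quick check of the equivalence, not the package's code):

```julia
# abs(x) ≤ 6 and x*x ≤ 36 pick the same branch for all non-NaN Float64 x.
for x in (-1e300, -6.1, -6.0, -0.0, 0.0, 5.9, 6.0, 6.1, Inf)
    t = x * x
    @assert (abs(x) <= 6.0) == (t <= 36.0)
end
```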
src/sincosint.jl
Outdated
elseif isnan(x)
return NaN
else
return sign(x) * pidiv2
`copysign(pidiv2, x)`, and similarly elsewhere.
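Besides saving a multiply, there is a behavioral difference worth noting (a fact about Base, not about this PR): `copysign` reads the sign bit, so it also does the right thing at signed zero, where `sign` returns `±0.0`. For the asymptotic branch above, `x = ±0.0` never reaches this code path, so here the suggestion is mainly about efficiency.

```julia
pidiv2 = π / 2
@assert copysign(pidiv2, -0.0) == -pidiv2  # copysign honors the sign bit of -0.0
@assert sign(-0.0) * pidiv2 == 0.0         # sign(-0.0) is -0.0, so the product is zero
```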
What about single precision cases?
@musm, the PR supports single and half precisions by converting to/from Float64. Obviously, these cases could be made more efficient by using lower-degree rational approximants in single precision, but I'm skeptical that it's worth the effort for a specialized function like this; most users will probably want double precision.
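The conversion-based fallback described here can be sketched as follows (assumed shape; the PR's actual method signatures may differ, and the kernel below is a stand-in, not the real approximation):

```julia
# Stand-in Float64 kernel; the real one uses MacLeod's rational approximations.
si_demo(x::Float64) = x / (1 + x * x / 6)

# Lower precisions: round-trip through Float64 and convert back.
si_demo(x::Union{Float16,Float32}) = oftype(x, si_demo(Float64(x)))

@assert si_demo(1.0f0) isa Float32
@assert si_demo(Float16(1.0)) isa Float16
```

Since Float32 and Float16 embed exactly into Float64, the round-trip result is the correctly rounded value of the double-precision kernel.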
Fair enough, just thought I'd bring it up: if the PR author already spent time coding the 64-bit version, single precision may not be too much more work.
@MikaelSlevinsky, would be good to add comments giving (a) a citation to MacLeod and (b) a note about your correction to Ci(x).
src/sincosint.jl
Outdated
@@ -0,0 +1,258 @@
const pidiv2 = π/2

import Base.Math: @horner
`using Base.Math.@horner` since you're not extending the `@horner` macro.
Changed to `using`.
0.13754880327250272679E-14,
0.10223981202236205703E-17)
elseif t ≤ 144.0
invt = inv(t)
Why not just `1/t`?
I think these are equivalent.
Better, then, to just use `1/t`, since `inv` is more for matrix inverses.
I don't know why it's defined if it can't be used in a completely equivalent case, but I don't care enough to discuss it further: it's changed.
It can, i.e. these are all equivalent and get compiled to the same code: `1/2.0`, `1.0/2.0`, `inv(2.0)`.
@MikaelSlevinsky, it seems like that definition would fail (spurious underflow) for `0.1/(1e308+0im)`.
`/(x::Real, y::Complex) = (z = y/x; isinf(z) ? x/y : inv(z))`
Still fails for `Inf / (1 + 1im)` (gives `NaN + NaN*im` instead of `Inf - Inf*im`). (And I assume that the nested `x/y` calls the existing implementation, since otherwise you have a dispatch loop.)
Yes, sorry about the dispatch loop. Anyways, surely there are multiple corner cases, but it's a reasonable request that `x/(y+0.0im) == x/y`. Many base functions behave like this (at least for input that isn't extreme in some sense):

```julia
julia> function realcomplextest(x::AbstractArray, f::Function)
           for y in x
               test = f(y) - f(y+0im)
               if test != 0
                   println(y," ",test)
               end
           end
       end
realcomplextest (generic function with 1 method)

julia> realcomplextest(linspace(0,50,10001), sin)

julia> realcomplextest(linspace(0,50,10001), cos)

julia> realcomplextest(linspace(0,50,10001), exp)
```
Sure, that would be a nice property to have.
src/sincosint.jl
Outdated
@@ -0,0 +1,258 @@
const pidiv2 = π/2 |
I believe this is unnecessary, i.e. just using the constant `π/2` is sufficient and will be compiled to a constant in the function anyway... can check by comparing `@code_llvm` or similar.
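The claim is easy to spot-check without reading LLVM IR: both spellings produce the identical Float64 value, and the literal `π/2` in a function body constant-folds (a quick check, not the package's code):

```julia
const pidiv2_demo = π / 2   # module-level constant, as in the PR
f() = π / 2                 # literal in the function body, as suggested
g() = pidiv2_demo

@assert f() === g()
@assert f() === 1.5707963267948966  # π::Irrational promotes to Float64 here
```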
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Removed.
This reverts commit e14d654.
src/sincosint.jl
Outdated
"""
cosint(x)
Compute the cosine integral function of `x`, defined by ``\\operatorname{Ci}(x) := \\gamma + \\log x + \\int_0^x \\frac{\\cos t - 1}{t} dt``
What is gamma here? Also, should put newlines after the function signatures in docstrings.
Gamma is the Euler-Mascheroni constant, `eulergamma` in Julia. I can add this to the definition. I just tried to replicate the style of the current repository (https://github.com/JuliaMath/SpecialFunctions.jl/blob/master/src/erf.jl#L40), but I now see that it's inconsistent.
Yeah, I'd add something like "where ``\\gamma`` is the Euler-Mascheroni constant" to the docstring.
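The amended docstring might read as follows (assumed wording based on the suggestion; the actual text is up to the author, and `cosint` here is only a method-less stub so the snippet stands alone):

```julia
# Docstring sketch with the signature on its own line, a blank line after it,
# and the Euler-Mascheroni constant defined explicitly.
"""
    cosint(x)

Compute the cosine integral function of `x`, defined by
``\\operatorname{Ci}(x) := \\gamma + \\log x + \\int_0^x \\frac{\\cos t - 1}{t} dt``,
where ``\\gamma`` is the Euler-Mascheroni constant (`eulergamma` in Julia).
"""
function cosint end

@assert cosint isa Function
```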
Very efficient double precision floating-point evaluation via MacLeod's rational minimax Chebyshev approximations. Closes #31