talbotarr yields NaN with single element array argument [0.0] #5

Closed
moustachio-belvedere opened this issue Oct 8, 2018 · 3 comments · May be fixed by #23

Comments


moustachio-belvedere commented Oct 8, 2018

Apologies if this is expected behavior (if so, I will try to put in a PR to the docs), but it was unexpected to me:

julia> using InverseLaplace

julia> Jbar(s) = (1/s^2)*(1 + s^(0.5))/(1 + 1/s+ s^(-0.5));

julia> InverseLaplace.talbotarr(s -> Jbar(s), [0.0, 1.0])
2-element Array{Float64,1}:
 0.02312633959753949
 0.7837570956002141

julia> InverseLaplace.talbotarr(s -> Jbar(s), [0.0])
1-element Array{Float64,1}:
 NaN

However,

julia> InverseLaplace.talbotarr(s -> Jbar(s), [1.0])
1-element Array{Float64,1}:
 0.7837570956002141

So the 0.0 argument works when the array size > 1, but fails when 0.0 is the only element.

And single-element arrays work fine when the single array element > 0.0.

I've reproduced this behavior on the most up-to-date package versions compatible with Julia 0.6 and with Julia 1.0.1, on Windows 7 (64-bit).

moustachio-belvedere (Author) commented:

Investigating a bit further, I notice how sensitive the value at the first time point (t = 0.0) is to the length of the array:

julia> InverseLaplace.talbotarr(s -> Jbar(s), [0.0])
1-element Array{Float64,1}:
 NaN

julia> InverseLaplace.talbotarr(s -> Jbar(s), [0.0, 1.0])
2-element Array{Float64,1}:
 0.02312633959753949
 0.7837570956002141

julia> InverseLaplace.talbotarr(s -> Jbar(s), [0.0, 1.0, 2.0])
3-element Array{Float64,1}:
 0.032716993982694946
 0.7837570955988618
 0.8938591692411055

julia> InverseLaplace.talbotarr(s -> Jbar(s), [0.0, 1.0, 2.0, 3.0])
4-element Array{Float64,1}:
 0.04008394654876294
 0.7837570955988601
 0.893859169238478
 0.9368832630987496

julia> InverseLaplace.talbotarr(s -> Jbar(s), [0.0, 1.0, 2.0, 3.0, 4.0])
5-element Array{Float64,1}:
 0.0463010933271677
 0.7837570955988613
 0.8938591692383584
 0.936883263090815
 0.9580583504353659

moustachio-belvedere changed the title from "talbotarr yield NaN with single element array argument [0.0]" to "talbotarr yields NaN with single element array argument [0.0]" on Oct 8, 2018
jlapeyre (Collaborator) commented:

Thanks for the report.

The inverse Laplace methods, and the Talbot method in particular, are prone to numerical instabilities. The Talbot method uses BigFloat internally. You should write

Jbar(s) = (1/s^2)*(1 + s^(1//2))/(1 + 1/s + s^(-1//2))

rather than

Jbar(s) = (1/s^2)*(1 + s^(0.5))/(1 + 1/s + s^(-0.5))

The latter forces the internal calculations to have at best Float64 precision. The former is exact.

When evaluating the inverse transforms, you should use 0 and 1 rather than 0.0 and 1.0. Again, the latter results in computations with at best Float64 precision. Likewise, you should use numbers like 3//2 or 1//1000 if possible. You should compare this with inputs like big(0) and big(1), which force all computation to use BigFloats. It may or may not make a difference. Using BigFloats internally is often important even if you only want a result to Float64 precision.
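
As a quick illustration of the precision point (a minimal sketch, not output from the package): a Float64 literal such as 0.001 is already rounded before any promotion to BigFloat happens, whereas the Rational 1//1000 converts exactly.

julia> BigFloat(1//1000) == BigFloat(0.001)   # the literal 0.001 carries Float64 rounding error
false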

The talbotarr method is faster, but may be much less accurate than calling talbot on each element separately. It may not be clear from the docs, but the parameters used internally are shared by the input elements, which is why the output for a single element may depend on the others. I suggest avoiding talbotarr if possible.
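
If per-element accuracy matters, a minimal sketch of the element-wise alternative looks like this (the time points here are purely illustrative); each call chooses its own internal parameters for its time point, at the cost of speed:

# invert at each time point separately instead of using talbotarr
ts = [1.0, 2.0, 3.0]
results = [InverseLaplace.talbot(Jbar, t) for t in ts]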

But, even so, your function appears to return NaN for input 0:

julia> Jbar(s) = (1/s^2)*(1 + s^(1//2))/(1 + 1/s+ s^(-1//2));

julia> tJbar = Talbot(Jbar);

julia> tJbar(0)
NaN

julia> Float64(tJbar(1//10000))
0.011283044418011255
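
If the goal is the behavior of the inverse transform near t = 0, one rough sketch (time points chosen only for illustration) is to evaluate at exact rational times approaching zero and watch whether the values settle:

# probe the small-t limit with exact rational times
for k in 2:6
    println("t = 1//10^$k  ->  ", Float64(tJbar(1//10^k)))
end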

moustachio-belvedere (Author) commented:

Many thanks for the thorough and clear reply, John. I'll make sure all input is BigFloat-compatible from now on!
