RFC: Add error checking to Amos package-based Airy and Bessel methods. Correct # arguments in ccall to :zbiry. #4967

Merged
merged 1 commit into JuliaLang:master from simonp0420:amos_error_checking on Jan 21, 2014

Conversation

simonp0420
Contributor

Completes changes proposed in issue #4172 and addresses issue #4915.

RFC: As currently implemented, the new optional arguments are accepted only by methods that call
Amos routines. So, e.g., a call like besselj0(1+0im, ignore_errors=false) works fine, but
besselj0(1, ignore_errors=false) will result in a Julia error:

ERROR: no method besselj0(Array{Any,1}, Int64)

We could instead choose to allow the optional arguments to be accepted in all calls to Bessel
functions, but for Bessel functions of the first and second kind of integer order called
with real arguments they would not have any effect.

Another option is to completely eliminate the optional arguments and simply take the current default
behavior in all cases.

@stevengj
Member

Is the performance overhead of using keywords (as opposed to optional positional arguments) negligible here? I ask because Bessel functions can be performance-critical inner-loop functions.

@simonp0420
Contributor Author

Oops, I hadn't checked this :$. As I'm sure you suspected, there is a huge speed penalty for using the keyword arguments. To investigate this, I created two more branches of the code: one where the optional keyword arguments are replaced by optional positional arguments, and one where the optional arguments are completely removed. In this last version I eliminated the check on underflow, and call error whenever the Amos error return is nonzero. I tested these using the following code:

First, a loop version:

@time for k=1:1e6; t = besselj0(0.2+0.2im); end # for Julia 0.2 and no-optional-arguments branch
@time for k=1:1e6; t = besselj0(0.2+0.2im,ignore_errors=false); end # for keyword arguments branch
@time for k=1:1e6; t = besselj0(0.2+0.2im,false); end # for positional arguments branch

and then an array version:

a = squeeze(repmat([0.2+0.2im],int(1e6),1),2); @time b = besselj(0,a); # for Julia 0.2 and no-optional-arguments branch
a = squeeze(repmat([0.2+0.2im],int(1e6),1),2); @time b = besselj(0,a,ignore_errors=false); # for keyword arguments branch
a = squeeze(repmat([0.2+0.2im],int(1e6),1),2); @time b = besselj(0,a,false); # for positional arguments branch

I also compared to the following Matlab code run on Matlab R2013a on the same machine:

tic; for k = 1:1e6, b = besselj(0, 0.2 + 0.2i); end, toc  % Loop version
a = rand(1000000,1) + 0.2i; tic; b = besselj(0,a); toc  % Array version

Here are the results (timings over multiple runs can vary by up to about 15%):

Time (s)   Matlab   Julia 0.2   No Optional Arguments   Positional Arguments   Keyword Arguments
Loop       3.38     0.46        0.55                    0.69                   3.06
Array      0.05     0.48        0.56                    -                      -

There are no "Array" entries for the cases with optional arguments because the code doesn't support arrays yet (currently coded using @vectorize_1arg and @vectorize_2arg, which apparently don't support additional optional arguments). Clearly, the cost of using keyword arguments is far too great. Positional arguments increase the run time wrt Julia 0.2 by about 50% and even implementing error checking without adding any additional arguments increases the run time by about 20%. This is quite surprising to me, considering the amount of work that the Amos routine must do to compute its answer. The only code added to the _besselj method in math.jl in this case is

        # ae[2] holds IERR, the error flag returned by the Amos routine in the ccall;
        # amos_error maps each nonzero code to a descriptive message.
        if ae[2] > 0
            error("ZBESJ: $(amos_error[ae[2]])")
        end

Regarding the Matlab results: Of course one would not use the loop version, but the fact that the array version is an order of magnitude faster than Julia 0.2 is also a big surprise to me. Matlab uses the same Amos routines as Julia, so why is it so much faster? It's hard to believe that the Intel compiler produces code that is 10x faster than gfortran. I have the Intel compiler, so perhaps a little investigation at the Fortran level is indicated.

When I started this I didn't have a good feel for the types of errors that were being neglected by the Julia calls to the Amos routines. I'm not aware of an application where it is important to identify the underflow condition versus setting the result to zero, so IMO this test could be dispensed with. And after playing with the code for a while, the types of error returns I've seen are:

  1. Overflow where the Amos routine will return a very large result where Matlab returns Inf.
  2. Complete loss of accuracy due to large argument or order where the Amos routines return 0, as does Matlab.
  3. Input error where the function argument is a known singularity, such as besselh(1,0). In this case, without checking the error return, Julia 0.2 returns whatever the previous call to the Amos function returned:
julia> besselh(1,10)
0.043472746168861424 + 0.24901542420695402im

julia> besselh(1,0)
0.043472746168861424 + 0.24901542420695402im

which IMO is a quite serious error. Matlab consistently returns NaN here.

Based on 3. above, it does seem important to do the error checking, but it comes with a speed penalty of at least 20%, depending on the implementation selected.

Looking forward to hearing more discussion and advice on this.

@nolta
Member

nolta commented Dec 3, 2013

Let's go w/ the "no optional arguments" branch.

@simonp0420
Contributor Author

Agreed. I will get this finalized and resubmitted in the next day or two.

Do you think we should throw a DomainError in the case of illegal arguments or just return NaN as the real-valued Bessel functions of integer order seem to do? Same question for overflow: Should the routine throw an OverflowError or return Inf?

By the way, I found out why Matlab is so much faster when calculating a large number of Bessel functions. It threads the calculation. I noticed that all the CPUs are maxed out when Matlab’s besselj is passed a large matrix or vector of arguments to be evaluated.

--Peter


@nolta
Member

nolta commented Dec 3, 2013

Not totally sure, so let's go w/ matlab's behavior for now.

@jiahao
Member

jiahao commented Dec 3, 2013

You can explicitly disable threading in Matlab by starting it with the -singleCompThread command line option.

This has come up multiple times and should probably be in a FAQ somewhere, along the lines of "Matlab is faster... but I'm not sure if it's because of parallelization".

@jakebolewski
Member

I think this could be expanded to a discussion about benchmarking in general. Often new users are experimenting with Julia because of its claimed performance. Pointing out how to avoid these types of pitfalls would be valuable.

@simonp0420
Contributor Author

@jiahao Thanks for the tip. I did some tests on a Windows machine where I have the Intel Visual Fortran compiler. Single-threaded, Matlab takes about 0.53 s for the array test. On this machine, Julia 0.2 takes about 0.47 s for the array test, and when I compiled zbesj.f and its dependencies into a DLL using the Intel compiler and called it from Julia, the array test took 0.3 s! So Julia looks pretty good for single-threaded performance.

I presume that eventually, Julia's built-in functions will also be threaded.

@ViralBShah
Member

Is there a performance difference when throwing DomainError and OverflowError versus returning NaN and Inf?

@simonp0420
Contributor Author

I don’t believe there is any difference for normal (no-error) code flow because the conditional tests will be similar or identical. Of course, if an error is thrown and not catch’ed, then execution will abort. I suspect I’m not answering the question you intended; could you please elaborate?

@ViralBShah
Member

I guess you did answer the question - that this should not have a performance difference. I think it is better to throw the errors.

@simonp0420
Contributor Author

Hmm, being quite new to this, I'm very hesitant to proceed without hearing consensus from the development team. Any suggestions? Anyone care to make arguments in favor of one approach or the other?

I will note that the current behavior of the openlibm-based Bessel functions of the first and second kind of integer order and real argument is to return +/-Inf in some cases and throw a DomainError in other cases (as opposed to returning NaN):

julia> bessely(0,-1)
ERROR: DomainError
in bessely at math.jl:627
in bessely at math.jl:615

julia> bessely(0,0)
-Inf

@simonp0420
Contributor Author

Assuming that we go with returning NaN or 0 or Inf, then implementing this won't be as simple as I thought... Consider the behavior of airy(z) for moderately large |z|:

julia> airy(100)
2.6344821520883423e-291

julia> airy(100im)
4.0485495506993625e203 - 2.5283665505877898e203im

Along the real axis the result is approaching zero and along the imaginary axis it is approaching infinity. For |z| very large, the underlying Amos routine will not bother to do any calculation, just return an error code, so the function return value you get back from Julia 0.2 is whatever was sitting in memory:

julia> airy(1)  # This result is correct
0.13529241631288147

julia> airy(1e20)  # This should return 0
0.13529241631288147

julia> airy(1e20im)  # This should return complex(Inf,Inf) or better yet complex(Inf, -Inf)
0.13529241631288147 + 0.0im

In both the second and third calls above, the Amos routine called by Julia is returning the same error code: IERR = 4, "Complete loss of accuracy due to large magnitude argument", which is being ignored by Julia. To this point, I have assumed that a too-large argument should result in a function return value of 0.0+0.0im in all cases. This example shows that assumption is wrong. It seems to me that to generate the right answer, the corrected Julia code will have to incorporate logic in this case that depends on the behavior of the particular Airy or Bessel function being evaluated, and on the signs and magnitudes of the real and imaginary parts of the function's argument. This will take some time to work out the details for each of the nine functions. I guess the extra execution time needed to run through this logic is not too important, since it will only be needed in case of an error return from the Amos function, which is a comparatively rare event.

Comments/critique welcome!

@jiahao
Member

jiahao commented Dec 4, 2013

For large magnitudes of z it would probably be more effective to just calculate the Airy function using the well-known asymptotic limit formulae. There's some experimentation to be done to find where the crossover point should lie, but hopefully the complex elementary functions (#2891) should take care of all the necessary branch cuts and signed infinities.

Airy functions are entire, i.e. defined everywhere on the complex plane. It would be mathematically wrong to throw a DomainError for an entire complex -> complex function. Also, I don't see how it makes sense to throw an OverflowError unless the answer is not one of the signed complex infinities. (This could happen if the infinity has an undefined phase.) The analogy to how bessely(0,-1) throws a DomainError is fallacious since this is the correct result if bessely is restricted to real ranges (as is the convention in Julia, viz. sqrt(-1), that the domain also defines the range). The answer for this value happens to be complex, as you can verify in Mathematica.

N[BesselY[0, -1], 20]
0.0882569642156769580 + 1.5303953731159331029i

@simonp0420
Contributor Author

@jiahao Thanks for your astute comments. Many of the functions evaluated in the Amos package are entire, and as you said, these should not generate a DomainError anywhere in the finite complex plane. For other functions that are analytic but have poles, probably Inf is the correct return value for evaluation at the pole. If we want the Julia functions to also do the right thing when presented with various flavors of Inf (real or complex, with positive or negative real and/or imaginary parts) as arguments, then there may be some indeterminate cases, or cases that throw a DomainError, e.g., when evaluating cos(Inf) as part of the large-argument asymptotic formula. (FWIW, for cos(Inf) Julia throws a DomainError and Matlab silently returns NaN.) It's ironic that the Amos routines use these same asymptotic formulas for large |z|, formulas which may have to be duplicated in Julia code for cases where the Amos routines do not attempt any calculation. I guess that the Amos routines were written prior to the introduction of the IEEE floating-point standard, which includes NaN and Inf, so Amos had to "roll his own" exception handling.

@simonp0420
Contributor Author

I apologize for the long delay since my last update--I can only work on this in my spare time, and the scope of the changes has grown beyond what I originally anticipated.

I just added a commit that removes the optional arguments for all Amos-based Bessel and Airy functions. For both the Bessel and Airy functions, complex(Inf,Inf) is returned when the Amos library routine's error code indicates overflow. Also, for both sets of functions, complex(NaN,NaN) is returned when the Amos routine indicates an illegal input. The behavior of the Airy functions differs in the case where the Amos error code indicates complete loss of significance due to too-large input. The Bessel functions return a NaN in this case, but the Airy functions call one of four new "big" functions (airyaibig, airyaiprimebig, airybibig, and airybiprimebig), based on the asymptotic series for large arguments that @jiahao suggested using in his comment above. These routines are written in pure Julia and rely on Julia's basic arithmetic and transcendental functions to return NaN or Inf as appropriate.
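
To make the handling concrete, here is a rough sketch of the dispatch on the Amos error code (illustrative only -- `z`, `cy`, and `amos_airy_ccall` are placeholder names for the argument, the two-element result array, and the existing ccall wrapper, and the IERR values 1 = illegal input, 2 = overflow, 4 = complete loss of significance follow the Amos documentation; the actual commit may be structured differently):

    function airyai_checked(z::Complex128)
        cy, ae = amos_airy_ccall(z)               # ae[2] holds IERR after the ccall
        ierr = ae[2]
        ierr == 0 && return complex(cy[1], cy[2]) # normal return
        ierr == 1 && return complex(NaN, NaN)     # illegal input
        ierr == 2 && return complex(Inf, Inf)     # overflow
        ierr == 4 && return airyaibig(z)          # large |z|: asymptotic-series fallback
        error("ZAIRY: $(amos_error[ierr])")       # any other nonzero code is an error
    end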

Please take a look at this IJulia notebook to see accuracy tests, timing results, and discussion of improved input range for these "big" functions. The notebook and the modules that it uses are available at this Gist.

One could generate similar large-argument functions for the complex-argument Bessel functions using other asymptotic series that are provided at the NIST DLMF web site, and I would be willing to work on this, but I would like to hear from the core developers whether or not they think this is a good idea. The "big" Airy routines add several hundred lines to base/math.jl, and the benefit they provide (see the last section of the notebook) may not be significant enough to warrant inclusion.

If the senior developers would prefer to discard the "big" routines and just return NaN instead, I would be fine with that. I'm learning a lot from this entire experience, from using Git and participating in open-source development, to refreshing hazy memories about irregular singular points and the behavior of entire functions at infinity!

@simonp0420 simonp0420 closed this Dec 9, 2013
@simonp0420 simonp0420 reopened this Dec 9, 2013
@JeffBezanson
Sponsor Member

Does this look good to everybody now?

@simonp0420
Contributor Author

I'm not sure how to interpret the lack of responses to my last set of changes and to Jeff's query. What is the appropriate thing to do at this point? Close this PR and open a new one with the same proposed modifications, but annotated with "RFC" rather than "WIP"? Or wait until other reviewers have time to look at this?

@StefanKarpinski
Sponsor Member

This is clearly a good change, I think it's just a lot to review and may have fallen off of people's radar.

@ViralBShah
Member

This does look good, but it would be great if @andreasnoackjensen or @stevengj could take a look and merge this.

@jiahao
Member

jiahao commented Dec 25, 2013

A few technical, nitpicky points:

complex(Inf,Inf) is returned when the Amos library routine error code indicates overflow.

This implies that you know what the phase of complex infinity is, i.e. that it lies in the first quadrant. However, this is not necessarily true for the Airy functions since they oscillate, so it might be better to go with OverflowError or complex(NaN, NaN) since the phase is indeterminate.

Also, for both sets of functions when the Amos routine indicates an illegal input complex(NaN,NaN) is returned.

The use of NaN here violates IEEE standards, where NaN has a precise meaning relating to the propagation of the results of computations which have no representable floating-point value (the classic example being indeterminate forms like 0/0). Therefore, the only time an implementation of a complex entire function should return a NaN is when it receives a NaN input. I don't agree that NaN should be abused as a synonym for undefined output corresponding to bad input. Perhaps DomainError is more appropriate here.

The Bessel functions return a NaN in this case

I assume that this is really a placeholder for the correct asymptotics, since Bessel functions are also entire and also have well-defined asymptotic limits which don't throw NaNs.

@jiahao
Member

jiahao commented Dec 25, 2013

Thanks for doing this, btw. People are busy and they need prodding to revisit things, so I wouldn't take slow responses personally.

Clearly the general intention is good and we would eventually want to have this functionality. My comments above are to make sure that we are doing things correctly as far as possible.

@StefanKarpinski
Sponsor Member

In general, we should raise errors wherever we explicitly check for problems like overflow and NaNs. Nobody really thinks that passing NaNs around is a better programming model, it's just better for hardware implementation. Turning an exceptional condition into something like NaN is perverse – it makes a failing computation continue and forces the error checking to happen later when it's no longer clear where the problem originated.

@simonp0420
Contributor Author

@jiahao Thanks for the careful look and useful critical feedback. The more of this I receive, the better, as far as I'm concerned. I share your goal of wanting to get it right. And I appreciate everyone's patience while I iterate towards the right solution.

Why is large real argument special-cased?

I wasn't actually concerned with overflow here. The reason I included this test is the factors cos(zeta - pi/4) and sin(zeta - pi/4) that multiply the two series. For real(zeta) > 1/eps the granularity between consecutive double-precision floats becomes so large (>= 1) that the values returned by the trig functions become unreliable:

julia> t = 1/eps(Float64)
4.503599627370496e15

julia> [cos(t+n*pi) for n=0:2:6]  # Should all be equal
4-element Array{Any,1}:
 -0.485535
 -0.221926
 -0.807914
 -0.611076

The cosine function is evaluating its argument correctly, but the argument itself is inaccurate. I wasn't (still am not) sure of the right way to handle this (or even whether I should concern myself with it), but I included it in the case where NaN is returned. After reading your further comments I now understand that returning a NaN here is definitely not correct since the small, finite result of the trig function evaluation could certainly be represented accurately as a floating-point value. I'm not sure that throwing a DomainError is right here either, since as you previously pointed out, the function is entire. Perhaps I should do nothing and just accept the inevitable error? I would appreciate some advice here.

Have you checked to see if you get better precision by grouping the odd and even terms together in the summation? Splitting a sum like this is practically guaranteed to lose precision when you subtract two large numbers.

Please take a look at e.g., http://dlmf.nist.gov/9.7#E9 . These are two separate series, each of which is evaluated as the sum of a few (less than 10 in practice) small (because of the inverse powers of the large quantity zeta), rapidly decreasing terms. For maximum accuracy evaluation of the asymptotic series, the sums are terminated whenever the terms start to grow in magnitude (due to the numerators, which start out decreasing and eventually begin to grow rapidly). I don't believe that cancellation is an issue here. Because the terms for the two series may begin to increase in magnitude at different values of the summation index, IMO it is not desirable to combine them into a single series.
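
To be concrete about the truncation rule, here is a stripped-down sketch of summing one such series in inverse powers of zeta (this is not the actual airyaibig code; `coeff(k)` stands in for the DLMF 9.7 coefficients):

    # Sum an asymptotic series sum_k coeff(k)/zeta^k, truncating as soon as the
    # terms stop decreasing in magnitude -- the usual rule for asymptotic expansions.
    function asymptotic_sum(coeff::Function, zeta, kmax::Int=20)
        total = zero(zeta)
        prev = Inf
        for k in 0:kmax
            term = coeff(k) / zeta^k
            abs(term) >= prev && break   # terms have started growing: stop summing
            total += term
            prev = abs(term)
        end
        return total
    end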

Regarding your later comments, I think you're right on the mark. My use of NaN and choice of complex Inf weren't right. And yes, as you surmised, the NaN in the case of Bessel functions is a temporary expedient because the asymptotics have not yet been coded. That will be quite a bit more work, and I can't get away from the feeling that recoding all these asymptotics in Julia, when they have already been implemented in the underlying Fortran code, is not really the right approach. But modifying the Amos codes to handle these cases correctly is pretty daunting to me, at least. I've tried a bit to locate the original Sandia tech reports that describe the algorithms and logic used in the Amos codes but haven't been able to find them. Without them I wouldn't want to even think about modifying the Fortran library, so I guess doing it in Julia is the only feasible approach.

@StefanKarpinski Prior to this recent flurry of comments, I had one request to implement the Matlab approach (returning NaNs) and one request to throw exceptions, and I went with the Matlab approach. After hearing from you, @ViralBShah, and @jiahao, and reading the discussion in issue #5234, it's clear that I should modify the code to throw errors rather than silently returning NaNs, and that is what I'll do.

It will probably be about 2 weeks until I can do more serious work on this--I'm going out of the country for a family vacation. Thanks again to all commenters!

@StefanKarpinski
Sponsor Member

I'm not sure if it's the right approach, but we do have cospi and sinpi functions which give correct answers here:

julia> t = 1/eps(Float64)
4.503599627370496e15

julia> [cos(t+n*pi) for n=0:2:6]
4-element Array{Any,1}:
 -0.485535
 -0.221926
 -0.807914
 -0.611076

julia> [cospi(t/pi+n) for n=0:2:6]
4-element Array{Any,1}:
 -0.707107
 -0.707107
 -0.707107
 -0.707107

@StefanKarpinski
Sponsor Member

Hmm. Unfortunately, that doesn't work out quite right:

julia> cos(t)
-0.4855348677422206

julia> cospi(t/pi)
-0.7071067811865476

The cos(t) answer is correct according to Wolfram|Alpha, whereas the cospi(t/pi) answer is not: t/pi = 1.4335402848056648e15 – precisely the integral value 1433540284805665. Since all integral values should give ±1 when cospi is applied to them, this seems to be a bug in cospi.

@StefanKarpinski
Sponsor Member

No, sorry, this is actually a rationalize bug:

julia> t/pi
1.4335402848056648e15

julia> rationalize(ans)
1433540284805665//1

julia> float(ans)
1.433540284805665e15

This should round-trip to the same value.

@jiahao
Member

jiahao commented Dec 27, 2013

Summary: part of this PR is a bugfix that actually traps the errors from Amos's routines, and the extra code written more recently to handle the large |z| asymptotics doesn't actually improve upon Amos's code.

I think going forward, we should just accept the bugfix part after changing NaN return values to throwing errors and removing the special-casing of large z. Clearly a new algorithm would be needed to improve upon the existing library routine.

@simonp0420
Contributor Author

Sounds good. I'm about to leave for the airport--look for these revisions in about 2 weeks.

@simonp0420
Contributor Author

@jiahao Just thought of this: the case where the Amos routine immediately returns without calculating anything because its input argument is too big. What is the appropriate error to be thrown in this case? Recall that presently the proposed Bessel routines return NaN and the Airy routines use the new large |z| asymptotics.

@jiahao
Member

jiahao commented Dec 27, 2013

What is the appropriate error to be thrown in this case?

Do you have a sense of roughly how large |z| is when this happens? It really depends on the domain of |z|. In some places, the Airy functions have very well-defined limits Ai(z) --> 0 and Bi(z) --> Inf and should return those; in the region where they have the oscillatory asymptotic expansions, we can choose to

  1. force the computation anyway using the asymptotic expansions, even if the result will be imprecise,
  2. do whatever we decide to do in NaN vs wild (or, what's a DomainError, really?) #5234 (either return NaN or throw DomainError, reflecting the inability of the underlying algorithm to compute a value in this domain).

By the way, cos, sin and tan happily compute answers for prevfloat(Inf) and nextfloat(-Inf). So it looks like the convention is to assume that the error in z is exactly 0 rather than attach an intrinsic error due to the granularity of FloatingPoint.

@fabianlischka
Contributor

Hi, don't really have the full context here, but three quick comments:

  1. Totally agree that evaluating periodic functions for very large arguments (when epsilon becomes large relative to the period) is not only tricky, but approaching meaninglessness. However, one sort of has to assume that the floating point argument one gets is what the user intended, and do the best one can do - what other choice is there (short of throwing a domain error - which sin, cos etc. don't...)?
  2. @jiahao : "what we've seen is that z->cos(mod2pi(z)) does a much better job for large z than z->cos(z)"
    That's actually not quite right. The trigonometric functions have range reduction built in, and instead of reducing to [0,2pi] they reduce to [-pi/4, pi/4], and then use the further knowledge of what quadrant they're in to flip sin and cos, and flip signs, as required. So, cos(z) is going to be more precise than cos(mod2pi(z)), by up to 2 or 3 bits.
    On the other hand, if you compare z->cos(z+pi/4) with z->cos(mod2pi(z) + pi/4), the latter might be much more accurate, because for large z (with correspondingly large eps) the addition of pi/4 might lose considerable precision.
    That leads to my final suggestion:
  3. Could one throw in some trigonometric substitutions, e.g. using sin(z+pi/4) = (sin(z) + cos(z))/sqrt(2) to avoid these effects?
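
Concretely, for the cos(zeta - pi/4) and sin(zeta - pi/4) factors mentioned earlier in this thread, the substitution would look something like this (just a sketch with made-up function names; whether it actually gains accuracy for huge zeta still depends on the range reduction inside sin and cos):

    # Use the angle-sum identities so that pi/4 is never subtracted from a huge
    # argument (where the spacing between adjacent floats can exceed pi/4):
    #   cos(zeta - pi/4) = (cos(zeta) + sin(zeta)) / sqrt(2)
    #   sin(zeta - pi/4) = (sin(zeta) - cos(zeta)) / sqrt(2)
    cos_shifted(zeta) = (cos(zeta) + sin(zeta)) / sqrt(2)
    sin_shifted(zeta) = (sin(zeta) - cos(zeta)) / sqrt(2)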

@andreasnoack
Member

@ViralBShah I don't think I can add anything to this discussion.

@jiahao
Member

jiahao commented Dec 31, 2013

@fabianlischka

cos(z) is going to be more precise than cos(mod2pi(z)), by up to 2 or 3 bits.

This statement is correct only if the evaluation is not prone to overflow. However, we are definitely in the overflow-prone region, as confirmed by computing the cosine using 256-bit floating point.

@fabianlischka
Contributor

@jiahao - I'm pretty sure that cos(z) is going to be as precise as or more precise than cos(mod2pi(z)), by up to 2 or 3 bits, for z in the entire range of Float64.
The computation you've done above seems to be comparing z->cos(z+pi/4) with z->cos(mod2pi(z) + pi/4), and there the latter might well be more precise --- but that's due to the addition, not the cosine.

@jiahao
Member

jiahao commented Dec 31, 2013

@fabianlischka I see your point; so it would make more sense to use a trig identity to evaluate cos(z+pi/4) and sin(.).

@StefanKarpinski
Sponsor Member

I tried a couple of trig identities too and the mod2pi approach was the best, but we should probably try a few other things too.

@jiahao
Member

jiahao commented Dec 31, 2013

I think I managed to break WolframAlpha trying to verify this.

@fabianlischka
Contributor

@jiahao - haha, nice :-) Not easy to break Alpha....

While verifying my mod2pi etc. I used this helper function a lot:
https://gist.github.com/fabianlischka/8199483

I avoided using the decimal representation of a Float64 with Wolfram Alpha. While Julia (like most languages) will interpret a floating-point literal as the closest Float64, Wolfram Alpha will not; it interprets the literal as the decimal as entered. The helper function above would give me an expression that evaluates to the Float64 exactly.

I'd be inclined to try trig identities (sin(z+pi/4) = (sin(z) + cos(z))/sqrt(2)), but I don't have enough background here - I know neither Bessel nor Airy functions...

@simonp0420
Contributor Author

I'm back from vacation and ready to tackle this once more. As suggested by @ViralBShah, I'll restrict my efforts to bug fixing. As I see it, the two bugs that need to be addressed are:

  1. Incorrect documentation of airy(k,x) and an incorrect call to the Amos routine ZBIRY, as described in issue #4915 (airy(k,x) documentation and implementation).
  2. Julia's bessel and airy function methods based on Amos library routines sometimes return erroneous results because nonzero return values of the Amos error flags are ignored.

Bug 1 above is already corrected satisfactorily in this PR. For Bug 2 I propose to simply call the Julia error() function with a message stating the name of the Amos function and the value of its nonzero error return code. IMO doing more than this is entering the realm of error handling, which we would like to avoid at this time.

Does this sound OK to everyone?
If yes, would it be best to close this PR and open a new one, to make it easier for reviewers to compare the final proposed changes to the latest development version of Julia?

@ViralBShah
Member

I think this is a good strategy to take this PR to completion. Instead of error, we could have AmosException and return the same error codes as AMOS does. See the exceptions in base/linalg/lapack.jl.
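
Something along these lines, say (just a sketch -- the field name and exact definition can be whatever fits best, analogous to the LAPACK exceptions):

    # A dedicated exception type carrying the raw Amos IERR code, analogous to
    # the exception types defined in base/linalg/lapack.jl.
    type AmosException <: Exception
        info::Int32
    end

    # At the call site, instead of calling error():
    #     ae[2] > 0 && throw(AmosException(ae[2]))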

@simonp0420
Contributor Author

@ViralBShah Thanks for the tip! AmosException sounds perfect. Didn't realize that it was so simple in Julia to add a new Exception type. I'll try to get started on this tonight.

@simonp0420
Contributor Author

Not sure if a comment here is necessary, but in case it is: I've implemented the changes suggested by @ViralBShah.

@jiahao
Member

jiahao commented Jan 14, 2014

Looks fine to me; thanks for your patience :)

@simonp0420
Contributor Author

Just checking: Is there anything else I need to do to finalize this submission (e.g. do I need to convert this somehow to an RFC from a WIP pull request)?

@kmsquire
Member

It would be good to do that anyway if it's ready (just click edit on the summary pane).

@simonp0420
Contributor Author

Done! Thanks for the info.

jiahao added a commit that referenced this pull request Jan 21, 2014
Add error checking to Amos package-based Airy and Bessel methods. Correct # arguments in ccall to :zbiry.
@jiahao jiahao merged commit 870dd27 into JuliaLang:master Jan 21, 2014
@jiahao
Member

jiahao commented Jan 21, 2014

Thanks for following through!

@stevengj
Member

Please add a NEWS.md item, too. (This thread was so long that I've lost track of what changes were actually made.)

@simonp0420
Contributor Author

Thanks to all of you for the support and encouragement. I'm ridiculously excited to have made even a small contribution to Julia :-)

@simonp0420 simonp0420 deleted the amos_error_checking branch January 21, 2014 22:02
@jiahao
Member

jiahao commented Jan 21, 2014

NEWS.md updated in 815eda2
