
StackOverflowError on lgamma(::Float16) #131

Closed
albertopessia opened this issue Nov 12, 2018 · 3 comments

Comments

@albertopessia

Assuming Float16 is a valid input type: if I run the following with Julia 1.0.2

using SpecialFunctions
lgamma(convert(Float16, 2))

I get an unexpected ERROR: StackOverflowError:. If Float16 is not supported, the error should say so clearly.

@ararslan
Member

Good catch. There isn't a method defined for Float16, so it's falling back to

@inline lgamma(x::Real) = lgamma(float(x))

which causes a stack overflow: float(x) is a no-op on a Float16, so the fallback keeps calling itself and never reaches a more specific method (a minimal sketch of the recursion follows the list below). I see two potential courses of action here:

  1. Define a method for Float16 as something like

    @inline lgamma(x::Float16) = Float16(lgamma(Float32(x)))

    which converts to Float32 and lets the existing Float32 method ccall into openspecfun. This is likely inefficient.

  2. Define a no-method method for Float16, e.g.

    lgamma(x::Float16) = throw(MethodError(lgamma, (x,)))

    That ensures that calling lgamma on a Float16 at least fails with an informative error.
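
For reference, a minimal, self-contained sketch of the recursion, using a hypothetical function f in place of lgamma:

f(x::Float64) = x          # only a Float64 method is defined
f(x::Real) = f(float(x))   # generic fallback, mirroring the one above

f(2)           # fine: float(2) isa Float64, so the specific method is hit
f(Float16(2))  # float(Float16(2)) === Float16(2), so f(x::Real) calls
               # itself forever and throws a StackOverflowError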

@albertopessia
Author

I think the second option is the best solution:

  • Users can implement the first option themselves after learning that Float16 is not supported
  • The first option hides the fact that the computation actually takes place in Float32; whatever the user's reason for choosing Float16, that might not be what they want (see the sketch below).
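
For example, the Float32 result is silently rounded back to Float16 (a sketch; the numeric values are approximate):

x = Float16(10)
y32 = lgamma(Float32(x))   # computed in Float32: ≈ 12.801827 (= log(9!))
y16 = Float16(y32)         # rounded back to Float16's ≈3 decimal digits of precision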

@simonbyrne
Member

FWIW, what we generally do for math functions is:

  1. define an AbstractFloat fallback which throws an error
  2. compute Float16 in Float32 precision.

No CPUs yet support native Float16 arithmetic, so 2) is probably fine for now. For coprocessors like GPUs, the calls can still be intercepted, similar to what CuArrays.jl does.

So this would be:

lgamma(x::AbstractFloat) = throw(MethodError(lgamma, (x,)))
lgamma(x::Float16) = Float16(lgamma(Float32(x)))
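
With those two definitions in place, the original report behaves as expected (a sketch, assuming no other methods are defined):

lgamma(Float16(2))  # returns Float16(0.0), since log(gamma(2)) == log(1) == 0
lgamma(big(2.0))    # hits the AbstractFloat fallback and throws a MethodError,
                    # unless a BigFloat method is defined elsewhere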
