
No generic exp, sin, cos, sinh, cosh #51008

Open
mikmoore opened this issue Aug 22, 2023 · 6 comments
Labels
domain:maths Mathematical functions

Comments

@mikmoore
Contributor

mikmoore commented Aug 22, 2023

Some basic math functions are missing generic implementations. Among these are exp, sin, cos, sinh, and cosh, although there are likely several others that should be included. All of the functions in this list can be evaluated in terms of exp. Functions like tan and tanh should be reassessed once these others are fixed. Functions like acos, asin, etc., are a bit more complicated but should probably get a look eventually.
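To make the reduction concrete, here is a sketch (with hypothetical helper names, not anything in Base) of how the hyperbolic pair would follow from a working exp via the standard identities:

```julia
# Hypothetical helper names, shown only to illustrate the reduction to exp.
# sin and cos additionally need an imaginary unit
# (sin(x) = (exp(im*x) - exp(-im*x)) / 2im), so they are less generic
# than the hyperbolic pair.
generic_sinh(x) = (exp(x) - exp(-x)) / 2
generic_cosh(x) = (exp(x) + exp(-x)) / 2
```

The same definitions would apply unchanged to matrix arguments once a generic exp(::AbstractMatrix) exists.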

exp(::T) should work for any type for which evalpoly can be made to work, which requires *(::T,::T) and +(::UniformScaling{<:Number},::T) (alternatively, oneunit(::T), *(::Number,::T), and +(::T,::T) can serve in place of the UniformScaling method). Accurate results are much easier to achieve if opnorm(::T) is defined, although a different implementation of the polynomial evaluation could continue the Taylor series until convergence (at a potentially significant computational cost).
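As a rough illustration of those requirements, here is a sketch of a generic exp built only from evalpoly with UniformScaling coefficients, *, and opnorm. The 13-term truncated Taylor series and the crude 1-norm scaling heuristic are placeholders, not tuned constants, and this is not a proposal for what Base should actually ship:

```julia
using LinearAlgebra  # for the UniformScaling `I` and opnorm

# Illustrative sketch only: scaling and squaring with a truncated Taylor
# series evaluated by evalpoly. Requires *(::T,::T), opnorm(::T), and
# +(::UniformScaling,::T) (via muladd inside evalpoly).
function generic_exp(A::AbstractMatrix)
    nrm = opnorm(A, 1)
    s = nrm > 1 ? ceil(Int, log2(nrm)) : 0  # scale down so the series converges
    B = A / 2^s
    # Taylor coefficients I/k! as UniformScaling terms, so evalpoly stays generic
    coeffs = ntuple(k -> I / factorial(k - 1), 13)
    E = evalpoly(B, coeffs)
    for _ in 1:s  # undo the scaling by repeated squaring
        E = E * E
    end
    return E
end
```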

For example, these functions cannot be used with matrices of dual numbers or quaternions:

# Julia v1.9.1

julia> using ForwardDiff

julia> ForwardDiff.derivative(t->exp([0 t; 0 0]),1.0)
ERROR: MethodError: no method matching exp!(::Matrix{ForwardDiff.Dual{ForwardDiff.Tag{var"#89#90", Float64}, Float64, 1}})

Closest candidates are:
  exp!(::StridedMatrix{T}) where T<:Union{Float32, Float64, ComplexF64, ComplexF32}
   @ LinearAlgebra ~/julia-1.9.1/share/julia/stdlib/v1.9/LinearAlgebra/src/dense.jl:649

julia> using Quaternions

julia> exp([0 quat(1.0); 0 0])
ERROR: MethodError: no method matching exp!(::Matrix{QuaternionF64})

Closest candidates are:
  exp!(::StridedMatrix{T}) where T<:Union{Float32, Float64, ComplexF64, ComplexF32}
   @ LinearAlgebra ~/julia-1.9.1/share/julia/stdlib/v1.9/LinearAlgebra/src/dense.jl:649

The listed functions do have methods with ::AbstractMatrix signatures, but they fail as shown above when the elements do not promote to LinearAlgebra.BlasFloat types. In any case, a suitable fallback should work for types other than matrices, too.

It's not critical that these functions be fast, but they should exist and be mostly accurate (to the extent that we can predict what inputs might occur).

@stevengj
Member

stevengj commented Jan 2, 2024

It doesn't seem so easy to write a generic implementation that is also accurate and reasonably efficient. What algorithm could be used? (Taylor series don't work here.)

@oscardssmith
Member

Once you have exp, the rest mostly follow.

@stevengj
Member

stevengj commented Jan 2, 2024

Yes, but computing exp reliably is famously difficult.

@mikmoore
Contributor Author

mikmoore commented Jan 2, 2024

First, the AbstractMatrix case:

I think that a slow and slightly inaccurate implementation is more useful than nonexistence. In many cases it's impossible to provide a definition without pirating or patching every package out there. Should every package defining N<:Number define its own Base.exp(::AbstractMatrix{<:N}) method? Even if they should, I doubt they will. And even if they did, in most cases it would just be a replication of the current implementation (or something worse).

I don't see anything conceptually horrifying (though I'm no expert) about applying the algorithm used by exp! more widely. It balances the matrix, scales the argument, applies a Padé approximant, uses repeated squaring to undo the scaling, and finally undoes the balancing. It's good enough for matrices of ComplexF64, so it's probably an okay start for many other uses. It seems that the main reason it's currently so type-limited is that it's using LAPACK calls and mutation.

The balancing is not strictly required (as far as I understand), but it could probably be re-implemented generically with minor to moderate effort. It would not be needed for an initial implementation.
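For concreteness, the scale/Padé/square skeleton (omitting balancing, per the above) might look roughly like this. It uses a small degree-(3,3) diagonal Padé approximant of exp rather than exp!'s tuned degree-13 one, and the 0.5 norm threshold is illustrative, not a tuned constant:

```julia
using LinearAlgebra

# Sketch of the exp! pipeline without balancing: scale, Padé approximant,
# repeated squaring. Needs a linear solve (\) for the element type.
function pade_exp(A::AbstractMatrix)
    nrm = opnorm(A, 1)
    s = nrm > 0.5 ? ceil(Int, log2(nrm / 0.5)) : 0
    B = A / 2^s
    # exp(x) ≈ N(x)/D(x), where N and D are the (3,3) Padé polynomials:
    # N(x) = 1 + x/2 + x^2/10 + x^3/120 and D(x) = N(-x)
    c = (1.0, 1/2, 1/10, 1/120)
    N = evalpoly(B, map(x -> x * I, c))
    D = evalpoly(-B, map(x -> x * I, c))
    E = D \ N
    for _ in 1:s  # undo the scaling
        E = E * E
    end
    return E
end
```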


As for Number:

I could settle for leaving Base.exp(::Number) et al. untouched, since it's more reasonable to insist that a custom Number type provide its own implementation.

@oscardssmith
Member

I think that probably makes sense. For new number types, the efficient operations and desired accuracy are hard to know generically. For matrices, on the other hand, multiplication is a common interface, and the tolerances come down more to how long you want to spend than to the number type's precision.

@stevengj
Copy link
Member

stevengj commented Jan 2, 2024

The exp! method in LinearAlgebra is full of parameters that are specific to LinearAlgebra.BlasFloat precision. I don't think it would be acceptably accurate for higher precisions?

(We could apply the current exp! algorithm to things like Quaternion and Dual numbers when they are using Float64 or Float32 precision, but right now we don't have a way to dispatch on that — I don't think we currently have a way to query the underlying floating-point type for an arbitrary numeric type. BaseType.jl is one attempt at this, though I'm not happy with its current approach, but we'd need something in Base.)
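A trait of the sort described might look like the following sketch. Neither the name floattype nor any of these methods exist in Base, ForwardDiff, or Quaternions; they are purely hypothetical:

```julia
# Hypothetical trait (nothing like this exists in Base): recursively query
# the underlying floating-point type of a numeric type, in the spirit of
# BaseType.jl, so exp! could dispatch on Float32/Float64 precision.
floattype(::Type{T}) where {T<:AbstractFloat} = T
floattype(::Type{Complex{T}}) where {T} = floattype(T)

# Packages would opt in for their own wrappers, e.g. (hypothetically):
# floattype(::Type{Quaternion{T}}) where {T} = floattype(T)
# floattype(::Type{ForwardDiff.Dual{Tag,T,N}}) where {Tag,T,N} = floattype(T)
```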

4 participants