Comparison operators with AbstractIrrational are GPU incompatible #51058

Open
Red-Portal opened this issue Aug 25, 2023 · 3 comments
Labels: domain:gpu (Affects running Julia on a GPU), domain:maths (Mathematical functions)

Comments


Red-Portal commented Aug 25, 2023

Hi,

Currently, the comparison operators defined between AbstractIrrational and AbstractFloat cause problems on GPUs. By default, the precision of an AbstractIrrational is matched by invoking Float64(x, RoundUp/RoundDown) (or the Float32 analogues):

julia/base/irrationals.jl (lines 93 to 104 in 6e2e6d0):

```julia
<(x::AbstractIrrational, y::Float64) = Float64(x,RoundUp) <= y
<(x::Float64, y::AbstractIrrational) = x <= Float64(y,RoundDown)
<(x::AbstractIrrational, y::Float32) = Float32(x,RoundUp) <= y
<(x::Float32, y::AbstractIrrational) = x <= Float32(y,RoundDown)
<(x::AbstractIrrational, y::Float16) = Float32(x,RoundUp) <= y
<(x::Float16, y::AbstractIrrational) = x <= Float32(y,RoundDown)
<(x::AbstractIrrational, y::BigFloat) = setprecision(precision(y)+32) do
    big(x) < y
end
<(x::BigFloat, y::AbstractIrrational) = setprecision(precision(x)+32) do
    x < big(y)
end
```

This internally calls setprecision(BigFloat, p):

```julia
@assume_effects :total function (t::Type{T})(x::AbstractIrrational, r::RoundingMode) where T<:Union{Float32,Float64}
    setprecision(BigFloat, 256) do
        T(BigFloat(x)::BigFloat, r)
    end
end
```

And this depends on libmpfr, which is not supported on the GPU. This implementation has been causing problems downstream.
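
For reference, a minimal reproduction sketch (assuming CUDA.jl and a working GPU; the array name and size are arbitrary) that hits this code path through broadcasting:

```julia
using CUDA

# Broadcasting a comparison against an irrational dispatches to
# <(::Float32, ::AbstractIrrational), which (unless the result is
# constant-folded) reaches the BigFloat/libmpfr path shown above
# and therefore cannot be compiled for the GPU.
x = CUDA.rand(Float32, 16)
y = x .< π
```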

These issues shouldn't happen when a given AbstractIrrational's conversion is defined statically by specializing the Float64/Float32 conversions for it.
To fix this, the comparison operators would need to check whether such a specialization exists, and only fall back to the dynamic (BigFloat-based) precision adjustment otherwise.
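
As a rough illustration of what "defined statically" could look like, here is a hypothetical package-defined irrational (LogTwo and its constants are made up for this example; a real definition would use correctly rounded values) that provides its own rounded Float64/Float32 conversions, so the comparisons above never need the BigFloat path at run time:

```julia
# Hypothetical example: a custom AbstractIrrational with static rounded
# conversions. The constants are the nearest Float64/Float32 to log(2);
# bracketing them with prevfloat/nextfloat gives safe rounded-down/up values.
struct LogTwo <: AbstractIrrational end

Base.BigFloat(::LogTwo) = log(big(2))
Base.Float64(::LogTwo)  = 0.6931471805599453
Base.Float32(::LogTwo)  = 0.6931472f0

# These specializations shadow the generic BigFloat-based method, so
# <, <=, etc. between LogTwo and Float32/Float64 stay GPU-compatible.
Base.Float64(x::LogTwo, ::RoundingMode{:Up})   = nextfloat(Float64(x))
Base.Float64(x::LogTwo, ::RoundingMode{:Down}) = prevfloat(Float64(x))
Base.Float32(x::LogTwo, ::RoundingMode{:Up})   = nextfloat(Float32(x))
Base.Float32(x::LogTwo, ::RoundingMode{:Down}) = prevfloat(Float32(x))
```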

@brenhinkeller added the domain:maths (Mathematical functions) and domain:gpu (Affects running Julia on a GPU) labels on Sep 1, 2023

@simonbyrne (Contributor)

I would have thought that @assume_effects :total would allow it to evaluate the result at compile time?
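
One quick host-side check (illustrative; the exact output depends on the Julia version) is to inspect the typed IR and see whether the comparison folds to a constant:

```julia
# If the @assume_effects :total annotation lets the compiler fold the
# comparison, the typed IR is just `return true`/`return false` with no
# call into the BigFloat machinery.
@code_typed 1.0f0 < π
```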

@vchuravy (Member)

The problem may be that it is calling something in an overlayed method table, and the effects get tainted as a result.

@oscardssmith (Member)

see #51080
