using CuArrays
using CUDAnative
using NNlib
A = cu(rand(Float32, 10, 100))
B = cu(rand(Float32, 10))
CUDAnative.exp.(NNlib.σ.(A .+ B))
gives:
warning: ignoring debug info with an invalid version (0) in #1
warning: ignoring debug info with an invalid version (0) in
ERROR: LLVM error: All DICompileUnits must be listed in llvm.dbg.cu
Stacktrace:
[1] verify(::LLVM.Module) at /home/slipslop/.julia/v0.6/LLVM/src/analysis.jl:11
[2] #add_entry!#26(::Bool, ::Function, ::LLVM.Module, ::Any, ::Any) at /home/slipslop/.julia/v0.6/CUDAnative/src/jit.jl:251
[3] (::CUDAnative.#kw##add_entry!)(::Array{Any,1}, ::CUDAnative.#add_entry!, ::LLVM.Module, ::Any, ::Any) at ./<missing>:0
[4] #compile_function#51(::Bool, ::Function, ::Any, ::Any, ::VersionNumber) at /home/slipslop/.julia/v0.6/CUDAnative/src/jit.jl:402
[5] cufunction(::CUDAdrv.CuDevice, ::Any, ::Any) at /home/slipslop/.julia/v0.6/CUDAnative/src/jit.jl:465
[6] macro expansion at /home/slipslop/.julia/v0.6/CUDAnative/src/execution.jl:108 [inlined]
[7] _cuda(::Tuple{Int64,Int64}, ::Int64, ::CUDAdrv.CuStream, ::CuArrays.#broadcast_kernel, ::##1#2, ::CUDAnative.CuDeviceArray{Float32,2,CUDAnative.AS.Global}, ::Tuple{Tuple{Bool,Bool},Tuple{Bool}}, ::Tuple{Tuple{Int64,Int64},Tuple{Int64}}, ::CUDAnative.CuDeviceArray{Float32,2,CUDAnative.AS.Global}, ::Tuple{CUDAnative.CuDeviceArray{Float32,1,CUDAnative.AS.Global}}) at /home/slipslop/.julia/v0.6/CUDAnative/src/execution.jl:80
[8] _broadcast! at /home/slipslop/.julia/v0.6/CuArrays/src/broadcast.jl:22 [inlined]
[9] broadcast_t at /home/slipslop/.julia/v0.6/CuArrays/src/broadcast.jl:37 [inlined]
[10] broadcast_c at /home/slipslop/.julia/v0.6/CuArrays/src/broadcast.jl:58 [inlined]
[11] broadcast(::Function, ::CuArray{Float32,2}, ::CuArray{Float32,1}) at ./broadcast.jl:455
Most likely this is caused by the difference between Base.exp and CUDAnative.exp: σ uses the former, which doesn't work for CuArrays. Normally I resolve this by overloading a function (as I did for CUDAnative.log), but in NNlib the sigmoid is defined for Float32 and broadcast over whatever array type it's applied to.
Right now I can't think of a way to define σ (and similar functions) so that they work for both Array and CuArray; ideas are welcome.
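One possible direction, as a sketch only (the helper name cuσ and the broadcast overload below are hypothetical, not part of NNlib or CuArrays): keep NNlib's generic σ for CPU arrays, and intercept broadcasts of σ over CuArray so that a GPU-safe variant calling CUDAnative.exp is used inside the kernel:

```julia
using NNlib, CuArrays, CUDAnative

# Hypothetical GPU-safe sigmoid: same math as NNlib.σ, but calling
# CUDAnative.exp so it can be compiled into a CUDA kernel.
cuσ(x) = one(x) / (one(x) + CUDAnative.exp(-x))

# Route broadcasts of NNlib.σ over a CuArray to the GPU-safe version;
# plain Arrays keep using NNlib's definition via the normal fallback.
Base.broadcast(::typeof(NNlib.σ), A::CuArray) = cuσ.(A)
```

This keeps the user-facing call `σ.(A)` unchanged; the downside is that every such function needs its own overload, which is the duplication I'd like to avoid.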
I'm using Julia 0.6.2, CUDAnative 0.5.3 (the last version supporting Julia 0.6.x), and the latest master of CuArrays.