ERROR: LoadError: UndefVarError: gpu not defined #246
If I check out …, then I can't compile Flux, neither on the named versions nor on ….

Removing Flux manually (…). My CUDA version: ….
CPU:

```
julia> Pkg.checkout("NNlib")
INFO: Checking out NNlib master...
INFO: Pulling NNlib latest master...
INFO: No packages to install, update or remove

julia> Pkg.checkout("Flux")
INFO: Checking out Flux master...
INFO: Pulling Flux latest master...
WARNING: Cannot perform fast-forward merge.
INFO: Installing ZipFile v0.5.0

julia> include("/home/sebastian/develop/julia/flux/model-zoo/mnist/mlp.jl")
INFO: Recompiling stale cache file /home/sebastian/.julia/lib/v0.6/Flux.ji for module Flux.
INFO: Downloading MNIST dataset
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   469  100   469    0     0    671      0 --:--:-- --:--:-- --:--:--   672
100 9680k  100 9680k    0     0   832k      0  0:00:11  0:00:11 --:--:--  965k
INFO: Downloading MNIST dataset
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   469  100   469    0     0    974      0 --:--:-- --:--:-- --:--:--   973
100 28881  100 28881    0     0  25492      0  0:00:01  0:00:01 --:--:-- 25492
INFO: Downloading MNIST dataset
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   467  100   467    0     0   1013      0 --:--:-- --:--:-- --:--:--  1015
100 1610k  100 1610k    0     0    692k     0  0:00:02  0:00:02 --:--:-- 1221k
INFO: Downloading MNIST dataset
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   467  100   467    0     0    800      0 --:--:-- --:--:-- --:--:--   799
100  4542  100  4542    0     0   4125      0  0:00:01  0:00:01 --:--:--  4125
loss(X, Y) = 2.422949021950707 (tracked)
loss(X, Y) = 1.5699422626022999 (tracked)
loss(X, Y) = 1.0060714383500833 (tracked)
loss(X, Y) = 0.7262447329417614 (tracked)
loss(X, Y) = 0.5770155296024464 (tracked)
loss(X, Y) = 0.5105802405264036 (tracked)
loss(X, Y) = 0.47058301177687256 (tracked)
```

GPU: with `using …`:

```
julia> include("/home/sebastian/develop/julia/flux/model-zoo/mnist/mlp.jl")
INFO: Recompiling stale cache file /home/sebastian/.julia/lib/v0.6/CuArrays.ji for module CuArrays.
WARNING: could not import NNlib.conv2d_grad_x into CUDNN
WARNING: could not import NNlib.conv2d_grad_w into CUDNN
WARNING: could not import NNlib.pool into CUDNN
WARNING: could not import NNlib.pool_grad into CUDNN
ERROR: LoadError: CUDA error: an illegal memory access was encountered (code #700, ERROR_ILLEGAL_ADDRESS)
Stacktrace:
 [1] macro expansion at /home/sebastian/.julia/v0.6/CUDAdrv/src/base.jl:148 [inlined]
 [2] CUDAdrv.CuModule(::String, ::Dict{CUDAdrv.CUjit_option,Any}) at /home/sebastian/.julia/v0.6/CUDAdrv/src/module.jl:35
 [3] cufunction(::CUDAdrv.CuDevice, ::Any, ::Any) at /home/sebastian/.julia/v0.6/CUDAnative/src/jit.jl:488
 [4] macro expansion at /home/sebastian/.julia/v0.6/CUDAnative/src/execution.jl:108 [inlined]
 [5] _cuda(::Tuple{Int64,Int64}, ::Int64, ::CUDAdrv.CuStream, ::CuArrays.#broadcast_kernel, ::Flux.Tracker.##35#36, ::CUDAnative.CuDeviceArray{Float32,2,CUDAnative.AS.Global}, ::Tuple{Tuple{Bool,Bool}}, ::Tuple{Tuple{Int64,Int64}}, ::CUDAnative.CuDeviceArray{ForwardDiff.Dual{Void,Float32,3},2,CUDAnative.AS.Global}, ::Tuple{}) at /home/sebastian/.julia/v0.6/CUDAnative/src/execution.jl:80
 [6] _broadcast! at /home/sebastian/.julia/v0.6/CuArrays/src/broadcast.jl:22 [inlined]
 [7] broadcast_t at /home/sebastian/.julia/v0.6/CuArrays/src/broadcast.jl:37 [inlined]
 [8] broadcast_c at /home/sebastian/.julia/v0.6/CuArrays/src/broadcast.jl:58 [inlined]
 [9] broadcast at ./broadcast.jl:455 [inlined]
 [10] map(::Function, ::CuArray{ForwardDiff.Dual{Void,Float32,3},2}) at /home/sebastian/.julia/v0.6/CuArrays/src/utils.jl:62
 [11] (::Flux.Tracker.Broadcasted{Flux.##72#73{Base.#log},CuArray{ForwardDiff.Dual{Void,Float32,3},2}})() at /home/sebastian/.julia/v0.6/Flux/src/tracker/array.jl:287
 [12] tracked_broadcast(::Function, ::Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1}}, ::TrackedArray{…,CuArray{Float32,2}}, ::Int64) at /home/sebastian/.julia/v0.6/Flux/src/tracker/array.jl:298
 [13] macro expansion at /home/sebastian/.julia/v0.6/NNlib/src/cubroadcast.jl:36 [inlined]
 [14] #crossentropy#71(::Int64, ::Function, ::TrackedArray{…,CuArray{Float32,2}}, ::Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1}}) at /home/sebastian/.julia/v0.6/Flux/src/layers/stateless.jl:8
 [15] crossentropy(::TrackedArray{…,CuArray{Float32,2}}, ::Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1}}) at /home/sebastian/.julia/v0.6/Flux/src/layers/stateless.jl:8
 [16] loss(::CuArray{Float32,2}, ::Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1}}) at /home/sebastian/develop/julia/flux/model-zoo/mnist/mlp.jl:21
 [17] #train!#130(::Flux.#throttled#14, ::Function, ::Function, ::Base.Iterators.Take{Base.Iterators.Repeated{Tuple{CuArray{Float32,2},Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1}}}}}, ::Flux.Optimise.##71#75) at /home/sebastian/.julia/v0.6/Flux/src/optimise/train.jl:39
 [18] (::Flux.Optimise.#kw##train!)(::Array{Any,1}, ::Flux.Optimise.#train!, ::Function, ::Base.Iterators.Take{Base.Iterators.Repeated{Tuple{CuArray{Float32,2},Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1}}}}}, ::Function) at ./<missing>:0
 [19] include_from_node1(::String) at ./loading.jl:576
 [20] include(::String) at ./sysimg.jl:14
while loading /home/sebastian/develop/julia/flux/model-zoo/mnist/mlp.jl, in expression starting on line 29
```
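For isolating this, a minimal snippet that exercises the same code path as `mlp.jl` (the tracked broadcast inside `crossentropy`, per frames [11]–[15] above) might look like the following. This is only a sketch: the sizes and data are made up, and it assumes Flux with CuArrays on Julia 0.6 and a working CUDA toolchain.

```julia
# Hypothetical minimal reproduction of the failing crossentropy broadcast.
using Flux, CuArrays

x = gpu(rand(Float32, 28^2, 10))               # fake batch of 10 flattened "images"
y = gpu(Flux.onehotbatch(rand(0:9, 10), 0:9))  # random one-hot labels
m = Chain(Dense(28^2, 32, relu), Dense(32, 10), softmax) |> gpu

# This is the call that ends in the illegal-address error above:
Flux.crossentropy(m(x), y)
```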
You might have some dependencies that are holding back upgrades? Does …
I followed your advice to start from scratch and noticed that LLVM was still complaining. Thus I've rebuilt Julia v0.6.2 locally and started with a fresh …. Is there any way to tell which package is holding the others back? Now the tests are passing:

```
julia> Pkg.status("Flux")
 - Flux 0.5.1

julia> Pkg.status("CuArrays")
 - CuArrays 0.5.0

julia> Pkg.test("CuArrays")
INFO: Computing test dependencies for CuArrays...
INFO: Installing FFTW v0.0.4
INFO: Building FFTW
INFO: Testing CuArrays
INFO: Testing using device GeForce 940M
INFO: Testing CuArrays/CUDNN
Test Summary: | Pass  Total
CuArrays      |  676    676
INFO: CuArrays tests passed
INFO: Removing FFTW v0.0.4

julia> Pkg.test("Flux")
INFO: Testing Flux
...
INFO: Testing Flux/GPU
INFO: Testing Flux/CUDNN
Test Summary: | Pass  Total
Flux          |  172    172
INFO: Flux tests passed
```

Unfortunately it is still not working:

```
julia> include("/home/sebastian/develop/julia/flux/model-zoo/mnist/mlp.jl")
ERROR: LoadError: Broadcast output type Any is not concrete
Stacktrace:
 [1] broadcast_t at /home/sebastian/.julia/v0.6/CuArrays/src/broadcast.jl:34 [inlined]
 [2] broadcast_c at /home/sebastian/.julia/v0.6/CuArrays/src/broadcast.jl:63 [inlined]
 [3] broadcast at ./broadcast.jl:455 [inlined]
 [4] tracked_broadcast(::Function, ::Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1}}, ::TrackedArray{…,CuArray{Float32,2}}, ::Int64) at /home/sebastian/.julia/v0.6/Flux/src/tracker/array.jl:278
 [5] #crossentropy#71(::Int64, ::Function, ::TrackedArray{…,CuArray{Float32,2}}, ::Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1}}) at /home/sebastian/.julia/v0.6/Flux/src/layers/stateless.jl:8
 [6] crossentropy(::TrackedArray{…,CuArray{Float32,2}}, ::Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1}}) at /home/sebastian/.julia/v0.6/Flux/src/layers/stateless.jl:8
 [7] loss(::CuArray{Float32,2}, ::Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1}}) at /home/sebastian/develop/julia/flux/model-zoo/mnist/mlp.jl:21
 [8] #train!#130(::Flux.#throttled#14, ::Function, ::Function, ::Base.Iterators.Take{Base.Iterators.Repeated{Tuple{CuArray{Float32,2},Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1}}}}}, ::Flux.Optimise.##71#75) at /home/sebastian/.julia/v0.6/Flux/src/optimise/train.jl:39
 [9] (::Flux.Optimise.#kw##train!)(::Array{Any,1}, ::Flux.Optimise.#train!, ::Function, ::Base.Iterators.Take{Base.Iterators.Repeated{Tuple{CuArray{Float32,2},Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1}}}}}, ::Function) at ./<missing>:0
 [10] include_from_node1(::String) at ./loading.jl:576
 [11] include(::String) at ./sysimg.jl:14
while loading /home/sebastian/develop/julia/flux/model-zoo/mnist/mlp.jl, in expression starting on line 29
```
Checking out the current master on …:

```
julia> Pkg.checkout("Flux")
INFO: Checking out Flux master...
INFO: Pulling Flux latest master...
WARNING: Cannot perform fast-forward merge.
INFO: No packages to install, update or remove

julia> Pkg.build("Flux")
INFO: Building SpecialFunctions
```

I first added, on line …:

```
X = hcat(float.(reshape.(imgs, :))...) |> gpu
info(typeof(X))
```

```
julia> include("model-zoo/mnist/mlp.jl")
INFO: Recompiling stale cache file /home/sebastian/.julia/lib/v0.6/Flux.ji for module Flux.
INFO: CuArray{Float32,2}
loss(X, Y) = 2.3530197f0 (tracked)
loss(X, Y) = 0.6614058f0 (tracked)
loss(X, Y) = 0.41540956f0 (tracked)
loss(X, Y) = 0.32339448f0 (tracked)
loss(X, Y) = 0.2818481f0 (tracked)
0.9237
```
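For what it's worth, the `INFO: CuArray{Float32,2}` output confirms the `|> gpu` conversion took effect: in this era of Flux, `gpu` is roughly the identity on plain arrays unless a CUDA array package is loaded. A rough sketch of that behaviour (an assumption about this Flux version's API, not something verified here):

```julia
using Flux            # provides `gpu`
A = rand(Float32, 4, 4)
gpu(A)                # without CuArrays loaded: still an Array{Float32,2}

using CuArrays        # after this, `gpu` converts to a CuArray
X = gpu(A)            # typeof(X) should now be CuArray{Float32,2}
```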
Do I need Julia 0.7 for Flux 0.5? And how do I check out Flux 0.5 with Julia?
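In case it helps: under Julia 0.6's old package manager, one way to move off a checked-out master and onto a tagged release is `Pkg.free` plus `Pkg.pin`. A sketch (assuming the 0.5.0 tag is registered; whether Flux 0.5 itself requires Julia 0.7 is a separate question):

```julia
Pkg.free("Flux")           # leave the checked-out master branch
Pkg.pin("Flux", v"0.5.0")  # pin to the 0.5.0 tag
Pkg.build("Flux")
```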
When I try to run a current model-zoo example I get the following error:

```
$ julia ./mnist/mlp.jl
```