Scalar indexing in getobs #120

Closed
chriselrod opened this issue Aug 29, 2022 · 3 comments
Labels: not-an-issue (This doesn't seem right)

Comments

chriselrod commented Aug 29, 2022

ERROR: Scalar indexing is disallowed.
Invocation of getindex resulted in scalar indexing of a GPU array.
This is typically caused by calling an iterating implementation of a method.
Such implementations *do not* execute on the GPU, but very slowly on the CPU,
and therefore are only permitted from the REPL for prototyping purposes.
If you did intend to index this array, annotate the caller with @allowscalar.
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:33
  [2] assertscalar(op::String)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/lojQM/src/GPUArraysCore.jl:87
  [3] getindex
    @ ~/.julia/packages/GPUArrays/fqD8z/src/host/indexing.jl:9 [inlined]
  [4] reindex
    @ ./subarray.jl:254 [inlined]
  [5] reindex (repeats 3 times)
    @ ./subarray.jl:250 [inlined]
  [6] getindex
    @ ./subarray.jl:276 [inlined]
  [7] macro expansion
    @ ./multidimensional.jl:867 [inlined]
  [8] macro expansion
    @ ./cartesian.jl:64 [inlined]
  [9] macro expansion
    @ ./multidimensional.jl:862 [inlined]
 [10] _unsafe_getindex!
    @ ./multidimensional.jl:875 [inlined]
 [11] _unsafe_getindex(::IndexCartesian, ::SubArray{Float32, 4, CuArray{Float32, 4, CUDA.Mem.DeviceBuffer}, Tuple{Base.Slice{Base.OneTo{Int64}}, Base.Slice{Base.OneTo{Int64}}, Base.Slice{Base.OneTo{Int64}}, CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}}, false}, ::Base.Slice{Base.OneTo{Int64}}, ::Base.Slice{Base.OneTo{Int64}}, ::Base.Slice{Base.OneTo{Int64}}, ::UnitRange{Int64})
    @ Base ./multidimensional.jl:853
 [12] _getindex
    @ ./multidimensional.jl:839 [inlined]
 [13] getindex
    @ ./abstractarray.jl:1218 [inlined]
 [14] getobs
    @ ~/.julia/packages/MLUtils/OojOS/src/observation.jl:96 [inlined]
 [15] #7
    @ ~/.julia/packages/MLUtils/OojOS/src/observation.jl:136 [inlined]
 [16] map
    @ ./tuple.jl:222 [inlined]
 [17] getobs(tup::Tuple{SubArray{Float32, 4, CuArray{Float32, 4, CUDA.Mem.DeviceBuffer}, Tuple{Base.Slice{Base.OneTo{Int64}}, Base.Slice{Base.OneTo{Int64}}, Base.Slice{Base.OneTo{Int64}}, CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}}, false}, SubArray{Bool, 2, Flux.OneHotArray{UInt32, 10, 1, 2, CuArray{UInt32, 1, CUDA.Mem.DeviceBuffer}}, Tuple{Base.Slice{Base.OneTo{Int64}}, Vector{Int64}}, false}}, indices::UnitRange{Int64})
    @ MLUtils ~/.julia/packages/MLUtils/OojOS/src/observation.jl:136
 [18] getobs(A::MLUtils.BatchView{Tuple{SubArray{Float32, 4, CuArray{Float32, 4, CUDA.Mem.DeviceBuffer}, Tuple{Base.Slice{Base.OneTo{Int64}}, Base.Slice{Base.OneTo{Int64}}, Base.Slice{Base.OneTo{Int64}}, CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}}, false}, SubArray{Bool, 2, Flux.OneHotArray{UInt32, 10, 1, 2, CuArray{UInt32, 1, CUDA.Mem.DeviceBuffer}}, Tuple{Base.Slice{Base.OneTo{Int64}}, Vector{Int64}}, false}}, Tuple{SubArray{Float32, 4, CuArray{Float32, 4, CUDA.Mem.DeviceBuffer}, Tuple{Base.Slice{Base.OneTo{Int64}}, Base.Slice{Base.OneTo{Int64}}, Base.Slice{Base.OneTo{Int64}}, CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}}, false}, SubArray{Bool, 2, Flux.OneHotArray{UInt32, 10, 1, 2, CuArray{UInt32, 1, CUDA.Mem.DeviceBuffer}}, Tuple{Base.Slice{Base.OneTo{Int64}}, Vector{Int64}}, false}}}, i::Int64)
    @ MLUtils ~/.julia/packages/MLUtils/OojOS/src/batchview.jl:105

When I ran my code about six months ago, it worked without any scalar indexing, so this appears to be a regression.
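
The original post doesn't include the code that triggered this, but judging from the stack trace it comes from batching shuffled views of CuArrays. Below is a minimal sketch of that kind of pipeline; the CUDA/Flux setup and the shuffleobs call are assumptions rather than the reporter's actual code, and it only hits the scalar-indexing path on affected MLUtils versions.

using CUDA, Flux, MLUtils

# Hypothetical reconstruction: 4-D GPU features and one-hot GPU labels,
# turned into views by shuffleobs and then batched with BatchView.
x = CUDA.rand(Float32, 28, 28, 1, 64)            # features on the GPU
y = Flux.onehotbatch(rand(0:9, 64), 0:9) |> gpu  # one-hot labels on the GPU

data = shuffleobs((x, y))                  # produces SubArrays over the GPU arrays
batches = BatchView(data; batchsize = 16)
first(batches)                             # getobs on the views -> the error above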

jowch commented Aug 31, 2022

This seems to happen when shuffle = true and the data are GPU arrays:

using MLUtils  # provides DataLoader

x = rand(Float32, 10)
y = rand(Float32, 10)

# Popt.metal is a local helper that moves the data onto the GPU (Metal arrays)
loader = DataLoader(Popt.metal((x, y)); batchsize = 1)
first(loader)  # => OK

loader = DataLoader(Popt.metal((x, y)); batchsize = 1, shuffle = true)
first(loader)  # => ERROR: Scalar indexing is disallowed

@darsnack (Member)

This appears to be the same issue as FluxML/Flux.jl#1935, which was fixed by #73 (and the tests added in that PR pass). I also tried reproducing the issue and could not:

julia> using MLUtils, JLArrays

julia> JLArrays.allowscalar(false)

julia> x = convert(JLArray, rand(Float32, 3, 4))
3×4 JLArray{Float32, 2}:
 0.734856  0.302647  0.0874875  0.85691
 0.388475  0.916865  0.720555   0.273599
 0.600191  0.806297  0.826627   0.988175

julia> y = convert(JLArray, rand(Float32, 2, 4))
2×4 JLArray{Float32, 2}:
 0.221076  0.757612  0.0133287  0.0798213
 0.369356  0.8213    0.950042   0.130221

julia> view(x, :, 1:3)[:, 1:2]
ERROR: Scalar indexing is disallowed.
[...]

julia> d = DataLoader((x, y); batchsize = 1, shuffle = true)
DataLoader{Tuple{JLArray{Float32, 2}, JLArray{Float32, 2}}, Random._GLOBAL_RNG, Val{nothing}}((Float32[0.73485607 0.3026467 0.08748752 0.8569099; 0.38847476 0.9168652 0.7205551 0.27359885; 0.60019106 0.8062971 0.82662725 0.98817503], Float32[0.22107565 0.75761235 0.013328671 0.07982129; 0.36935622 0.8213003 0.9500422 0.13022065]), 1, false, true, true, false, Val{nothing}(), Random._GLOBAL_RNG())

julia> first(d)
(Float32[0.08748752; 0.7205551; 0.82662725;;], Float32[0.013328671; 0.9500422;;])

julia> length(collect(d))
4

Are you both sure you are running the latest MLUtils.jl?
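
For reference, a quick way to check which MLUtils version the active environment actually resolved, and to bump it (the printed version in the comment is illustrative):

julia> import Pkg

julia> Pkg.status("MLUtils")   # prints e.g. `MLUtils v0.2.x` for the active environment

julia> Pkg.update("MLUtils")   # upgrade to the newest compatible release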

darsnack added the not-an-issue label Aug 31, 2022

jowch commented Aug 31, 2022

For me, Pluto had somehow installed v0.2.1 instead of the latest version, even though I created the notebook within the last month. Manually upgrading seems to fix the issue, as expected. Hopefully this is also the case for @chriselrod.
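
One way to force a newer MLUtils inside a Pluto notebook is to opt out of Pluto's built-in package manager by calling Pkg.activate in a cell and installing explicitly; this is a sketch of that approach, not necessarily the exact steps used here.

begin
    import Pkg
    Pkg.activate(mktempdir())   # disables Pluto's built-in package management for this notebook
    Pkg.add("MLUtils")          # resolves the latest compatible release instead of a stale one
    using MLUtils
end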
