Making a new CUDA 3 compatible release? #662

Closed
jonathan-laurent opened this issue Jul 12, 2021 · 7 comments

@jonathan-laurent

I see that the latest Knet release (1.4.6) is not compatible with CUDA 3.
The master version seems to work fine with CUDA 3.3.3 and AlphaZero.jl though.

Are there any known issues remaining to be fixed before Knet officially supports CUDA 3?
If not, what about making a new release?

@denizyuret (Owner)

Started working on this: just fixing a few minor incompatibilities and a possibly major one affecting in-place operations in test/karray.jl. Latest work is on the dy/cudnn branch.

@denizyuret (Owner)

... hmm, and there is also an across-the-board slow-down, which may or may not be related to the aforementioned in-place operations. This is going to take a bit longer than I thought.

@jonathan-laurent (Author) commented Jul 22, 2021

I think I observed the slow-down you mentioned. Indeed, Knet used to be 20-30% faster than Flux on my connect four benchmark. However, when I tried Knet#master with CUDA 3.3.0, it was about 20% slower than the latest version of Flux. (You should take these numbers with a grain of salt, though, as my measurements weren't very rigorous.)

@denizyuret (Owner)

@maleadt Knet tests with CUDA 3.0 give me some errors that I do not understand; maybe you can point me in the right direction.

First, pkg"test Knet" fails with errors of the following type when testing in-place addition:

  Test threw exception                                                                    
  Expression: (a4 .+= a3) == (k4 .+= k3)                                                  
  Scalar indexing is disallowed.                                                          
  Invocation of getindex resulted in scalar indexing of a GPU array.                      
  This is typically caused by calling an iterating implementation of a method.            
  Such implementations *do not* execute on the GPU, but very slowly on the CPU,           
  and therefore are only permitted from the REPL for prototyping purposes.                
  If you did intend to index this array, annotate the caller with @allowscalar.           

However, when I include the individual test file with include(Knet.dir("test/karray.jl")), I do not get an error, just a warning, and the tests pass:

julia> include(Knet.dir("test/karray.jl"))                                                
┌ Warning: Performing scalar indexing on task Task (runnable) @0x00007f338d034010.        
│ Invocation of getindex resulted in scalar indexing of a GPU array.                      
│ This is typically caused by calling an iterating implementation of a method.            
│ Such implementations *do not* execute on the GPU, but very slowly on the CPU,           
│ and therefore are only permitted from the REPL for prototyping purposes.                
│ If you did intend to index this array, annotate the caller with @allowscalar.           
└ @ GPUArrays ~/.julia/packages/GPUArrays/8dzSJ/src/host/indexing.jl:56                   
Test Summary: | Pass  Total                                                               
karray        |  318    318                                                               

None of this happened pre-CUDA-3.x.

  • Did something change with the default behavior of allowscalar?
  • Do you have any idea why Pkg.test would fail but including the failing test file would pass?
  • Finally, none of these calls should result in scalar indexing in the first place; did anything change with array indexing?
(a4 .+= a3) == (k4 .+= k3)
(a4 .= a3) == (k4 .= k3)
(a4[:] .= a3[:]) == (k4[:] .= k3[:])
(a4[:, :] .= a3) == (k4[:, :] .= k3)
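
For reference, a rough standalone version of the failing pattern (a hypothetical setup using plain CuArrays, not the actual harness in test/karray.jl):

  using CUDA

  # Hypothetical stand-ins for the a3/a4 (host) and k3/k4 (device) test arrays:
  a3, a4 = rand(Float32, 4, 4), rand(Float32, 4, 4)
  k3, k4 = CuArray(a3), CuArray(a4)

  # In-place broadcasts on each side, then an == across host and device arrays:
  (a4 .+= a3) == (k4 .+= k3)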

@maleadt (Collaborator) commented Jul 23, 2021

Scalar iteration is now disallowed by default, but allowed with a warning in interactive sessions. This is to facilitate debugging. You can always force it off in your interactive session too by calling CUDA.allowscalar(false).
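
A minimal sketch of that behavior (the calls below are standard CUDA.jl API; the array itself is arbitrary):

  using CUDA

  x = CUDA.rand(4)

  # Interactive sessions: scalar indexing works but warns once per task.
  x[1]

  # Non-interactive code (e.g. Pkg.test) is strict by default; the same
  # strictness can be forced in the REPL:
  CUDA.allowscalar(false)
  # x[1]                  # would now throw "Scalar indexing is disallowed."

  # Opt back in explicitly for a single expression:
  CUDA.@allowscalar x[1]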

I don't recall changing iteration specifically, but a lot has changed since pre-3.0. Doesn't the backtrace tell you anything?

@denizyuret (Owner)

Thanks, that helps. The problem was with the == comparison of Array vs CuArray.
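
For anyone hitting the same thing, a hedged sketch of the usual workaround (moving the device array to the host before comparing; this may or may not be exactly what the Knet tests ended up doing):

  using CUDA

  a = rand(Float32, 4, 4)
  k = CuArray(a)

  # == between an Array and a CuArray falls back to Base's generic elementwise
  # comparison, which indexes the GPU array one element at a time:
  # a == k                    # errors when scalar indexing is disallowed

  a == Array(k)               # compare entirely on the host instead
  CUDA.@allowscalar a == k    # or explicitly allow scalar indexing for one call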

@denizyuret (Owner)

@jonathan-laurent The CUDA 3 compatible Knet-1.4.7 is passing tests; I will release it today. The tests still seem slower, but not for any simple reason I could detect. My profiling script Knet/prof/ops20.jl gives similar timings for 1.4.6 and 1.4.7. If you have something that can help me debug the performance issues, I can work on it for the next release.
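
For what it's worth, a generic way to time a GPU workload in Julia (a sketch using the standard CUDA.@sync and BenchmarkTools tooling; the workload below is an arbitrary placeholder, not Knet's prof/ops20.jl script):

  using CUDA, BenchmarkTools

  x = CUDA.rand(Float32, 1024, 1024)

  # Kernels launch asynchronously, so synchronize inside the timed expression
  # to make sure the measurement covers the actual device work:
  @btime CUDA.@sync sum($x .* $x)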
