Failing vector-add test (Linux amd64 CUDA 5) #1

Closed
wvxvw opened this issue Aug 15, 2013 · 7 comments

wvxvw commented Aug 15, 2013

This test fails for me on FC17:

uname -r
3.9.10-100.fc17.x86_64
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2012 NVIDIA Corporation
Built on Fri_Sep_21_17:28:58_PDT_2012
Cuda compilation tools, release 5.0, V0.2.1221
sbcl --version
SBCL 1.0.57-1.fc17

The error I receive is CUDA_ERROR_LAUNCH_FAILED, which is, as far as I know, a generic error meaning that "something" went wrong.

WARNING: This may not be a bug; in fact, it may be a misconfiguration on my side. However, I'd appreciate it if you could tell me what else to check.

This is the output from the test:

VECTOR-ADD> (main)
CU-INIT succeeded.
CU-DEVICE-GET succeeded.
CU-CTX-CREATE succeeded.
CU-MEM-ALLOC succeeded.
CU-MEM-ALLOC succeeded.
CU-MEM-ALLOC succeeded.
CU-MEMCPY-HOST-TO-DEVICE succeeded.
CU-MEMCPY-HOST-TO-DEVICE succeeded.
nvcc -arch=sm_11 -I /home/wvxvw/quicklisp/local-projects/cl-cuda/include -ptx -o /tmp/cl-cuda-sBBXlw.ptx /tmp/cl-cuda-sBBXlw.cu
CU-MODULE-LOAD succeeded.
CU-MODULE-GET-FUNCTION succeeded.
CU-LAUNCH-KERNEL succeeded.
; Evaluation aborted on #<SIMPLE-ERROR "~A failed with driver API error No. ~A.~%~A" {1003FEF573}>.

takagi (Owner) commented Aug 16, 2013

Thanks for your report!

I got the same error when I used cl-cuda on the Amazon EC2 environment listed in the "Verification environment" section of README.markdown.

In my case, the error was caused by the command line options passed to the nvcc command. By default, cl-cuda passes the "-arch=sm_11" option to nvcc to control PTX module generation.

The relevant options are "--gpu-architecture (-arch)" and "--machine (-m)". Please try specifying the options appropriate for your environment. In my case, the "-m32" option was needed in spite of a 64-bit OS and SBCL.

You can specify the options passed to the nvcc command through cl-cuda's exported special variable *nvcc-options*. Please setf it to a list of strings, one for each option you want to pass.

Example:

(setf *nvcc-options* (list "-arch=sm_20" "-m32"))
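
As an aside, the value for -arch should match your GPU's compute capability, which the deviceQuery sample bundled with the CUDA toolkit reports (the path below is typical for CUDA 5.x but may vary, and the sample has to be built with make first):

$ /usr/local/cuda/samples/1_Utilities/deviceQuery/deviceQuery | grep Capability
  CUDA Capability Major/Minor version number:    2.1

A device reporting 2.1, for example, would take "-arch=sm_20".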

wvxvw (Author) commented Aug 16, 2013

After some poking around I got it to pass this test (only this one; I haven't tried the rest yet). I updated to FC18 and to CUDA 5.5 (installed from NVIDIA's repo, but the drivers are still from rpmfusion), while specifying the nvcc options as you mentioned.

Thanks for the assistance! I'll see if I can get any further with it :)

takagi (Owner) commented Aug 17, 2013

I'm glad to hear that. :)

Would you mind sharing your working environment so that I can list it in the "Verification environment" section of README.markdown?

  • OS as uname -r
  • CUDA version as nvcc --version
  • SBCL version as sbcl --version
  • GPU card
  • nvcc options
  • Working examples: vector-add, ...

wvxvw (Author) commented Aug 17, 2013

Yup, sure, here's the output:

$ uname -r
3.10.6-100.fc18.x86_64
$ sbcl --version
SBCL 1.1.2-1.fc18
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2013 NVIDIA Corporation
Built on Wed_Jul_17_18:36:13_PDT_2013
Cuda compilation tools, release 5.5, V5.5.0
$ cat /proc/driver/nvidia/version 
NVRM version: NVIDIA UNIX x86_64 Kernel Module  319.32  Wed Jun 19 15:51:20 PDT 2013
GCC version:  gcc version 4.7.2 20121109 (Red Hat 4.7.2-8) (GCC) 
$ cat /proc/driver/nvidia/gpus/0/information 
Model:       GeForce GTX 560M
IRQ:         16
GPU UUID:    GPU-84450b52-eee8-d39b-1a61-39449ea0aac4
Video BIOS:      70.26.29.00.0d
Bus Type:    PCIe
DMA Size:    40 bits
DMA Mask:    0xffffffffff
Bus Location:    0000:01.00.0

Running the test from SLIME:

(setf *nvcc-options* (list "-arch=sm_20" "-m32"))
;; ("-arch=sm_20" "-m32")
(main)
CU-INIT succeeded.
CU-DEVICE-GET succeeded.
CU-CTX-CREATE succeeded.
CU-MEM-ALLOC succeeded.
CU-MEM-ALLOC succeeded.
CU-MEM-ALLOC succeeded.
CU-MEMCPY-HOST-TO-DEVICE succeeded.
CU-MEMCPY-HOST-TO-DEVICE succeeded.
nvcc -arch=sm_20 -m32 -I /home/wvxvw/quicklisp/local-projects/cl-cuda/include -ptx -o /tmp/cl-cuda-KHiKWd.ptx /tmp/cl-cuda-KHiKWd.cu
CU-MODULE-LOAD succeeded.
CU-MODULE-GET-FUNCTION succeeded.
CU-LAUNCH-KERNEL succeeded.
CU-MEMCPY-DEVICE-TO-HOST succeeded.
verification succeed.
CU-MEM-FREE succeeded.
CU-MEM-FREE succeeded.
CU-MEM-FREE succeeded.
CU-MODULE-UNLOAD succeeded.
CU-CTX-DESTROY succeeded.
NIL

It might be worth noting, for whoever wants to reproduce my setup, that I did not use the video drivers included in the CUDA package; instead, I used the ones from rpmfusion.

takagi (Owner) commented Aug 19, 2013

Thanks a lot!
I've updated README.markdown with a note about the video drivers you use. If anything is wrong, please point it out.

takagi closed this as completed Aug 22, 2013

melisgl (Contributor) commented Dec 11, 2013

I ran into the same problem on 64-bit Linux. The diff below is a workaround; cffi-grovel is the real solution.

diff --git a/src/cl-cuda.lisp b/src/cl-cuda.lisp
index 40b148f..4f92444 100644
--- a/src/cl-cuda.lisp
+++ b/src/cl-cuda.lisp
@@ -75,10 +75,12 @@
 (cffi:defctype cu-module :pointer)
 (cffi:defctype cu-function :pointer)
 (cffi:defctype cu-stream :pointer)
-(cffi:defctype cu-device-ptr :unsigned-int)
+;;; FIXME: Works on 64 bit linux, probably doesn't on 64 bit windows.
+;;; Use CFFI grovel instead.
+(cffi:defctype cu-device-ptr :unsigned-long)
 (cffi:defctype cu-event :pointer)
 (cffi:defctype cu-graphics-resource :pointer)
-(cffi:defctype size-t :unsigned-int)
+(cffi:defctype size-t :unsigned-long)
 
 ;;;
diff --git a/t/test-cl-cuda.lisp b/t/test-cl-cuda.lisp
index a0c6937..a332cff 100644
--- a/t/test-cl-cuda.lisp
+++ b/t/test-cl-cuda.lisp
@@ -226,7 +226,9 @@
     (cl-cuda::free-memory-block blk))
   (is-error (cl-cuda::alloc-memory-block 'void 1024 ) simple-error)
   (is-error (cl-cuda::alloc-memory-block 'int* 1024 ) simple-error)
-  (is-error (cl-cuda::alloc-memory-block 'int (* 1024 1024 256)) simple-error)
+  ;; This test seems to rely on the memory available on the gpu.
+  #+nil
+  (is-error (cl-cuda::alloc-memory-block 'int (* 1024 1024 256 )) simple-error)
   (is-error (cl-cuda::alloc-memory-block 'int 0 ) simple-error)
   (is-error (cl-cuda::alloc-memory-block 'int -1 ) type-error)))

takagi (Owner) commented Jan 16, 2014

In the CUDA driver API, CUdeviceptr is defined as unsigned int, not as a pointer type.
http://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__TYPES.html#group__CUDA__TYPES_1g5e264ce2ad6a38761e7e04921ef771de

Why? It is said that a CUdeviceptr is a handle to an allocation in device memory, not an address in device memory.
http://stackoverflow.com/a/18141906/756963
https://devtalk.nvidia.com/default/topic/467742/cudeviceptr-should-be-typdedef-39-d-as-void-instead-of-unsigned-int/

On the other hand, as you say, the definition of size_t depends on the environment, and using cffi-grovel is the real solution. I've opened issue #3.
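
For reference, a minimal cffi-grovel sketch might look like the following (the file name grovel.lisp and the exact forms are hypothetical, assuming cuda.h is visible on the C include path). A grovel file is processed by cffi-grovel at build time, which derives the type definitions from the actual C headers instead of hard-coding them:

;;; grovel.lisp -- hypothetical grovel file; processed by cffi-grovel
;;; at build time rather than loaded as ordinary Lisp.
(in-package :cl-cuda)

;; Pull in the headers that define the types we want to mirror.
(include "stddef.h")
(include "cuda.h")

;; Define Lisp foreign types with whatever size the headers declare,
;; so cu-device-ptr and size-t match the platform's actual definitions.
(ctype size-t "size_t")
(ctype cu-device-ptr "CUdeviceptr")

The system definition would then declare :defsystem-depends-on (:cffi-grovel) and list the file as a (:grovel-file "grovel") component, so the correct sizes are re-derived on each platform at build time.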
