
Cannot run same script twice. #60

Closed · decabyte opened this issue Aug 9, 2014 · 7 comments

Comments

decabyte commented Aug 9, 2014

After installing pyopencl and running some of the examples I found this strange behaviour: running the same program twice produces a RuntimeError. For instance, running demo.py from the examples folder:

$ python demo.py 
[ 0.  0.  0. ...,  0.  0.  0.]
0.0

$ python demo.py 
Traceback (most recent call last):
  File "demo.py", line 22, in <module>
    """).build()
  File "/usr/local/lib/python2.7/dist-packages/pyopencl-2014.1-py2.7-linux-x86_64.egg/pyopencl/__init__.py", line 213, in build
    options=options, source=self._source)
  File "/usr/local/lib/python2.7/dist-packages/pyopencl-2014.1-py2.7-linux-x86_64.egg/pyopencl/__init__.py", line 253, in _build_and_catch_errors
    raise err
pyopencl.RuntimeError: clBuildProgram failed: invalid program - 

Build on <pyopencl.Device 'Intel HD Graphics Family' on 'Experiment Intel Gen OCL Driver' at 0x7f273f4c2720>:

(options: -I /usr/local/lib/python2.7/dist-packages/pyopencl-2014.1-py2.7-linux-x86_64.egg/pyopencl/cl)
(source saved as /tmp/tmpT6Tv8O.cl)

I tried installing both the pip version of pyopencl and the git version; both produce the same error. Compiling C/C++ OpenCL code with the installed driver doesn't produce any errors.

Here is the generated .cl file:

$ cat /tmp/tmpw0tqln.cl

__kernel void sum(__global const float *a_g, __global const float *b_g, __global float *res_g) {
  int gid = get_global_id(0);
  res_g[gid] = a_g[gid] + b_g[gid];
}
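
(For context, the kernel above is built from host code along these lines; this is a minimal sketch in the spirit of the examples' demo.py, not the exact script, and the build() call is where the second run fails:)

import numpy as np
import pyopencl as cl

a_np = np.random.rand(50000).astype(np.float32)
b_np = np.random.rand(50000).astype(np.float32)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a_np)
b_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b_np)
res_g = cl.Buffer(ctx, mf.WRITE_ONLY, a_np.nbytes)

# On the second run clBuildProgram rejects the cached binary, so build()
# raises the RuntimeError shown in the traceback above.
prg = cl.Program(ctx, """
__kernel void sum(__global const float *a_g, __global const float *b_g, __global float *res_g) {
  int gid = get_global_id(0);
  res_g[gid] = a_g[gid] + b_g[gid];
}
""").build()

prg.sum(queue, a_np.shape, None, a_g, b_g, res_g)

res_np = np.empty_like(a_np)
cl.enqueue_copy(queue, res_np, res_g)
print(res_np - (a_np + b_np))                   # the array of zeros in the output above
print(np.linalg.norm(res_np - (a_np + b_np)))   # the 0.0 in the output above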

However, after removing the cached output under /tmp, the execution resumes without any problems:

$ rm -r /tmp/pyopencl-compiler-cache-v2-uidvalerio-py2.7.6.final.0
$ python demo.py 
[ 0.  0.  0. ...,  0.  0.  0.]
0.0

Any ideas on how to fix this problem? :)
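
(A hedged workaround sketch for anyone hitting the same cache rejection: the directory cleanup simply mirrors the rm -r above, while the PYOPENCL_NO_CACHE environment variable is an assumption and may not be honoured by this particular pyopencl version.)

import glob
import os
import shutil

# Clear any stale pyopencl compiler caches before importing pyopencl, so the
# driver always gets a fresh clBuildProgram instead of a cached binary it may
# reject. This mirrors the `rm -r /tmp/pyopencl-compiler-cache-*` workaround.
for cache_dir in glob.glob("/tmp/pyopencl-compiler-cache-*"):
    shutil.rmtree(cache_dir, ignore_errors=True)

# Assumption: some pyopencl versions honour PYOPENCL_NO_CACHE to skip the
# kernel cache entirely; whether 2014.1 supports it is not confirmed here.
os.environ["PYOPENCL_NO_CACHE"] = "1"

import pyopencl as cl  # imported after the cache cleanup on purpose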

yuyichao (Contributor) commented Aug 9, 2014

Can you try a lower version of beignet, e.g. 0.9?

decabyte (Author) commented Aug 9, 2014

Yes, my fault. I didn't install the right beignet version; with 0.8.1 the examples run without problems. First time running PyOpenCL without a warming Nvidia card. :)

It is worth noting that the version shipped with Ubuntu 14.04 is very old, 0.3.1-1.

Any recommendation for getting the right version on different machines? Is it better to compile beignet from source, checking out a specific version every time, or to rely on other packages? Maybe the Debian / Ubuntu+1 ones?

$ python benchmark.py 
Execution time of test without OpenCL:  0.0580661296844 s
===============================================================
Platform name: Experiment Intel Gen OCL Driver
Platform profile: FULL_PROFILE
Platform vendor: Intel
Platform version: OpenCL 1.1 beignet 0.8.0
---------------------------------------------------------------
Device name: Intel(R) HD Graphics IvyBridge M GT2
Device type: GPU
Device memory:  128 MB
Device max clock speed: 1000 MHz
Device compute units: 128
Device max work group size: 1024
Device max work item sizes: [512, 512, 512]
Data points: 8388608
Workers: 256
Preferred work group size multiple: 16
Execution time of test: 0.191544 s
Results OK
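
(The platform and device fields printed above can be queried directly through pyopencl's standard introspection attributes; a minimal sketch, not the actual benchmark.py:)

import pyopencl as cl

for platform in cl.get_platforms():
    print("Platform name:    %s" % platform.name)
    print("Platform profile: %s" % platform.profile)
    print("Platform vendor:  %s" % platform.vendor)
    print("Platform version: %s" % platform.version)
    for device in platform.get_devices():
        print("  Device name:                %s" % device.name)
        print("  Device type:                %s" % cl.device_type.to_string(device.type))
        print("  Device memory:              %d MB" % (device.global_mem_size // (1024 ** 2)))
        print("  Device max clock speed:     %d MHz" % device.max_clock_frequency)
        print("  Device compute units:       %d" % device.max_compute_units)
        print("  Device max work group size: %d" % device.max_work_group_size)
        print("  Device max work item sizes: %s" % (device.max_work_item_sizes,))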

decabyte closed this as completed Aug 9, 2014
yuyichao (Contributor) commented Aug 9, 2014

My local beignet version is 3 commits ahead of 0.9.1 and it runs without problems. If you find a commit that breaks pyopencl, I think you should report it to beignet along with the bad commit you are on.

yuyichao (Contributor) commented Aug 9, 2014

Also, I'm surprised that beignet <= 0.9.1 works at all from a binary package, since it is only recently that beignet can run on a different architecture than the one it was compiled on (i.e. if you wanted to run it on an IvyBridge CPU, beignet had to be compiled on that CPU as well).
I would recommend using the latest git master, or at least >= 0.9.2, since that is the first release which uses LLVM bytecode at compile time.

decabyte (Author) commented Aug 9, 2014

Thanks @yuyichao, I'll compile the latest git master ASAP. Given the OS release, I'll try the LLVM/Clang 3.4 stack, as it is shipped by default on 14.04.

What about a local git Mesa repo? Is it a required dependency for using pyopencl and beignet successfully?

yuyichao (Contributor) commented Aug 9, 2014

There's a recent thread about a compilation problem on the beignet list, and the reply seems to indicate that 10.1 should be fine. I'm not sure what the officially supported versions are, but at least git master is not necessary. I have just compiled the latest beignet master with Mesa 10.2.5 and it looks OK. I guess you should just test it with whatever Mesa version you have and report to the beignet list if it fails, to figure out whether it is possible to support the version you have (I guess any recent version should be fine).

decabyte (Author) commented Aug 9, 2014

Yes, 0.9.2+ seems to work much better. I've compiled it against the latest Intel Graphics stack for Ubuntu 14.04, which brings Mesa 10.2.2. Compilation is fine, and the only missing part is cl_khr_gl_sharing, because I didn't rebuild Mesa.

$ python benchmark.py 
Execution time of test without OpenCL:  0.0574040412903 s
===============================================================
Platform name: Intel Gen OCL Driver
Platform profile: FULL_PROFILE
Platform vendor: Intel
Platform version: OpenCL 1.2 beignet 0.9
---------------------------------------------------------------
Device name: Intel(R) HD Graphics IvyBridge M GT2
Device type: GPU
Device memory:  1024 MB
Device max clock speed: 1000 MHz
Device compute units: 16
Device max work group size: 1024
Device max work item sizes: [1024, 1024, 1024]
Data points: 8388608
Workers: 256
Preferred work group size multiple: 16
Execution time of test: 0.0121039 s
Results OK
