
Make starcluster AMIs that allow you to launch the new GPU instances #9

Closed
treelinelabs opened this issue Nov 16, 2010 · 5 comments

@treelinelabs
No description provided.

@jtriley (Owner) commented Nov 16, 2010

This is in progress. Basically we just need to install the NVIDIA driver, CUDA, PyCUDA, etc.

@jtriley (Owner) commented Dec 14, 2010

Made the AMI. I need to release a new version so that StarCluster can actually launch clusters of GPU/cluster compute instance types. Version 0.91.2 is not compatible with the new cluster compute/GPU types yet, but the GitHub code is.

@apatil commented Dec 16, 2010

Would you be able to share the ami ID before the new release?

@jtriley (Owner) commented Dec 16, 2010

Yes, enough folks are requesting it. I'm going to give it one more run-through to clean things up, and then I'll make it public, hopefully tonight or tomorrow. I'll post the AMI ID here and to the StarCluster/PyCUDA lists when it's ready.

@jtriley (Owner) commented Dec 21, 2010

Here it is folks: ami-12b6477b

This AMI contains the following GPU software in addition to the usual StarCluster stack:

  • NVIDIA Driver 260.19.21
  • NVIDIA CUDA Toolkit 3.2 (cuBLAS, cuFFT, cuRAND)
  • PyCUDA and PyOpenCL (recent git checkouts)
  • MAGMA 1.0-rc2

This AMI is not yet compatible with StarCluster 0.91.2. If you just want to play around with the new GPU instances, you're probably better off launching a single instance from the AWS management console. If you need a GPU cluster, the latest GitHub code does work with this new AMI and the new instance types (both cg1.4xlarge and cc1.4xlarge) if you're interested in testing.
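For those testing with the GitHub code, a minimal cluster-template sketch for `~/.starcluster/config` might look like the following (the template name `gpucluster`, key name, and cluster size are illustrative, not from this thread):

```ini
; hypothetical cluster template using the GPU AMI announced above
[cluster gpucluster]
KEYNAME = mykey
CLUSTER_SIZE = 2
NODE_IMAGE_ID = ami-12b6477b
NODE_INSTANCE_TYPE = cg1.4xlarge
```

You would then launch with something like `starcluster start -c gpucluster mygpucluster`.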

A few notes:

  1. CUDA is installed in /usr/local/cuda
  2. The MAGMA (shared) library is installed in /usr/local/magma
  3. A custom Python 2.6 installation lives in /usr/lib64/python2.6/site-packages
  4. NumPy/SciPy/PyCUDA/PyOpenCL/etc. are installed in that custom Python 2.6 installation
  5. All software sources used are in /usr/local/src (also look here for PyCUDA/PyOpenCL/MAGMA examples, etc.)
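Given the paths in notes 1-2, a shell sketch for wiring the toolchain into a session (the `lib64`/`lib` subdirectory names are assumptions; check the AMI itself):

```shell
# CUDA lives in /usr/local/cuda; put nvcc on PATH
export PATH=/usr/local/cuda/bin:$PATH
# Make the CUDA and MAGMA shared libraries visible to the dynamic loader
# (lib64/lib subdirectory names are assumed, verify on the instance)
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/magma/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
# Then, on a GPU instance, a quick smoke test would be e.g.:
#   nvidia-smi
#   python2.6 -c "import pycuda.autoinit"
```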

Let me know if you have issues...

This issue was closed.