# WARP scripts

Scripts for controlling jobs on CMU's WARP cluster.
Files in this repository:

- `warp_driver.sh`, `warp_starter.sh`, `warp_killer.sh` -- the WARP cluster scripts
- `kill_matlabs.sh` -- kill all MATLABs on the cluster
- `machine_list.sh-example` -- example machine list
- `toucher.sh` -- screenlog helper
- `animrc` -- screen config (tacky green status line when running screen inside a screen)
- `old/` -- older material


Here are my WARP scripts. WARP is CMU's cluster, owned by the Graphics Group -- you need a special account (beyond an SCS account) to access this computational resource.

To check the status of WARP you must be on CMU's network; the status page is at http://warp.hpc1.cs.cmu.edu/wordpress/

To log into WARP, ssh into warp.hpc1.cs.cmu.edu (please check with the Graphics group at CMU first).

First, edit `warp_driver.sh` to set the home directory that the script automatically cd's into:

```shell
nano warp_driver.sh
```

Next, make sure LOGDIR in `warp_starter.sh` is set to something reasonable (perhaps you're not on CMU's Lustre filesystem):

```shell
nano warp_starter.sh
```
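For example, LOGDIR might point at a scratch directory you own. The path below is hypothetical -- substitute whatever storage you actually have:

```shell
# Hypothetical LOGDIR value -- use a directory you can write to:
LOGDIR=$HOME/warp_logs
mkdir -p "$LOGDIR"
```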

To start a job:


To list running jobs:


To kill jobs named JOB_NAME:

```shell
./warp_killer.sh JOB_NAME
```

To check jobs:


## Older screen-based parallel job-running library

You can also invoke a parallel job using sc.sh, which launches a special GNU screen session in which each tab is SSH-ed into a single machine. Note that this is for old-style SSH-based clusters, not ones based on Torque. If you run sc.sh on a cluster such as CMU's WARP, people will be unhappy; if I were an admin, I would probably ban you.
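As a rough dry-run sketch of what an sc.sh-style launcher does (the variable names, placeholder hostnames, and exact screen invocation below are assumptions, not sc.sh's actual code): create one detached screen session, then add a window per machine that SSHes into it. The `echo`s print the commands a real launcher would execute.

```shell
#!/bin/sh
# Dry-run sketch of a screen-per-machine launcher.
# MACHINES and JOB_NAME are placeholders; normally the machine list
# would be sourced from machine_list.sh.
MACHINES="host1 host2"
JOB_NAME=demo

echo screen -dmS "$JOB_NAME"   # one detached session for the whole job
for m in $MACHINES; do
  # add a window titled after the machine, running ssh into it
  echo screen -S "$JOB_NAME" -X screen -t "$m" ssh "$m"
done
```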

First set up a list of machines you can SSH into:

```shell
cp machine_list.sh-example machine_list.sh
nano machine_list.sh
```
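Check `machine_list.sh-example` for the real format; as a purely hypothetical illustration, the file might just list hostnames in a shell variable:

```shell
# Hypothetical machine_list.sh contents -- see machine_list.sh-example
# for the actual format expected by the scripts.
MACHINES="node01 node02 node03"
```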

To start a job:

```shell
./sc.sh JOB_NAME
```