This information is meant primarily for Slurm developers.
System administrators should instead read the instructions
found in the file doc/html/quickstart_admin.shtml.
The "INSTALL" file contains generic Linux build instructions.
Simple build/install on Linux:
./configure --enable-debug \
--prefix=<install-dir> --sysconfdir=<config-dir>
make install
To build the files in the contribs directory:
make contrib
make install-contrib
(The RPMs are built by default)
If you make changes to any auxdir/* or other autotools input file,
regenerate the build scripts on _snowflake_ (where recent versions of
autoconf, automake, and libtool are installed),
then check in the newly generated files.
Here is a step-by-step HOWTO for creating a new release of SLURM on a
Linux cluster (see the BlueGene- and AIX-specific notes below for some differences).
0. svn co slurm
svn co buildfarm
Put the buildfarm directory in your search path.
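Step 0 can be sketched as follows (the checkout URLs are site-specific and omitted; the ~/buildfarm location is an assumption for illustration):

```shell
# Prepend the buildfarm checkout to the search path so its scripts
# are found before system ones.  The location is hypothetical.
BUILDFARM="$HOME/buildfarm"
mkdir -p "$BUILDFARM"
PATH="$BUILDFARM:$PATH"
export PATH
```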
1. Update the NEWS and META files for the new release. In the META file,
the API, Major, Minor, Micro, Version, and Release fields must all
be updated. The Release field should always be 1 unless one of
the following is true:
- Changes were made to the spec file, documentation, or example
files, but not to code.
- This is a prerelease (Release = 0.preX)
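As an illustrative sketch only, the fields listed above might look like this in META for a hypothetical 2.3.3-1 release (field names and values are assumptions; check the real META file for exact spelling):

```text
Api:      23
Major:    2
Minor:    3
Micro:    3
Version:  2.3.3
Release:  1
```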
2. Tag the repository with the appropriate name for the new version.
svn copy <current-URL> <tag-URL> \
-m "description"
3. Use the rpm make target to create the new RPMs. This requires a .rpmmacros
(.rpmrc for newer versions of rpmbuild) file containing:
%_slurm_sysconfdir /etc/slurm
%_with_debug 1
%_with_sgijob 1
I usually build using the following syntax:
build -s
4. Remove the RPMs that we don't want:
rm -f slurm-perlapi*rpm slurm-torque*rpm
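The cleanup glob in step 4 can be exercised in isolation; a runnable sketch with made-up file names:

```shell
# Demonstrate the cleanup glob on hypothetical RPM file names.
workdir=$(mktemp -d)
cd "$workdir"
touch slurm-2.3.3-1.x86_64.rpm \
      slurm-perlapi-2.3.3-1.x86_64.rpm \
      slurm-torque-2.3.3-1.x86_64.rpm
# The command from step 4: drop the perlapi and torque sub-packages.
rm -f slurm-perlapi*rpm slurm-torque*rpm
ls   # only the main slurm RPM remains
```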
5. Move the RPMs to
/usr/local/admin/rpms/llnl/RPMS-RHEL4/x86_64 (odevi, or gauss)
/usr/local/admin/rpms/llnl/RPMS-RHEL4/i386/ (adevi)
/usr/local/admin/rpms/llnl/RPMS-RHEL4/ia64/ (tdevi)
send an announcement email (with the latest entry from the NEWS
file) out to
6. Copy tagged bzip file (e.g. slurm-0.6.0-0.pre3.bz2) to FTP server
for external SLURM users.
7. Copy bzip file and rpms (including src.rpm) to
cd upload
put filename
Use SourceForge admin tool to add new release, including changelog.
BlueGene build notes:
3. Use the rpm make target to create the new RPMs. This requires a .rpmmacros
(.rpmrc for newer versions of rpmbuild) file containing:
%_prefix /usr
%_slurm_sysconfdir /etc/slurm
%_with_bluegene 1
%_without_pam 1
%_with_debug 1
Build on the Service Node using the following syntax:
rpmbuild -ta slurm-...bz2
The RPM files get written to the directory
To build and run on AIX:
0. svn co slurm
svn co buildfarm
Put the buildfarm directory in your search path
Also, you will need several commands to appear FIRST in your PATH:
I do this by making symlinks to those commands in the buildfarm directory,
then making the buildfarm directory the first one in my PATH.
Also, make certain that the "proctrack" rpm is installed.
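The symlink trick described above can be sketched as follows (using /bin/sh as a stand-in for the GNU tools; the directory and tool names are made up):

```shell
# Shadow a system command by symlinking it into a directory that
# appears first in PATH (here: a scratch "buildfarm" directory).
BF=$(mktemp -d)
ln -s /bin/sh "$BF/mytool"   # stand-in for e.g. GNU tar or make
PATH="$BF:$PATH"
export PATH
command -v mytool            # now resolves inside $BF first
```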
1. export OBJECT_MODE=32
2. Build with:
./configure --enable-debug --prefix=/opt/freeware \
--sysconfdir=/opt/freeware/etc/slurm \
--with-ssl=/opt/freeware --with-munge=/opt/freeware
make uninstall # remove old shared libraries; AIX caches them
make install
3. To build RPMs (NOTE: GNU tools early in PATH as described above in #0):
Create a .rpmmacros file specifying system specific files:
# RPM Macros for use with SLURM on AIX
# The system-wide macros for RPM are in /usr/lib/rpm/macros
# and this overrides a few of them
%_prefix /opt/freeware
%_slurm_sysconfdir %{_prefix}/etc/slurm
%_defaultdocdir %{_prefix}/doc
%_with_debug 1
%_with_aix 1
%with_ssl "--with-ssl=/opt/freeware"
%with_munge "--with-munge=/opt/freeware"
%with_proctrack "--with-proctrack=/opt/freeware"
Log in to the machine "uP". uP is currently the lowest-common-denominator
AIX machine.
CC=/usr/bin/gcc build -s
4. export MP_RMLIB=./
export CHECKPOINT=yes
5. poe hostname -rmpool debug
6. To debug, set SLURM_LL_API_DEBUG=3 before running poe; this will create a file
It can also be helpful to use the poe options "-ilevel 6 -pmdlog yes".
There will be a log file created named /tmp/mplog.<jobid>.<taskid>
7. If you update proctrack, be sure to run "slibclean" to clear cached
shared libraries.
8. Remove the RPMs that we don't want:
rm -f slurm-perlapi*rpm slurm-torque*rpm
and install the other RPMs into /usr/admin/inst.images/slurm/aix5.3 on an
OCF AIX machine (pdev is a good choice).
AIX/Federation switch window problems
To clean switch windows: ntblclean -w 8 -a sni0
To get switch window status: ntblstatus
BlueGene bglblock boot problem diagnosis
- Log on to the Service Node (bglsn, ubglsn)
- Execute /admin/bglscripts/fatalras
This will produce a list of failures including Rack and Midplane number
<date> R<rack> M<midplane> <failure details>
- Translate the Rack and Midplane to SLURM node id: smap -R r<rack><midplane>
- Drain only the bad SLURM node, return others to service using scontrol
Configuration file update procedures:
- cd /usr/bgl/dist/slurm (on bgli)
- co -l <filename>
- vi <filename>
- ci -u <filename>
- make install
- then run "dist_local slurm" on SN and FENs to update /etc/slurm
Some RPM commands:
rpm -qa | grep slurm (determine what is installed)
rpm -qpl slurm-1.1.9-1.rpm (check contents of an rpm)
rpm -e slurm-1.1.8-1 (erase an rpm)
rpm --upgrade slurm-1.1.9-1.rpm (replace existing rpm with new version)
rpm -i --ignoresize slurm-1.1.9-1.rpm (install a new rpm)
For main SLURM plugin installation on BGL service node:
rpm -i --force --nodeps --ignoresize slurm-1.1.9-1.rpm
rpm -U --force --nodeps --ignoresize slurm-1.1.9-1.rpm (upgrade option)
To clear a wedged job:
> delete bgljob ####
> free RMP###
Starting and stopping daemons on Linux:
/etc/init.d/slurm stop
/etc/init.d/slurm start
- cd to the top level src directory
- Run the patch command with epilog_complete.patch as stdin:
patch -p[path_level_to_filter] [--dry-run] < epilog_complete.patch
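The patch invocation can be tried end-to-end on a toy tree. Everything below (directories, file contents) is made up for illustration; -p1 strips the leading a/ and b/ path components from the diff headers:

```shell
# Build a tiny before/after tree and a unified diff between them.
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p a/src b/src
printf 'old line\n' > a/src/epilog.c
printf 'new line\n' > b/src/epilog.c
diff -u a/src/epilog.c b/src/epilog.c > epilog_complete.patch || true
# Apply from the top of the "a" tree: -p1 turns a/src/epilog.c
# into src/epilog.c.  --dry-run verifies without modifying files.
cd a
patch -p1 --dry-run < ../epilog_complete.patch
patch -p1 < ../epilog_complete.patch
cat src/epilog.c   # now contains "new line"
```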
To get the process and job IDs with proctrack/sgi_job:
- jstat -p
CVS and gnats:
Include "gnats:<id>", e.g. "(gnats:123)", as part of a cvs commit message to
automatically record that update in the gnats database. NOTE: This does
not change the gnats bug state, but records the source files associated
with the bug.
For memory leaks (for AIX use zerofault, zf; for Linux use valgrind):
- run configure with the option --enable-memory-leak-debug
- valgrind --tool=memcheck --leak-check=yes --num-callers=6 --leak-resolution=med \
./slurmctld -Dc >ctld.out 2>&1 (or a similar line for slurmd)
Before new major release:
- Test on ia64, i386, x86_64, BGL, AIX, OSX, XCPU
- Test on Elan and IB switches
- Test fail-over of slurmctld
- Test for memory leaks in slurmctld, slurmd and slurmdbd with various plugins
- Change API version number
- Review and release web pages
- Review and release code
- Run "make check"
- Test that the prolog and epilog run
- Run the test suite with SlurmUser NOT being self