howtos install OFED 1.5.x

nadyawilliams edited this page Jun 7, 2014 · 1 revision

A basic HOWTO on installing OpenFabrics on a Rocks 5.3 cluster. Developed by Tim Carlson (tim@pnl.gov). If you find errors, or want to improve the page, just let me know!

Install head node as normal

Your list of rolls should look something like this

# rocks list roll
NAME                                 VERSION ARCH   ENABLED
Red_Hat_Enterprise_Linux_Client_5.4: 5.2     x86_64 yes
base:                                5.3     x86_64 yes
ganglia:                             5.3     x86_64 yes
hpc:                                 5.3     x86_64 yes
kernel:                              5.3     x86_64 yes
web-server:                          5.3     x86_64 yes

Remove existing MPI/OFED like things on the head node

This is required because the uninstall script that comes with OFED can be unhappy at times.

# rpm -e --allmatches libibverbs librdmacm compat-dapl openmpi-libs openmpi \
rocks-openmpi-1.3.3-1 iscsi-initiator-utils openmpi-devel mpi-tests
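Since the OFED uninstall script is unreliable, it's worth confirming nothing was missed. A quick check along these lines should print the "clean" message (the sample package list here stands in for the real `rpm -qa` output; on the head node you would pipe `rpm -qa` instead):

```shell
# Filter the installed-package list for OFED/MPI leftovers; no matches means
# the removal worked (sample list stands in for `rpm -qa` on the head node)
printf '%s\n' glibc-2.5-42 bash-3.2-24 coreutils-5.97-23 \
  | egrep 'openmpi|ibverbs|rdmacm|dapl' || echo "no leftover OFED/MPI packages"
```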

Download and build OFED 1.5.1

This assumes you are using only the GNU compilers. If your root environment has access to the Intel or Portland Group compilers, you should fix up your path so that just the GNU bits get compiled.

# mkdir /root/ofed
# cd /root/ofed
# wget http://www.openfabrics.org/downloads/OFED/ofed-1.5.1/OFED-1.5.1.tgz
# tar zxf OFED-1.5.1.tgz
# cd OFED-1.5.1
# ./install.pl --all --print-available
# grep -v debuginfo ofed-all.conf > ofed.conf
# ./install.pl -c ofed.conf --build32
I am doing this installation on a box that has a Mellanox Infinihost DDR card. At the end of the install process I get this information. If you are building the packages on a head node that does not have an IB card, you won't see any information after the iscsi rpm has been installed.
...
Running rpm -iv  /root/ofed/OFED-1.5.1/RPMS/redhat-release-5Client-5.4.0.3/x86_64/iscsi-initiator-utils-2.0-869.2.x86_64.rpm
Device (15b3:6278):
        0c:00.0 InfiniBand: Mellanox Technologies MT25208 InfiniHost III Ex (Tavor compatibility mode) (rev a0)
        Link Width: 8x
        Link Speed: 2.5Gb/s


Installation finished successfully.

Exclude OFED Packages From Yum

On the head node, it's a good idea to edit the yum.conf file to add a list of packages that should be excluded from updates. If this is not done, a 'yum update' of the head node can result in a non-working Infiniband interface.

# vi /etc/yum.conf
[main]
cachedir=/var/cache/yum
debuglevel=2
logfile=/var/log/yum.log
pkgpolicy=newest
distroverpkg=redhat-release
tolerant=1
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
exclude=kernel* wordpress tentakel compat-dapl compat-dapl-devel compat-dapl-devel-static compat-dapl-utils dapl dapl-devel dapl-devel-static dapl-utils ib-bonding ibsim ibutils infiniband-diags infinipath-psm infinipath-psm-devel kernel-ib kernel-ib-devel libcxgb3 libcxgb3-devel libibcm libibcm-devel libibmad libibmad-devel libibmad-static libibumad libibumad-devel libibumad-static libibverbs libibverbs-devel libibverbs-devel-static libibverbs-utils libipathverbs libipathverbs-devel libmlx4 libmlx4-devel libmthca libmthca-devel-static libnes libnes-devel-static librdmacm librdmacm-devel librdmacm-utils libsdp libsdp-devel mpi-selector mpitests_mvapich2_gcc mpitests_mvapich_gcc mpitests_openmpi_gcc mstflint mvapich2_gcc mvapich_gcc ofed-docs ofed-scripts openmpi_gcc opensm opensm-devel opensm-libs opensm-static perftest qperf rds-tools scsi-target-utils sdpnetstat srptools tgt compat-dapl-static dapl-static

[Rocks-5.3]
name=Rocks 5.3
baseurl=http://localhost/install/rocks-dist/x86_64
priority=1
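The exclude line can also be added from a script instead of by hand. A minimal sketch, run here against a throwaway copy of yum.conf with an abbreviated package list (a real run would edit /etc/yum.conf and use the full list above):

```shell
# Sketch only: work on a throwaway copy rather than the live /etc/yum.conf
cat > /tmp/yum.conf.test <<'EOF'
[main]
cachedir=/var/cache/yum
gpgcheck=1
EOF
# Append an exclude line only if one is not already present
# (abbreviated package list for illustration; use the full list shown above)
grep -q '^exclude=' /tmp/yum.conf.test || \
  echo 'exclude=kernel* kernel-ib libibverbs librdmacm opensm*' >> /tmp/yum.conf.test
grep '^exclude=' /tmp/yum.conf.test
```

The `grep -q` guard keeps the script from stacking up duplicate exclude lines if it is run twice.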

Make the RPMs available to the compute nodes

All of the RPMs are now in the directory /root/ofed/OFED-1.5.1/RPMS/redhat-release-5Client-5.4.0.3/x86_64. If you are running CentOS instead of RHEL, you'll have to change the above path to suit your environment. The RPMs need to be copied to /export/rocks/install/contrib/5.3/x86_64/RPMS/

# cd /root/ofed/OFED-1.5.1/RPMS/redhat-release-5Client-5.4.0.3/x86_64
# cp * /export/rocks/install/contrib/5.3/x86_64/RPMS/
Create an extend-compute.xml file that lists all of these RPMs
# cd /export/rocks/install/site-profiles/5.3/nodes/
# cp skeleton.xml extend-compute.xml
# vi extend-compute.xml
Before the <post></post> section, you will want to add the following packages.
<package>kernel-ib</package>
<package>kernel-ib-devel</package>
<package>ib-bonding</package>
<package>ofed-scripts</package>
<package>libibverbs</package>
<package>libibverbs-devel</package>
<package>libibverbs-devel-static</package>
<package>libibverbs-utils</package>
<package>libmthca</package>
<package>libmthca-devel-static</package>
<package>libmlx4</package>
<package>libmlx4-devel</package>
<package>libcxgb3</package>
<package>libcxgb3-devel</package>
<package>libnes</package>
<package>libnes-devel-static</package>
<package>libipathverbs</package>
<package>libibcm</package>
<package>libibcm-devel</package>
<package>libibumad</package>
<package>libibumad-devel</package>
<package>libibumad-static</package>
<package>libibmad</package>
<package>libibmad-devel</package>
<package>libibmad-static</package>
<package>ibsim</package>
<package>librdmacm</package>
<package>librdmacm-utils</package>
<package>librdmacm-devel</package>
<package>libsdp</package>
<package>libsdp-devel</package>
<package>opensm-libs</package>
<package>opensm</package>
<package>opensm-devel</package>
<package>opensm-static</package>
<package>compat-dapl</package>
<package>compat-dapl-devel</package>
<package>dapl</package>
<package>dapl-devel</package>
<package>dapl-devel-static</package>
<package>dapl-utils</package>
<package>perftest</package>
<package>mstflint</package>
<package>sdpnetstat</package>
<package>srptools</package>
<package>rds-tools</package>
<package>ibutils</package>
<package>infiniband-diags</package>
<package>qperf</package>
<package>ofed-docs</package>
<package>tgt</package>
<package>mpi-selector</package>
<package>mvapich_gcc</package>
<package>mvapich2_gcc</package>
<package>openmpi_gcc</package>
<package>mpitests_mvapich_gcc</package>
<package>mpitests_mvapich2_gcc</package>
<package>mpitests_openmpi_gcc</package>
<package>open-iscsi</package>
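Typing those tags by hand is tedious; they can be generated from the RPM filenames instead. A sketch, where two sample filenames stand in for the real `ls *.rpm` listing of the RPMS directory:

```shell
# Derive <package> tags from RPM filenames by stripping the
# trailing -version-release.arch.rpm portion (everything from the
# first "-<digit>" onward)
for f in kernel-ib-1.5.1-2.6.18_164.el5.x86_64.rpm libibverbs-1.1.3-1.x86_64.rpm; do
  echo "<package>$(echo "$f" | sed -e 's/-[0-9].*//')</package>"
done
```

Redirect the output into extend-compute.xml and prune anything you don't actually want installed (the debuginfo packages were already filtered out of ofed.conf earlier).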

Once you have saved extend-compute.xml, you should check to make sure there are no XML errors by running this command

# xmllint --noout extend-compute.xml

Rebuild the distribution

# cd /export/rocks/install
# rocks create distro

Reinstall the nodes

This assumes you have already installed your nodes. If you have not yet installed your compute nodes, you can skip this step and just start adding your compute nodes with insert-ethers.

# rocks set host boot compute action=install
# rocks run host compute reboot

Check to see that you have basic connectivity

Go to one of your compute nodes and see if basic IB connectivity is working. Here is the output from my 8-node cluster

# ssh compute-0-0
# ibhosts
Ca      : 0x0005ad0000047000 ports 2 "MT25208 InfiniHostEx Mellanox Technologies"
Ca      : 0x0005ad00000552a0 ports 2 "compute-0-1 HCA-1"
Ca      : 0x0005ad0000055268 ports 2 "compute-0-5 HCA-1"
Ca      : 0x0005ad0000055288 ports 2 "compute-0-7 HCA-1"
Ca      : 0x0005ad00000552ac ports 2 "compute-0-6 HCA-1"
Ca      : 0x0005ad00000552a4 ports 2 "compute-0-3 HCA-1"
Ca      : 0x0005ad0000055298 ports 2 "compute-0-2 HCA-1"
Ca      : 0x0005ad00000552bc ports 2 "compute-0-4 HCA-1"
Ca      : 0x0005ad000005529c ports 2 "compute-0-0 HCA-1"
In this case, my IB switch has a subnet manager (opensm) running. If your switch does not have subnet manager software, you will want to pick one of the nodes to run the subnet manager. Without a subnet manager of some type, your IB fabric will not work. Ideally, you would run the subnet manager on the head node.
# chkconfig opensmd on
# service opensmd start
Starting IB Subnet Manager.                                [  OK  ]

Test things out with mvapich

As a regular user, let's see if we can really run an MPI program. I am going to run this test outside of any queue system. First we'll create a hostfile for mpirun to use. I'll put all of my compute nodes in this file so it looks like this

$ cat hostfile
compute-0-0
compute-0-1
compute-0-2
compute-0-3
compute-0-4
compute-0-5
compute-0-6
compute-0-7
The mvapich that is supplied with OFED-1.5 includes some tests you can run. Let's run those:
$ /usr/mpi/gcc/mvapich-1.2.0/bin/mpirun -np 8 -hostfile hostfile /usr/mpi/gcc/mvapich-1.2.0/tests/osu_benchmarks-3.1.1/osu_alltoall
libibverbs: Warning: RLIMIT_MEMLOCK is 32768 bytes.
    This will severely limit memory registrations.
Oops... that doesn't look good! What you have run into is the fact that by default, users are only allowed to lock a very limited amount of memory. To fix this we need to do two things. First, edit (or create) the file /etc/sysconfig/sshd on the head node and add these lines near the top of the file.
# Fix for RLIMIT_MEMLOCK problem
ulimit -l unlimited
You then have to restart the sshd process and log back into your head node. You also need to push this to all the compute nodes. In the <post></post> section of extend-compute.xml you could do it like this.
<file name="/etc/sysconfig/sshd" mode="append">
# Fix for RLIMIT_MEMLOCK problem
ulimit -l unlimited
</file>
Add the above lines, then recreate the distro and reinstall the nodes
# cd /export/rocks/install
# rocks create distro
# rocks set host boot compute action=install
# rocks run host compute reboot
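Before rerunning the benchmark, it's worth confirming the new limit actually took effect; a minimal check (run it on a compute node, e.g. over ssh compute-0-0):

```shell
# Print the per-process locked-memory limit; verbs-based MPI wants "unlimited"
echo "RLIMIT_MEMLOCK: $(ulimit -l)"
```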
After the nodes come up, let's try that run again!
$ /usr/mpi/gcc/mvapich-1.2.0/bin/mpirun -np 8 -hostfile hostfile /usr/mpi/gcc/mvapich-1.2.0/tests/osu_benchmarks-3.1.1/osu_alltoall
# OSU MPI All-to-All Personalized Exchange Latency Test v3.1.1
# Size            Latency (us)
1                        22.63
2                        22.52
4                        22.89
8                        23.16
16                       23.54
32                       26.34
64                       27.65
128                      31.07
256                      35.51
512                      41.28
1024                     56.06
2048                     89.68
4096                    220.10
8192                    341.34
16384                   430.56
32768                   656.67
65536                  1106.93
131072                 2023.19
262144                 3863.54
524288                 7483.98
1048576               14571.77
Those latency numbers are OK for an all-to-all test; this isn't single node to single node. Let's try the single-node tests. Same hostfile, but this time we'll run on just two of the nodes
$ /usr/mpi/gcc/mvapich-1.2.0/bin/mpirun -np 2 -hostfile hostfile /usr/mpi/gcc/mvapich-1.2.0/tests/osu_benchmarks-3.1.1/osu_bw
# OSU MPI Bandwidth Test v3.1.1
# Size        Bandwidth (MB/s)
1                         2.33
2                         4.66
4                         9.63
8                        18.44
16                       35.53
32                       66.93
64                      126.42
128                     233.19
256                     368.43
512                     507.09
1024                    632.22
2048                    749.37
4096                    820.82
8192                    874.60
16384                   833.79
32768                   895.17
65536                   928.47
131072                  946.37
262144                  954.75
524288                  959.31
1048576                 961.69
2097152                 962.69
4194304                 963.26
[tim@underlord ~]$ /usr/mpi/gcc/mvapich-1.2.0/bin/mpirun -np 2 -hostfile hostfile /usr/mpi/gcc/mvapich-1.2.0/tests/osu_benchmarks-3.1.1/osu_latency
# OSU MPI Latency Test v3.1.1
# Size            Latency (us)
0                         3.59
1                         3.62
2                         3.66
4                         3.62
8                         3.64
16                        3.65
32                        3.80
64                        3.90
128                       4.87
256                       5.31
512                       6.15
1024                      7.51
2048                      8.95
4096                     12.17
8192                     19.10
16384                    36.82
32768                    54.30
65536                    87.86
131072                  156.34
262144                  292.11
524288                  564.71
1048576                1108.91
2097152                2198.01
4194304                4372.93
Those are the numbers you can expect for DDR Infinihost III cards.

Do I need to set up an IP interface for the Infiniband?

The short answer is "no". None of the standard MPI versions that you would use on an Infiniband network (mvapich, mvapich2, openmpi) require the Infiniband interface to be running IP.

I don't have to set up an IP interface, but how can I do it if I want to?

You first need to create the Infiniband network in Rocks. In this example I've called the network "ibnet"

rocks add network ibnet subnet=192.168.2.0 netmask=255.255.255.0

These next steps add the new interface and connect it to the ibnet network.

rocks add host interface compute-RACK-RANK ib0
rocks set host interface ip compute-RACK-RANK ib0 192.168.2.X
rocks set host interface name compute-RACK-RANK ib0 icompute-rack-rank
rocks set host interface module compute-RACK-RANK ib0 ib_ipoib
rocks set host interface subnet compute-RACK-RANK ib0 ibnet
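If you have many nodes, you can generate those commands in a loop rather than typing them per host. A sketch for an 8-node rack; the last-octet scheme (.10 through .17) is an arbitrary choice for this example:

```shell
# Emit (not run) the per-host interface commands for rack 0, ranks 0-7;
# review the output, then pipe it to sh once it looks right
for rank in 0 1 2 3 4 5 6 7; do
  host="compute-0-$rank"
  echo "rocks add host interface $host ib0"
  echo "rocks set host interface ip $host ib0 192.168.2.1$rank"
  echo "rocks set host interface name $host ib0 icompute-0-$rank"
  echo "rocks set host interface module $host ib0 ib_ipoib"
  echo "rocks set host interface subnet $host ib0 ibnet"
done
```

Emitting the commands first instead of running them directly makes it easy to sanity-check the IP assignments before touching the Rocks database.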

There is also a cruder way of doing it, with some sed magic in extend-compute.xml. The line below assumes that I have configured the cluster with a private IP network of 192.168.1.0/255.255.255.0 and that I will be using 192.168.2.0/255.255.255.0 for the IPoIB network. Stick this in the <post></post> section.

cat /etc/sysconfig/network-scripts/ifcfg-eth0 | grep -v HWADDR | grep -v MTU | sed -e s/168.1/168.2/ | sed -e s/eth0/ib0/ > /etc/sysconfig/network-scripts/ifcfg-ib0
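To see what that grep/sed chain actually produces, here is the same pipeline run against a small sample ifcfg-eth0 (the sample contents are made up for illustration):

```shell
# Build a sample ifcfg-eth0 to demonstrate the rewrite
cat > /tmp/ifcfg-eth0.sample <<'EOF'
DEVICE=eth0
HWADDR=00:11:22:33:44:55
IPADDR=192.168.1.254
NETMASK=255.255.255.0
ONBOOT=yes
EOF
# Drop HWADDR (and any MTU line), shift to the 192.168.2.x network,
# and rename the device to ib0
grep -v HWADDR /tmp/ifcfg-eth0.sample | grep -v MTU \
  | sed -e s/168.1/168.2/ -e s/eth0/ib0/
```

The output keeps everything except the Ethernet-specific HWADDR line, with DEVICE=ib0 and the address moved onto 192.168.2.254.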

I've got IPoIB configured now, what good does this do me?

You could serve data from the head node via NFS. Let's say you have a large partition called /data and the underlying file system can give you better than 100MB/s performance. A gigabit network is going to become a bottleneck, so let's share this partition out over IPoIB. Assuming you have configured your network interface on the head node using the above sed command, you have three more steps.

  • Add the filesystem to /etc/exports with the line
/data 192.168.2.0/255.255.255.0(rw,async)
  • Add the IPoIB interface to /etc/sysconfig/iptables. After the line
-A INPUT -i eth0 -j ACCEPT
add the line
-A INPUT -i ib0 -j ACCEPT
and restart iptables
# /sbin/service iptables restart
  • I'm lazy, so I add an entry like this to /etc/auto.home
 data 192.168.2.1:/data
and push it out to the nodes with
 # rocks sync users

How do I upgrade to the RHEL 5.5 2.6.18_194 kernel and not break my OFED installation?

If you are using QLogic cards, this will be a problem: there is a broken-header issue between the RHEL 2.6.18_194 kernel and the OFED 1.5.1 distribution. If you don't have QLogic cards, you can upgrade the kernel using the following procedure.

  • Download the packages into the contrib directory on the head node, update the head node, and reboot
# yum update --downloadonly --downloaddir=/export/rocks/install/contrib/5.3/x86_64/RPMS kernel kernel-devel kernel-headers
# yum update kernel kernel-devel kernel-headers
# reboot
  • Remove the previous kernel-ib, kernel-ib-devel, and ib-bonding RPMs
# rpm -e kernel-ib-devel-1.5.1-2.6.18_164.el5 kernel-ib-1.5.1-2.6.18_164.el5 \
ib-bonding-0.9.0-2.6.18_164.el5
  • Rebuild the source RPMs against the new kernel
# rpmbuild --rebuild --define '_topdir /var/tmp/OFED_topdir' \
--define 'KVERSION 2.6.18-194.el5' --define '_release 2.6.18_194.el5' \
--define 'force_all_os 0' --define '_prefix /usr' --define '__arch_install_post %{nil}' \
/root/ofed/OFED-1.5.1/SRPMS/ib-bonding-0.9.0-42.src.rpm
  • Install the RPMs you just built and reboot
# cd /var/tmp/OFED_topdir/RPMS/x86_64
# rpm -ivh ib-bonding-0.9.0-2.6.18_194.el5.x86_64.rpm \
kernel-ib-1.5.1-2.6.18_194.el5.x86_64.rpm \
kernel-ib-devel-1.5.1-2.6.18_194.el5.x86_64.rpm
# reboot
  • Copy the new RPMs to the contrib directory, rebuild the distro, and reinstall the nodes
# cd /var/tmp/OFED_topdir/RPMS/x86_64
# cp * /export/rocks/install/contrib/5.3/x86_64/RPMS
# cd /export/rocks/install
# rocks create distro
# rocks set host boot compute action=install
# rocks run host compute command=reboot