Cluster Commands
The following are user-defined cluster command functions that will also facilitate communication across the cluster network. Many thanks to Andrew and his guide for coming up with this idea.
All of these functions are to be defined in each Pi's ~/.bashrc file. Note that some of them depend on having the IP addresses mapped to each hostname in the /etc/hosts file.
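For reference, the functions below assume that /etc/hosts on every Pi contains entries along these lines (the addresses shown match the examples used later in this guide; substitute your own):

```
127.0.0.1     localhost
192.168.0.100 master
192.168.0.101 worker1
192.168.0.102 worker2
192.168.0.103 worker3
192.168.0.104 worker4
```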
Before we proceed, let's back up the current ~/.bashrc file on every Pi, just in case
cp ~/.bashrc ~/.bashrc.bak
If you mess up the file, you can always revert your changes using
cp ~/.bashrc.bak ~/.bashrc
source ~/.bashrc
Download our cluster_commands.sh script to the ~/scripts/ directory
mkdir -p ~/scripts
cd ~/scripts
wget https://raw.githubusercontent.com/gitluis/pi-cluster/master/scripts/cluster_commands.sh
Edit the ~/.bashrc file to include the cluster_commands.sh file
nano ~/.bashrc
Append the following lines at the bottom
...
# including Pi-Cluster commands
source ~/scripts/cluster_commands.sh
Source for changes to take effect
source ~/.bashrc
Copy over the ~/.bashrc file to all workers
cluster-scp ~/.bashrc
cluster_commands.sh
# ------------------- #
# Pi-Cluster Commands #
# ------------------- #
# define username across the cluster (plain assignment rather than readonly,
# so that re-sourcing ~/.bashrc does not raise a readonly-variable error)
USER=pi
# return the user common across all nodes in the cluster
function cluster-user {
    echo "$USER"
}
# return the IP address and mapped hostname of each node in the cluster
function nodes {
    grep "master" /etc/hosts | awk '{print $0}' && \
    grep "worker" /etc/hosts | awk '{print $0}'
}
# return only the hostname of each worker in the cluster
function workers {
    grep "worker" /etc/hosts | awk '{print $2}'
}
# send a command to be executed by all workers in the cluster
function master-cmd {
    for worker in $(workers)
    do
        ssh "$USER@$worker" "$@"
    done
}
# send a command to be executed by all nodes in the cluster
function cluster-cmd {
    master-cmd "$@"
    "$@"
}
# reboot all nodes in the cluster at the same time
function cluster-reboot {
    cluster-cmd shutdown -r now
}
# shutdown all nodes in the cluster at the same time
function cluster-shutdown {
    cluster-cmd shutdown now
}
# transfer files from one node to all other nodes
function cluster-scp {
    for worker in $(workers)
    do
        cat "$1" | ssh "$USER@$worker" "sudo tee $1" > /dev/null 2>&1
    done
}
Define a common user across all of the nodes in the cluster.
Edit ~/scripts/cluster_commands.sh
# define username across the cluster (plain assignment rather than readonly,
# so that re-sourcing ~/.bashrc does not raise a readonly-variable error)
USER=pi
If you are using a different username than pi, change it accordingly.
Return the user common across all nodes in the cluster.
Edit ~/scripts/cluster_commands.sh
nano ~/scripts/cluster_commands.sh
Append the following function
# return the user common across all nodes in the cluster
function cluster-user {
    echo "$USER"
}
Source for changes to take effect
source ~/.bashrc
Usage example
$ cluster-user
pi
Return the IP address and mapped hostname of each node in the cluster.
Edit ~/scripts/cluster_commands.sh
nano ~/scripts/cluster_commands.sh
Append the following function
# return the IP address and mapped hostname of each node in the cluster
function nodes {
    grep "master" /etc/hosts | awk '{print $0}' && \
    grep "worker" /etc/hosts | awk '{print $0}'
}
Source for changes to take effect
source ~/.bashrc
Usage example
$ nodes
192.168.0.100 master
192.168.0.101 worker1
192.168.0.102 worker2
192.168.0.103 worker3
192.168.0.104 worker4
Return only the hostname of each worker in the cluster.
Edit ~/scripts/cluster_commands.sh
nano ~/scripts/cluster_commands.sh
Append the following function
# return only the hostname of each worker in the cluster
function workers {
    grep "worker" /etc/hosts | awk '{print $2}'
}
Source for changes to take effect
source ~/.bashrc
Usage example
$ workers
worker1
worker2
worker3
worker4
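If you want to see exactly what the grep/awk pipeline inside workers is doing before relying on it, you can run the same pipeline against a throwaway file that stands in for /etc/hosts (the file below is a temporary stand-in, not your real hosts file):

```shell
#!/bin/bash
# Create a stand-in hosts file so the pipeline can be tried safely
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
127.0.0.1     localhost
192.168.0.100 master
192.168.0.101 worker1
192.168.0.102 worker2
EOF

# Same pipeline as workers(), pointed at the stand-in file:
# keep only the lines containing "worker", then print the
# second column, which is the hostname
grep "worker" "$hosts" | awk '{print $2}'
# prints:
# worker1
# worker2

rm -f "$hosts"
```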
Send a command to be executed by all workers in the cluster.
Edit ~/scripts/cluster_commands.sh
nano ~/scripts/cluster_commands.sh
Append the following function
# send a command to be executed by all workers in the cluster
function master-cmd {
    for worker in $(workers)
    do
        ssh "$USER@$worker" "$@"
    done
}
Source for changes to take effect
source ~/.bashrc
Usage example
$ master-cmd date
Sat Feb 23 10:38:56 IST 2019
Sat Feb 23 10:42:21 IST 2019
Sat Feb 23 11:21:60 IST 2019
Sat Feb 23 12:53:50 IST 2019
Send a command to be executed by all nodes in the cluster.
The difference between master-cmd and cluster-cmd is that sometimes you want the workers to do something but not the master; for that case, use master-cmd. When you want all nodes in the cluster to do something, use cluster-cmd instead.
Edit ~/scripts/cluster_commands.sh
nano ~/scripts/cluster_commands.sh
Append the following function
# send a command to be executed by all nodes in the cluster
function cluster-cmd {
    master-cmd "$@"
    "$@"
}
Source for changes to take effect
source ~/.bashrc
Usage example
$ cluster-cmd date
Sat Feb 23 10:38:56 IST 2019
Sat Feb 23 10:42:21 IST 2019
Sat Feb 23 11:21:60 IST 2019
Sat Feb 23 12:53:50 IST 2019
Sat Feb 23 09:42:30 IST 2019
Note: the reason for $@ coming after master-cmd $@ is so that the command is executed on the workers first and then on the master node. If a reboot or shutdown command were executed on the master node first, the master would be halted before it could tell the workers what they were supposed to do, thus rebooting/shutting down the master but not the workers.
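You can watch this ordering without touching a real cluster by stubbing out the ssh part. The sketch below replaces workers and master-cmd with local stand-ins that just echo what they would do, so nothing is executed remotely (the function names mirror the real ones above; the echoed text is illustrative):

```shell
#!/bin/bash
# Local stand-ins: echo instead of ssh, so the ordering is visible
workers() { printf 'worker1\nworker2\n'; }   # stand-in for the real workers()
master-cmd() {                               # stand-in: no ssh, just echo
    for worker in $(workers); do
        echo "[$worker] would run: $*"
    done
}
cluster-cmd() {
    master-cmd "$@"                   # workers first...
    echo "[master] would run: $*"     # ...master last
}

cluster-cmd shutdown -r now
# prints:
# [worker1] would run: shutdown -r now
# [worker2] would run: shutdown -r now
# [master] would run: shutdown -r now
```

The master's line always comes last, which is exactly the property that keeps cluster-reboot and cluster-shutdown from stranding the workers.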
As you may have seen, each Pi's date and time may not be synchronized. It may not be a problem now, but for those who intend to create crontab jobs, which are nothing more than scheduled tasks, it could be a critical issue.
First, we need to make sure htpdate is installed on all of our Pi boards
cluster-cmd "sudo apt install htpdate -y > /dev/null 2>&1"
That will align the nodes to within +/- 10 seconds. A more reliable and robust way is to synchronize them against a remote server such as time.nist.gov
cluster-cmd sudo htpdate -a -l time.nist.gov
This may take a few minutes to finish, but it will sync all the clocks. Being off by less than 5 seconds is good enough.
Reboot all nodes in the cluster at the same time.
Edit ~/scripts/cluster_commands.sh
nano ~/scripts/cluster_commands.sh
Append the following function
# reboot all nodes in the cluster at the same time
function cluster-reboot {
    cluster-cmd shutdown -r now
}
Source for changes to take effect
source ~/.bashrc
Usage example
cluster-reboot
Shutdown all nodes in the cluster at the same time.
Edit ~/scripts/cluster_commands.sh
nano ~/scripts/cluster_commands.sh
Append the following function
# shutdown all nodes in the cluster at the same time
function cluster-shutdown {
    cluster-cmd shutdown now
}
Source for changes to take effect
source ~/.bashrc
Usage example
cluster-shutdown
Transfer (secure copy) files from one node to all other nodes.
Edit ~/scripts/cluster_commands.sh
nano ~/scripts/cluster_commands.sh
Append the following function
# transfer files from one node to all other nodes
function cluster-scp {
    for worker in $(workers)
    do
        cat "$1" | ssh "$USER@$worker" "sudo tee $1" > /dev/null 2>&1
    done
}
Source for changes to take effect
source ~/.bashrc
Usage example
cluster-scp ~/.bashrc
This copies over (overwrites) the .bashrc file on all workers.
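Under the hood, cluster-scp streams the file through ssh into sudo tee on each worker. The sketch below exercises the same cat-into-tee pattern with two local temp files standing in for the local and remote copies (no ssh hop, throwaway paths), just to show the destination ends up identical:

```shell
#!/bin/bash
# Two local temp files stand in for the file on the master and on a worker
src=$(mktemp); dst=$(mktemp)
echo 'export EDITOR=nano' > "$src"

# Same pattern as cluster-scp, minus the ssh hop:
# stream the source file into tee, which writes the destination
cat "$src" | tee "$dst" > /dev/null

diff "$src" "$dst" && echo "copies match"
# prints: copies match
rm -f "$src" "$dst"
```

On the real cluster, running tee under sudo is what lets the write succeed even when the destination path is not writable by the plain user.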