This repository has been archived by the owner on May 6, 2020. It is now read-only.

packet: Always mount node local storage on /mnt/
The rkt pod is exposed to the `/mnt/` volume[1], and in this PR:

	https://github.com/kinvolk/lokomotive-kubernetes/pull/48/files#diff-b6c7caf796cd86bdcdd936319b1793a1R152

the location where the RAID 0 array is mounted was changed from `/mnt`
to `/mnt/<some-dir>`. This breaks using local volumes in pods, as the
mount does not seem to be visible to the rkt container or the kubelet
running in rkt.

The investigation of the root cause of this issue, to understand why
mounts inside `/mnt` (like `/mnt/node-local-storage`) can't currently
be used by pods and what needs to change for that, is left for future
patches; issue #73 was created to track it.

This patch just moves the mount location back to `/mnt` for all the cases
(`setup_raid`, `setup_raid_*`) so it works in all of them. However, this
imposes one limitation: `setup_raid_hdd` and `setup_raid_ssd` are now
mutually exclusive.
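The new constraint could also be enforced with an explicit guard; this is only an illustrative sketch (the `check_raid_flags` helper is hypothetical and not part of this patch):

```shell
#!/usr/bin/env bash
# Hypothetical guard (not in the patch): since all setup_raid* options now
# share the single /mnt mount point, at most one of them may be "true".
check_raid_flags() {
  local enabled=0 flag
  for flag in "$@"; do
    if [ "$flag" = true ]; then
      enabled=$((enabled + 1))
    fi
  done
  if [ "$enabled" -gt 1 ]; then
    echo "error: setup_raid, setup_raid_hdd and setup_raid_ssd are mutually exclusive" >&2
    return 1
  fi
}

# Example: only setup_raid enabled, so the check passes.
check_raid_flags true false false
```

A guard like this would fail fast at provisioning time instead of silently mounting two arrays over the same path.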

This limitation does not break anything that was working in master; in
fact `setup_raid_hdd` and `setup_raid_ssd` (when `setup_raid_ssd_fs` is set)
were completely broken since they were merged and pods never wrote to
those volumes, for the very same reason issue #73 states: mounts inside
`/mnt` are not visible to pods.

Therefore, this patch fixes node local storage while making those
options exclusive (only one can be set), which is not a big problem
as those options never really worked.

Some alternative fixes were considered, like changing the path exposed
to the rkt container to be /mnt/node-local-storage or
/mnt/node-hdd-local-storage, according to which option was used
(setup_raid, setup_raid_hdd, and exposing both if both are set), but
that was messy without any good reason (it is better to tackle #73
before doing something that obfuscated). So we decided on this simpler
approach.

This patch is just a minimal fix, "a revert" to mounting on `/mnt/` again,
to make this work again on master.
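For reference, the persistent-mount pattern the template goes back to looks roughly like this; the `fstab_line` helper and the device path are illustrative (the real script derives the device from the assembled md array and pipes the entry through `tee -a /etc/fstab`):

```shell
#!/usr/bin/env bash
# Sketch of the mkfs + mount + fstab pattern restored by this patch.
# fstab_line is a hypothetical helper that builds the /etc/fstab entry;
# the mount options mirror the ones in the worker template.
fstab_line() {
  printf '%s /mnt/ ext4 defaults,nofail,discard 0 0\n' "$1"
}

# In the real template this runs against the RAID device, e.g.:
#   mkfs.ext4 "$device_path"
#   mount "$device_path" /mnt/
#   fstab_line "$device_path" | tee -a /etc/fstab
fstab_line /dev/md/node-local-storage
```

The `nofail` option keeps the node booting even if the array is missing, which matters for ephemeral bare-metal instances.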

As a side effect of this PR, another issue was created to reconsider
whether we need so many `setup_raid_*` flags (#74), and another to
consider a totally different approach than the current bash script
before it gets out of control: #75

[1]: https://github.com/kinvolk/lokomotive-kubernetes/blob/d59d071a451f45ac61c2524b94a146a6cde60401/packet/flatcar-linux/kubernetes/workers/cl/worker.yaml.tmpl#L65-L66
rata committed Sep 27, 2019
1 parent 1b94e10 commit 7c4b6bc
Showing 2 changed files with 16 additions and 14 deletions.
24 changes: 13 additions & 11 deletions packet/flatcar-linux/kubernetes/workers/cl/worker.yaml.tmpl
@@ -195,10 +195,9 @@ storage:

 if [ "$${setup_fs_on_raid}" = true ]; then
 mkfs.ext4 "$${device_path}"
-mkdir "/mnt/$${array_name}"
-mount "$${device_path}" "/mnt/$${array_name}"
+mount "$${device_path}" "/mnt/"
 # Make mount persistent across reboots
-echo "$${device_path} /mnt/$${array_name} ext4 defaults,nofail,discard 0 0" | tee -a /etc/fstab
+echo "$${device_path} /mnt/ ext4 defaults,nofail,discard 0 0" | tee -a /etc/fstab
 fi
 }

@@ -211,17 +210,20 @@ storage:
 # https://www.kernel.org/doc/Documentation/admin-guide/devices.txt
 major_numbers="8,259"

+# XXX: These options are exclusive, as only one fs can be mounted
+# to /mnt/
+# This is, partly, because when creating dirs inside /mnt to mount
+# several paths (like /mnt/node-local-storage), those are not visible
+# to the pods. See this issue for more info:
+# https://github.com/kinvolk/lokomotive-kubernetes/issues/73
+#
 # Variables replaced by Terraform
 if [ ${setup_raid} = true ]; then
 create_data_raid "$${major_numbers}" -1 /dev/md/node-local-storage true
-else
-# Both can be set independently
-if [ ${setup_raid_hdd} = true ]; then
-create_data_raid "$${major_numbers}" 1 /dev/md/node-local-hdd-storage true
-fi
-if [ ${setup_raid_ssd} = true ]; then
-create_data_raid "$${major_numbers}" 0 /dev/md/node-local-ssd-storage ${setup_raid_ssd_fs}
-fi
+elif [ ${setup_raid_hdd} = true ]; then
+create_data_raid "$${major_numbers}" 1 /dev/md/node-local-hdd-storage true
+elif [ ${setup_raid_ssd} = true ]; then
+create_data_raid "$${major_numbers}" 0 /dev/md/node-local-ssd-storage ${setup_raid_ssd_fs}
 fi
 - path: /etc/kubernetes/kubeconfig
 filesystem: root
6 changes: 3 additions & 3 deletions packet/flatcar-linux/kubernetes/workers/variables.tf
@@ -80,19 +80,19 @@ EOD
 }

 variable "setup_raid" {
-  description = "Attempt to create a RAID 0 from extra disks to be used for persistent container storage. Valid values: \"true\", \"false\""
+  description = "Attempt to create a RAID 0 from extra disks to be used for persistent container storage. Can't be used with setup_raid_hdd nor setup_raid_sdd. Valid values: \"true\", \"false\""
   type        = "string"
   default     = "false"
 }

 variable "setup_raid_hdd" {
-  description = "Attempt to create a RAID 0 from extra Hard Disk drives only, to be used for persistent container storage. Valid values: \"true\", \"false\""
+  description = "Attempt to create a RAID 0 from extra Hard Disk drives only, to be used for persistent container storage. Can't be used with setup_raid nor setup_raid_sdd. Valid values: \"true\", \"false\""
   type        = "string"
   default     = "false"
 }

 variable "setup_raid_ssd" {
-  description = "Attempt to create a RAID 0 from extra Solid State Drives only, to be used for persistent container storage. Valid values: \"true\", \"false\""
+  description = "Attempt to create a RAID 0 from extra Solid State Drives only, to be used for persistent container storage. Can't be used with setup_raid nor setup_raid_hdd. Valid values: \"true\", \"false\""
   type        = "string"
   default     = "false"
 }
