This repository has been archived by the owner on May 6, 2020. It is now read-only.

packet: Consider removing tf variable setup_raid_* #74

Open
rata opened this issue Sep 27, 2019 · 1 comment

Comments

@rata
Contributor

rata commented Sep 27, 2019

Right now there are several variables: `setup_raid_ssd_fs`, `setup_raid_ssd`, `setup_raid_hdd` and `setup_raid`.

The code is messy and has already caused several regressions (#72 and 85ef5fa to name two; I'm sure there are more).

We should reconsider why these variables were added and see if we can remove some to simplify things. Also, they currently can't be used at the same time as was expected, because of #73.

The goal of this issue is to double-check whether we really need these variables, so that we can remove all the unneeded code.
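For context, here is a rough sketch (not the actual template contents; the flag handling and device names are made up for illustration) of how four overlapping `setup_raid_*` toggles end up branching the provisioning script:

```bash
#!/bin/bash
# Illustrative sketch only: the real script lives in the CLC template and
# differs in its details. It shows why four overlapping toggles get messy.
set -euo pipefail

setup_raid="${setup_raid:-false}"               # RAID 0 over all spare disks
setup_raid_hdd="${setup_raid_hdd:-false}"       # RAID 0 over HDDs only
setup_raid_ssd="${setup_raid_ssd:-false}"       # RAID 0 over SSDs only
setup_raid_ssd_fs="${setup_raid_ssd_fs:-false}" # also format/mount the SSD array

if [ "$setup_raid" = "true" ]; then
  mdadm --create /dev/md/node-local-storage --level=0 --raid-devices=2 /dev/sdb /dev/sdc
fi
if [ "$setup_raid_hdd" = "true" ]; then
  mdadm --create /dev/md/node-hdd-storage --level=0 --raid-devices=2 /dev/sdb /dev/sdc
fi
if [ "$setup_raid_ssd" = "true" ]; then
  mdadm --create /dev/md/node-ssd-storage --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
  if [ "$setup_raid_ssd_fs" = "true" ]; then
    mkfs.ext4 /dev/md/node-ssd-storage
  fi
fi
```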

rata added a commit that referenced this issue Sep 27, 2019
The rkt pod has the host's `/mnt/` volume exposed to it[1], and in this PR:

	https://github.com/kinvolk/lokomotive-kubernetes/pull/48/files#diff-b6c7caf796cd86bdcdd936319b1793a1R152

the location where the RAID 0 array is mounted was changed from `/mnt`
to `/mnt/<some-dir>`. This breaks using local volumes in pods, as the
mount does not seem to be visible to the rkt container or to the kubelet
running in rkt.
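For illustration, the kind of exposure [1] refers to, and why the new nested mount goes unseen, looks roughly like this (a sketch, not the literal kubelet unit from worker.yaml.tmpl; array and image names are placeholders):

```bash
# The kubelet's rkt pod gets the host's /mnt handed in as a host volume
# (sketch of the relevant rkt flags only):
rkt run \
  --volume mnt,kind=host,source=/mnt \
  --mount volume=mnt,target=/mnt \
  "$KUBELET_IMAGE"   # placeholder for the real kubelet image and flags

# After PR #48, the RAID array is mounted on the host in a subdirectory,
# e.g. /mnt/node-local-storage, after the rkt pod is already running, and
# that nested mount is not visible from inside the pod:
mount /dev/md/node-local-storage /mnt/node-local-storage
```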

Investigating the root cause of this issue, to understand why mounts
inside `/mnt` (like `/mnt/node-local-storage`) currently can't be used by
pods and what needs to change for that, is left for future patches;
issue #73 was created to track it.

This patch just moves the mount location back to `/mnt` for all the cases
(`setup_raid`, `setup_raid_*`) so it works in all of them. However, this
imposes one limitation: `setup_raid_hdd` and `setup_raid_ssd` are now
mutually exclusive.
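In other words, the fix amounts to something like this (device name assumed for illustration):

```bash
# Whichever raid flag is enabled, the resulting array is formatted and
# mounted directly at /mnt, the path already exposed to the kubelet's
# rkt pod, instead of at a subdirectory created later.
mkfs.ext4 /dev/md/node-local-storage
mount /dev/md/node-local-storage /mnt

# /mnt can hold only one filesystem, so only one of setup_raid,
# setup_raid_hdd or setup_raid_ssd can take effect on a given node.
```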

This limitation does not break anything that was working in master; in
fact, `setup_raid_hdd` and `setup_raid_ssd` (when `setup_raid_ssd_fs` is set)
have been completely broken since they were merged and pods never wrote to
those volumes, for the very same reason issue #73 states: mounts inside
`/mnt` are not visible to pods.

Therefore, this patch fixes node local storage while making those
options exclusive (only one can be set), which is not a big problem
as those options never really worked.

Some alternative fixes were considered, like changing the path that is
exposed to the rkt container to be /mnt/node-local-storage or
/mnt/node-hdd-local-storage according to which option was used
(setup_raid, setup_raid_hdd, and exposing both if both are set), but
that was messy without any good reason (it is better to tackle #73
before doing something that convoluted). So we decided on this simpler
approach.

This patch is just a minimal fix, "a revert" to mounting in `/mnt/` again,
to make this work on master again.

As a side effect of this PR, another issue was created to reconsider
whether we need so many `setup_raid_*` flags
(#74), and one more to
consider a totally different approach than the current bash script
before it gets out of control: #75

[1]: https://github.com/kinvolk/lokomotive-kubernetes/blob/d59d071a451f45ac61c2524b94a146a6cde60401/packet/flatcar-linux/kubernetes/workers/cl/worker.yaml.tmpl#L65-L66
@surajssd
Contributor

surajssd commented Oct 3, 2019

We at least need the ability to create a RAID of SSD devices, if nothing else.

This benefits the Rook Ceph component. When we use SSDs (or a RAID of SSDs) for Ceph's metadata storage, performance in general is improved.
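To make the use case concrete, here is a hedged sketch of the kind of OSD layout meant (Rook handles the provisioning itself; the command and device names below are only illustrative, not what Lokomotive or Rook actually runs):

```bash
# Ceph OSDs can keep their metadata (RocksDB/WAL) on a faster device than
# the one holding object data. With an SSD RAID array on the node, an OSD
# could be laid out roughly like this (illustrative devices):
#   --data      slow HDD holding the object data
#   --block.db  fast SSD array holding the metadata database
ceph-volume lvm create --data /dev/sdb --block.db /dev/md/node-ssd-storage
```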
