democratic-csi plugin #113

Merged · 4 commits · Mar 21, 2024
3 changes: 3 additions & 0 deletions packs/democratic_csi_nfs/CHANGELOG.md
@@ -0,0 +1,3 @@
# 0.1.0

- Initial release
180 changes: 180 additions & 0 deletions packs/democratic_csi_nfs/README.md
@@ -0,0 +1,180 @@
# `democratic-csi` CSI plugin

This pack deploys two jobs that run the
[`democratic-csi`](https://github.com/democratic-csi/democratic-csi)
CSI plugin. The node plugin tasks will be run as a system job, and the
controller tasks will be run as a service job.

## Client Requirements

This pack can only be run on Nomad clients that have enabled volumes and
privileged mode for the Docker task driver. In addition, clients will
need to have a source NFS volume mounted to any client host that runs
the controller task.
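
These Docker driver settings are enabled in each Nomad client's plugin configuration. A minimal sketch (the file path and any surrounding agent configuration are illustrative, not part of this pack):

```hcl
# /etc/nomad.d/client.hcl (illustrative path)
plugin "docker" {
  config {
    # required so the plugin tasks can run with privileged = true
    allow_privileged = true

    # required so the controller task can bind-mount the NFS export
    volumes {
      enabled = true
    }
  }
}
```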

### Example NFS Server
**Member Author:** The democratic-csi plugin supports a lot of different NFS and ZFS setups. I'm not sure we want to try to support arbitrary configurations in this one pack. It'd be awfully nice, but the client requirements end up being a chore to document (and not feasible for us to test, whereas this one is pretty simple). Open to suggestions here, though.

**Member Author:** If we did want to support a broader range of configs, we'd want something more like a blob or object for the config file (ref #113 (comment)).

I think there are ways we could handle both using the sprig `fail` function in the template.

**Member Author:** Because it's very hard to test without buying an appropriate NAS appliance, I'm going to leave this as-is for now; if someone comes along later and wants to improve on it, they can.

**@angrycub** (Apr 14, 2022): My point in the earlier comment is that you could provide variables to ingest a shapely configuration for something that is easily testable, but allow for a blob in cases where we don't have the ability to know the configuration in advance. Then it would be up to the template to fail if neither was provided by the user at runtime. If they're bringing a custom config, then they will have to ensure that they are doing the right thing.
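
That either/or approach could be sketched in the pack's config-file template. The `custom_config_blob` variable below is hypothetical (it is not part of this pack), and the sketch omits most of the real driver config:

```
[[- if .my.custom_config_blob ]]
data = <<EOH
[[ .my.custom_config_blob ]]
EOH
[[- else if all .my.nfs_share_host .my.nfs_share_base_path ]]
data = <<EOH
driver: nfs-client
nfs:
  shareHost: [[ .my.nfs_share_host ]]
  shareBasePath: "[[ .my.nfs_share_base_path ]]"
EOH
[[- else ]]
[[ fail "set either custom_config_blob or the nfs_* variables" ]]
[[- end ]]
```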


The following is an example of installing and configuring an NFS server
on an apt-based Linux distribution (e.g. Debian or Ubuntu). This
configuration exports the directory `/var/nfs/general` to any Nomad
client on the `192.168.56.0/24` address space (commonly used for
Vagrant hosts on VirtualBox).

```sh
sudo apt-get install nfs-kernel-server
sudo mkdir -p /var/nfs/general

# note: `sudo cat <<EOF > /etc/exports` would fail because the redirect
# runs as the unprivileged user; use `tee` to write the file as root.
cat <<EOF | sudo tee /etc/exports
/var/nfs/general 192.168.56.0/24(rw,sync,no_subtree_check,no_root_squash)
EOF

sudo systemctl enable nfs-kernel-server
sudo systemctl start nfs-kernel-server
```

The `democratic-csi` controller is unusual in needing to mount the NFS
volume because NFS doesn't have a remote API other than filesystem
operations. So this plugin has to have the NFS export bind-mounted
into the plugin container. Any client running the controller will need
to have the NFS export in the host's `/etc/fstab`. The following
configuration is an example for the NFS export shown above, assuming
that the NFS server can be found at IP `192.168.56.60`. The mount
point `/srv/nfs_data` shown here should be used for the
`nfs_controller_mount_path` variable.

```sh
# use `tee -a` so the append runs as root, then mount without rebooting
cat <<EOF | sudo tee -a /etc/fstab
192.168.56.60:/var/nfs/general /srv/nfs_data nfs4 rw,relatime 0 0
EOF
sudo mkdir -p /srv/nfs_data
sudo mount /srv/nfs_data
```

## Variables

The following variables are required:
**Member Author:** Is there a way to express this requirement in our `variables.hcl`? There aren't really reasonable defaults for these values.

**Contributor @mikenomitch** (Apr 14, 2022): No way to do this :(

If you don't default them, shouldn't they end up being required? If they aren't, this seems like a bug. (It's inconsistent with my experience of Nomad's HCL2, that is.)

**Member Author:** No, unfortunately. For example, if I render this pack with no variables set, I end up with a partially empty template:

      template {
        destination = "${NOMAD_TASK_DIR}/driver-config-file.yaml"

        data = <<EOH
    driver: nfs-client
    instance_id:
    nfs:
      shareHost:
      shareBasePath: ""
      controllerBasePath: "/storage"
      dirPermissionsMode: "0777"
      dirPermissionsUser: root
      dirPermissionsGroup: root
    EOH
      }

For Nomad fields, this usually ends up throwing an error when we try to submit the job, because Nomad will reject it. But for something like the interior of a template, Nomad has no way of knowing whether the field is valid.


* `nfs_share_host` - The IP address of the host for the NFS share.
* `nfs_share_base_path` - The base directory exported from the NFS
share host.
* `nfs_controller_mount_path` - The path where the NFS mount is
mounted as a host volume for the controller plugin.

For example, using the NFS configuration described above:

```sh
nomad-pack plan \
-var nfs_share_host=192.168.56.60 \
-var nfs_share_base_path=/var/nfs/general \
-var nfs_controller_mount_path=/srv/nfs_data \
.
```

The following variables are optional:

* `job_name` (string "democratic_csi") - The prefix to use as the job
name for the plugins. For example, if `job_name = "democratic_csi"`,
the plugin job will be named `democratic_csi_controller`.
* `datacenters` (list(string) ["dc1"]) - A list of datacenters in the
region which are eligible for task placement.
* `region` (string "global") - The region where the job should be
placed.
* `plugin_id` (string "org.democratic-csi.nfs") - The ID to register
in Nomad for the plugin.
* `plugin_namespace` (string "default") - The namespace for the plugin
job.
* `plugin_image` (string
"docker.io/democraticcsi/democratic-csi:latest") - The container
image for `democratic-csi`.
* `plugin_csi_spec_version` (string "1.5.0") - The CSI spec version
that democratic-csi will comply with.
* `plugin_log_level` (string "debug") - The log level for the plugin.
* `nfs_dir_permissions_mode` (string "0777") - The unix file
permissions mode for the created volumes.
* `nfs_dir_permissions_user` (string "root") - The unix user that owns
the created volumes.
* `nfs_dir_permissions_group` (string "root") - The unix group that
owns the created volumes.
* `controller_count` (number 2) - The number of controller instances
to be deployed (at least 2 recommended).
* `volume_id` (string "myvolume") - ID for the example volume spec to
output.
* `volume_namespace` (string "default") - Namespace for the example
volume spec to output.
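
Rather than repeating `-var` flags, these variables can also be collected in an HCL variables file. A sketch with illustrative values (the filename and chosen overrides are examples only):

```hcl
# vars.hcl (illustrative values)
nfs_share_host            = "192.168.56.60"
nfs_share_base_path       = "/var/nfs/general"
nfs_controller_mount_path = "/srv/nfs_data"

plugin_log_level = "info"
controller_count = 3
volume_id        = "shared_data"
```

Pass the file with nomad-pack's variable-file flag (e.g. `nomad-pack run -f vars.hcl .`; the exact flag name may vary by nomad-pack version).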

#### `constraints` List of Objects

[Nomad job specification
constraints](https://www.nomadproject.io/docs/job-specification/constraint)
allow restricting the set of eligible nodes on which the tasks will
run. This pack automatically configures the following required
constraints:

* Plugin tasks will run on Linux hosts only
* Plugin tasks will run on hosts with the Docker driver's
[`volumes`](https://www.nomadproject.io/docs/drivers/docker#volumes-1)
enabled and
[`allow_privileged`](https://www.nomadproject.io/docs/drivers/docker#allow_privileged)
set to `true`.
* The controller plugin tasks will be deployed on distinct hosts.

You can set additional constraints with the `constraints` variable,
which takes a list of objects with the following fields:

* `attribute` (string) - Specifies the name or reference of the
attribute to examine for the constraint.
* `operator` (string) - Specifies the comparison operator. The
ordering is compared lexically.
* `value` (string) - Specifies the value to compare the attribute
against using the specified operation.

Below is an example of how to pass `constraints` on the CLI with the
`-var` argument.

```bash
nomad-pack run -var 'constraints=[{"attribute":"$${meta.my_custom_value}","operator":">","value":"3"}]' packs/democratic_csi_nfs
```

#### `resources` Object

* `cpu` (number 500) - Specifies the CPU required to run this task in
MHz.
* `memory` (number 256) - Specifies the memory required in MB.

## Volume creation

This pack outputs an example volume specification based on the plugin variables.

#### **`volume.hcl`**

```hcl
type      = "csi"
id        = "my_volume"
namespace = "default"
name      = "my_volume"
plugin_id = "org.democratic-csi.nfs"

capability {
  access_mode     = "multi-node-multi-writer"
  attachment_mode = "file-system"
}

capability {
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"
}

capability {
  access_mode     = "single-node-reader-only"
  attachment_mode = "file-system"
}

mount_options {
  mount_flags = ["noatime"]
}
```

Create this volume with the following command:

```sh
nomad volume create volume.hcl
```
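
Once created, the volume can be claimed by a job with `volume` and `volume_mount` blocks. A minimal sketch of a consuming job fragment (the group, task, image, and destination path are illustrative):

```hcl
group "app" {
  volume "data" {
    type            = "csi"
    source          = "my_volume"
    access_mode     = "multi-node-multi-writer"
    attachment_mode = "file-system"
  }

  task "web" {
    driver = "docker"

    # mount the claimed volume into the task's filesystem
    volume_mount {
      volume      = "data"
      destination = "/srv/data"
    }

    config {
      image = "nginx:alpine"
    }
  }
}
```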
11 changes: 11 additions & 0 deletions packs/democratic_csi_nfs/metadata.hcl
@@ -0,0 +1,11 @@
app {
  url    = "https://github.com/democratic-csi/democratic-csi"
  author = "Tim Gross <tgross@hashicorp.com>"
}
**Member Author @tgross** (Apr 12, 2022), on lines +1 to +4: @mikenomitch @angrycub something I noticed here is that we're conflating `app.author` and a hypothetical `pack.maintainer` metadata field. Most package managers distinguish between the two because they're rarely the same person. E.g. redis on Debian Buster is maintained by Chris Lamb, who is not the author of Redis (antirez or Redis Labs).

We also don't include as a requirement any kind of contact info for the pack maintainer. Maybe we don't need it as required metadata for all registries, but we do as a requirement for this registry, so that we have someone to ping as the codeowner. (Also, maybe a GitHub handle rather than an email address here?)

**Contributor:** Yeah, this was originally the author of the underlying app itself (not the pack). But about 40% of PRs get this "wrong" and use the pack author. I don't know what value it's really providing as the author of the underlying app.

Git/GitHub will naturally point us to the maintainer (in a roundabout way), so I'm not sure we really even need it for that purpose.

Maybe it's something we should just remove?

**Member Author @tgross** (Apr 14, 2022):

> But about 40% of PRs get this "wrong" and do pack author.

Most likely because that's what Writing Packs recommends. Seems reasonable to me to remove it. I'll fix up the docs as well.

**Contributor:** Oh wow! Maybe I'm "wrong" then 🙃

**Member Author:** I've opened hashicorp/nomad-pack#240 to update the docs and/or discuss it more generally.


pack {
  name        = "democratic_csi_nfs"
  description = "This pack deploys the democratic-csi plugin, configured for use with NFS"
  url         = "https://github.com/hashicorp/nomad-pack-community-registry/democratic_csi_nfs"
**Member Author:** The docs on this `url` field are a little vague. It's not really the URL to the directory, right?

**Contributor:**

> It's not really the URL to the directory, right?

It is... but @angrycub and I want to remove it and just make this implicit when they grab the registry itself. Because once you fork a registry, all of these are wrong.

**Member Author:** I meant more that our docs say to use a URL like https://github.com/hashicorp/nomad-pack-community-registry/nginx but the actual URL to the pack directory is https://github.com/hashicorp/nomad-pack-community-registry/tree/main/packs/nginx

**Member Author:** I've opened hashicorp/nomad-pack#240 to update the docs and/or discuss it more generally.

  version = "0.1.0"
}
24 changes: 24 additions & 0 deletions packs/democratic_csi_nfs/outputs.tpl
@@ -0,0 +1,24 @@
type      = "csi"
id        = "[[ .my.volume_id ]]"
namespace = "[[ .my.volume_namespace ]]"
name      = "[[ .my.volume_id ]]"
plugin_id = "[[ .my.plugin_id ]]"

capability {
  access_mode     = "multi-node-multi-writer"
  attachment_mode = "file-system"
}

capability {
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"
}

capability {
  access_mode     = "single-node-reader-only"
  attachment_mode = "file-system"
}

mount_options {
  mount_flags = ["noatime"]
}
22 changes: 22 additions & 0 deletions packs/democratic_csi_nfs/templates/_constraints.tpl
@@ -0,0 +1,22 @@
[[- define "constraints" -]]
constraint {
  attribute = "${attr.kernel.name}"
  value     = "linux"
}

constraint {
  attribute = "${attr.driver.docker.privileged.enabled}"
  value     = true
}

[[ range $idx, $constraint := .my.constraints ]]
constraint {
  attribute = [[ $constraint.attribute | quote ]]
  [[- if $constraint.value ]]
  value     = [[ $constraint.value | quote ]]
  [[- end ]]
  [[- if $constraint.operator ]]
  operator  = [[ $constraint.operator | quote ]]
  [[- end ]]
}
[[- end ]][[- end ]]
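
As a sketch of what the `range` loop produces: given, for example, `constraints = [{attribute = "${meta.my_custom_value}", operator = ">", value = "3"}]` (illustrative values), the rendered jobspec would contain roughly:

```hcl
constraint {
  attribute = "${meta.my_custom_value}"
  value     = "3"
  operator  = ">"
}
```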
5 changes: 5 additions & 0 deletions packs/democratic_csi_nfs/templates/_location.tpl
@@ -0,0 +1,5 @@
[[ define "location" ]]
[[ template "region" ]]
[[ template "namespace" ]]
datacenters = [[ .my.datacenters | toJson ]]
[[- end -]]
26 changes: 26 additions & 0 deletions packs/democratic_csi_nfs/templates/_plugin_config_file.tpl
@@ -0,0 +1,26 @@
[[- define "plugin_config_file" ]]
[[- if not ( all .my.nfs_share_host .my.nfs_share_base_path ) -]]
[[- $u := (list) -]]
[[/* capture .my because `range` changes . to the current item */]]
[[- $my := .my -]]
[[- range list "nfs_share_host" "nfs_share_base_path" -]]
[[- if not ( index $my . ) ]][[ $u = append $u . ]][[- end -]]
[[- end ]]
[[- fail ( join " and " (toStrings $u) | printf "%s must be provided" ) -]]
[[- end ]]
template {
destination = "${NOMAD_TASK_DIR}/driver-config-file.yaml"

data = <<EOH
driver: nfs-client
instance_id:
nfs:
shareHost: [[ .my.nfs_share_host ]]
shareBasePath: "[[ .my.nfs_share_base_path ]]"
controllerBasePath: "/storage"
dirPermissionsMode: "[[ .my.nfs_dir_permissions_mode ]]"
dirPermissionsUser: [[ .my.nfs_dir_permissions_user ]]
dirPermissionsGroup: [[ .my.nfs_dir_permissions_group ]]
EOH
}
[[- end -]]
6 changes: 6 additions & 0 deletions packs/democratic_csi_nfs/templates/_resources.tpl
@@ -0,0 +1,6 @@
[[- define "resources" ]]
resources {
  cpu    = [[ .my.resources.cpu ]]
  memory = [[ .my.resources.memory ]]
}
[[- end -]]
56 changes: 56 additions & 0 deletions packs/democratic_csi_nfs/templates/controller.nomad.tpl
@@ -0,0 +1,56 @@
job "[[ .my.job_name ]]_controller" {

  [[ template "location" . ]]

  group "controller" {

    count = [[ .my.controller_count ]]

    [[ template "constraints" . ]]

    constraint {
      operator = "distinct_hosts"
      value    = "true"
    }

    task "plugin" {
      driver = "docker"

      config {
        image = "[[ .my.plugin_image ]]"

        args = [
          "--csi-version=[[ .my.plugin_csi_spec_version ]]",
          "--csi-name=[[ .my.plugin_id ]]",
          "--driver-config-file=${NOMAD_TASK_DIR}/driver-config-file.yaml",
          "--log-level=[[ .my.plugin_log_level ]]",
          "--csi-mode=controller",
          "--server-socket=${CSI_ENDPOINT}",
        ]

        # normally not required for controller plugins, but NFS
        # doesn't have a remote API other than mounting, so this
        # plugin has to be able to mount the NFS volume as a
        # bind-mount in order to create and snapshot volumes.
        privileged = true
        mount {
          type     = "bind"
          source   = "[[ if not .my.nfs_controller_mount_path ]][[ fail "nfs_controller_mount_path must be defined" ]][[ else ]][[ .my.nfs_controller_mount_path ]][[ end ]]"
          target   = "/storage"
          readonly = false
        }
      }

      [[ template "plugin_config_file" . ]]

      [[ template "resources" . ]]

      csi_plugin {
        id        = "[[ .my.plugin_id ]]"
        type      = "controller"
        mount_dir = "/csi"
      }
    }
  }
}
51 changes: 51 additions & 0 deletions packs/democratic_csi_nfs/templates/node.nomad.tpl
@@ -0,0 +1,51 @@
job "[[ .my.job_name ]]_node" {

  # you can run node plugins as service jobs as well, but this ensures
  # that all nodes in the DC have a copy.
  type = "system"

  [[ template "location" . ]]

  group "node" {

    [[ template "constraints" . ]]

    task "plugin" {
      driver = "docker"

      env {
        CSI_NODE_ID = "${attr.unique.hostname}"
      }

      config {
        image = "[[ .my.plugin_image ]]"

        args = [
          "--csi-version=[[ .my.plugin_csi_spec_version ]]",
          "--csi-name=[[ .my.plugin_id ]]",
          "--driver-config-file=${NOMAD_TASK_DIR}/driver-config-file.yaml",
          "--log-level=[[ .my.plugin_log_level ]]",
          "--csi-mode=node",
          "--server-socket=${CSI_ENDPOINT}",
        ]

        # node plugins must run as privileged jobs because they
        # mount disks to the host
        privileged   = true
        ipc_mode     = "host"
        network_mode = "host"
      }

      [[ template "plugin_config_file" . ]]

      [[ template "resources" . ]]

      csi_plugin {
        id        = "[[ .my.plugin_id ]]"
        type      = "node"
        mount_dir = "/csi"
      }
    }
  }
}