Add ServiceAccount to csi-nfs-node DaemonSet #334
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: johnsimcall. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.

Welcome @johnsimcall!

Hi @johnsimcall. Thanks for your PR. I'm waiting for a kubernetes-csi member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
```diff
@@ -19,6 +19,7 @@ spec:
     spec:
       hostNetwork: true  # original nfs connection would be broken without hostNetwork setting
       dnsPolicy: Default  # available values: Default, ClusterFirstWithHostNet, ClusterFirst
+      serviceAccountName: csi-nfs-controller-sa
```
I don't think the driver on the node needs any access? what's the blocking issue now?
Without the ServiceAccount, OpenShift refuses to create the DaemonSet/csi-nfs-node Pods. My complete installation notes and the error message are:
```shell
oc new-project nfs-csi
oc adm policy add-scc-to-user privileged system:serviceaccount:nfs-csi:csi-nfs-controller-sa

sed -i.backup 's/kube-system/nfs-csi/g' ./deploy/v4.0.0/rbac-csi-nfs-controller.yaml
sed -i.backup 's/kube-system/nfs-csi/g' ./deploy/v4.0.0/csi-nfs-node.yaml
sed -i.backup 's/kube-system/nfs-csi/g' ./deploy/v4.0.0/csi-nfs-controller.yaml

# ADD 'spec.template.spec.serviceAccountName: csi-nfs-controller-sa'
vi deploy/csi-nfs-node.yaml

./deploy/install-driver.sh v4.0.0 local

# ADD 'parameters: ...'
vi deploy/example/storageclass-nfs.yaml
oc create -f deploy/example/storageclass-nfs.yaml
```
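The StorageClass is created, but the node pods never start. The warning events below can be listed with something like the following (the exact invocation is an assumption, not part of the original notes):

```shell
# Assumed invocation; the event output below came from a command along these lines
oc get events -n nfs-csi --field-selector type=Warning
```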
```
LAST SEEN   TYPE      REASON         OBJECT                   MESSAGE
35s         Warning   FailedCreate   daemonset/csi-nfs-node   Error creating: pods "csi-nfs-node-" is forbidden: unable to validate against any security context constraint: [
  provider "anyuid": Forbidden: not usable by user or serviceaccount,
  provider restricted: .spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used,
  spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used,
  spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used,
  spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used,
  spec.containers[0].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used,
  spec.containers[0].securityContext.containers[2].hostPort: Invalid value: 29653: Host ports are not allowed to be used,
  spec.containers[1].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used,
  spec.containers[1].securityContext.containers[2].hostPort: Invalid value: 29653: Host ports are not allowed to be used,
  spec.containers[2].securityContext.privileged: Invalid value: true: Privileged containers are not allowed,
  spec.containers[2].securityContext.capabilities.add: Invalid value: "SYS_ADMIN": capability may not be added,
  spec.containers[2].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used,
  spec.containers[2].securityContext.containers[2].hostPort: Invalid value: 29653: Host ports are not allowed to be used,
  provider "nonroot": Forbidden: not usable by user or serviceaccount,
  provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount,
  provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount,
  provider "hostnetwork": Forbidden: not usable by user or serviceaccount,
  provider "hostaccess": Forbidden: not usable by user or serviceaccount,
  provider "node-exporter": Forbidden: not usable by user or serviceaccount,
  provider "privileged": Forbidden: not usable by user or serviceaccount]
```
ok, maybe we need an empty serviceAccount for node daemonset
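For illustration, a minimal sketch of what such an "empty" ServiceAccount could look like, that is, an account with no RBAC bindings attached (the name matches the one referenced later in this thread; the namespace is assumed from the installation notes above):

```yaml
# Sketch only: a ServiceAccount with no roles bound to it
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-nfs-node-sa
  namespace: nfs-csi   # assumed namespace from the installation notes above
```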
btw, what's the k8s version you are running? is this only required on OpenShift?
I'm using OpenShift 4.10, which is Kubernetes v1.23.5+9ce5071.
I should have been clearer and mentioned that after the add-scc-to-user step and adding the ServiceAccount to the DaemonSet, everything works wonderfully! Thank you! I even created a StorageClass and am able to dynamically provision directories (PVs) on my external NFS server (a RHEL8 host)!
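For reference, a sketch of what that StorageClass can look like for this driver (the nfs.csi.k8s.io provisioner name comes from the driver itself; the server and share values here are placeholders, not the ones actually used):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com   # placeholder NFS server (a RHEL8 host in this case)
  share: /exports                  # placeholder export path
reclaimPolicy: Delete
volumeBindingMode: Immediate
```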
I have added an empty serviceAccount in PR #335, could you verify it works on OpenShift? Using csi-nfs-controller-sa on the driver daemonset grants too much privilege.
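A sketch of that change, assuming the dedicated account name that PR #335 introduces:

```yaml
# csi-nfs-node DaemonSet (excerpt): point the pods at the unprivileged account
spec:
  template:
    spec:
      serviceAccountName: csi-nfs-node-sa   # instead of csi-nfs-controller-sa
```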
Addressed by PR #335. Could you check whether the master branch works well on OpenShift?
@andyzhangx I was able to test the updated master branch. I found that OpenShift's default restricted SCC, which gets applied to the new csi-nfs-node-sa ServiceAccount, still prevents the DaemonSet pods from running.
I see that the nfs containers from the controller Deployment and the DaemonSet pods ask for very generous securityContext options. You said that the "driver daemonset is giving too much privilege," but fixing that would require reducing the privileges those pods request.
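One stopgap, sketched here under the assumption of the nfs-csi namespace from the notes above and not verified in this thread, would be to grant the privileged SCC to the new node ServiceAccount, mirroring the earlier add-scc-to-user step:

```shell
# Assumption: same nfs-csi namespace as in the installation notes above
oc adm policy add-scc-to-user privileged system:serviceaccount:nfs-csi:csi-nfs-node-sa
```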
Thank you for looking at this!
In the meantime, I created a custom SCC, a ClusterRole, and a ClusterRoleBinding like this:
```yaml
---
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: csi-nfs-scc
allowHostDirVolumePlugin: true
allowHostNetwork: true
allowHostPorts: true
allowPrivilegedContainer: true
allowPrivilegeEscalation: true
allowedCapabilities:
  - SYS_ADMIN
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:openshift:scc:csi-nfs-scc
rules:
  - apiGroups:
      - security.openshift.io
    resourceNames:
      - csi-nfs-scc
    resources:
      - securitycontextconstraints
    verbs:
      - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:openshift:scc:csi-nfs-scc
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:csi-nfs-scc
subjects:
  - kind: ServiceAccount
    name: csi-nfs-node-sa
    namespace: nfs-csi
  - kind: ServiceAccount
    name: csi-nfs-controller-sa
    namespace: nfs-csi
```
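Applying those manifests is a one-liner (the file name here is hypothetical):

```shell
oc create -f csi-nfs-scc.yaml   # file containing the three manifests above
```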
so what's the working way to reduce the securityContext options?
I haven't tried to reduce the securityContext options in the Deployment or DaemonSet pods yet. I will need to examine the container images when I get some more time...
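For reference, the privileges in question, reconstructed from the FailedCreate event earlier in this thread rather than from the actual manifests, look like this:

```yaml
# Per the SCC validation error above; the node pods also set hostNetwork: true,
# mount hostPath volumes, and use host port 29653.
securityContext:
  privileged: true
  capabilities:
    add: ["SYS_ADMIN"]
```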
Pull Request Test Coverage Report for Build 2299203312 (💛 Coveralls)
What type of PR is this?
/kind bug
What this PR does / why we need it:
Add a ServiceAccount to the csi-nfs-node DaemonSet.
Special notes for your reviewer:
This is my first PR. I found this while deploying to Red Hat's OpenShift.