
Default service account getting access to privileged PSP #85791

Open
mailarka opened this issue Dec 2, 2019 · 3 comments
@mailarka mailarka commented Dec 2, 2019

What happened:
I am new to pod security policies and am trying to implement a basic PSP setup in my K8s cluster. I have enabled the PodSecurityPolicy admission plugin by passing the following to the API server (via admin.config):
enable-admission-plugins: "NodeRestriction,PodSecurityPolicy,ServiceAccount,NamespaceLifecycle"
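For reference, on a kubeadm-managed cluster (which "admin.config" suggests; this is an assumption) the same flag can be sketched in the ClusterConfiguration like this:

```yaml
# Sketch only: kubeadm ClusterConfiguration fragment enabling the PSP admission plugin.
# The v1beta2 API matches kubeadm for Kubernetes 1.15+.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    enable-admission-plugins: "NodeRestriction,PodSecurityPolicy,ServiceAccount,NamespaceLifecycle"
```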

I have also created two policies (restricted and privileged) as below:

apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default'
    seccomp.security.alpha.kubernetes.io/defaultProfileName:  'docker/default'
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: 'MustRunAsNonRoot'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false

---
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
  name: privileged
spec:
  allowedCapabilities:
  - '*'
  allowPrivilegeEscalation: true
  fsGroup:
    rule: 'RunAsAny'
  hostIPC: true
  hostNetwork: true
  hostPID: true
  hostPorts:
  - min: 0
    max: 65535
  privileged: true
  readOnlyRootFilesystem: false
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  volumes:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: default-psp
rules:
- apiGroups:
  - policy
  resourceNames:
  - restricted
  resources:
  - podsecuritypolicies
  verbs:
  - use

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: privileged-psp
rules:
- apiGroups:
  - policy
  resourceNames:
  - privileged
  resources:
  - podsecuritypolicies
  verbs:
  - use

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-psp
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: default-psp
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-system-psp
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: privileged-psp
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
- kind: Group
  name: system:serviceaccounts:kube-system
  namespace: kube-system

When I checked the "default" service account in any namespace, it was able to use the privileged pod security policy:

kubectl auth can-i --as system:serviceaccount:default:default use podsecuritypolicy/priviledged
Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'extensions'
yes

Because of this, any new Helm chart that is not linked to a specific service account can obtain privileged permissions in the cluster.
I want to understand why the default service account is able to use the privileged policy, and how I can restrict it.
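One way to trace where this permission comes from is to dump everything the subject is allowed to do and look for PSP entries (a diagnostic sketch, not part of my setup):

```shell
# List all permissions granted to the default service account in the default
# namespace, then filter for pod security policy entries.
kubectl auth can-i --list --as system:serviceaccount:default:default | grep podsecuritypolicies
```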

What you expected to happen:
All service accounts should be restricted except kube-system service accounts
How to reproduce it (as minimally and precisely as possible):
The YAML manifests are listed above
Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
    kubectl version
    Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:
    CentOS-based personal VM
  • OS (e.g: cat /etc/os-release):
    CentOS
  • Kernel (e.g. uname -a):
  • Install tools:
  • Network plugin and version (if this is a network-related bug):
  • Others:
@liggitt liggitt commented Dec 2, 2019

Are you running kubectl auth can-i against the secured port (does adding --v=6 show an https or http URL)? All authorization checks against the unsecured port return true.
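The check suggested above can be run as follows (illustrative; the verbose output includes the request URL, whose scheme reveals which port was used):

```shell
# With --v=6, kubectl logs the request URL; an https:// URL indicates the
# secured port, http:// the unsecured one (where all checks return true).
kubectl auth can-i --v=6 --as system:serviceaccount:default:default use podsecuritypolicy/privileged
```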

One observation is that by granting privileged access to all service accounts in the kube-system namespace, all pod-creating controller loops will have privileged access, and any user that creates a deployment/replicaset/etc will be able to make the pod-creating controller create a privileged pod on their behalf.


@mailarka mailarka commented Dec 3, 2019

Thanks @liggitt for the reply. Yes, I am using the secured port.

kubectl auth can-i --as system:serviceaccount:kube-system:replicaset-controller use podsecuritypolicy/priviledged
Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'extensions'
no
kubectl auth can-i --as system:serviceaccount:kube-system:replication-controller use podsecuritypolicy/priviledged
Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'extensions'
no
kubectl auth can-i --as system:serviceaccount:kube-system:service-account-controller use podsecuritypolicy/priviledged
Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'extensions'
no
kubectl auth can-i --as system:serviceaccount:kube-system:deployment-controller use podsecuritypolicy/priviledged
Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'extensions'
no
kubectl auth can-i --as system:serviceaccount:kube-system:default use podsecuritypolicy/priviledged
Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'extensions'
no

As suggested, I have modified the ClusterRoleBinding for the kube-system namespace to grant permission only to specific service accounts. Now my psp.yaml looks like this:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: psp:privileged
roleRef:
  kind: ClusterRole
  name: psp:privileged
  apiGroup: rbac.authorization.k8s.io
subjects:
# For the kubeadm kube-system nodes
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: psp:calico
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: psp:privileged
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
- kind: ServiceAccount
  name: kube-proxy
  namespace: kube-system
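The tightened bindings above can be spot-checked per subject (a sketch; the service account names are the ones from my manifests):

```shell
# Only the explicitly bound service accounts should pass; everything else
# in kube-system should now be denied the privileged PSP.
kubectl auth can-i --as system:serviceaccount:kube-system:kube-proxy use podsecuritypolicy/privileged
kubectl auth can-i --as system:serviceaccount:kube-system:default use podsecuritypolicy/privileged
```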

I am also using Helm to deploy charts into various namespaces, with RBAC for Helm set up as below. Running helm version reports:

Client: &version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

kubectl auth can-i --as system:serviceaccount:kube-system:tiller use podsecuritypolicy/priviledged
Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'extensions'
yes

I am deploying Tiller at cluster scope (a single Tiller pod in the kube-system namespace serving the whole cluster). I now suspect that binding cluster-admin to the tiller service account is causing the security hole.
I am experimenting with RBAC for Helm and will let you know if I succeed.

But I am still not clear how Tiller is able to bypass the restriction (granting the privileged policy to any service) even though I bind the restricted PSP policy to all service accounts.

@liggitt liggitt self-assigned this Dec 3, 2019
@liggitt liggitt commented Dec 3, 2019

Your can-i command has a typo in the PSP name (priviledged in the command vs privileged in the role), which would seem to indicate the service account in question has permissions on all PSPs.

Now I suspect that linking cluster-admin to the tiller service account is causing the security hole.

Yes, granting anything cluster-admin rights allows it to use any PSP (cluster-admin == superuser == allow any verb on any resource)
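For illustration, the rules of the built-in cluster-admin ClusterRole are pure wildcards, which necessarily cover the use verb on every PSP:

```yaml
# The built-in cluster-admin ClusterRole (rules section):
# wildcard apiGroups/resources/verbs match "use" on any podsecuritypolicy.
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
- nonResourceURLs: ["*"]
  verbs: ["*"]
```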
