
Rancher v2 Hardening Guide Breaks UI-Kube Terminal #19439

Open

ChrisMcKee opened this issue Apr 8, 2019 · 8 comments

@ChrisMcKee commented Apr 8, 2019

RKE Setup

kubernetes_version: v1.13.5-rancher1-2
cluster_name: rancher-k8

bastion_host:
    address: ${bastion_host}
    user: ${bastion_user}
    port: 22
    ssh_key_path: ${bastion_filepath}

nodes:
  - address: ${node_one_ip}
    user: ubuntu
    role: [controlplane, worker, etcd]
    ssh_key_path: ./sshkey
  - address: ${node_two_ip}
    user: ubuntu
    role: [controlplane, worker, etcd]
    ssh_key_path: ./sshkey
  - address: ${node_three_ip}
    user: ubuntu
    role: [controlplane, worker, etcd]
    ssh_key_path: ./sshkey

ignore_docker_version: true

cloud_provider:
    name: aws

network:
    plugin: canal

ingress:
    provider: nginx

services:
  etcd:
    snapshot: true # enables recurring etcd snapshots
    creation: 6h0s # time increment between snapshots
    retention: 168h # time increment before snapshot purge
  kubelet:
    extra_args:
      streaming-connection-idle-timeout: "1800s"
      protect-kernel-defaults: "true"
      make-iptables-util-chains: "true"
      event-qps: "0"
  kube-api:
    pod_security_policy: true
    extra_args:
      anonymous-auth: "false"
      profiling: "false"
      repair-malformed-updates: "false"
      service-account-lookup: "true"
      enable-admission-plugins: "ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy"
      admission-control-config-file: "/etc/kubernetes/admission.yaml"
      encryption-provider-config: /etc/kubernetes/encryption.yaml
      audit-log-path: "/var/log/kube-audit/audit-log.json"
      audit-log-maxage: "5"
      audit-log-maxbackup: "5"
      audit-log-maxsize: "100"
      audit-log-format: "json"
      audit-policy-file: /etc/kubernetes/audit.yaml
    extra_binds:
      - "/var/log/kube-audit:/var/log/kube-audit"
  scheduler:
    extra_args:
      profiling: "false"
      address: "127.0.0.1"
  kube-controller:
    extra_args:
      profiling: "false"
      address: "127.0.0.1"
      terminated-pod-gc-threshold: "1000"

addons: |
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: default-psp-role
    namespace: ingress-nginx
  rules:
  - apiGroups:
    - extensions
    resourceNames:
    - default-psp
    resources:
    - podsecuritypolicies
    verbs:
    - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: default-psp-rolebinding
    namespace: ingress-nginx
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: default-psp-role
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated
  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: cattle-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: default-psp-role
    namespace: cattle-system
  rules:
  - apiGroups:
    - extensions
    resourceNames:
    - default-psp
    resources:
    - podsecuritypolicies
    verbs:
    - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: default-psp-rolebinding
    namespace: cattle-system
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: default-psp-role
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated
  ---
  apiVersion: extensions/v1beta1
  kind: PodSecurityPolicy
  metadata:
    name: restricted
  spec:
    requiredDropCapabilities:
    - NET_RAW
    privileged: false
    allowPrivilegeEscalation: false
    defaultAllowPrivilegeEscalation: false
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: MustRunAsNonRoot
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
    - emptyDir
    - secret
    - persistentVolumeClaim
    - downwardAPI
    - configMap
    - projected
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: psp:restricted
  rules:
  - apiGroups:
    - extensions
    resourceNames:
    - restricted
    resources:
    - podsecuritypolicies
    verbs:
    - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: psp:restricted
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: psp:restricted
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated

This is the RKE setup required by the CIS-benchmark-based hardening guide:
https://releases.rancher.com/documents/security/latest/Rancher_Hardening_Guide.pdf

Launching a cluster with this config, followed by the usual Helm steps, results in a working Rancher installation. (You obviously need the encryption and audit files in place, as per the doc.)
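
For reference, the two files referenced by the kube-api flags above look roughly like this per the hardening guide (the aescbc secret below is a placeholder; generate your own 32-byte base64 key):

/etc/kubernetes/encryption.yaml:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}

/etc/kubernetes/audit.yaml:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata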

Clicking the Launch kubectl button results in a WebSocket error (403 Forbidden) from the system:

0:190838 WebSocket connection to 'wss://rancher-de.xxxx.xx/v3/clusters/local?shell=true' failed: Error during WebSocket handshake: Unexpected response code: 403

I 'expect' this is intended (the ability to run commands in the web terminal, bypassing other restrictions that may be in place, may be deemed a bad thing to give a user in a hardened environment), but currently the UI doesn't allow for this.
If it is intentional, it may be best to add a flag to the Rancher Helm install that removes this button from the UI, and to note this in the hardening docs.

@ChrisMcKee (Author) commented Apr 10, 2019

It's literally this step

      enable-admission-plugins: "ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy"
      admission-control-config-file: "/etc/kubernetes/admission.yaml"

that disables access
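
For context, the admission-control-config-file referenced in that step is the EventRateLimit configuration from the same guide; it looks roughly like this (the limit values are illustrative):

apiVersion: apiserver.k8s.io/v1alpha1
kind: AdmissionConfiguration
plugins:
- name: EventRateLimit
  path: /etc/kubernetes/event.yaml

with /etc/kubernetes/event.yaml along the lines of:

apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
- type: Server
  qps: 5000
  burst: 20000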

@sybnex commented May 29, 2019

Yeah, it is this plugin: DenyEscalatingExec

The Rancher agent runs in privileged mode, and since exec'ing into a privileged pod (which is what gives you the terminal) is forbidden, you don't get the terminal.

So either remove this flag, or ask Rancher to create this connection through another pod that doesn't run in privileged mode.
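
Removing the flag means dropping DenyEscalatingExec from the kube-api admission plugin list in cluster.yml, i.e. the same list as above minus that one entry:

  kube-api:
    extra_args:
      enable-admission-plugins: "ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,NodeRestriction,EventRateLimit,PodSecurityPolicy"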

@ChrisMcKee (Author) commented May 29, 2019

@sybnex yep; the docs were being turned into webpages last time I asked. This issue is mostly so the right notes (expected behaviour) can be added, and to see whether a ticket needs raising against the UI for how the button is handled :)

@jiaqiluo (Member) commented Sep 16, 2019

Cannot open kubectl from the UI in v2.3.0-rc3; it fails with a different error message.

Detailed steps are logged here: #20884 (comment)

@jiaqiluo jiaqiluo self-assigned this Sep 16, 2019
@jiaqiluo jiaqiluo added this to the v2.3 milestone Sep 16, 2019

@ChrisMcKee (Author) commented Sep 17, 2019

It would be nice if, when disabled by policy, the button were greyed out and a modal or hover-tip appeared on attempting to use it, saying that the kubectl interface is disabled due to security policy. Or words to that effect.

@deniseschannon deniseschannon removed this from the v2.3 milestone Sep 17, 2019
@alena1108 alena1108 mentioned this issue Sep 17, 2019
7 of 7 tasks complete
@aemneina aemneina added the internal label Sep 27, 2019
@deniseschannon deniseschannon added this to the v2.3.2 milestone Oct 1, 2019

@jiaqiluo (Member) commented Oct 22, 2019

Hit the same bug in v2.3-head (8659f45) with a cluster in hardened mode.

Steps:

  • follow the guide to provision a k8s cluster in hardened mode with rke: https://rancher.com/docs/rancher/v2.x/en/security/hardening-2.2/
  • install Rancher v2.3-head
  • in Rancher, enable GitHub auth
  • log in as a GitHub user (standard user)
  • add a custom cluster with PSP enabled, configure the nodes, and edit the cluster as YAML following the guide
  • launch kubectl from the UI

[Screenshot: error shown when launching kubectl from the UI]

RKE command line tool version: 0.3.1
RKE cluster: v1.13.12-rancher1-1
Custom cluster: v1.15.5-rancher1-2

@mrajashree mrajashree self-assigned this Oct 28, 2019

@mrajashree (Member) commented Oct 28, 2019

cattle-node-agent is the pod that is exec'd into when Launch kubectl is selected.

Solution 1: If we run this pod in non-privileged mode, "Launch kubectl" in the UI should work fine. This issue aims at running it in non-privileged mode.

Solution 2: We should remove the DenyEscalatingExec admission plugin, since it will be deprecated in 1.18 (https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#denyescalatingexec).
Instead we should define a custom PSP that disallows privilege escalation and running privileged containers. We could continue using the default restricted PSP for cluster hardening, but if it takes away too many permissions we could define a custom PSP with fewer restrictions.
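
A sketch of such a custom PSP (the name and exact fields are illustrative; it only needs to block privileged pods and privilege escalation, and unlike the restricted policy above it doesn't force non-root):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: no-privilege-escalation
spec:
  privileged: false
  allowPrivilegeEscalation: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'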

@cloudnautique (Member) commented Dec 24, 2019

We should look to address issue #13612 as well.

@cloudnautique cloudnautique removed their assignment Dec 24, 2019
@davidnuzik davidnuzik removed the team/ca label Jan 15, 2020
@deniseschannon deniseschannon removed this from the v2.3 - Backlog milestone Jan 18, 2020