
[Release-1.28] - Local path provisioner disallowed from reading Pods logs #9936

Closed
brandond opened this issue Apr 11, 2024 · 1 comment

@brandond
Contributor

Backport fix for Local path provisioner disallowed from reading Pods logs

@VestigeJ

## Environment Details
Reproduced using VERSION=v1.28.8+k3s1
Validated using COMMIT=feb211d3ce0c41ee8d02dfc9164bb9c7dd97533c

Infrastructure

  • Cloud

Node(s) CPU architecture, OS, and version:

Linux 5.14.21-150500.53-default x86_64 GNU/Linux
PRETTY_NAME="SUSE Linux Enterprise Server 15 SP5"

Cluster Configuration:

NAME               STATUS   ROLES                       AGE     VERSION
ip-12-13-12-16     Ready    control-plane,etcd,master   3h35m   v1.28.8+k3s-feb211d3

Config.yaml:

node-external-ip: 12.13.12.16
token: YOUR_TOKEN_HERE
write-kubeconfig-mode: 644
debug: true
profile: cis
protect-kernel-defaults: true
cluster-init: true
embedded-registry: true
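
For context, k3s reads its configuration from /etc/rancher/k3s/config.yaml by default; a minimal sketch of staging the file above before running the installer (assuming it is saved locally as config.yaml):

$ sudo mkdir -p /etc/rancher/k3s
$ sudo cp config.yaml /etc/rancher/k3s/config.yaml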

Reproduction

$ curl https://get.k3s.io --output install-"k3s".sh
$ sudo chmod +x install-"k3s".sh
$ sudo groupadd --system etcd && sudo useradd -s /sbin/nologin --system -g etcd etcd
$ sudo modprobe ip_vs_rr
$ sudo modprobe ip_vs_wrr
$ sudo modprobe ip_vs_sh
$ sudo printf "vm.panic_on_oom=0 \nvm.overcommit_memory=1 \nkernel.panic=10 \nkernel.panic_on_oops=1 \n" > ~/90-kubelet.conf
$ sudo cp 90-kubelet.conf /etc/sysctl.d/
$ sudo systemctl restart systemd-sysctl
$ COMMIT=81cd630f87ba3c0c720862af4cd02850303083a5
$ sudo INSTALL_K3S_COMMIT=$COMMIT INSTALL_K3S_EXEC=server ./install-k3s.sh
$ set_kubefig
$ vim pvc-test.yaml
$ vim pod-test.yaml
$ k get deploy -n kube-system local-path-provisioner -o jsonpath='{$.spec.template.spec.containers[:1].image}'
$ k apply -f pvc-test.yaml
$ k apply -f pod-test.yaml
$ kgp -A -o wide
$ k delete -f pod-test.yaml -f pvc-test.yaml
$ kg pv -A
$ k logs pod/local-path-provisioner-6c86858495-rxk4q -n kube-system
$ k logs pod/local-path-provisioner-6c86858495-rxk4q -n kube-system
$ kg clusterrole local-path-provisioner-role -o yaml
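
The pvc-test.yaml and pod-test.yaml used above are not reproduced here; a minimal sketch of what they could look like, assuming a 5Gi ReadWriteOnce claim named test-pvc against the bundled local-path storage class (matching the PV output below) and an illustrative busybox pod that mounts it:

# pvc-test.yaml (illustrative)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi

# pod-test.yaml (illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: default
spec:
  containers:
    - name: volume-test
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc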

Results:

// both the new commit and the existing release ship the same local-path-provisioner image version
$ k get deploy -n kube-system local-path-provisioner -o jsonpath='{$.spec.template.spec.containers[:1].image}'

rancher/local-path-provisioner:v0.0.26

I also hit and was able to reproduce issue #9833:

$ kg pv -A

NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
checking-path   5Gi        RWO            Recycle          Failed   default/test-pvc   local-path              50m

// existing release clusterrole permissions; note that the pods/log resource is missing

$ kg clusterrole local-path-provisioner-role -o yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    objectset.rio.cattle.io/applied: H4sIAAAAAA
    objectset.rio.cattle.io/id: ""
    objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
    objectset.rio.cattle.io/owner-name: local-storage
    objectset.rio.cattle.io/owner-namespace: kube-system
  creationTimestamp: "2024-04-15T18:49:24Z"
  labels:
    objectset.rio.cattle.io/hash: 183f35c65ffbc3064603f43f1580d8c68a2dabd4
  name: local-path-provisioner-role
  resourceVersion: "272"
  uid: 6c447fa9-505f-43f3-b3d7-fa289476146f
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - persistentvolumeclaims
  - configmaps
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - endpoints
  - persistentvolumes
  - pods
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  verbs:
  - get
  - list
  - watch

// installing from the latest commit now includes the pods/log resource in the clusterrole

$ kg clusterrole local-path-provisioner-role -o yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    objectset.rio.cattle.io/applied: H4sIAAAAAAAYDAAA
    objectset.rio.cattle.io/id: ""
    objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
    objectset.rio.cattle.io/owner-name: local-storage
    objectset.rio.cattle.io/owner-namespace: kube-system
  creationTimestamp: "2024-04-15T18:42:20Z"
  labels:
    objectset.rio.cattle.io/hash: 183f35c65ffbc3064603f43f1580d8c68a2dabd4
  name: local-path-provisioner-role
  resourceVersion: "276"
  uid: 36473c84-d8e1-441b-b10e-fcae60d91b63
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - persistentvolumeclaims
  - configmaps
  - pods/log
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - endpoints
  - persistentvolumes
  - pods
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  verbs:
  - get
  - list
  - watch

For what it's worth, I did not hit the error in the pod logs during reproduction, but since the change is a permissions update on the clusterrole, it is straightforward to verify from the Kubernetes API that the role now grants the required access.
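
For example, impersonating the provisioner's service account (assuming the default local-path-provisioner-service-account name used by the bundled manifest) should report that access to the log subresource is allowed once the updated clusterrole is in place:

$ k auth can-i get pods --subresource=log -n kube-system --as=system:serviceaccount:kube-system:local-path-provisioner-service-account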

$ k logs pod/local-path-provisioner-6c86858495-rxk4q -n kube-system

I0415 18:42:37.898685       1 controller.go:811] Starting provisioner controller rancher.io/local-path_local-path-provisioner-6c86858495-rxk4q_37323b8f-6556-440b-8e62-a51f6da89a19!
I0415 18:42:37.999507       1 controller.go:860] Started provisioner controller rancher.io/local-path_local-path-provisioner-6c86858495-rxk4q_37323b8f-6556-440b-8e62-a51f6da89a19!
