
external: fix the run as a user flag #13383

Merged · 1 commit · Dec 13, 2023

Conversation

parth-gr (Member)

There is a hardcoded value of "client.healthchecker" which needs to be replaced by self.run_as_user.
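A minimal sketch of the fix, using assumed names (the class and method below are illustrative, not the script's real identifiers): the script grants caps to a health-checker Ceph user, and the user name should come from the --run-as-user flag (stored as self.run_as_user) rather than the hardcoded "client.healthchecker" string.

```python
# Illustrative stand-in for the logic in
# deploy/examples/create-external-cluster-resources.py; names are assumed.

class ExternalClusterResources:
    def __init__(self, run_as_user=""):
        # When --run-as-user is omitted, fall back to the historical default.
        self.run_as_user = run_as_user or "client.healthchecker"

    def healthchecker_caps_command(self):
        # Before the fix, the user argument here was the literal string
        # "client.healthchecker"; after the fix it honors self.run_as_user.
        return [
            "auth", "get-or-create", self.run_as_user,
            "mon", "allow r, allow command quorum_status, allow command version",
        ]
```

With the fix, passing a custom user threads it through to the auth command, while omitting the flag keeps the old client.healthchecker behavior.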

Checklist:

  • Commit Message Formatting: Commit titles and messages follow guidelines in the developer guide.
  • Reviewed the developer guide on Submitting a Pull Request
  • Pending release notes updated with breaking and/or notable changes for the next minor release.
  • Documentation has been updated, if necessary.
  • Unit tests have been added, if necessary.
  • Integration tests have been added, if necessary.

@subhamkrai (Contributor) left a comment:

Also, please share the testing output. Thanks

deploy/examples/create-external-cluster-resources.py (review thread: outdated, resolved)
there is a hardcoded value of "client.healthchecker"
which needs to be replaced by self.run_as_user

Signed-off-by: parth-gr <partharora1010@gmail.com>
@parth-gr (Member, Author) commented Dec 13, 2023:

sh-4.4$ python3 a.py --rbd-data-pool-name=replicapool  --run-as-user client.ocphealthcheck.ocp-lab-cluster-1
[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "a=10.98.66.95:6789", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "c0aa3258-ff25-4d6f-a03b-f2040de48a02", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "client.ocphealthcheck.ocp-lab-cluster-1", "userKey": "AQDZcHll78RTMRAA9Ol/BvHpIGN27G/Im6Cy0w=="}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "10.244.254.18", "MonitoringPort": "9283"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "AQBfcHllzFYSJhAA4jbRWw0wIHOE+N1A1jBuQg=="}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "AQBfcHll5jd4GxAAgDvHMCTvErMeKXSRmAR1VQ=="}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "AQBfcHllixr5MBAAqvJ3SDJ1I9LlC3CHm9ng8Q=="}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "AQBgcHllCBUzABAA8fWruV7u23UG8lZzfMhaFg=="}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "http://10.244.254.18:7000/"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "replicapool", "csi.storage.k8s.io/provisioner-secret-name": "rook-csi-rbd-provisioner", "csi.storage.k8s.io/controller-expand-secret-name": "rook-csi-rbd-provisioner", "csi.storage.k8s.io/node-stage-secret-name": "rook-csi-rbd-node"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "myfs", "pool": "myfs-replicated", "csi.storage.k8s.io/provisioner-secret-name": "rook-csi-cephfs-provisioner", "csi.storage.k8s.io/controller-expand-secret-name": "rook-csi-cephfs-provisioner", "csi.storage.k8s.io/node-stage-secret-name": "rook-csi-cephfs-node"}}]
client.ocphealthcheck.ocp-lab-cluster-1
        key: AQDZcHll78RTMRAA9Ol/BvHpIGN27G/Im6Cy0w==
        caps: [mgr] allow command config
        caps: [mon] allow r, allow command quorum_status, allow command version
        caps: [osd] profile rbd-read-only, allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index

Running the script without the --run-as-user flag:

client.healthchecker
        key: AQDZcHll78RTMRAA9Ol/BvHpIGN27G/Im6Cy0w==
        caps: [mgr] allow command config
        caps: [mon] allow r, allow command quorum_status, allow command version
        caps: [osd] profile rbd-read-only, allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index

@subhamkrai (Contributor) commented:

sh-4.4$ python3 a.py --rbd-data-pool-name=replicapool  --run-as-user client.ocphealthcheck.ocp-lab-cluster-1
[JSON output identical to the run quoted above]
client.ocphealthcheck.ocp-lab-cluster-1
        key: AQDZcHll78RTMRAA9Ol/BvHpIGN27G/Im6Cy0w==
        caps: [mgr] allow command config
        caps: [mon] allow r, allow command quorum_status, allow command version
        caps: [osd] profile rbd-read-only, allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index

Running the script without the --run-as-user flag:

client.ocphealthcheck.ocp-lab-cluster-1
        key: AQDZcHll78RTMRAA9Ol/BvHpIGN27G/Im6Cy0w==
        caps: [mgr] allow command config
        caps: [mon] allow r, allow command quorum_status, allow command version
        caps: [osd] profile rbd-read-only, allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index

If I read the output correctly, both times the user is client.ocphealthcheck.ocp-lab-cluster-1. When --run-as-user is not passed, shouldn't it fall back to the default client.healthchecker?
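The fallback being asked about can be sketched as follows. The argparse wiring is an assumption about how the script exposes the flag, not the script's exact code: when --run-as-user is omitted, the parsed value should default to client.healthchecker.

```python
# Assumed sketch of the --run-as-user flag's default behavior.
import argparse

def build_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--run-as-user",
        default="client.healthchecker",  # used when the flag is not passed
        help="Ceph user name for the external cluster health checker",
    )
    return parser

# argparse maps --run-as-user to the attribute run_as_user.
default_user = build_parser().parse_args([]).run_as_user
custom_user = build_parser().parse_args(
    ["--run-as-user", "client.ocphealthcheck.ocp-lab-cluster-1"]
).run_as_user
```

Under this wiring, a run without the flag yields client.healthchecker, which is the behavior the reviewer expects to see in the second output block.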

@parth-gr (Member, Author) replied:

If I read the output correctly, both times the user is client.ocphealthcheck.ocp-lab-cluster-1. When --run-as-user is not passed, shouldn't it fall back to the default client.healthchecker?

Yes, I copied the wrong output; updating it manually now.

@subhamkrai (Contributor) left a comment:

LGTM

@travisn travisn merged commit 1e9c160 into rook:master Dec 13, 2023
49 of 51 checks passed
travisn added a commit that referenced this pull request Dec 13, 2023
external: fix the run as a user flag (backport #13383)
travisn added a commit that referenced this pull request Dec 13, 2023
external: fix the run as a user flag (backport #13383)
3 participants