BUILD-284: integrate shared resources operator #198
Conversation
Resolved review threads (outdated):
assets/csidriveroperators/shared-resource/07_role_shared_resource_config.yaml
pkg/operator/csidriveroperator/csioperatorclient/shared-resource.go
@@ -0,0 +1,269 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
Does the CSI driver need access to some CR? I don't see its RBAC here. In addition, who is creating the CRD? It could be the CVO (then put the CRD into /manifests/) or it could be your operator (then the operator needs permissions to create/update CRDs).
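If the operator (rather than the CVO) were the one creating the CRD, its ClusterRole would need rules along these lines. This is an illustrative sketch, not a manifest from this PR; the metadata name is hypothetical:

```yaml
# Hypothetical ClusterRole fragment granting an operator permission
# to create and update CustomResourceDefinitions.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: shared-resource-crd-installer  # illustrative name, not from this PR
rules:
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
```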
Yeah, at the moment our various CI flows are passing because https://github.com/openshift/csi-driver-shared-resource/blob/master/Makefile#L54-L57 is creating the CRDs.
Along the "CVO path", @coreydaley has openshift/openshift-apiserver#244 up, which I think would result in the CVO handling it the same way all the other CVO-operator-based CRDs get applied to the cluster.
If I am correct, then once that merges we could remove the `make crd` Makefile target in the driver repo.
If I am incorrect, then yes, we would need to add permissions here to create the new CRDs, given the CSO is not creating any CRDs at the moment, and go from there.
@coreydaley am I missing something, perhaps based on your recent conversations with the apiserver team?
Granted, I just realized the timestamp of @jsafrane's comment here is over 2 months old, so perhaps this has all since been sorted out :-)
Out of curiosity, do you aim at OCP 4.9? We're past feature freeze there and this looks like a feature.
@jsafrane this is now being staged for when 4.10 is available. We missed ART's cutoff for adding things to the payload.
/retest
3 similar comments
/retest
/retest
/retest
/assign @jsafrane @adambkaplan @gabemontero
/retest
Other than the discussion point from back in August about how the CRDs get installed on the cluster, nothing jumped out at me. That said, I'm more comfortable with @jsafrane applying the labels for merge, as I suspect the boilerplate-type aspects of this PR are very CSO-specific. Just to confirm, I did see the elements about applying our new drivers on all the clouds / being "cloud neutral", and what @adambkaplan introduced here with https://github.com/openshift/cluster-storage-operator/pull/197/files
From the e2e-vsphere-csi job that sets
Sorry, I forgot to mention we use a special annotation
See https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_cluster-storage-operator/198/pull-ci-openshift-cluster-storage-operator-master-e2e-vsphere-csi/1450116529319317504. The Go code itself looks good.
/hold cancel
/retest
/retest
@jsafrane ptal, the e2e-vsphere-csi job is passing now.
/retest
2 similar comments
/retest
/retest
This PR itself looks good; a new CSI driver is installed when TechPreviewNoUpgrade is present. One blocking question/confirmation: do you have ART builds of the csi-driver-shared-resource-operator and csi-driver-shared-resource images? We cannot merge this PR until the images are built, otherwise it breaks nightly ART builds and blocks the whole QA. Non-blocking issues that need to be solved in subsequent PRs / other repos:
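For context, the TechPreviewNoUpgrade gating mentioned above is driven by the cluster-scoped FeatureGate config. A minimal sketch of enabling that feature set follows (the standard OpenShift manifest shape, not a file from this PR):

```yaml
# Enables the TechPreviewNoUpgrade feature set cluster-wide, which in
# turn lets gated operands like this CSI driver be installed.
# Note: on a real cluster this is irreversible and blocks upgrades.
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade
```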
[APPROVALNOTIFIER] This PR is APPROVED
This pull request has been approved by: coreydaley, jsafrane
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
I was assured ART builds are present.
/retest-required Please review the full test history for this PR and help us cut down flakes.
2 similar comments
/retest-required Please review the full test history for this PR and help us cut down flakes.
/retest-required Please review the full test history for this PR and help us cut down flakes.
@coreydaley: The following test failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest-required
/retest-required Please review the full test history for this PR and help us cut down flakes.
5 similar comments
/retest-required Please review the full test history for this PR and help us cut down flakes.
/retest-required Please review the full test history for this PR and help us cut down flakes.
/retest-required Please review the full test history for this PR and help us cut down flakes.
/retest-required Please review the full test history for this PR and help us cut down flakes.
/retest-required Please review the full test history for this PR and help us cut down flakes.
https://issues.redhat.com/browse/BUILD-284