
Add baremetal capability annotations #249

Merged
1 commit merged into openshift:master on Mar 25, 2022

Conversation

@honza (Member) commented Mar 3, 2022

The capability has already been registered with the API.
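For context, this adds the baremetal capability annotation to the CVO-managed manifests shipped by this operator, so the CVO can filter them when the baremetal capability is not enabled. A minimal, illustrative fragment (not the literal diff in this PR) of an annotated manifest:

metadata:
  annotations:
    # capability annotation added by this PR; lets the CVO skip the resource
    # when the baremetal capability is disabled on the cluster
    capability.openshift.io/name: baremetal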

openshift-ci bot requested review from andfasano and markmc on March 3, 2022 at 19:51
@honza (Member, Author) commented Mar 4, 2022

/retest

@sadasu (Contributor) commented Mar 9, 2022

/retest

1 similar comment
@honza (Member, Author) commented Mar 10, 2022

/retest

@sadasu (Contributor) commented Mar 10, 2022

Does the BMH CRD also need this annotation? That would mean that we have to add this capability annotation to https://github.com/openshift/baremetal-operator/blob/master/config/crd/ocp/ocp_kustomization.yaml too.

@honza (Member, Author) commented Mar 10, 2022

Or should it be an annotation on the container in CBO?

@wking (Member) commented Mar 11, 2022

In case it helps, recent nightly:

$ oc adm release extract --to manifests registry.ci.openshift.org/ocp/release:4.11.0-0.nightly-2022-03-09-235248
Extracted release payload from digest sha256:05d922e0df6b70ec6170602c4ae374b04a94dd0fe552ea668a20782c2afe5158 created at 2022-03-09T23:56:42Z
$ ls manifests/*metal*
manifests/0000_30_baremetal-operator_01_baremetalhost.crd.yaml
manifests/0000_31_cluster-baremetal-operator_01_images.configmap.yaml
manifests/0000_31_cluster-baremetal-operator_01_trusted-ca-configmap.yaml
manifests/0000_31_cluster-baremetal-operator_02_metal3provisioning.crd.yaml
manifests/0000_31_cluster-baremetal-operator_03_service.yaml
manifests/0000_31_cluster-baremetal-operator_03_webhookservice.yaml
manifests/0000_31_cluster-baremetal-operator_04_serviceaccount.yaml
manifests/0000_31_cluster-baremetal-operator_05_kube-rbac-proxy-config.yaml
manifests/0000_31_cluster-baremetal-operator_05_prometheus_rbac.yaml
manifests/0000_31_cluster-baremetal-operator_05_rbac.yaml
manifests/0000_31_cluster-baremetal-operator_06_deployment.yaml
manifests/0000_31_cluster-baremetal-operator_07_clusteroperator.cr.yaml
manifests/0000_90_cluster-baremetal-operator_03_servicemonitor.yaml

Unclear to me why the BareMetalHost CRD is at level 30. Looks like that's coming from here. Perhaps it should be adjusted to use a manifests/0000_31_cluster-baremetal-operator_... prefix? But all of those manifests look like they could use capability.openshift.io/name: baremetal to me.

@sadasu (Contributor) commented Mar 11, 2022

Or should it be an annotation on the container in CBO?

@honza This capability annotation is only required on resources managed by CVO, afaik. The BMO container, which is part of the metal3 deployment, is a resource managed by CBO and hence doesn't need this annotation.

We still need to add the capability annotation to the BMH CRD because that is a manifest that is generated here: https://github.com/openshift/baremetal-operator/blob/master/Dockerfile.ocp and installed by CVO. So, adding the capability annotation to https://github.com/openshift/baremetal-operator/blob/master/config/crd/ocp/ocp_kustomization.yaml would ensure the generated manifest has it.
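For illustration only, one way kustomize could attach that annotation is a commonAnnotations entry in the ocp kustomization (a sketch; the resource path is hypothetical and the actual change may look different):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# commonAnnotations is applied by kustomize to every rendered resource,
# so the generated BMH CRD manifest would pick up the capability annotation.
commonAnnotations:
  capability.openshift.io/name: baremetal
resources:
  - ../bases   # hypothetical path standing in for the real CRD sources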

@sadasu (Contributor) commented Mar 11, 2022

Unclear to me why the BareMetalHost CRD is at level 30. Looks like that's coming from here. Perhaps it should be adjusted to use a manifests/0000_31_cluster-baremetal-operator_... prefix? But all of those manifests look like they could use capability.openshift.io/name: baremetal to me.

@wking Your list is correct and that is what we are working towards. We wanted to make sure manifests/0000_30_baremetal-operator_01_baremetalhost.crd.yaml is installed before the rest of the CBO manifests, hence the level 30. The manifest is generated by openshift/baremetal-operator (https://github.com/openshift/baremetal-operator/blob/master/Dockerfile.ocp#L11), as you correctly pointed out, hence the name of the manifest has "baremetal-operator" and not "cluster-baremetal-operator". The rest of the manifests in that list are provided by the cluster-baremetal-operator.

@honza (Member, Author) commented Mar 11, 2022

We still need to add the capability annotation to the BMH CRD because that is a manifest that is generated here: https://github.com/openshift/baremetal-operator/blob/master/Dockerfile.ocp and installed by CVO. So, adding the capability annotation to https://github.com/openshift/baremetal-operator/blob/master/config/crd/ocp/ocp_kustomization.yaml would ensure the generated manifest has it.

Oh, yes, I remember now!

openshift/baremetal-operator#212

@sadasu (Contributor) commented Mar 11, 2022

In openshift/baremetal-operator, the annotation value is added with quotes, as "baremetal" (https://github.com/openshift/baremetal-operator/pull/212/files), while here in CBO it is unquoted. Which one should it be?

@honza (Member, Author) commented Mar 11, 2022

Fixed by removing the quotes. But I can't imagine that it would matter?
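For what it's worth, both spellings parse to the same YAML string, so the choice is purely cosmetic:

metadata:
  annotations:
    # an unquoted plain scalar and a double-quoted scalar are the same string value here
    capability.openshift.io/name: baremetal   # equivalent to "baremetal"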

@wking (Member) commented Mar 11, 2022

We wanted to make sure manifests/0000_30_baremetal-operator_01_baremetalhost.crd.yaml is installed before the rest of the CBO manifests hence the level 30.

The CVO collects manifests into task nodes, which share the same level/name prefix. So you currently have two task nodes, one for 30_baremetal-operator and one for 31_cluster-baremetal-operator. During updates, level 30 task nodes will be processed before level 31 task nodes. But during install and reconciliation we flatten the task-node graph, so levels don't matter. However, regardless of the CVO mode, we always order manifests within a task node. So moving the CRD to 0000_31_cluster-baremetal-operator_00_baremetalhost.crd.yaml, or something similar that begins with 0000_31_cluster-baremetal-operator_ and sorts before the other 0000_31_cluster-baremetal-operator_... entries, would ensure the CRD comes first in all CVO modes, while your current level 30 only gets it coming first in update mode.

@sadasu (Contributor) commented Mar 14, 2022

/retest

5 similar comments
@honza (Member, Author) commented Mar 15, 2022

/retest

@honza (Member, Author) commented Mar 16, 2022

/retest

@honza (Member, Author) commented Mar 18, 2022

/retest

@wking (Member) commented Mar 23, 2022

/retest

@honza (Member, Author) commented Mar 23, 2022

/retest

@honza (Member, Author) commented Mar 23, 2022

It seems to me that the errors in e2e-metal-ipi-ovn-ipv6 aren't related to the changes in this PR. Given that the other jobs passed on the first try, I think we're safe to override here.

/override ci/prow/e2e-metal-ipi-ovn-ipv6

@sadasu (Contributor) commented Mar 23, 2022

/lgtm

openshift-ci bot added the lgtm label (indicates that a PR is ready to be merged) on Mar 23, 2022
@dtantsur (Member)

/approve

openshift-ci bot commented Mar 24, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dtantsur, honza

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-ci bot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Mar 24, 2022
@openshift-bot

/retest-required

Please review the full test history for this PR and help us cut down flakes.

@honza (Member, Author) commented Mar 24, 2022

/override ci/prow/e2e-metal-ipi-ovn-ipv6

openshift-ci bot commented Mar 24, 2022

@honza: Overrode contexts on behalf of honza: ci/prow/e2e-metal-ipi-ovn-ipv6

In response to this:

/override ci/prow/e2e-metal-ipi-ovn-ipv6

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-bot

/retest-required

Please review the full test history for this PR and help us cut down flakes.

3 similar comments

openshift-ci bot commented Mar 25, 2022

@honza: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

openshift-merge-robot merged commit efa06ca into openshift:master on Mar 25, 2022
@honza (Member, Author) commented Mar 28, 2022

The CVO collects manifests into task nodes, which share the same level/name prefix. [...] So moving the CRD to 0000_31_cluster-baremetal-operator_00_baremetalhost.crd.yaml [...] would ensure the CRD comes first in all CVO modes, while your current level 30 only gets it coming first in update mode.

Thanks for the explanation. I opened a follow-up PR: openshift/baremetal-operator#216
