
operator: Drop -operator from ClusterOperator object name #376

Merged
merged 1 commit into from
Feb 5, 2019

Conversation

LorbusChris
Member

for consistency

@openshift-ci-robot
Contributor

Hi @LorbusChris. Thanks for your PR.

I'm waiting for an openshift member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot openshift-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Feb 4, 2019
@openshift-ci-robot
Contributor

@LorbusChris: Cannot trigger testing until a trusted user reviews the PR and leaves an /ok-to-test message.

In response to this:

for consistency

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@runcom
Member

runcom commented Feb 4, 2019

/ok-to-test

@openshift-ci-robot openshift-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Feb 4, 2019
@runcom
Member

runcom commented Feb 4, 2019

For context: this was pointed out by Clayton on Slack.

@cgwalters
Member

I see how the CLI name is passed into where we create the clusteroperator CRD, but I'd be a bit surprised if this was the only place that needed changing.

We can try it, though; the kubelet unit tests look to be failing, which is odd.
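The rename discussed above amounts to dropping the `-operator` suffix when naming the ClusterOperator object. A minimal sketch of that derivation (illustrative only — `clusterOperatorName` is a hypothetical helper, not the actual MCO code path):

```go
package main

import (
	"fmt"
	"strings"
)

// clusterOperatorName derives the ClusterOperator object name from a
// component name by dropping a trailing "-operator", so
// "machine-config-operator" reports itself as "machine-config".
// Names without the suffix pass through unchanged.
func clusterOperatorName(component string) string {
	return strings.TrimSuffix(component, "-operator")
}

func main() {
	fmt.Println(clusterOperatorName("machine-config-operator")) // machine-config
	fmt.Println(clusterOperatorName("etcd"))                    // etcd (unchanged)
}
```

As Colin notes, the suffix is only dropped where the name is passed into ClusterOperator creation; other call sites may still need auditing.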

@cgwalters
Member

/approve

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Feb 4, 2019
@ashcrow
Member

ashcrow commented Feb 4, 2019

/test unit

@cgwalters
Member

In https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_machine-config-operator/376/pull-ci-openshift-machine-config-operator-master-e2e-aws/1453/artifacts/e2e-aws/clusteroperators.json
it looks like this worked. Specifically:

name: machine-config


        {
            "apiVersion": "config.openshift.io/v1",
            "kind": "ClusterOperator",
            "metadata": {
                "creationTimestamp": "2019-02-04T13:09:26Z",
                "generation": 1,
                "name": "machine-config",
                "resourceVersion": "107412",
                "selfLink": "/apis/config.openshift.io/v1/clusteroperators/machine-config",
                "uid": "197aadc6-287e-11e9-ac52-0a5673673970"
            },
            "spec": {},
            "status": {
                "conditions": [
                    {
                        "lastTransitionTime": "2019-02-04T13:09:27Z",
                        "status": "False",
                        "type": "Available"
                    },
                    {
                        "lastTransitionTime": "2019-02-04T13:09:27Z",
                        "message": "Progressing towards 3.11.0-560-g98e68e4d-dirty",
                        "status": "True",
                        "type": "Progressing"
                    },
                    {
                        "lastTransitionTime": "2019-02-04T13:15:04Z",
                        "message": "Failed when progressing towards 3.11.0-560-g98e68e4d-dirty because: error syncing: timed out waiting for the condition during syncRequiredMachineConfigPools: error pool master is not ready. status: (total: 3, updated: 0, unavailable: 1)",
                        "reason": "error syncing: timed out waiting for the condition during syncRequiredMachineConfigPools: error pool master is not ready. status: (total: 3, updated: 0, unavailable: 1)",
                        "status": "True",
                        "type": "Failing"
                    }
                ],
                "extension": {
                    "master": "pool is degraded because of 1 nodes are reporting degraded status on update. Cannot proceed.",
                    "worker": "all 3 nodes are at latest configuration worker-9d24ef31aee206cdc24d97394dec0443"
                },
                "relatedObjects": null,
                "versions": null
            }
        },

Also note that we're degraded, presumably due to #367. The fact that we haven't been gating any PRs or release payloads on nodes not being degraded has been really problematic, but it's unrelated to this PR. I just noticed that there's a convenient dump of the clusteroperator status.

@cgwalters
Member

/lgtm

3 similar comments
@runcom
Member

runcom commented Feb 4, 2019

/lgtm

@kikisdeliveryservice
Contributor

/lgtm

@ashcrow
Member

ashcrow commented Feb 4, 2019

/lgtm

@openshift-ci-robot
Contributor

@kikisdeliveryservice: changing LGTM is restricted to assignees, and assigning you to the PR failed.

In response to this:

/lgtm

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot
Contributor

@ashcrow: changing LGTM is restricted to assignees, and assigning you to the PR failed.

In response to this:

/lgtm

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@cgwalters
Member

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Feb 4, 2019
@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ashcrow, cgwalters, kikisdeliveryservice, LorbusChris, runcom

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

1 similar comment
@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-merge-robot openshift-merge-robot merged commit 0c8ab91 into openshift:master Feb 5, 2019
@LorbusChris LorbusChris deleted the patch-1 branch February 24, 2020 05:58
Labels
- approved — Indicates a PR has been approved by an approver from all required OWNERS files.
- lgtm — Indicates that a PR is ready to be merged.
- ok-to-test — Indicates a non-member PR verified by an org member that is safe to test.
- size/XS — Denotes a PR that changes 0-9 lines, ignoring generated files.
8 participants