strategic patch: "unrecognized type" error not informative enough #73692

Closed
cben opened this issue Feb 4, 2019 · 22 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.
priority/backlog Higher priority than priority/awaiting-more-evidence.
sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery.

Comments

@cben

cben commented Feb 4, 2019

What happened:
Given a value of the wrong type (e.g. a string where an int was expected) in a large YAML, oc apply, as well as the underlying kubectl patch --type=strategic, gives an error that is hard to act upon when the patch is large.

$ cluster/kubectl.sh patch --type=strategic -p '{"spec": {"replicas": 1}}'  deployment/sise
deployment.extensions/sise patched (no change)
$ cluster/kubectl.sh patch --type=strategic -p '{"spec": {"replicas": "1"}}'  deployment/sise
Error from server: unrecognized type: int32
  1. The wording sounds like the problem was an unexpected integer, but it actually means an integer was expected and something else was given. Worse, there are two code paths (FromUnstructured, ToUnstructured) that give exactly the same error for opposite directions!
  2. There is no indication of where in the patch the problem was. In the example above the patch is tiny, but with kubectl apply it is easy to get lost in a huge patch. apply gives more info, but the error on the last line is still the same error from patch:
$ oc apply -f deployment.yaml
deployment.apps/sise configured
$ oc apply -f deployment-string.yaml
Error from server: error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{\"deployment.kubernetes.io/revision\":\"1\"},\"creationTimestamp\":null,\"generation\":1,\"labels\":{\"run\":\"sise\"},\"name\":\"sise\",\"namespace\":\"default\"},\"spec\":{\"replicas\":\"1\",\"selector\":{\"matchLabels\":{\"run\":\"sise\"}},\"strategy\":{\"rollingUpdate\":{\"maxSurge\":1,\"maxUnavailable\":1},\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"run\":\"sise\"}},\"spec\":{\"containers\":[{\"image\":\"mhausenblas/simpleservice:0.5.0\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"sise\",\"ports\":[{\"containerPort\":9876,\"protocol\":\"TCP\"}],\"resources\":{},\"terminationMessagePath\":\"/dev/termination-log\",\"terminationMessagePolicy\":\"File\"}],\"dnsPolicy\":\"ClusterFirst\",\"restartPolicy\":\"Always\",\"schedulerName\":\"default-scheduler\",\"securityContext\":{},\"terminationGracePeriodSeconds\":30}}}}\n"},"creationTimestamp":null},"spec":{"replicas":"1"}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "sise", Namespace: "default"
Object: &{map["apiVersion":"apps/v1" "metadata":map["name":"sise" "namespace":"default" "selfLink":"/apis/apps/v1/namespaces/default/deployments/sise" "creationTimestamp":"2019-02-04T10:57:10Z" "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{\"deployment.kubernetes.io/revision\":\"1\"},\"creationTimestamp\":null,\"generation\":1,\"labels\":{\"run\":\"sise\"},\"name\":\"sise\",\"namespace\":\"default\"},\"spec\":{\"replicas\":1,\"selector\":{\"matchLabels\":{\"run\":\"sise\"}},\"strategy\":{\"rollingUpdate\":{\"maxSurge\":1,\"maxUnavailable\":1},\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"run\":\"sise\"}},\"spec\":{\"containers\":[{\"image\":\"mhausenblas/simpleservice:0.5.0\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"sise\",\"ports\":[{\"containerPort\":9876,\"protocol\":\"TCP\"}],\"resources\":{},\"terminationMessagePath\":\"/dev/termination-log\",\"terminationMessagePolicy\":\"File\"}],\"dnsPolicy\":\"ClusterFirst\",\"restartPolicy\":\"Always\",\"schedulerName\":\"default-scheduler\",\"securityContext\":{},\"terminationGracePeriodSeconds\":30}}}}\n"] "uid":"9ee94b73-286b-11e9-9a60-68f728fac3ab" "resourceVersion":"424" "generation":'\x01' "labels":map["run":"sise"]] "spec":map["replicas":'\x01' "selector":map["matchLabels":map["run":"sise"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["run":"sise"]] "spec":map["terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler" "containers":[map["name":"sise" "image":"mhausenblas/simpleservice:0.5.0" "ports":[map["containerPort":'\u2694' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent"]] "restartPolicy":"Always"]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']] "revisionHistoryLimit":'\n' "progressDeadlineSeconds":'\u0258'] "status":map["replicas":'\x01' "updatedReplicas":'\x01' "readyReplicas":'\x01' "availableReplicas":'\x01' "conditions":[map["lastTransitionTime":"2019-02-04T10:57:10Z" "reason":"MinimumReplicasAvailable" "message":"Deployment has minimum availability." "type":"Available" "status":"True" "lastUpdateTime":"2019-02-04T10:57:10Z"] map["type":"Progressing" "status":"True" "lastUpdateTime":"2019-02-04T10:57:12Z" "lastTransitionTime":"2019-02-04T10:57:10Z" "reason":"NewReplicaSetAvailable" "message":"ReplicaSet \"sise-5fc86787d8\" has successfully progressed."]] "observedGeneration":'\x01'] "kind":"Deployment"]}
for: "deployment-string.yaml": unrecognized type: int32

What you expected to happen:

  • tell me I gave the string "1" where an int was expected.
  • ideally, tell me the problem was in spec.replicas; the sketch below shows the kind of error shape I have in mind.
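
Something along the lines of what apimachinery's own validation helpers produce would do. A sketch for illustration only (not what the converter returns today, and not necessarily what #73695 implements), using k8s.io/apimachinery/pkg/util/validation/field:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation/field"
)

func main() {
	// The kind of error I'd like: it names the field path, the offending
	// value, and what was expected instead.
	err := field.Invalid(
		field.NewPath("spec", "replicas"), // where the problem is
		"1",                               // what was given
		"expected int32, got string",      // what was expected
	)
	fmt.Println(err)
	// prints roughly: spec.replicas: Invalid value: "1": expected int32, got string
}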

How to reproduce it (as minimally and precisely as possible):
https://gist.github.com/cben/9bbb982fb8fcf3d88c2c875d04e3a42c

  1. kubectl apply -f deployment.yaml
  2. kubectl patch --type=strategic -p '{"spec": {"replicas": "1"}}' deployment/sise
  3. kubectl apply -f deployment-string.yaml
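
The conversion failure can also be reproduced directly against the converter, with no apiserver involved. A minimal Go sketch, assuming k8s.io/api and k8s.io/apimachinery are vendored (the exact error wording depends on the apimachinery version):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

func main() {
	// A patched object roughly as the apiserver sees it after the strategic
	// merge: spec.replicas carries a string, but the typed field is *int32.
	patched := map[string]interface{}{
		"apiVersion": "apps/v1",
		"kind":       "Deployment",
		"spec": map[string]interface{}{
			"replicas": "1",
		},
	}
	var d appsv1.Deployment
	err := runtime.DefaultUnstructuredConverter.FromUnstructured(patched, &d)
	fmt.Println(err)
	// With the apimachinery version used above this prints just:
	//   unrecognized type: int32
	// with no hint that it is about spec.replicas.
}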

Anything else we need to know?:
Originally I experienced this on OpenShift 3.11, where the UX is even worse: the first create/apply is in some cases tolerant and accepts a string instead of an int (at least for containerPort), while a subsequent apply/patch rejects it! But on upstream k8s master I see errors from the first create/apply too, so that is not relevant here.

Other patch formats use a different code path, giving a very informative error:

  • JSON Merge Patch

    $ cluster/kubectl.sh patch --type=merge -p '{"spec": {"replicas": 1}}'  deployment/sise
    deployment.extensions/sise patched
    $ cluster/kubectl.sh patch --type=merge -p '{"spec": {"replicas": "1"}}'  deployment/sise
    Error from server: v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Replicas: readUint32: unexpected character: �, error found in #10 byte of ...|eplicas":"1","revisi|..., bigger context ...|"spec":{"progressDeadlineSeconds":600,"replicas":"1","revisionHistoryLimit":10,"selector":{"matchLab|...
  • JSON Patch

    $ cluster/kubectl.sh patch --type=json -p '[{"op": "replace", "path": "/spec/replicas", "value": 1}]'  deployment/sise
    deployment.extensions/sise patched (no change)
    $ cluster/kubectl.sh patch --type=json -p '[{"op": "replace", "path": "/spec/replicas", "value": "1"}]'  deployment/sise
    Error from server: v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Replicas: readUint32: unexpected character: �, error found in #10 byte of ...|eplicas":"1","revisi|..., bigger context ...|"spec":{"progressDeadlineSeconds":600,"replicas":"1","revisionHistoryLimit":10,"selector":{"matchLab|...

kubectl edit also sometimes shows this error, see #26050 (comment)

Environment:

  • Kubernetes version (use kubectl version): built from master today:
    Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.2.230+cdfb9126d334ee-dirty", GitCommit:"cdfb9126d334eea722e34f3a895904bb152d53f0", GitTreeState:"dirty", BuildDate:"2019-02-04T10:49:37Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.2.230+cdfb9126d334ee-dirty", GitCommit:"cdfb9126d334eea722e34f3a895904bb152d53f0", GitTreeState:"dirty", BuildDate:"2019-02-04T10:49:37Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
    
  • Cloud provider or hardware configuration: ThinkPad T450s laptop
  • OS (e.g. from /etc/os-release): Fedora 29
  • Kernel (e.g. uname -a): Linux 4.19.15-300.fc29.x86_64 #1 SMP Mon Jan 14 16:32:35 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
@cben cben added the kind/bug Categorizes issue or PR as related to a bug. label Feb 4, 2019
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Feb 4, 2019
@cben
Author

cben commented Feb 4, 2019

Working on a patch => #73695

@cben cben changed the title patch: "unrecognized type" error not informative enough strategic patch: "unrecognized type" error not informative enough Feb 4, 2019
@cben
Author

cben commented Feb 4, 2019

@liggitt liggitt added kind/feature Categorizes issue or PR as related to a new feature. priority/backlog Higher priority than priority/awaiting-more-evidence. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. and removed kind/bug Categorizes issue or PR as related to a bug. labels Feb 5, 2019
@k8s-ci-robot k8s-ci-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Feb 5, 2019
@fsniper

fsniper commented Mar 13, 2019

I also hit this with kube-graffiti while trying to patch an extensions/v1beta1 Deployment:

My json-patch:

[{ "op": "replace", "path": "/spec/replicas", "value": "3" }]

Error I get:

2019-03-13T10:42:48Z |ERRO| failed to patch object component=existing error="v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Replicas: readUint32: unexpected character: �, error found in #10 byte of ...|eplicas\":\"3\",\"revisi|..., bigger context ...|\"spec\":{\"progressDeadlineSeconds\":600,\"replicas\":\"3\",\"revisionHistoryLimit\":0,\"selector\":{\"matchLabe|..." group-version=extensions/v1beta1 kind=Deployment name=kube-apiserver namespace=shoot--test--backuptest rule=kube-api-changes-backuptest
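
The quoted "3" is what trips the decoder; an unquoted 3 goes through. For comparison, a plain typed decode outside the cluster rejects the same document and even names the field. A minimal sketch using encoding/json, so the wording differs from the apiserver's json-iterator error above:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func main() {
	var spec appsv1.DeploymentSpec

	// Quoted value: rejected, and the error names the offending field.
	fmt.Println(json.Unmarshal([]byte(`{"replicas": "3"}`), &spec))
	// prints something like:
	//   json: cannot unmarshal string into Go struct field DeploymentSpec.replicas of type int32

	// Unquoted value: decodes fine.
	fmt.Println(json.Unmarshal([]byte(`{"replicas": 3}`), &spec))
	// prints: <nil>
}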

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 11, 2019
@cben
Author

cben commented Jun 11, 2019

/remove-lifecycle stale

I need to address review feedback on my PR.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 11, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 9, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 9, 2019
@dr460neye

Still not a single detail about what is producing the issue at apply time.
So I still think this could be improved and would like to move it back to the backlog.

Error from server: error when applying patch: {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"name\":\"logstash\",\"namespace\":\"default\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"k8s-app\":\"logstash\"}},\"spec\":{\"containers\":[{\"command\":[\"logstash\"],\"env\":[{\"name\":\"XPACK_MANAGEMENT_ELASTICSEARCH_USERNAME\",\"value\":\"logstash_internal\"},{\"name\":\"XPACK_MONITORING_ELASTICSEARCH_URL\",\"value\":\"['https://192.168.1.13:9200','https://192.168.1.14:9200','https://192.168.1.15:9200']\"},{\"name\":\"XPACK_MONITORING_ELASTICSEARCH_USERNAME\",\"value\":\"logstash_internal\"},{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"XPACK_MANAGEMENT_PIPELINE_ID\",\"value\":\"main\"},{\"name\":\"XPACK_MANAGEMENT_ELASTICSEARCH_URL\",\"value\":\"['https://192.168.1.13:9200','https://192.168.1.14:9200','https://192.168.1.15:9200']\"},{\"name\":\"XPACK_MANAGEMENT_ENABLED\",\"value\":true},{\"name\":\"XPACK_MONITORING_ENABLED\",\"value\":true},{\"name\":\"XPACK_MANAGEMENT_ELASTICSEARCH_PASSWORD\",\"valueFrom\":{\"secretKeyRef\":{\"key\":\"logstash_internal_password\",\"name\":\"logstash\"}}},{\"name\":\"XPACK_MONITORING_ELASTICSEARCH_PASSWORD\",\"valueFrom\":{\"secretKeyRef\":{\"key\":\"logstash_internal_password\",\"name\":\"logstash\"}}}],\"image\":\"docker.elastic.co/logstash/logstash:6.7.2\",\"name\":\"logstash\",\"ports\":[{\"containerPort\":5044,\"name\":\"logstash\"}],\"volumeMounts\":[{\"mountPath\":\"/usr/share/logstash/config/\",\"name\":\"logstash-config\"},{\"mountPath\":\"/usr/share/logstash/certificate/\",\"name\":\"certificate\"},{\"mountPath\":\"/usr/share/logstash/patterns/\",\"name\":\"patterns\"},{\"mountPath\":\"/usr/share/logstash/pipeline/\",\"name\":\"main-pipeline-config\"}]}],\"volumes\":[{\"configMap\":{\"name\":\"logstash-config\"},\"name\":\"logstash-config\"},{\"configMap\":{\"name\":\"generalca\"},\"name\":\"certificate\"},{\"configMap\":{\"name\":\"patterns\"},\"name\":\"patterns\"}]}}}}\n"}},"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"logstash"}],"$setElementOrder/volumes":[{"name":"logstash-config"},{"name":"certificate"},{"name":"patterns"}],"containers":[{"env":[{"name":"XPACK_MANAGEMENT_ELASTICSEARCH_USERNAME","value":"logstash_internal"},{"name":"XPACK_MONITORING_ELASTICSEARCH_URL","value":"['https://192.168.1.13:9200','https://192.168.1.14:9200','https://192.168.1.15:9200']"},{"name":"XPACK_MONITORING_ELASTICSEARCH_USERNAME","value":"logstash_internal"},{"name":"NODE_NAME","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"XPACK_MANAGEMENT_PIPELINE_ID","value":"main"},{"name":"XPACK_MANAGEMENT_ELASTICSEARCH_URL","value":"['https://192.168.1.13:9200','https://192.168.1.14:9200','https://192.168.1.15:9200']"},{"name":"XPACK_MANAGEMENT_ENABLED","value":true},{"name":"XPACK_MONITORING_ENABLED","value":true},{"name":"XPACK_MANAGEMENT_ELASTICSEARCH_PASSWORD","valueFrom":{"secretKeyRef":{"key":"logstash_internal_password","name":"logstash"}}},{"name":"XPACK_MONITORING_ELASTICSEARCH_PASSWORD","valueFrom":{"secretKeyRef":{"key":"logstash_internal_password","name":"logstash"}}}],"name":"logstash"}],"volumes":[{"$patch":"delete","name":"main-pipeline-config"}]}}}} to: Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment" Name: "logstash", Namespace: "default" Object: 
&{map["apiVersion":"extensions/v1beta1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"name\":\"logstash\",\"namespace\":\"default\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"k8s-app\":\"logstash\"}},\"spec\":{\"containers\":[{\"command\":[\"logstash\"],\"image\":\"docker.elastic.co/logstash/logstash:6.7.2\",\"name\":\"logstash\",\"ports\":[{\"containerPort\":5044,\"name\":\"logstash\"}],\"volumeMounts\":[{\"mountPath\":\"/usr/share/logstash/config/\",\"name\":\"logstash-config\"},{\"mountPath\":\"/usr/share/logstash/certificate/\",\"name\":\"certificate\"},{\"mountPath\":\"/usr/share/logstash/patterns/\",\"name\":\"patterns\"},{\"mountPath\":\"/usr/share/logstash/pipeline/\",\"name\":\"main-pipeline-config\"}]}],\"volumes\":[{\"configMap\":{\"name\":\"logstash-config\"},\"name\":\"logstash-config\"},{\"configMap\":{\"name\":\"generalca\"},\"name\":\"certificate\"},{\"configMap\":{\"name\":\"patterns\"},\"name\":\"patterns\"},{\"configMap\":{\"name\":\"main-pipeline\"},\"name\":\"main-pipeline-config\"}]}}}}\n"] "creationTimestamp":"2019-10-29T14:58:12Z" "generation":'\x01' "labels":map["k8s-app":"logstash"] "name":"logstash" "namespace":"default" "resourceVersion":"29682023" "selfLink":"/apis/extensions/v1beta1/namespaces/default/deployments/logstash" "uid":"8781131e-fa5c-11e9-a67d-02e51885437d"] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x01' "revisionHistoryLimit":%!q(int64=+2147483647) "selector":map["matchLabels":map["k8s-app":"logstash"]] "strategy":map["rollingUpdate":map["maxSurge":'\x01' "maxUnavailable":'\x01'] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["k8s-app":"logstash"]] "spec":map["containers":[map["command":["logstash"] "image":"docker.elastic.co/logstash/logstash:6.7.2" "imagePullPolicy":"IfNotPresent" "name":"logstash" "ports":[map["containerPort":'\u13b4' "name":"logstash" "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "volumeMounts":[map["mountPath":"/usr/share/logstash/config/" "name":"logstash-config"] map["mountPath":"/usr/share/logstash/certificate/" "name":"certificate"] map["mountPath":"/usr/share/logstash/patterns/" "name":"patterns"] map["mountPath":"/usr/share/logstash/pipeline/" "name":"main-pipeline-config"]]]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e' "volumes":[map["configMap":map["defaultMode":'\u01a4' "name":"logstash-config"] "name":"logstash-config"] map["configMap":map["defaultMode":'\u01a4' "name":"generalca"] "name":"certificate"] map["configMap":map["defaultMode":'\u01a4' "name":"patterns"] "name":"patterns"] map["configMap":map["defaultMode":'\u01a4' "name":"main-pipeline"] "name":"main-pipeline-config"]]]]] "status":map["availableReplicas":'\x01' "conditions":[map["lastTransitionTime":"2019-10-29T14:58:12Z" "lastUpdateTime":"2019-10-29T14:58:12Z" "message":"Deployment has minimum availability." "reason":"MinimumReplicasAvailable" "status":"True" "type":"Available"]] "observedGeneration":'\x01' "readyReplicas":'\x01' "replicas":'\x01' "updatedReplicas":'\x01']]} for: "logstash-deployment.yaml": unrecognized type: string

/remove-lifecycle rotten
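
For anyone landing here with a similar dump: the likely culprit in this particular patch is the unquoted true in the env entries. EnvVar.value is a plain string in core/v1, so the converter is handed a bool where a string is expected, which matches the trailing "unrecognized type: string". A hypothetical corrected fragment, written against the Go API types:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// EnvVar.Value is declared as string in k8s.io/api/core/v1, so
	// boolean-looking values must be quoted in the manifest/patch.
	env := []corev1.EnvVar{
		{Name: "XPACK_MANAGEMENT_ENABLED", Value: "true"},
		{Name: "XPACK_MONITORING_ENABLED", Value: "true"},
	}
	fmt.Println(env)
}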

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Oct 31, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 29, 2020
@andreas-eberle

@cben: Any news on this?

@cben
Author

cben commented Feb 17, 2020

Thanks for the reminder. I need to rebase and address feedback but keep not getting to it. At the moment I'm sick. If anyone wants to take over, go ahead.

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 18, 2020
@cben
Author

cben commented Mar 19, 2020 via email

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Mar 19, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 17, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 17, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@Skitionek

Problem persists
/reopen

@k8s-ci-robot
Contributor

@Skitionek: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

Problem persists
/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@cben
Author

cben commented Nov 1, 2020

/reopen

@bduffany

@cben looks like the CI robot ignored you

@githoober

The issue is still not fixed. After so many years.
