proto: wrong wireType = 0 for field NominatedNodeName after upgrading to v1.10 #69628

Closed
harryge00 opened this Issue Oct 10, 2018 · 4 comments

Contributor

harryge00 commented Oct 10, 2018

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind feature

What happened:
I have upgraded two Kubernetes clusters, one from v1.6 to v1.10 and another from v1.9 to v1.10. Both of their kube-apiservers report status.go:64] apiserver received an error that is not an metav1.Status: proto: wrong wireType = 0 for field NominatedNodeName when list-watching pods.

What you expected to happen:
kube-apiserver should report no errors when list-watching pods.

How to reproduce it (as minimally and precisely as possible):

  • Set up a K8s cluster at v1.6 or v1.9
  • Stop the k8s master
  • Replace the old kube-apiserver binary with the new v1.10 kube-apiserver binary
  • Start kube-apiserver

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): c8454fb
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release): debian
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Contributor

k8s-ci-robot commented Oct 10, 2018

@harryge00: There are no sig labels on this issue. Please add a sig label by either:

  1. mentioning a sig: @kubernetes/sig-<group-name>-<group-suffix>
    e.g., @kubernetes/sig-contributor-experience-<group-suffix> to notify the contributor experience sig, OR

  2. specifying the label manually: /sig <group-name>
    e.g., /sig scalability to apply the sig/scalability label

Note: Method 1 will trigger an email to the group. See the group list.
The <group-suffix> in method 1 has to be replaced with one of these: bugs, feature-requests, pr-reviews, test-failures, proposals.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Contributor

harryge00 commented Oct 10, 2018

I guess this issue is related to #58990
@bsalamat

Member

liggitt commented Oct 12, 2018

That would indicate there is data in etcd using proto tag 11 that does not match the nominatedNodeName type. Can you provide the full output of kubectl version (both at the 1.6 or 1.9 levels, and at the 1.10 level) so we can determine the build this was seen on?
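
For anyone hitting the same message, here is a minimal, self-contained Go sketch (not Kubernetes or gogo/protobuf source; it only assumes, per the comment above, that NominatedNodeName is the string field at proto tag 11 in PodStatus) showing how the protobuf key byte carries both the field number and the wire type, and why a varint-encoded tag 11 makes the generated decoder fail:

```go
package main

import "fmt"

// Protobuf wire types relevant here: strings are length-delimited (2),
// integers/bools are varints (0).
const (
	wireVarint = 0
	wireBytes  = 2
)

// key builds the leading key byte of a field: (fieldNumber << 3) | wireType.
func key(fieldNumber, wireType int) byte {
	return byte(fieldNumber<<3 | wireType)
}

func main() {
	// What a v1.10 decoder expects for PodStatus tag 11 (a string):
	fmt.Printf("tag 11 as string (wire type 2): key byte 0x%02X\n", key(11, wireBytes)) // 0x5A
	// What a writer produces if it serialized tag 11 as a varint instead:
	fmt.Printf("tag 11 as varint (wire type 0): key byte 0x%02X\n", key(11, wireVarint)) // 0x58
	// When the generated PodStatus.Unmarshal reads 0x58, it extracts wire type 0
	// and returns: proto: wrong wireType = 0 for field NominatedNodeName
}
```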

Contributor

harryge00 commented Oct 13, 2018

It turned out to be a bug in my fork of Kubernetes, where we had added another protobuf field with a different wire type.
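
To illustrate that kind of conflict, here is a hypothetical sketch (the fork-side field name and type below are invented; only the upstream NominatedNodeName string-at-tag-11 declaration follows the discussion above): if a fork reuses an upstream tag number with a varint type, bytes written under one definition cannot be decoded under the other.

```go
package main

import "fmt"

// Simplified upstream v1.10 shape: NominatedNodeName is a string at tag 11,
// so the generated marshaler writes it length-delimited (wire type 2).
type PodStatusUpstream struct {
	NominatedNodeName string `protobuf:"bytes,11,opt,name=nominatedNodeName"`
}

// Hypothetical forked shape that reuses tag 11 for an int32 field; the
// generated marshaler would write it as a varint (wire type 0).
type PodStatusFork struct {
	CustomPriority int32 `protobuf:"varint,11,opt,name=customPriority"`
}

func main() {
	// Objects stored in etcd by the forked apiserver carry tag 11 as a varint;
	// an upstream v1.10 apiserver reading them back then reports
	// "proto: wrong wireType = 0 for field NominatedNodeName".
	fmt.Println("same tag number, different wire types => decode error on upgrade")
}
```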

harryge00 closed this Oct 13, 2018
