
Services with same port, different protocol display wrongly in kubectl and have wrong merge key #39188

Open
thockin opened this issue Dec 23, 2016 · 39 comments


@thockin (Member) commented Dec 23, 2016

User reported:

I am running a service with both TCP and UDP:

spec:
  type: NodePort
  ports:
    - protocol: UDP
      port: 30420
      nodePort: 30420
    - protocol: TCP
      port: 30420
      nodePort: 30420

but kubectl describe service shows only UDP.

Type:			NodePort
IP:			10.0.13.152
Port:			<unset>	30420/UDP
NodePort:		<unset>	30420/UDP
Endpoints:		10.244.4.49:30420

When I change the order, it shows only TCP.

This looks like using the wrong mergeKey, and indeed the code backs that up:

type ServiceSpec struct {
    // The list of ports that are exposed by this service.
    // More info: http://kubernetes.io/docs/user-guide/services#virtual-ips-and-service-proxies
    Ports []ServicePort `json:"ports,omitempty" patchStrategy:"merge" patchMergeKey:"port" protobuf:"bytes,1,rep,name=ports"`
    // ...
}

The key should probably be "name", though that can be empty if there is only a single port. Is that a problem? @ymqytw?
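
For illustration, here is a minimal standalone Go sketch (not the actual strategic-merge-patch implementation) of what merging this list keyed only on port does to a TCP/UDP pair sharing a port number: the second entry simply clobbers the first.

package main

import "fmt"

type ServicePort struct {
    Name     string
    Protocol string
    Port     int32
    NodePort int32
}

// mergeByPort mimics a list merge keyed on Port alone: a patch entry
// overwrites any live entry with the same port number, regardless of
// protocol.
func mergeByPort(live, patch []ServicePort) []ServicePort {
    result := append([]ServicePort(nil), live...)
    index := map[int32]int{}
    for i, p := range result {
        index[p.Port] = i
    }
    for _, p := range patch {
        if i, ok := index[p.Port]; ok {
            result[i] = p // UDP and TCP collide on the same key
        } else {
            index[p.Port] = len(result)
            result = append(result, p)
        }
    }
    return result
}

func main() {
    live := []ServicePort{{Protocol: "UDP", Port: 30420, NodePort: 30420}}
    patch := []ServicePort{{Protocol: "TCP", Port: 30420, NodePort: 30420}}
    fmt.Printf("%+v\n", mergeByPort(live, patch)) // one entry left: the TCP one
}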

@mengqiy (Contributor) commented Dec 28, 2016

@thockin The validation code doesn't allow creating two ports with unspecified names.

The Service "my-service" is invalid: 
* spec.ports[0].name: Required value
* spec.ports[1].name: Required value

What version did the user use?

@thockin (Member Author) commented Dec 28, 2016

@mengqiy (Contributor) commented Dec 28, 2016

> Still, the port name seems like the more correct merge key

Agree.
If we want to use name as the mergeKey, we should add validation to make sure the names are unique, and we need to handle an empty name when there is only a single port.
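
A rough sketch of what that validation could look like, reusing the ServicePort type from the sketch above, with fmt imported (hypothetical, not the actual validation code):

// validatePortNames requires names to be unique, and allows an empty
// name only when the Service exposes a single port.
func validatePortNames(ports []ServicePort) error {
    seen := map[string]bool{}
    for _, p := range ports {
        if p.Name == "" && len(ports) > 1 {
            return fmt.Errorf("port name is required when multiple ports are set")
        }
        if seen[p.Name] {
            return fmt.Errorf("duplicate port name %q", p.Name)
        }
        seen[p.Name] = true
    }
    return nil
}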

@thockin This is a breaking change. So we should make this change in 1.6, right?

cc: @pwittrock

@thockin (Member Author) commented Dec 28, 2016

@mengqiy (Contributor) commented Dec 28, 2016

Can "" be a valid value for the case of single port?

Don't know yet. I need to look into it.

I guess changing merge key is technically a breaking change, but is there
any real impact?

Yes, like #36024(kubectl is broken, even with minor version skew)

@thockin (Member Author) commented Dec 28, 2016

@mengqiy (Contributor) commented Dec 28, 2016

Can "" be a valid value for the case of single port?

The mergeKey must present in the go struct to calculate patch. But object round tripping will drop empty field, which is name in this case. So simply change the mergeKeywill break the no-name single-port case.

for: "../svc.yaml": map: map[protocol:UDP port:30420 targetPort:0 nodePort:30432] does not contain declared merge key: name
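
A small sketch of why the round trip loses the key, assuming the usual json:"name,omitempty" tag on the field:

package main

import (
    "encoding/json"
    "fmt"
)

type ServicePort struct {
    Name string `json:"name,omitempty"`
    Port int32  `json:"port"`
}

func main() {
    b, _ := json.Marshal(ServicePort{Port: 30420})
    // prints {"port":30420}: no "name" key survives for a name-keyed merge
    fmt.Println(string(b))
}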

We should change the mergeKey; otherwise kubectl apply will do the wrong thing.
E.g., create a service using kubectl apply:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  ports:
    - protocol: UDP
      port: 30420
      name: updport
      nodePort: 30420
    - protocol: TCP
      port: 30420
      name: tcpport
      nodePort: 30420
  selector:
    app: MyApp

Make a change to tcpport and apply it:

    - protocol: TCP
      port: 30420
      name: tcpport
      nodePort: 30000 # Change this nodePort

Then kubectl get the service back:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"kind":"Service","apiVersion":"v1","metadata":{"name":"my-service","creationTimestamp":null},"spec":{"ports":[{"name":"updport","protocol":"UDP","port":30420,"targetPort":0,"nodePort":30420},{"name":"tcpport","protocol":"TCP","port":30420,"targetPort":0,"nodePort":30000}],"selector":{"app":"MyApp"},"type":"NodePort"},"status":{"loadBalancer":{}}}
  name: my-service
...
spec:
  clusterIP: 10.0.0.249
  ports:
  - name: updport
    nodePort: 30000 # Wrong change
    port: 30420
    protocol: UDP
    targetPort: 30420
  - name: tcpport
    nodePort: 30420
    port: 30420
    protocol: TCP
    targetPort: 30420
  selector:
    app: MyApp
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

The change goes to the wrong place: because the merge key is port and both entries share port 30420, the patch meant for tcpport lands on the first matching entry, which is the UDP one.

@thockin (Member Author) commented Dec 28, 2016

@mengqiy (Contributor) commented Dec 29, 2016

> Can we have an empty mergekey automatically become a random string?

Yes, it's possible, but it would complicate the logic, because apply needs the original config (in the annotation), the current config (on the API server), and the modified config (in the user's file) to do a 3-way diff.

I'm not sure if @pwittrock @bgrant0607 will like this. Need their comments before going further.

@bgrant0607 (Member) commented Jan 10, 2017

"Random" string -- no.

We have other cases where a single field isn't sufficient as a merge key, such as lists of object references. I think treating all fields specified by the user's config as the key works in all cases I've thought of. I'd probably make that a new patch strategy. It would probably even be backward compatible.
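
A hypothetical sketch of such a strategy, with sort, strings, and fmt imported: derive each list element's identity from every field present in the user's config, so entries differing in any specified field stay distinct.

// elementKey builds a composite identity from all fields the user set;
// {port: 30420, protocol: TCP} and {port: 30420, protocol: UDP} get
// different keys. (Illustrative only; not an implemented patch strategy.)
func elementKey(elem map[string]interface{}) string {
    keys := make([]string, 0, len(elem))
    for k := range elem {
        keys = append(keys, k)
    }
    sort.Strings(keys)
    var b strings.Builder
    for _, k := range keys {
        fmt.Fprintf(&b, "%s=%v;", k, elem[k])
    }
    return b.String()
}

Nested maps also yield stable identities here, since %v prints map keys in sorted order in recent Go versions.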

@pwittrock (Member) commented Jan 12, 2017

@bgrant0607 Are you thinking this would support only the primitive fields as part of the unique/merge key? Or would this strategy only be valid for types that contain only primitive fields?

Based on your description, the merge key for this strategy would be dynamic and defined by the client.

Have we considered supporting merge keys that are multiple values, but otherwise treated the same as single-value merge keys (in this case it could be <port,protocol> or <port,destinationport,protocol>)?
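
A quick sketch of the multi-field variant with a <port, protocol> key, reusing the ServicePort type above (hypothetical):

// portProtoKey is a composite merge key; a comparable struct can serve
// directly as a Go map key, so no string encoding is needed.
type portProtoKey struct {
    Port     int32
    Protocol string
}

// indexPorts indexes a ports list by (port, protocol), keeping the TCP
// and UDP entries for the same port number distinct.
func indexPorts(ports []ServicePort) map[portProtoKey]int {
    idx := make(map[portProtoKey]int, len(ports))
    for i, p := range ports {
        idx[portProtoKey{p.Port, p.Protocol}] = i
    }
    return idx
}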

@bgrant0607 (Member) commented Jan 12, 2017

@pwittrock No, I was thinking we'd use all fields specified by the user as the key, including nested maps. Multiple keys would also work and may be less error-prone.

@thockin (Member Author) commented Jan 12, 2017

@pwittrock (Member) commented Jan 12, 2017

@thockin I was thinking something similar yesterday. Something like:

If len(patch list) == 1 && merge key is missing from patch list entry && len(dest list) == 1 && merge key is missing from dest list entry
-> Then merge into only entry in destination list

My biggest hesitation here was that it adds complexity to the already poorly understood merge semantics. There would probably be some weird edge cases, like what happens when we add the merge key to the existing entry, e.g.:

If len(patch list) == 1 && merge key exists in the list entry && len(dest list) == 1 && merge key is missing from dest list entry
-> Then merge into only entry in destination list, setting the merge key
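
A sketch of that fallback rule over generic map elements (hypothetical; the real merge semantics are more involved):

// mergeSingletons covers both special cases above: each list has exactly
// one element and the destination entry lacks the merge key, so the patch
// is merged into that lone entry. If the patch carries the merge key, the
// copy below sets it on the destination as a side effect.
func mergeSingletons(dest, patch []map[string]interface{}, mergeKey string) bool {
    if len(dest) != 1 || len(patch) != 1 {
        return false
    }
    if _, ok := dest[0][mergeKey]; ok {
        return false // destination is keyed; normal keyed merge applies
    }
    for k, v := range patch[0] {
        dest[0][k] = v
    }
    return true
}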

@pwittrock (Member) commented Mar 7, 2017

@ymqytw Can you make sure this gets resolved in 1.7?

@pwittrock pwittrock added this to the v1.7 milestone Mar 7, 2017
@mengqiy (Contributor) commented Mar 7, 2017

@pwittrock If using the approach in #39188 (comment), then yes.
But if we want to support a combined merge key containing multiple fields, then I think we need a proposal first.

@pwittrock (Member) commented Mar 7, 2017

Let's sync up with @thockin to determine the minimal requirements for correctness. I don't want to introduce a partial solution, since it will create a maintenance burden. Once the merge-key audit is complete, let's come up with a proposal.

I think it is ok if we write a design proposal. I will make sure it gets appropriately reviewed within ~weeks.

@bgrant0607 bgrant0607 added the sig/cli label Mar 9, 2017
@thockin (Member Author) commented Mar 17, 2017

I've run into a lot of issues with ports being PATCHed wrongly through 1.6. Do we believe those issues are fixed now? I can't seem to repro them, but I wasn't diligent in cataloging them all...

@mengqiy (Contributor) commented Mar 17, 2017

I guess the pain you hit when PATCHing the ports was caused by the bug of using the wrong JSON package in the API server (#42488). It has been fixed by #42489.

Known issues with ports:

  • #42488: a number conversion issue caused by using the wrong JSON package.
  • (this issue) The mergeKey for ports (the port number) is not unique. Ports will still be PATCHed wrongly if the user does something like #39188 (comment).
@jamiehannaford (Member) commented Apr 20, 2017

Has this been fixed? I'm seeing both ports on describe:

› kubectl describe svc nginx
Name:			nginx
Namespace:		default
Labels:			<none>
Annotations:		<none>
Selector:		app=nginx
Type:			NodePort
IP:			10.0.0.133
Port:			udp	30420/UDP
NodePort:		udp	30420/UDP
Endpoints:		172.17.0.4:30420
Port:			tcp	30420/TCP
NodePort:		tcp	30420/TCP
Endpoints:		172.17.0.4:30420
Session Affinity:	None
Events:			<none>
@mengqiy (Contributor) commented Apr 20, 2017

@jamiehannaford Not yet; describe is not affected by this issue.
Proposal is in kubernetes/community#476. Reviews welcome.

@pwittrock pwittrock modified the milestones: v1.8, v1.7 Jun 2, 2017
@bgrant0607 (Member) commented Jan 23, 2018

/remove-lifecycle stale

@yue9944882 (Member) commented Feb 8, 2018

Related: #59482

@bgrant0607 (Member) commented Feb 8, 2018

Yes, #59482 has the same root cause.

@praseodym (Contributor) commented Apr 28, 2018

I think containerPorts in Deployments have the same problem. I can't seem to specify the same port for two different protocols.

@fabianvf commented Jun 20, 2018

This issue doesn't seem related to the CLI; the issue is in the API server. Should we change the SIG to api-machinery or something like that?

@fejta-bot commented Sep 18, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@george-angel (Contributor) commented Sep 18, 2018

/remove-lifecycle stale

@fejta-bot commented Dec 17, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@george-angel (Contributor) commented Dec 17, 2018

/remove-lifecycle stale

@skwokie commented Aug 13, 2019

What happened:
I think I've reproduced this; please feel free to let me know if my expectation is wrong.
I've made a deployment with the following ports declaration:

ports:
  - name: https
    containerPort: 443
  - name: cport10000tcp
    containerPort: 10000
    protocol: TCP
  - name: cport10000udp
    containerPort: 10000
    protocol: UDP
  - name: cport10001udp
    containerPort: 10001
    protocol: UDP

When I describe the pod, it omits port 10000 being opened for UDP:

Pod Template:
  Labels:  app=svr
           env=dev
  Containers:
   svr:
    Image:       dev-svr:0.92
    Ports:       443/TCP, 10000/TCP, 10001/UDP
    Host Ports:  0/TCP, 0/TCP, 0/UDP

Environment:

  • Kubernetes version (use kubectl version):
    $ kubectl version
    Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
    Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:
    Mac
  • OS (e.g: cat /etc/os-release):
    macOS Version 10.14.6
  • Kernel (e.g. uname -a):
    $ uname -a
    Darwin skwok-mbp.local 18.7.0 Darwin Kernel Version 18.7.0: Thu Jun 20 18:42:21 PDT 2019; root:xnu-4903.270.47~4/RELEASE_X86_64 x86_64
  • Install tools:
    Docker Desktop version 19.03.1
  • Network plugin and version (if this is a network-related bug):
  • Others:
@davidalpert commented Aug 26, 2019

I can reproduce that behavior. In addition, PATCH fails on a Service or Deployment that lists the same port twice (even when each entry has a unique name or protocol attribute), as it appears the merge key only considers port.

@berlincount commented Sep 26, 2019

Running into the same issue as well.
