
Reuse nodePort on kubectl apply service #23551

Closed
yvespp opened this issue Mar 28, 2016 · 34 comments
Labels: area/app-lifecycle, area/kubectl, priority/important-soon (must be staffed and worked on either currently, or very soon, ideally in time for the next release)
Milestone: v1.3

Comments

yvespp (Contributor) commented Mar 28, 2016

In release 1.2 the NodePort of a service changes every time the service gets applied:

Example service svc-test.yaml:

apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    name: http

First apply, service doesn't exist:

lodur:config yves$ kubectl apply -f svc-test.yaml
service "test" created

lodur:config yves$ kubectl get svc test -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{"kind":"Service","apiVersion":"v1","metadata":{"name":"test","creationTimestamp":null},"spec":{"ports":[{"name":"http","protocol":"TCP","port":80,"targetPort":0}],"type":"NodePort"},"status":{"loadBalancer":{}}}'
  creationTimestamp: 2016-03-28T19:55:39Z
  name: test
  namespace: default
  resourceVersion: "1603"
  selfLink: /api/v1/namespaces/default/services/test
  uid: 0bd80681-f51f-11e5-8961-080027242396
spec:
  clusterIP: 10.247.144.40
  ports:
  - name: http
    nodePort: 31585
    port: 80
    protocol: TCP
    targetPort: 80
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

Second apply:

lodur:config yves$ kubectl apply -f svc-test.yaml
service "test" configured

lodur:config yves$ kubectl get svc test -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{"kind":"Service","apiVersion":"v1","metadata":{"name":"test","creationTimestamp":null},"spec":{"ports":[{"name":"http","protocol":"TCP","port":80,"targetPort":0}],"type":"NodePort"},"status":{"loadBalancer":{}}}'
  creationTimestamp: 2016-03-28T19:55:39Z
  name: test
  namespace: default
  resourceVersion: "1618"
  selfLink: /api/v1/namespaces/default/services/test
  uid: 0bd80681-f51f-11e5-8961-080027242396
spec:
  clusterIP: 10.247.144.40
  ports:
  - name: http
    nodePort: 32124
    port: 80
    protocol: TCP
    targetPort: 80
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

The nodePort changed from 31585 to 32124.

It would be nice if the port stayed the same:

  • External load balancers that use the nodePort don't have to be updated. We currently have a half-automated LB that can't deal with changing nodePorts after the services have been created.
  • Handy for testing your kube config: you don't have to look up the new port every time you apply the config.
@yvespp yvespp changed the title Reuse NodePort on kubectl apply svc Reuse nodePort on kubectl apply service Mar 28, 2016
a-robinson (Contributor) commented:
@kubernetes/goog-cluster

thockin (Member) commented Mar 31, 2016

I think what's happening is that you are PUTing a yaml blob that has no NodePort value, which defaults to 0, which means "allocate one for me".

@bgrant0607

This has come up a number of times, specifically wrt services. The canned answer is "you have to read it, mutate it, and write it back if you want to preserve fields with defaulted values". This is sound in principle, but it is unsatisfying in cases such as this (apply just makes it easier to run afoul of it).

In fact, you SHOULD get an error here saying:

The Service "test" is invalid.
spec.clusterIP: Invalid value: "": field is immutable

Are you explicitly setting ClusterIP in your yaml and just not showing it here?

I want to offer again that we bend the principles here. What if we allowed for PUTs of fields that are auto-allocated by default to assume that if the value is not provided on the PUT we should retain the value from the old version. I'm not sure if we can tell "unset" from "set to 0" for nodePort, but even allowing 0 to mean "retain" is OK to me.

Strictly speaking, we ARE allocating a value, it just happens to be the exact same as the value that we just released. The only 2 fields I know of for which this applies would be service.spec.ports[*].nodePort and service.spec.clusterIP.

What think, Brian?

bgrant0607 (Member) commented:
cc @adohe @lavalamp @jackgr @ghodss

@bgrant0607 bgrant0607 added this to the v1.3 milestone Mar 31, 2016
@bgrant0607 bgrant0607 added the priority/backlog label Mar 31, 2016
bgrant0607 (Member) commented:
This problem is caused by 2 things:

  1. Defaulting in kubectl (#3955: Eliminate round-trip conversion of API objects in kubectl convert). We plan to fix that in 1.3.
  2. Not distinguishing between unset and Go default values. We need to convert every optional field without a built-in nil value to a pointer.

bgrant0607 (Member) commented:
I am opposed to changing PUT semantics. PUT is expected to be used in read-modify-write situations. kubectl apply actually uses PATCH.

bgrant0607 (Member) commented:
cc @kubernetes/kubectl

thockin (Member) commented Mar 31, 2016

Unless we handle this gracefully O(soon), apply is effectively useless for services. Do you know if the patch generation logic will work properly if the optional fields are converted to pointers? I guess I can try it, but I won't have time in the next week or maybe more.


adohe-zz commented:
I will look into this this weekend.

yvespp (Contributor, Author) commented Mar 31, 2016

The svc-test.yaml file I posted is the actual file I used for the test; I didn't include the clusterIP or the nodePort.
The clusterIP did not change in my test, only the nodePort did.

lavalamp (Member) commented:
It's not obvious to me that this is caused by the kubectl round-tripping, but if so, @krousey is working on that.

adohe-zz commented Apr 5, 2016

@bgrant0607 @lavalamp I spent a little time on this and took a deep look. As @lavalamp said, I believe this is not caused by the kubectl round-tripping. The patch in this example would be {"metadata":{"creationTimestamp":null},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":0}]}}; after merging this patch, the spec.nodePort of the updated service is reset to 0, and we then assign a new random port to the service.

@lavalamp lavalamp added the priority/important-soon label and removed the priority/backlog label Apr 5, 2016
lavalamp (Member) commented Apr 5, 2016

Alright, we'll let @krousey close this when he removes the round tripping.

adohe-zz commented Apr 6, 2016

@lavalamp so removing the round-tripping will fix this issue, am I right?

bgrant0607 (Member) commented:
@adohe @lavalamp

In order for apply to work properly, we need to be able to distinguish unset from Go default. targetPort should not be 0.

    TargetPort intstr.IntOrString `json:"targetPort,omitempty"`

We'll need to change all optional fields to pointers.

In this case, could we make IntOrString default to empty string instead?

However, it's not clear why nodePort should be reallocated on patch:

if nodePort != 0 {

adohe-zz commented Apr 6, 2016

@bgrant0607 in this simple example, after applying the patch, the service nodePort (nodePort := servicePort.NodePort) will be 0, and then a new nodePort will be allocated.

adohe-zz commented Apr 7, 2016

Also, the definition of nodePort is:

NodePort int32 `json:"nodePort,omitempty"`

so it's impossible to distinguish unset from the Go zero value for nodePort.

thockin (Member) commented Apr 7, 2016

That's easy enough to change, though. Will that fix this issue?


adohe-zz commented Apr 7, 2016

For now in kubectl, we just replace spec.nodePort with the modified map (which comes from the input), and in the apiserver, when we get the patch, we do a replace too. If we could do some merging in kubectl, I think we could keep the nodePort in the final patch.

smarterclayton (Contributor) commented:

How many other fields are broken like this? Server-allocated fields that can be reset? externalIPs?


adohe-zz commented:

@bgrant0607 I just hit some problems when trying to update the nodePort type to intstr.IntOrString; the error is:

Some API files are missing the required field descriptions.
Add description tags to all non-inline fields in the following files:
  pkg/api/v1/types.go

ERROR!
Some docs are out of sync between CLI and markdown.
To regenerate docs, run:
  hack/update-generated-docs.sh

ERROR!
Some conversions functions need regeneration.
To regenerate conversions, run:
  hack/update-generated-conversions.sh

ERROR!
Swagger type documentation needs to be updated.
To regenerate the spec, run:
  hack/update-generated-swagger-docs.sh
ERROR!
Swagger spec needs to be updated.
To regenerate the spec, run:
  hack/update-swagger-spec.sh

Aborting commit

I did try to run the above scripts to update the generated code, but they failed. Could you tell me what I have to do after updating types.go, or how I can update types.go? I just cannot find any reference documents.

bgrant0607 (Member) commented:

@adohe We don't want to change nodePort to IntOrString. In this case, we could probably interpret 0 as unset.

In general, we'd need to change such a field to a pointer. The field would need to be changed in both api/types.go and in api/v1/types.go.

However, the patch really shouldn't have trashed nodePort.

Regardless, I don't think we want to reallocate nodePort if one had already been allocated for a given service port.

adohe-zz commented:

@bgrant0607 I took a deep look at this issue and found that the root cause of the nodePort not being reused is that we do a simple replace of ports in both kubectl and the apiserver. If we want to reuse the nodePort, we need to: 1. try to merge ports instead of replacing them; 2. change nodePort to a pointer, or update the kubectl patch logic here.

bgrant0607 (Member) commented:

@adohe Good catch!

We need to specify that this field should be merged:
https://github.com/kubernetes/kubernetes/blob/master/pkg/api/v1/types.go#L1783

by adding patchStrategy:"merge" patchMergeKey:"port"

adohe-zz commented:

@bgrant0607 I did try that, adding patchStrategy:"merge" patchMergeKey:"nodePort", but we still have to change the nodePort field to a pointer. I tried updating it to NodePort *int32, and then found it hard to find a good way to update the generated files; I did try running hack/update-codecgen.sh and hack/after-build/run-codegen.sh, but still failed with this type change.

adohe-zz commented:

As of now I think it is really hard to change types.go: we have lots of generation work to do, and there is no document about this.

bgrant0607 (Member) commented:

@adohe The patchMergeKey should be port, not nodePort.

bgrant0607 (Member) commented:

If you still find you need to update nodePort to a pointer and the compilation errors are in generated code, comment out that code and then try to regenerate it:

https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api_changes.md#edit-version-conversions

@bgrant0607 bgrant0607 assigned adohe-zz and unassigned krousey Apr 12, 2016
adohe-zz commented:

@bgrant0607 thanks very much. :) I will try that.

thockin (Member) commented Apr 16, 2016

Any update on this?

thockin (Member) commented Apr 16, 2016

I see the PR, sorry. Thanks!!

k8s-github-robot pushed a commit that referenced this issue Apr 20, 2016
Automatic merge from submit-queue

Fix unintended change of Service.spec.ports[].nodePort during kubectl apply

Please refer to #23551 for more detail. @bgrant0607 I think this simple fix should be OK to reuse the nodePort. @thockin ptal.

Release note: Fix unintended change of `Service.spec.ports[].nodePort` during `kubectl apply`.
adohe-zz commented:

we can close this now.

@thockin thockin closed this as completed Apr 21, 2016
krmayankk commented:

Can someone explain what understands the tags patchStrategy:"merge" patchMergeKey:"port" and acts on them, and where this happens in the code?

janetkuo (Member) commented:

@krmayankk This is telling kubectl apply to merge the ports based on the port field; see https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api-conventions.md#strategic-merge-patch


10 participants