
Services allow duplicate targetPort #47222

Closed
thockin opened this issue Jun 9, 2017 · 22 comments · Fixed by #47284
Assignees: thockin
Labels: area/api · kind/api-change · lifecycle/rotten · sig/network

Comments

thockin (Member) commented Jun 9, 2017

It is possible to express a Service where multiple front-side ports target the same back-side port. This is not very useful and may cause trouble for implementations of Services.

apiVersion: v1
kind: Service
metadata:
  name: hostnames-2
spec:
  ports:
  - name: a
    port: 80
    targetPort: 9376
  - name: b
    port: 81
    targetPort: 9376
  selector:
    run: hostnames
@thockin thockin added area/api Indicates an issue on api area. sig/network Categorizes an issue or PR as relevant to SIG Network. labels Jun 9, 2017
odeke-em commented Jun 9, 2017

@thockin just for the purpose of clarity, are you requesting that duplicate targetPorts
should be rejected or the opposite?
PS: I am new to Kubernetes so please pardon the dumb question :)

thockin (Member, Author) commented Jun 9, 2017 via email

odeke-em commented Jun 9, 2017

Aye aye, thanks for the clarification!

xiangpengzhao (Contributor) commented

/assign

k8s-ci-robot (Contributor) commented

@xiangpengzhao: GitHub didn't allow me to assign the following users: xiangpengzhao.

Note that only kubernetes members can be assigned.

In response to this:

/assign

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-github-robot pushed a commit that referenced this issue Jun 30, 2017
Automatic merge from submit-queue

Validate if service has duplicate targetPort

**What this PR does / why we need it**:
Validate whether a service has a duplicate targetPort

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47222

**Special notes for your reviewer**:
/cc @thockin 
@kubernetes/sig-network-pr-reviews

**Release note**:

```release-note
NONE
```
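
For readers following the fix: a minimal Go sketch of the kind of duplicate-targetPort check this PR describes. This is not the actual kubernetes/kubernetes validation code; ServicePort here is a pared-down stand-in for the real API type, and the function name is made up.

```go
package main

import "fmt"

// ServicePort is a pared-down stand-in for the real core/v1 type; the real
// targetPort is an intstr.IntOrString that may hold a number or a port name.
type ServicePort struct {
	Name       string
	Port       int32
	TargetPort string
}

// validateNoDuplicateTargetPorts is a hypothetical helper: it rejects a spec
// in which two front-side ports map to the same back-side targetPort.
func validateNoDuplicateTargetPorts(ports []ServicePort) error {
	seen := map[string]string{} // targetPort -> name of the port that claimed it
	for _, p := range ports {
		if prev, dup := seen[p.TargetPort]; dup {
			return fmt.Errorf("ports %q and %q share targetPort %s", prev, p.Name, p.TargetPort)
		}
		seen[p.TargetPort] = p.Name
	}
	return nil
}

func main() {
	// The spec from the top of this issue: two ports, one targetPort.
	ports := []ServicePort{
		{Name: "a", Port: 80, TargetPort: "9376"},
		{Name: "b", Port: 81, TargetPort: "9376"},
	}
	if err := validateNoDuplicateTargetPorts(ports); err != nil {
		fmt.Println("invalid:", err) // ports "a" and "b" share targetPort 9376
	}
}
```

As the discussion below notes, a purely syntactic check like this still cannot catch a numeric targetPort and a named targetPort that resolve to the same container port.
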
jhorwit2 (Contributor) commented Oct 5, 2017

@thockin what problems did this cause with services? This change broke backwards compatibility for services that depended on this feature.

For example, let's take this load balancer Service:

kind: Service
apiVersion: v1
metadata:
  name: nginx-service
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/oci-load-balancer-tls-secret: ssl-certificate-secret
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 80

The load balancer here terminates SSL, so the backend is always port 80; however, you want to support both protocols, so having the same targetPort is a valid use case.

thockin (Member, Author) commented Oct 9, 2017

This is an interesting (ab)use of Service. Not really what I had in mind - the LB in a Service is generally assumed to be L4. I'll try to dredge up the issues that this was causing.

@kubernetes/sig-network-api-reviews

@k8s-ci-robot k8s-ci-robot added the kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API label Oct 9, 2017
jhorwit2 (Contributor) commented Oct 9, 2017

FYI, the AWS in-tree provider does non-layer-4 things as well.

thockin (Member, Author) commented Oct 9, 2017

One issue is that this REQUIRES connection tracking when we otherwise might not need it.

Consider two ServicePorts:

  • my-service:80
  • my-service:443

Let's say we map both service ports to a backend on port 8080, but we pass through the VIP, so the pod sees packets with dest = my-service:8080.

We MUST conntrack this VIP because, on the return path, we don't know whether to map the source port back to 80 or 443. If Service backends have unique targetPorts, we don't have this issue: port rewrites are at worst stateless, and at best can engage direct-return mode (no rewrites at all).

It's worth noting that the PR that fixes this (now proposed for revert) doesn't quite go far enough. It prevents a Service from using a given port number twice and from using a given name twice, but not from using both a number and a name that resolves to that same number. That is a much trickier (impossible to do statically) proposition. :(

Kind of a mess for getting high-perf Service networking.
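
To make the return-path ambiguity concrete, here is a toy Go sketch. This is not kube-proxy code; the port numbers match the example above and the helper name is made up:

```go
package main

import "fmt"

// frontPortsByTarget inverts a Service's front-port -> targetPort mapping,
// which is all a stateless rewriter has to work with on the return path.
func frontPortsByTarget(portToTarget map[int32]int32) map[int32][]int32 {
	inv := map[int32][]int32{}
	for front, target := range portToTarget {
		inv[target] = append(inv[target], front)
	}
	return inv
}

func main() {
	// my-service:80 -> 8080 and my-service:443 -> 8080 (duplicate targetPort).
	svc := map[int32]int32{80: 8080, 443: 8080}
	inv := frontPortsByTarget(svc)

	// A reply packet leaves the pod with source port 8080. Statelessly, the
	// rewriter sees two candidate front ports and cannot pick one:
	fmt.Println(inv[8080]) // [80 443] (in some order) -- ambiguous without conntrack
}
```

With unique targetPorts, every slice in the inverted map has length one, so the reverse rewrite is a pure function of the packet header and needs no per-flow state.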

@thockin thockin self-assigned this Oct 9, 2017
jhorwit2 (Contributor) commented Oct 9, 2017

@thockin was this an issue that came up during performance testing or IRL?

ese commented Oct 20, 2017

Does anyone have a workaround for this? I think a lot of people use AWS ELBs this way.

tamalsaha (Member) commented Oct 20, 2017

@ese, if your use case is Ingress, you can set ingress.kubernetes.io/ssl-redirect: false. This avoids generating the invalid Service spec, though you obviously lose the redirect behavior.

microwaves commented

This is not an abuse of Service, but a common and widely adopted pattern, especially in the AWS world.

k8s-ci-robot (Contributor) commented

@xiangpengzhao: you can't re-open an issue/PR unless you authored it or you are assigned to it.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot (Contributor) commented

@xiangpengzhao: you can't re-open an issue/PR unless you authored it or you are assigned to it.

In response to this:

/assign
/reopen
for further discussion

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

xiangpengzhao (Contributor) commented Oct 24, 2017

/reopen
for further discussion.

@k8s-ci-robot k8s-ci-robot reopened this Oct 24, 2017
caseydavenport (Member) commented

@thockin I suspect we should probably revert the original PR. While I understand this might not have been the intention for Services, if I understand it correctly, this is a breaking change to a v1 API.

Can we at least point to any documentation that indicates this was a code bug rather than a supported behavior?

caseydavenport (Member) commented

Sentiment on today's SIG call was that we should revert the original commit, since it broke lots of users, and then revisit to see if there's another solution.

xiangpengzhao (Contributor) commented

FYI: the revert PR is #53576.

fejta-bot commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 7, 2018
fejta-bot commented

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 9, 2018
fejta-bot commented

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
