Node-local services #28610

Closed · 4 tasks
therc opened this issue Jul 7, 2016 · 24 comments
Labels: area/kube-proxy · kind/feature · lifecycle/frozen · sig/network

Comments

@therc (Member) commented Jul 7, 2016

Sometimes users need daemons running on every node. Scheduling them is easy with DaemonSets; discovery is more complicated and currently requires, for example, iptables hacks that bypass Kubernetes altogether. Possible use cases:

At the very least, this new kind of service should support a 1:1 service-to-daemon mapping. For cases like loasd, it would be ideal to also support N:1 (a number of services all forwarded to the same daemon, with the daemon able to tell which service each connection was meant for).
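
To make the gap concrete, here is a minimal sketch (the name, port, and statsd-style workload are hypothetical, not from this issue) of the setup being discussed: a plain ClusterIP Service in front of a DaemonSet. Nothing in it pins traffic to the daemon on the client's own node; kube-proxy may route each connection to any node's daemon, which is exactly the missing piece requested here.

```sh
# Hypothetical example: a statsd-style daemon on every node, fronted by a
# stock ClusterIP Service. The Service selects the DaemonSet's pods, but
# offers no way to prefer the endpoint on the client's own node.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: local-daemon          # hypothetical name
spec:
  selector:
    app: local-daemon         # matches the DaemonSet pods on every node
  ports:
  - port: 8125                # e.g. a statsd-style metrics port
    targetPort: 8125
    protocol: UDP
EOF
```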

cc @thockin

  • Write proposal
  • Implementation
    • Extend Service API
    • Extend kube-proxy and/or kube-dns
@Random-Liu (Member) commented Jul 7, 2016

@thockin This is also very useful for the node problem detector. Both the node problem detector and kube-proxy run as DaemonSets. If we want kube-proxy to report problems to the node problem detector over the network, today we have to rely on a fixed port on the host network.
However, with this feature it could be cleaner. :)
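
For context, the "fixed port on the host network" workaround described above amounts to something like this sketch (the image and port are hypothetical, not the actual node-problem-detector manifest): the DaemonSet joins the host network, so each pod is reachable at its node's own IP on a well-known port.

```sh
# Sketch of the host-network workaround (hypothetical image/port):
# every node runs one pod listening on <node IP>:20256.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-problem-detector
spec:
  selector:
    matchLabels:
      app: node-problem-detector
  template:
    metadata:
      labels:
        app: node-problem-detector
    spec:
      hostNetwork: true        # pod shares the node's network namespace
      containers:
      - name: npd
        image: example.com/node-problem-detector:latest  # hypothetical image
        ports:
        - containerPort: 20256 # fixed port, reachable at the node's IP
EOF
```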

@therc (Member, Author) commented Jul 7, 2016

I created #28637. Feel free to poke holes in it.

@gtaylor (Contributor) commented Jul 11, 2016

To add some more examples to the list, here is what we're trying to figure out at Reddit:

  • Per-node pgbouncer
  • synapse (already in use outside of k8s)
  • tallier (statsd compatible)


@sandys commented Nov 29, 2016

Another example I would like to add is linkerd: https://linkerd.io

@ivanilves

👍

@k8s-github-robot k8s-github-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label May 31, 2017
@k8s-github-robot

@therc There are no sig labels on this issue. Please add a sig label by:
(1) mentioning a sig: @kubernetes/sig-<team-name>-misc
(2) specifying the label manually: /sig <label>

Note: method (1) will trigger a notification to the team. You can find the team list here.

@cmluciano

/sig network

@k8s-ci-robot k8s-ci-robot added the sig/network Categorizes an issue or PR as relevant to SIG Network. label Jun 1, 2017
@k8s-github-robot k8s-github-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jun 1, 2017
@fabiand (Contributor) commented Jul 14, 2017

Ping?

@arussellsaw

@thockin is this still the relevant issue? You mentioned this work was ongoing at the London Kubernetes meetup a few weeks ago.

@igorpeshansky

This issue is definitely still relevant for our team. Are there any obstacles to merging #28637?

@Vlaaaaaaad

This is also relevant for us. Any updates?

@ColinChartier

Kludge until this gets resolved:

  1. Make a ClusterIP Service with no selector whose endpoints are the same as the Service's own IP, e.g. with kubectl edit endpoints/kubernetes (my service IP is 10.96.0.1).
  2. Add a DNAT iptables rule to redirect requests for the Service back to the node itself. E.g., if your WAN IP is 1.1.1.1, the command would be: sudo iptables -t nat -A OUTPUT --destination 10.96.0.1 -p tcp --dport 443 -j DNAT --to-destination 1.1.1.1:443 (this redirects traffic from 10.96.0.1:443 to 1.1.1.1:443).
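
Consolidated as runnable commands, roughly (the addresses are the commenter's examples; substitute your own service IP and node IP):

```sh
# Step 1: hand-edit the selector-less Service's endpoints
# (the example reuses the default "kubernetes" Service at 10.96.0.1).
kubectl edit endpoints/kubernetes

# Step 2: DNAT traffic aimed at the service IP back to the node itself.
# -A appends the rule to the OUTPUT chain (-D would delete it instead).
sudo iptables -t nat -A OUTPUT --destination 10.96.0.1 -p tcp --dport 443 \
  -j DNAT --to-destination 1.1.1.1:443
```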

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 21, 2018
@fabiand (Contributor) commented Feb 21, 2018

Please keep this in mind.
For DaemonSets this is something we want.

@jobrs commented Mar 5, 2018

👍

@fabiand (Contributor) commented Mar 5, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 5, 2018
@bgrant0607 bgrant0607 added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Jun 1, 2018
@techdragon

Can anyone familiar with the work give an update on how far off we are from having this fixed? The arbitrary-topology stuff seems cool, but it's also giving me a growing feeling that a case of architecture astronautics is taking place. Has anyone built a node-local proxy or something else to fix this? Linkerd looks interesting, but my primary use for this needs UDP support (metrics/StatsD, etc.).

@johnbelamaric (Member) commented Nov 15, 2018 via email

@techdragon

@johnbelamaric Thanks for referencing that KEP. I didn't see it while I was reading and following the various chains of links to other issues/repos/etc. to see how things were progressing.

@thockin thockin added the triage/unresolved Indicates an issue that can not or will not be resolved. label Mar 8, 2019
@lentzi90 (Contributor) commented Apr 4, 2019

Cross-linking, since the KEPs moved: the KEP for this is now here.

The tracking issue, available here, is targeted for v1.15.

@freehan freehan added kind/feature Categorizes issue or PR as related to a new feature. and removed triage/unresolved Indicates an issue that can not or will not be resolved. labels May 16, 2019

@johnbelamaric (Member)

Available as alpha in 1.17: https://kubernetes.io/docs/concepts/services-networking/service-topology/
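
For reference, the alpha Service Topology API linked above expresses "prefer the daemon on my own node" with the topologyKeys field on the Service, roughly like this (the service name and port are illustrative; the field itself is the documented v1.17 alpha API and requires the ServiceTopology feature gate):

```sh
# Illustrative use of the v1.17 alpha Service Topology API: kube-proxy
# tries the topology keys in order, so endpoints on the client's own node
# are preferred, with "*" as a catch-all fallback to any endpoint.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: local-daemon           # illustrative name
spec:
  selector:
    app: local-daemon
  ports:
  - port: 8125
    protocol: UDP
  topologyKeys:
  - "kubernetes.io/hostname"   # same node first
  - "*"                        # otherwise, any endpoint
EOF
```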

/close

@k8s-ci-robot (Contributor)

@johnbelamaric: Closing this issue.

In response to this:

Available as alpha in 1.17: https://kubernetes.io/docs/concepts/services-networking/service-topology/

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
