Support Multicast DNS (mDNS) / RFC-6762 #1604
Comments
It seems this can be handled by https://github.com/openshift/mdns-publisher
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
This would be a huge win for local Kubernetes installs. To underline the need: I currently run Traefik on my home cluster to handle reverse proxy / routing traffic once it gets to the cluster, so the ability to add local mDNS would mean I don't always have to use my dynamic DNS name, avoiding the round trip. In addition, I'd be able to expose some things on my local network that I don't want exposed on the internet. Ultimately, it's just hard to route traffic to my cluster on my local network.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Rotten issues close after 30d of inactivity. Send feedback to sig-contributor-experience at kubernetes/community.
@fejta-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I believe this is still relevant.
@LorbusChris: You can't reopen an issue/PR unless you authored it or you are a collaborator.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
https://github.com/blake/external-mdns may help anyone looking for an alternative.
What would you like to be added:
This enhancement request is for a provider (or documentation added to an existing provider) that brings Multicast DNS (mDNS) or RFC-6762 support to External DNS.
Why is this needed:
mDNS support could be used in local Kubernetes clusters (like k3s) to provide arbitrary "foo.local" services.
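For concreteness, RFC 6762 reuses the ordinary DNS wire format (RFC 1035); what makes it "multicast" is mainly where the packet is sent. A minimal standard-library Python sketch of what a query for `foo.local` looks like on the wire (this is illustrative only and not part of external-dns or any of the linked projects):

```python
import struct

MDNS_GROUP = "224.0.0.251"  # RFC 6762 IPv4 multicast address
MDNS_PORT = 5353

def build_mdns_query(name: str) -> bytes:
    """Build a one-question mDNS query for an A record."""
    # Header: ID 0 (mDNS queries typically use ID 0), no flags,
    # 1 question, 0 answer/authority/additional records.
    header = struct.pack("!HHHHHH", 0, 0, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    # QTYPE=A (1), QCLASS=IN (1); the top bit of QCLASS is the
    # RFC 6762 "unicast-response" (QU) flag, left clear here.
    question = qname + struct.pack("!HH", 1, 1)
    return header + question

query = build_mdns_query("foo.local")
# To actually ask the LAN, this packet would be sent over UDP to
# (MDNS_GROUP, MDNS_PORT), e.g.:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(query, (MDNS_GROUP, MDNS_PORT))
```

Any mDNS-capable host that owns `foo.local` then answers on the same multicast group, which is exactly the behavior a provider would need to produce for cluster-managed names.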
During my short investigation into this, it appeared that the CoreDNS provider could offer this support if https://github.com/openshift/coredns-mdns were included. However, that plugin appears to offer only read-only support, relaying queries to the multicast channel without providing a means to store new records.
A new provider for external-dns could be added, using https://github.com/hashicorp/mdns to handle queries and serve the records this provider is responsible for. I believe the difficulty in this approach is persisting the records the cluster is responsible for somewhere all instances of external-dns can read them (perhaps a ConfigMap). I don't believe other nodes on the network will authoritatively store new mDNS records; it seems each node must respond to queries for its own records.
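Whatever the provider looks like, each replica answering for its own records comes down to emitting an authoritative answer with the RFC 6762 "cache-flush" bit set. A minimal Python sketch of that wire format (`build_mdns_answer` is a hypothetical helper for illustration, not an API of hashicorp/mdns or external-dns):

```python
import ipaddress
import struct

def build_mdns_answer(name: str, ip: str, ttl: int = 120) -> bytes:
    """Build an authoritative mDNS answer for an A record a node owns."""
    # Header: ID 0; flags 0x8400 = QR (response) + AA (authoritative);
    # 0 questions, 1 answer, 0 authority, 0 additional.
    header = struct.pack("!HHHHHH", 0, 0x8400, 0, 1, 0, 0)
    # Record name: length-prefixed labels, zero-terminated.
    rrname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    rdata = ipaddress.IPv4Address(ip).packed  # 4-byte address
    # TYPE=A (1); CLASS=IN (1) with the top "cache-flush" bit set
    # (0x8001), telling peers to replace older records for this name.
    record = rrname + struct.pack("!HHIH", 1, 0x8001, ttl, len(rdata)) + rdata
    return header + record
```

A responder would send this to the 224.0.0.251:5353 multicast group whenever it receives a matching query, sourcing `name` and `ip` from whatever shared store (e.g. the ConfigMap above) the provider maintains.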
It may be better to find, extend, or create an in-cluster DNS service (https://github.com/flix-tech/k8s-mdns ?) that supports mDNS queries, including storing and serving added or updated records. External-dns could then interact with that service using the existing RFC-2136 provider.
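Under that approach, the external-dns side would just be its existing RFC-2136 configuration pointed at the in-cluster service. A sketch of the relevant container args, assuming a hypothetical `mdns-bridge` service that accepts RFC 2136 dynamic updates and re-serves the records over mDNS (no such off-the-shelf component is confirmed here; the flags are external-dns's RFC-2136 provider flags):

```yaml
# Fragment of an external-dns Deployment spec (illustrative only).
args:
  - --registry=txt
  - --source=service
  - --domain-filter=local
  - --provider=rfc2136
  - --rfc2136-host=mdns-bridge.kube-system.svc   # hypothetical in-cluster bridge
  - --rfc2136-port=53
  - --rfc2136-zone=local
  - --rfc2136-insecure   # or configure TSIG for authenticated updates
```

The bridge would then be the single node on the LAN answering mDNS queries for the cluster's records, sidestepping the per-replica persistence problem above.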
Does the internal Kubernetes use of cluster.local prohibit this broader capability for nodes outside of the cluster in some way? Is there an additional limitation such that in-cluster resources may not be able to benefit from mDNS names?