
Support Multicast DNS (mDNS) / RFC-6762 #1604

Closed
displague opened this issue May 25, 2020 · 11 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@displague
Member

displague commented May 25, 2020

What would you like to be added:

This enhancement request is for a provider (or documentation added to an existing provider) that brings Multicast DNS (mDNS) or RFC-6762 support to External DNS.

Why is this needed:

mDNS support could be used in local Kubernetes clusters (like k3s) to provide arbitrary "foo.local" services.


During my short investigation into this, it appeared that the CoreDNS provider could offer this support if https://github.com/openshift/coredns-mdns were included. However, that plugin appears to offer only read-only support, relaying queries to the multicast channel without providing a means to store new records.

A new provider for external-dns could be added, using https://github.com/hashicorp/mdns to handle queries and serve the records this provider is responsible for. I believe the difficulty in this approach is persisting the records the cluster is responsible for somewhere that all instances of external-dns can benefit from them (perhaps a ConfigMap). I don't believe the other nodes on the network will authoritatively store new mDNS records; it seems each node must respond to queries for its own records.
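
For illustration only, a minimal sketch (not existing external-dns code) of how such a provider might answer multicast queries with hashicorp/mdns; the instance name, host name, port, and IP below are placeholders:

```go
// Minimal sketch: serve an mDNS record with github.com/hashicorp/mdns.
// All names, the port, and the IP are hypothetical placeholders.
package main

import (
	"log"
	"net"

	"github.com/hashicorp/mdns"
)

func main() {
	ip := net.ParseIP("192.168.1.50") // placeholder address for the service

	svc, err := mdns.NewMDNSService(
		"foo",        // instance name (placeholder)
		"_http._tcp", // service type
		"",           // domain; defaults to "local."
		"foo.local.", // host name to answer for
		8080,         // port (placeholder)
		[]net.IP{ip},
		[]string{"hypothetical record managed by external-dns"},
	)
	if err != nil {
		log.Fatal(err)
	}

	// The server only answers multicast queries for the records it holds,
	// which is why every external-dns instance would need the same record set.
	server, err := mdns.NewServer(&mdns.Config{Zone: svc})
	if err != nil {
		log.Fatal(err)
	}
	defer server.Shutdown()

	select {} // keep answering queries
}
```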

It may be better to find, extend, or create an in-cluster DNS service (https://github.com/flix-tech/k8s-mdns ?) that supports mDNS queries, including storing and serving added/updated records. With this, external-dns could interact with the service using the existing RFC-2136 provider.
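
To make the RFC-2136 path concrete, here is a minimal sketch of the kind of dynamic update the existing rfc2136 provider would send to such an in-cluster service, using github.com/miekg/dns; the zone, record, and service address are assumptions, not anything that exists today:

```go
// Minimal sketch: an RFC 2136 dynamic update adding an A record.
// The zone, record, and target address are hypothetical placeholders.
package main

import (
	"log"

	"github.com/miekg/dns"
)

func main() {
	rr, err := dns.NewRR("foo.local. 300 IN A 192.168.1.50") // placeholder record
	if err != nil {
		log.Fatal(err)
	}

	m := new(dns.Msg)
	m.SetUpdate("local.")  // zone the in-cluster service would be authoritative for
	m.Insert([]dns.RR{rr}) // add the record via dynamic update

	c := &dns.Client{Net: "tcp"}
	// "mdns-bridge.kube-system.svc:53" is a hypothetical in-cluster service name.
	if _, _, err := c.Exchange(m, "mdns-bridge.kube-system.svc:53"); err != nil {
		log.Fatal(err)
	}
}
```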

Does Kubernetes' internal use of cluster.local somehow prohibit nodes outside the cluster from using this broader capability? Is there an additional limitation that would prevent in-cluster resources from benefiting from mDNS names?

@displague displague added the kind/feature Categorizes issue or PR as related to a new feature. label May 25, 2020
@displague
Member Author

serve the records this provider is responsible for. I believe the difficulty in this approach is persisting the records the cluster is responsible for somewhere that all instances of external-dns can benefit from them (perhaps a ConfigMap). I don't believe the other nodes on the network will authoritatively store new mDNS records; it seems each node must respond to queries for its own records.

It seems this can be handled by https://github.com/openshift/mdns-publisher

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 16, 2020
@seanmalloy
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 18, 2020
@Bwvolleyball

Bwvolleyball commented Dec 24, 2020

This would be a huge win for local Kubernetes installs. To add to the motivation: I currently run Traefik on my home cluster to handle reverse proxy / routing of traffic once it reaches the cluster, so the ability to add local mDNS records would mean I don't always have to use my dynamic DNS name and take the extra round trip. It would also let me expose some things on my local network that I don't want exposed on the internet.

Ultimately, it's just hard to route traffic to my cluster on my local network.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 24, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 23, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@LorbusChris
Contributor

I believe this is still relevant
/reopen

@k8s-ci-robot
Contributor

@LorbusChris: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

I believe this is still relevant
/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@displague
Member Author

https://github.com/blake/external-mdns may help anyone looking for an alternative
