
CoreDNS-Operator: Allow multiple collaborators to configure forwarding for DNS zones #88

Closed
christianang opened this issue Oct 20, 2020 · 4 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@christianang

Problem
We would like to provide the ability for multiple collaborators (controllers or humans) to add upstream DNS resolvers for DNS zones.

The Corefile is a global resource, which makes it difficult for a custom controller to add its own configuration to it. A custom controller might have to assume no one else will change the Corefile and overwrite any outside changes, which is not great because it reduces flexibility. Alternatively, a controller could have logic to determine which parts it added, and therefore which parts it can change; that isn’t impossible, but it is complex and potentially brittle, e.g. a human operator cannot tell whether a part of the Corefile was added by a controller when they attempt to change it. Additionally, while the CoreDNS-Operator can handle Corefile migrations, a custom controller that reads and writes the Corefile would still have to be knowledgeable about its compatibility, adding further complexity.

Strawdog Proposal
We propose a new CRD that allows someone/something to declare their intent to forward a DNS zone to a server; the CoreDNS-Operator can then generate a server block and append it to the Corefile. This spares users from having to manage the entire Corefile themselves (e.g. migrating, merging).

This is what we were thinking the CRD would look like:

---
apiVersion: coredns.addons.x-k8s.io/v1alpha1
kind: DNSZone
metadata:
  name: my-company.internal
  namespace: kube-system
spec:
  zoneName: my-company.internal
  forwardTo: 10.100.1.1

The CoreDNS-Operator would generate a CoreDNS server block and append it to the Corefile specified in the CoreDNS resource.
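
For illustration, a server block generated from the example above might look roughly like the following. Only the zone and forward target come from the DNSZone spec; the surrounding plugin set shown here is illustrative:

my-company.internal:53 {
    errors
    cache 30
    forward . 10.100.1.1   # forwardTo from the DNSZone spec
}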

This is meant as a conversation starter that reflects how we are thinking about it; we are not tied to this particular solution.

Alternatives Considered

We considered a more generic “CorefileFragment” CRD that would take a CoreDNS server block to append to the Corefile. However, the author of the “fragment” would then be responsible for migrations, which is what we’re trying to avoid.
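
For reference, such a CorefileFragment might have looked something like the following sketch (the kind and fields are hypothetical and not part of any existing API):

---
apiVersion: coredns.addons.x-k8s.io/v1alpha1
kind: CorefileFragment   # hypothetical kind
metadata:
  name: my-company-internal
  namespace: kube-system
spec:
  corefile: |
    my-company.internal:53 {
        forward . 10.100.1.1
    }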

We also considered adding a new field to the CoreDNS CRD that could take in the zones, perhaps something similar to what RedHat’s DNS Operator does. The operator would then generate server blocks for each zone and append them to the contents of the corefile field in the CoreDNS resource. I foresee a similar issue where custom controllers modifying the corefile field directly can result in update conflicts, but that isn’t impossible to overcome.
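
Sketching that alternative, the CoreDNS resource might grow a field along these lines (the field names here are hypothetical, loosely inspired by RedHat’s DNS Operator; other fields elided):

spec:
  corefile: |
    .:53 {
        # existing default server block elided
    }
  zones:   # hypothetical field
    - zoneName: my-company.internal
      forwardTo: 10.100.1.1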

Tagging @rajansandeep: from what I understand you maintain the CoreDNS-Operator, and it would be good to get your feedback.

cc: @neolit123 @ncdc

@chrisohaver

One thing that springs to mind is a more direct means of CoreDNS handling stub domains, wherein, via a new CoreDNS plugin, it would watch the CRD and forward traffic accordingly. That way, the Operator would not need to make changes to the Corefile.
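
As a purely hypothetical sketch (no such plugin exists today), the Corefile could then simply enable a plugin that watches the DNSZone resources and forwards matching queries, for example:

.:53 {
    kubernetes cluster.local in-addr.arpa ip6.arpa
    dnszone   # hypothetical plugin: watches DNSZone CRs and forwards accordingly
    forward . /etc/resolv.conf
}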

@christianang
Author

I do like that idea; it would seem preferable to have CoreDNS handle this directly rather than changing the Corefile. I am less familiar with the process of contributing directly to CoreDNS, and this feels like it might require broader buy-in from the community. I'm also imagining we would want this plugin to be in-tree so a user can easily turn the feature on within their cluster. Let me know if there is anything I can do to continue pursuing this idea with CoreDNS.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 2, 2021
@christianang
Author

Currently planning to do this as a CoreDNS plugin instead.

/close
