
Resolve external IP addresses #242

Closed
mat1010 opened this issue Jun 5, 2018 · 9 comments

Comments

@mat1010 commented Jun 5, 2018

We'd like to access kube-dns from outside the k8s cluster so it can serve as a discovery service for workloads outside the cluster as well. We were able to achieve this by exposing kube-dns through a dedicated Service with an external IP address (a zone-internal GCP address).

A drawback is that it does not seem possible to resolve the external IP of a Service through kube-dns.

Would it be feasible to have an additional zone that resolves to the external IP address of a Service?
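For context, the setup described above could be sketched roughly as follows. This is an assumption based on a standard kube-dns deployment; the Service name, the `k8s-app: kube-dns` selector, and the IP address are placeholders, not taken from the issue:

```yaml
# Sketch: expose kube-dns outside the cluster via a dedicated Service.
# The externalIPs entry stands in for the zone-internal GCP address
# mentioned above (placeholder value).
apiVersion: v1
kind: Service
metadata:
  name: kube-dns-external
  namespace: kube-system
spec:
  selector:
    k8s-app: kube-dns        # standard label on the kube-dns pods
  externalIPs:
    - 10.128.0.53            # placeholder zone-internal GCP address
  ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
```

Clients outside the cluster would then point their resolver at that address, while the Service forwards queries to the kube-dns pods.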

@mat1010 mat1010 changed the title Resolve external IPs Resolve external IP addresses Jun 5, 2018
@thockin (Member) commented Jun 5, 2018 (comment minimized)

@johnbelamaric (Contributor) commented Jun 5, 2018 (comment minimized)

@mat1010 (Author) commented Jun 5, 2018

> I don't think this is a kube-dns problem, unless I misunderstand. You're adding an IP above and beyond what it understands.

The external IP address, which is requested by k8s from GCP, is known to k8s at some point after GCP assigns it, right? So from my rough understanding it should be possible to "just" create an additional A record with the same name, but in a different zone, which holds this external address. Of course kube-dns might only hold this information and might not be the place where the external IP addresses of Services are gathered to create those records.
I'm not familiar with the details of the whole API call flow (who makes which call at which point to which API), so please correct me if I'm wrong about the process of adding a record to kube-dns.

> You could add that IP to GCP's DNS or you could add an empty-selector service with a manual endpoint to "trick" kube-dns into serving something...

This might work, but I would like to avoid having to manually create a Service after the address has been assigned to my Service. This should be automated in the same way that cluster IPs are automatically added as A records to kube-dns.

As @johnbelamaric noted: coredns/coredns#1851 is exactly my use case.

By the way: thanks for the rapid replies.
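The empty-selector workaround mentioned above could look roughly like this. This is a hedged sketch, not the author's actual manifests; the Service name, namespace, port, and IP are placeholders:

```yaml
# Selector-less, headless Service: because there is no selector,
# Kubernetes does not manage the Endpoints, and because clusterIP is
# None, kube-dns answers queries for
# my-external-service.default.svc.cluster.local with the endpoint
# addresses supplied manually below.
apiVersion: v1
kind: Service
metadata:
  name: my-external-service
  namespace: default
spec:
  clusterIP: None
  ports:
    - port: 80
---
# Matching Endpoints object (must share the Service's name and
# namespace) carrying the externally assigned address.
apiVersion: v1
kind: Endpoints
metadata:
  name: my-external-service
  namespace: default
subsets:
  - addresses:
      - ip: 203.0.113.10   # placeholder external IP
    ports:
      - port: 80
```

The drawback the author notes still applies: the Endpoints object has to be created (or updated) by hand after the external address is assigned, rather than being populated automatically.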

@thockin (Member) commented Jun 5, 2018 (comment minimized)

@mat1010 (Author) commented Jun 5, 2018

> I'm not against that sort of idea, but it's a net new surface to consider - not just a tiny extension.

Thanks for pointing this out. At least that confirms this feature does not exist right now, so I don't have to search any deeper for a hidden / undocumented implementation. As an interim solution we might go with the empty-selector Service, so we can still make use of the existing Kubernetes Services.

@fejta-bot commented Sep 3, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@fejta-bot commented Oct 3, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@fejta-bot commented Nov 2, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor) commented Nov 2, 2018

@fejta-bot: Closing this issue.

In response to this:

> Rotten issues close after 30d of inactivity.
> Reopen the issue with /reopen.
> Mark the issue as fresh with /remove-lifecycle rotten.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
