This repository has been archived by the owner on Jul 30, 2021. It is now read-only.

Another service could win race for dns service IP allocation #136

Closed

aaronlevy opened this issue Sep 17, 2016 · 7 comments
Labels
kind/cleanup · lifecycle/rotten · priority/Pmaybe

Comments

@aaronlevy
Contributor

We need a pre-assigned service IP for the Kubernetes DNS service - but it's possible when creating all assets (the equivalent of kubectl create -f cluster/manifests) that another service is randomly assigned this IP (it defaults to 10.3.0.10).

One option would be to just force the kube-dns service to be created first - but this seems less than ideal.
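
For context, the DNS service manifest pins the IP explicitly, and creating it fails if 10.3.0.10 has already been handed out to another service. A minimal sketch (the selector and ports below are illustrative; spec.clusterIP is the important part):

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
spec:
  # Must not already be allocated to another service
  clusterIP: 10.3.0.10
  selector:
    k8s-app: kube-dns
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP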

@kenan435
Contributor

kenan435 commented Sep 21, 2016

@aaronlevy Besides prioritizing asset creation order, we could prefix asset file names (1-yz.yaml, 2-xyz.yaml, etc.), which IMHO is not a good approach. Anyway, I made a PR using the former approach. Let me know what you think.

@aaronlevy
Contributor Author

I'm not sure that we need to solve this right away (or potentially in bootkube at all). It is an edge case at the moment, and won't affect bootkube itself right now because kube-dns is the only service we create.

However, a bootkube user was populating the cluster/manifests directory with more addons/services, which meant they were all being created at the same time (and one happened to be randomly assigned the DNS service IP).

The asset naming scheme is not a bad idea either - this is a systemd convention for managing drop-in units (e.g. 01-foo.conf, 02-foo.conf).

I think it might be an option to solve this upstream as well, where we could inform the controller-manager of a range that will be statically assigned.

For example:

--service-cluster-ip-range=10.3.0.0/24
--service-cluster-ip-range-static=10.3.0.0/28
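
Here 10.3.0.0/28 would cover 10.3.0.0–10.3.0.15 (the first 16 addresses of the /24), so the default DNS IP 10.3.0.10 falls inside the reserved static block, and the random allocator would only hand out addresses from the rest of 10.3.0.0/24.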

I'll open an upstream issue to discuss the above option.

@dghubble
Contributor

dghubble commented Sep 21, 2016

We have another way to add our addons, so this probably isn't pressing for us in particular. A simpler solution might be for bootkube to use a reserved cluster/system directory for itself and cluster/manifests for deployer additions - just to separate the manifests bootkube offers guarantees about from free-form deployer additions.
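
Roughly (file names here are just a hypothetical sketch):

cluster/system/kube-dns-svc.yaml      # created by bootkube, guarantees apply
cluster/manifests/some-addon.yaml     # free-form deployer additions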

@k8s-ci-robot added the kind/cleanup label and removed the kind/friction label on Aug 23, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 25, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on May 25, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
