
[Proposal] Refactor network config from cluster spec into a New Network Spec #4138

Open
geojaz opened this issue Dec 23, 2017 · 6 comments
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@geojaz
Member

geojaz commented Dec 23, 2017

This was discussed at kops office hours on 12/22, and I believe folks have been thinking about this idea for some time.

I propose that it's time to pull network config (VPC, subnets, routes, etc.) out of the cluster spec and into a first-class networking spec/object. We currently have the ability to configure networking in the cluster spec, but as kops networking moves away from an AWS-centric view of the world and morphs into a more agnostic cloud/bare-metal model, I think a refactor like this would let us address more varied configs in a more manageable way.

The first step is probably to pull the existing descriptors out of the cluster spec and move them into a new spec (which will be called Network). The next step is making this spec general enough to address the needs of the community, including different types of egress, routes, CIDRs, and many others that I'm leaving out. A rough sketch of the split follows.
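To make the split concrete, here is a minimal sketch. The Cluster fields shown (`networkID`, `networkCIDR`, `subnets`) are the existing ones; the `Network` kind, its field names, and its apiVersion are hypothetical placeholders for discussion, not a proposed final API:

```yaml
# Today: networking fields live inside the Cluster spec.
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: example.k8s.local
spec:
  networkID: vpc-12345678        # existing VPC to reuse (AWS-specific today)
  networkCIDR: 10.0.0.0/16
  subnets:
  - name: us-east-1a
    cidr: 10.0.32.0/19
    zone: us-east-1a
    type: Private
---
# Hypothetical: the same fields pulled into a first-class Network object.
apiVersion: kops/v1alpha2        # placeholder; the real group/version is TBD
kind: Network
metadata:
  name: example.k8s.local
spec:
  id: vpc-12345678
  cidr: 10.0.0.0/16
  subnets:
  - name: us-east-1a
    cidr: 10.0.32.0/19
    zone: us-east-1a
    type: Private
```

Presumably the Cluster spec would then just reference the Network object by name, but that wiring is exactly the kind of detail this proposal needs to settle.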

I'd like to get comments from the community about pain points for network management and how you would be interested in using kops to address them, especially for the network associated with the cluster that kops creates/manages. I'm also interested to hear whether this seems like a good use of time. We could certainly continue to manage the network from within the cluster spec, but I think a dedicated spec will scale better as the configuration options grow.

My specific interest in this relates to adding routes to subnets that are created/managed by kops. Currently I manage them via Terraform, but this is kind of clunky; a hypothetical example of what native support could look like follows.
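Purely as an illustration, here is one possible shape for expressing extra routes directly on a kops-managed subnet in the proposed Network spec. The `routes` field and its sub-fields are made up for discussion:

```yaml
# Hypothetical per-subnet routes in the proposed Network spec,
# replacing out-of-band route management in Terraform.
kind: Network
spec:
  subnets:
  - name: us-east-1a
    cidr: 10.0.32.0/19
    routes:                      # made-up field name
    - cidr: 192.168.0.0/16       # destination CIDR
      target: pcx-11aa22bb       # e.g. a VPC peering connection
```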

@geojaz
Member Author

geojaz commented Dec 23, 2017

@justinsb @chrislovecnm

@chrislovecnm
Contributor

Sounds great, but this will be a fun migration. We probably need some tooling for this.

@chrislovecnm
Contributor

Also, we are creating an ordering problem. The network will depend on a cluster, and I would like for it not to. Instance groups (IGs) and secrets depend on a cluster, and it is a pain.

If you create a secret before a cluster, it is not handled gracefully.

@geojaz
Member Author

geojaz commented Dec 24, 2017

So we had an idea on the call about how to make this less painful for those with existing clusters: allow the network elements to be specified in both the cluster spec and the network spec, and let the cluster spec override the network spec... just tossing around ideas for now. A sketch of that transition state is below.
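Sketching that idea purely for illustration (field names and precedence semantics are not settled), the same value could appear in both objects during migration, with the Cluster spec winning while it still sets the field:

```yaml
# Hypothetical migration state: the value may appear in both specs,
# and the Cluster spec takes precedence until the field is removed from it.
kind: Cluster
spec:
  networkCIDR: 10.0.0.0/16   # authoritative while present
---
kind: Network
spec:
  cidr: 10.0.0.0/16          # takes effect only once the Cluster field is dropped
```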

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 24, 2018
@chrislovecnm
Contributor

/lifecycle frozen
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 25, 2018