Should be able to skip DNS creation #172
This is a good suggestion, but the way we run etcd in HA mode follows the way CoreOS recommends running it on clouds, which requires DNS: etcd-io/etcd#5418

DNS also gives us some other benefits: it avoids requiring a load balancer for HA, and we can also configure DNS for k8s resources. Admittedly these benefits only apply to HA, and we could probably get by without DNS for non-HA. But I'm wary of introducing another dimension when we get benefits from using DNS.

One option I do think is interesting is to set things up so that Route53 is not used externally, making it just an implementation detail inside the cluster. I also probably should document how to use Route53 without moving your primary DNS hosting!

Which DNS provider do you use @mattjonesorg ? We're going to switch to the in-core dnsprovider anyway (#26), so your preferred provider may get support anyway, and it can then work with federation.
We are early in our migration. DNS is managed on-premise right now. I'm not sure what specific software our sysadmins use to manage DNS at the moment. For the moment, I'd be happy if I had a command that would spin up the cluster so I can access it via IP address.
Marking this as P0 because it is important, but it will likely be a docs fix to explain how to set up Route53 minimally, at least for now.
How about in AWS partitions that don't have Route 53 at all (GovCloud)?
@justinsb Just like what @chrislovecnm discussed with me in issue #794 several days ago, we don't have Route53 support in our region, and there is no plan for it from AWS, so we really hope this gets supported. Currently, to work around the problem, I just changed the code related to Route53 so that it generates
I'll +1 this, since it would be useful for me as I don't have access to Route 53. For me it would be completely fine to just generate the Terraform configuration. @yancl would you mind telling us what things you changed to skip Route 53?
@danielfrg BTW, I don't think it is a good way to do things like this, but to see what the TF output is, maybe it is the fastest way :)
@danielfrg I made some changes to my fork of kops and it can run. The following are the steps:
Well, currently the
+1 for me, as I do not wish to transfer my DNS over to Route53 just so I can run a dev cluster on AWS. I just create the DNS routes by hand after the cluster's been stood up. Just looking for a non-HA, no-strings-attached Kubernetes cluster similar to kube-up's experience, for quick dev setups on AWS.
In 1.5, we'll support private hosted zones, so you don't need a public domain name. But this exposes the problem that you then have no way to look up your cluster (unless you are on a VPN / tunnel into your AWS VPC). One option is to create an ELB and then use the AWS-assigned ELB hostname. We do create ELBs in private topologies, but we still require DNS-based discovery of the ELB.
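Looking up the AWS-assigned ELB hostname directly, as suggested above, can be done with the AWS CLI. This is a minimal sketch; the load balancer name (`api-dev-example-com`) is a placeholder assumption, since kops' actual ELB naming depends on your cluster name:

```shell
# Sketch: query the AWS-assigned DNS name of the API load balancer,
# so the cluster can be reached without any custom DNS setup.
# "api-dev-example-com" is a hypothetical name; check your account
# for the real one (e.g. via `aws elb describe-load-balancers`).
aws elb describe-load-balancers \
  --load-balancer-names api-dev-example-com \
  --query 'LoadBalancerDescriptions[0].DNSName' \
  --output text
```

The returned `*.elb.amazonaws.com` hostname can then be targeted by a CNAME in whatever DNS system you manage externally.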
That is acceptable for me! Our infra is set up to provision an ELB after everything is stood up, after which we target DNS records at the AWS-assigned ELB hostname.
Jumping on the no-Route 53 bandwagon. My company currently manages our own DNS with servers we run in EC2. I'd like to be able to configure DNS for the cluster there, and not have to hook into Route 53. @bacongobbler If you're looking for quick no-strings-attached Kubernetes, you might want to look into kubeadm. It's the Kubernetes-official replacement for kube-up.sh. It's worked great for me.
Hi @geekofalltrades, so first of all, I agree! DNS is a painful dependency. It is the worst option, except for everything else we've tried :-) DNS is the magic that makes HA, upgrades and rolling-updates, and recover-from-lights-off possible. We talked about this the other day in the kops channel, so any & all suggestions are welcome!

What we've done in kops 1.5.1 (which was just released) is to make it very easy to use a private Route53 hosted zone (which also means no domain name requirement), and you can then assign a CNAME/ALIAS to your ELB externally (if you want to). You're still technically using Route53, but there are no setup steps and it's just an implementation detail - think of it as DynamoDB with a querying interface over DNS ;-) I'd love to figure out a way around requiring you to set up a friendly DNS name for that ELB, such that we could have no DNS steps at all. Many of our users are running fully private topologies, where they have a VPN into the VPC that is DNS-enabled, and that does then work without further DNS tweaks.

However, it seems that there are some mis-characterizations of kubeadm here, and I hope you don't mind if I address them - there's too much vendor FUD directed at kops these days, so I apologize if I let my frustration show. It isn't directed at you :-)

kubeadm is (by design) a building block, not a turnkey installation tool. kops will be building on it, as will other turnkey tools (e.g. kargo). The distinction is that the turnkey tools are infrastructure-aware, so they will help you with the complete configuration, rather than making you do the infrastructure separately and cross-configure the tools. If the existing tooling doesn't meet your requirements (for example, if you have a bare-metal inventory system that manages your infrastructure), then you are probably forced to roll your own tooling, and you should consider using kubeadm for the commonalities rather than rolling your own top-to-bottom.
kubeadm is one output from sig-cluster-lifecycle's work (of which the kops team is part) to identify the commonalities between tools such as kops, kargo, and kube-aws/tectonic and extract them into common components, and to identify the areas which make kubernetes installation hard and simplify them. For example, the sig-cluster-lifecycle/kubeadm team has led the work on adding certificate management into the k8s API. kops is working with sig-cluster-lifecycle to figure out how to better manage addons (the current system has some shortcomings, most notably with HA configurations).

However, note that kubeadm is not complete, and at the current time kubeadm is unsupported with cloudproviders such as AWS. It is also very much not the replacement for kube-up.sh: the scope is very different. The people that maintained kube-up.sh for AWS are primarily working on kops.

Also note that most of the people in the AWS community that offer their assistance are much more familiar with "standard" installations - using kops, kube-aws, kube-up or kargo - so I personally recommend one of these tools so that you can better benefit from, and contribute to, our happy kubernetes on AWS community. And, as always: come hang out in sig-aws on slack (or kops if you are a kops user or just kops-curious)!
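The private-hosted-zone approach described above can be requested at cluster-creation time with the `--dns private` flag. A minimal sketch, assuming kops 1.5.1+ and placeholder names (domain, zones, and state-store bucket are all illustrative, not from this thread):

```shell
# Sketch: create a cluster backed by a *private* Route53 hosted zone,
# so no public domain or external DNS delegation is needed.
# dev.example.com and the S3 bucket name are hypothetical placeholders.
kops create cluster \
  --name dev.example.com \
  --dns private \
  --topology private \
  --networking weave \
  --zones us-east-1a \
  --state s3://my-kops-state-store \
  --yes
```

With a private topology you would still need a VPN/tunnel into the VPC (or an externally-mapped ELB) to actually reach the API, as the comment above explains.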
Really appreciate your effort to remove DNS and add an ELB as the solution. It's easy for us to set up an ELB and map it to an internal DNS name, but creating a public DNS entry during K8s setup is very painful. I am new to K8s, and from "Running Kubernetes on AWS EC2" (https://kubernetes.io/docs/getting-started-guides/aws/) I was thinking kops would be the best choice to set up K8s; however, eventually I found the kube-up bash script to be my good friend - easy to configure, and it starts the environment in ten minutes.
This will be needed on bare metal. I was thinking that we add a field in the instance group where you define your server names - basically a list that the admin can maintain. @justinsb, thoughts?
This feature is also necessary for deploying to AWS regions where Route 53 is not available, such as AWS GovCloud.
Is this process documented, with commands and proper flags, somewhere? I haven't found a good one.
We have a PR in for using gossip, which will allow us to bypass DNS! It requires a full build of head and protokube. Reach out if you need instructions!
Can we close this now that #2327 is in?
Closing and opening an issue to document how to use it.
@chrislovecnm @justinsb - just to clarify: does this mean that I can now create a K8S cluster without having a public domain name on AWS? You asked to reach out in case someone needed instructions - is there a place where this process is documented?
Also wondering if this is good to go.
Is it some secret or what? Can someone please provide the flags to create a cluster without a public domain?
Hope that this is what you want, IIUC :)
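For readers arriving later: the gossip mechanism mentioned above lets kops skip DNS entirely when the cluster name ends in `.k8s.local`. A minimal sketch (cluster name, zone, and bucket are placeholder assumptions, not values from this thread):

```shell
# Sketch: a gossip-based cluster needs no Route 53 hosted zone at all.
# kops detects the ".k8s.local" suffix and uses gossip for discovery.
# "mycluster.k8s.local" and the S3 bucket are hypothetical names.
kops create cluster \
  --name mycluster.k8s.local \
  --zones us-east-1a \
  --state s3://my-kops-state-store \
  --yes
```

No public domain, delegation, or hosted-zone setup is required; check the docs for your kops version to confirm gossip support before relying on this.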
Other companies might be like ours, where we are in the early stages of migration, and DNS is handled outside of AWS at the moment. That will change for us, but it seems that the kops tool should not require Route53 access in order to create a cluster.
I'm thinking of a skip-dns=true option on the kops create cluster command.