
Terraform Doesn't Work With Private Hosted Zones #1885

Closed
jamesgoodhouse opened this issue Feb 13, 2017 · 13 comments

Comments

@jamesgoodhouse commented Feb 13, 2017

I am attempting to use kops to create a private topology cluster with private DNS, but I receive the message Route53 private hosted zones are not supported for terraform. It is unclear whether this means I need to create the Route53 entries manually before exporting to Terraform with kops, or whether kops simply cannot run this command at all.

@mboret commented Feb 15, 2017

I have the same issue: with either a private or a public Route53 zone, I'm receiving the same error, Route53 private hosted zones are not supported for terraform.
I'm executing this command:
kops create cluster --vpc vpc-dc26b8 k8s-test.internal --zones=eu-west-1a,eu-west-1b,eu-west-1c --dns private --node-count 2 --master-zones eu-west-1a --node-size t2.small --master-size t2.medium --out=. --target=terraform --state s3://s3.k8s.test

Kops Version 1.5.1 (git-01deca8)

The same command without --out=. --target=terraform works.

@banchee commented Feb 15, 2017

I'm getting the same issue with kops 1.5.1.

kops create cluster --name=dev.pdg.io --cloud=aws --target=terraform --out=. --state=s3://pdg-kube-aws --zones=eu-central-1a --dns-zone=dev.*****.io --node-size=t2.small --master-size=t2.small --dns=private

@chrislovecnm (Member) commented Feb 19, 2017

@justinsb any ideas?

@bregtcoenen commented Mar 13, 2017

Any updates here? I am having the same issue.

@jrnt30 (Contributor) commented Mar 21, 2017

I believe this is a duplicate of #1848. There is a concern around how to "manage" or "acquire" information about that private hosted zone with Terraform in a repeatable and safe way.

@bregtcoenen commented Mar 23, 2017

I ended up setting up my VPC, subnets, route tables, private DNS zone, etc. manually with Terraform.
I tagged my route tables, IGW, and subnets with the following tag:
KubernetesCluster = "${var.kops_cluster_name}"
Here, var.kops_cluster_name is the name of my cluster.
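This tagging scheme can be sketched in Terraform as below. This is a minimal illustration, not the commenter's actual configuration: the resource names, the CIDR, and the default cluster name are placeholders, and it assumes an aws_vpc.main resource defined elsewhere.

```hcl
# Hypothetical sketch: tag Terraform-managed network resources so kops
# can discover and reuse them instead of creating its own.
variable "kops_cluster_name" {
  default = "k8s-test.internal" # placeholder cluster name
}

resource "aws_subnet" "private" {
  vpc_id     = "${aws_vpc.main.id}" # assumes an aws_vpc.main elsewhere
  cidr_block = "10.200.32.0/19"     # placeholder CIDR

  tags {
    KubernetesCluster = "${var.kops_cluster_name}"
  }
}
```

The same tags block would be repeated on the route tables and internet gateway, as described above.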

In my kops create cluster command I specified the VPC, network, and private DNS zone created by Terraform. I did not specify Terraform output here.

After creation, I updated the cluster configuration so that the subnets match the ones I created:

  - cidr: 10.200.32.0/19
    id: subnet-e43xxxxx
    name: eu-central-1a
    type: Private
    zone: eu-central-1a

After that I applied the changes.

At this point I can manage my kubernetes resources with kops and my other resources with terraform.

I found most of the required info to get here in this doc: https://github.com/kubernetes/kops/blob/master/docs/run_in_existing_vpc.md
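The workflow described above can be sketched roughly as follows. All names here are placeholders (cluster name, VPC ID, zone, and state bucket are assumptions), and the exact flags depend on your setup:

```shell
# Hypothetical sketch of the workaround: Terraform owns the network,
# kops owns the cluster, and no --target=terraform is used.

# 1. Create the VPC, subnets, route tables, and private hosted zone with
#    Terraform, tagged with KubernetesCluster = <cluster name>.
terraform apply

# 2. Create the cluster in the existing VPC (placeholder values).
kops create cluster \
  --name=k8s-test.internal \
  --vpc=vpc-xxxxxxxx \
  --dns=private \
  --dns-zone=k8s-test.internal \
  --zones=eu-central-1a \
  --state=s3://example-kops-state

# 3. Edit the cluster spec so spec.subnets matches the Terraform-created
#    subnets (ids, CIDRs), then apply the changes.
kops edit cluster k8s-test.internal --state=s3://example-kops-state
kops update cluster k8s-test.internal --state=s3://example-kops-state --yes
```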

@jrnt30 (Contributor) commented Apr 7, 2017

I believe this has been addressed in #2297.

@j0sh3rs commented Jun 14, 2017

This issue is still happening in 1.6.1.

@hristovpln commented Nov 2, 2017

Just tried with --dns private and it worked.
kops v1.7.0

@chrislovecnm (Member) commented Nov 2, 2017

Closing; please use kops 1.7.1, as it has a CVE patch in it.

@timurkhafizov commented Dec 8, 2017

Running this simple command:
kops create cluster --name=sample.domain.com --zones=eu-west-1a --target=terraform --out=./kops/sample --dns=private --dns-zone=domain.com
fails with the following output:

...
I1208 11:05:58.879563   28104 dnszone.go:242] Check for existing route53 zone to re-use with name "domain.com"
W1208 11:05:59.001850   28104 executor.go:109] error running task "DNSZone/domain.com" (9m57s remaining to succeed): Creation of Route53 hosted zones is not supported for terraform
I1208 11:05:59.001887   28104 executor.go:124] No progress made, sleeping before retrying 1 failed task(s)
...

I am using kops 1.8.0.
Has this been fixed in 1.8.0 as well?

@admssa commented Feb 25, 2018

Same with kops 1.8.1: Terraform export isn't working for the private topology.

kops create cluster \
  --state=s3://bucket \
  --name=k8s.domain.local \
  --dns=private \
  --dns-zone=domain.local \
  --master-size=t2.medium \
  --node-size=t2.medium \
  --zones=eu-west-1a \
  --master-zones=eu-west-1a \
  --node-count=3 \
  --master-count=1 \
  --image=ami-a61464df \
  --master-volume-size=50 \
  --node-volume-size=50 \
  --topology=private \
  --networking=calico \
  --ssh-public-key=~/.ssh/development-kubernetes.pub \
  --api-loadbalancer-type=internal \
  --kubernetes-version=1.9.3 \
  --network-cidr=10.253.0.0/16 \
  --out=../../../../../terraform/k8s \
  --target=terraform

Output:

I0225 19:41:15.567792   30890 dnszone.go:242] Check for existing route53 zone to re-use with name "domain.local"
W0225 19:41:17.100616   30890 executor.go:109] error running task "DNSZone/domain.local" (9m14s remaining to succeed): Creation of Route53 hosted zones is not supported for terraform
I0225 19:41:17.100707   30890 executor.go:124] No progress made, sleeping before retrying 1 failed task(s)

The zone exists.

@cobravsninja commented Jun 5, 2018

I had the same issue before I paid attention to the following statement:

The only requirement to trigger this is to have the cluster name end with .k8s.local
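That statement refers to kops's gossip-based DNS, which avoids Route53 hosted zones entirely and is triggered purely by the cluster name suffix. A hypothetical example, with placeholder cluster name and state bucket:

```shell
# Hypothetical example: a cluster name ending in .k8s.local enables gossip
# DNS, so no Route53 hosted zone (public or private) is needed, and the
# Terraform target no longer hits the private-hosted-zone error.
kops create cluster \
  --name=dev.example.k8s.local \
  --zones=eu-west-1a \
  --state=s3://example-kops-state \
  --target=terraform \
  --out=.
```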
