
Support dns=private with terraform #1848

Closed
justinsb opened this issue Feb 10, 2017 · 3 comments · Fixed by #2297

Comments

justinsb (Member) commented Feb 10, 2017

We explicitly disallow this because we don't (I don't) know how to tell Terraform to use a zone without also owning and deleting it. But this should not be that hard to fix!
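For context, if the zone were emitted as an ordinary Terraform resource, Terraform would assume ownership of it and remove it on terraform destroy, which is exactly the behavior this issue wants to avoid. A minimal sketch of that pattern (the zone name and VPC reference are placeholders):

resource "aws_route53_zone" "private" {
  # Terraform creates this zone and will also delete it on terraform destroy,
  # which is not what we want for a pre-existing private zone.
  name   = "example.internal"
  vpc_id = "${aws_vpc.kubernetes.id}"
}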

ivanalonzo commented Mar 13, 2017

@justinsb any thoughts on when this may get addressed? I'd like to vote this up 👍

jrnt30 (Contributor) commented Mar 15, 2017

I think one way to approach this would be to use a Data Source, which is essentially a dynamic lookup. Instead of the kops-generated Terraform config "owning" the Hosted Zone, it would simply look it up.

I hacked on this a little bit (which, as the notes say, is not functional); it takes a similar approach to the existing resources.

If this seems like something useful/interesting to you, I could take a deeper look into all the places that the DNS Zone is used/inferred.
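A rough sketch of the lookup jrnt30 describes, assuming the generated config switches from a resource block to a data source (the zone name is a placeholder); Terraform would then only read the existing zone and never try to create or destroy it:

data "aws_route53_zone" "cluster" {
  # Look up an existing private hosted zone; Terraform does not manage its lifecycle.
  name         = "example.internal"
  private_zone = true
}

# Records would then reference "${data.aws_route53_zone.cluster.zone_id}"
# rather than an attribute of a kops-owned zone resource.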

andreychernih (Contributor) commented Mar 31, 2017

I am having the same problem. My Route53 zone is private, and kops fails to resolve it when creating a new cluster. I agree that using a data source is the right approach to discover the private zone in Terraform. This is what our re-usable Terraform module looks like (we created it based on the output produced by kops):

data "aws_route53_zone" "main" {
  name = "${var.domain}"
  private_zone = true
}

# Main Route53 zone is a private zone which means it should be associated with kubernetes VPC
resource "aws_route53_zone_association" "secondary" {
  zone_id = "${data.aws_route53_zone.main.zone_id}"
  vpc_id = "${aws_vpc.kubernetes.id}"
}

resource "aws_route53_record" "api-kubernetes" {
  name = "api.${var.name}.${var.domain}"
  type = "A"

  alias = {
    name                   = "${aws_elb.api-kubernetes.dns_name}"
    zone_id                = "${aws_elb.api-kubernetes.zone_id}"
    evaluate_target_health = false
  }

  zone_id = "${data.aws_route53_zone.main.zone_id}"
}

I can wrap up a PR if it makes sense.

ahl pushed commits to ahl/kops that referenced this issue Apr 6, 2017
justinsb pushed a commit to justinsb/kops that referenced this issue Apr 7, 2017
justinsb added a commit that referenced this issue Apr 7, 2017