
Can't upgrade to 1.18.0 (aws / spotinst / terraform) #9679

Closed
scottambroseio opened this issue Aug 4, 2020 · 5 comments · Fixed by #9682

@scottambroseio

👋

1. What kops version are you running? The command kops version will display this information.

I0804 11:05:28.933457 10358 featureflag.go:154] FeatureFlag "Spotinst"=true
Version 1.18.0 (git-698bf974d8)

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.9", GitCommit:"4fb7ed12476d57b8437ada90b4f93b17ffaeed99", GitTreeState:"clean", BuildDate:"2020-07-15T16:10:45Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

3. What cloud provider are you using?
AWS (with SpotInst Elastigroups)

4. What commands did you run? What is the simplest way to reproduce this issue?

kops upgrade cluster $CLUSTER
kops update cluster --name $CLUSTER \
  --out "terraform" \
  --target "terraform" \
  --create-kube-config="false" \
  --yes
terraform plan

5. What happened after the commands executed?

Error: Missing required argument

  on kubernetes.tf line 629, in resource "spotinst_elastigroup_aws" "master-eu-west-1a-masters-****":
 629: resource "spotinst_elastigroup_aws" "master-eu-west-1a-masters-****" {

The argument "instance_types_spot" is required, but no definition was found.


Error: Missing required argument

  on kubernetes.tf line 629, in resource "spotinst_elastigroup_aws" "master-eu-west-1a-masters-****":
 629: resource "spotinst_elastigroup_aws" "master-eu-west-1a-masters-****" {

The argument "instance_types_ondemand" is required, but no definition was
found.


Error: Missing required argument

  on kubernetes.tf line 629, in resource "spotinst_elastigroup_aws" "master-eu-west-1a-masters-****":
 629: resource "spotinst_elastigroup_aws" "master-eu-west-1a-masters-****" {

The argument "orientation" is required, but no definition was found.


Error: Missing required argument

  on kubernetes.tf line 629, in resource "spotinst_elastigroup_aws" "master-eu-west-1a-masters-****":
 629: resource "spotinst_elastigroup_aws" "master-eu-west-1a-masters-****" {

The argument "fallback_to_ondemand" is required, but no definition was found.


Error: Missing required argument

  on kubernetes.tf line 629, in resource "spotinst_elastigroup_aws" "master-eu-west-1a-masters-****":
 629: resource "spotinst_elastigroup_aws" "master-eu-west-1a-masters-****" {

The argument "security_groups" is required, but no definition was found.


Error: Missing required argument

  on kubernetes.tf line 678, in resource "spotinst_elastigroup_aws" "nodes-****":
 678: resource "spotinst_elastigroup_aws" "nodes-****" {

The argument "security_groups" is required, but no definition was found.


Error: Missing required argument

  on kubernetes.tf line 678, in resource "spotinst_elastigroup_aws" "nodes-****":
 678: resource "spotinst_elastigroup_aws" "nodes-****" {

The argument "instance_types_ondemand" is required, but no definition was
found.


Error: Missing required argument

  on kubernetes.tf line 678, in resource "spotinst_elastigroup_aws" "nodes-****":
 678: resource "spotinst_elastigroup_aws" "nodes-****" {

The argument "instance_types_spot" is required, but no definition was found.


Error: Missing required argument

  on kubernetes.tf line 678, in resource "spotinst_elastigroup_aws" "nodes-****":
 678: resource "spotinst_elastigroup_aws" "nodes-****" {

The argument "fallback_to_ondemand" is required, but no definition was found.


Error: Missing required argument

  on kubernetes.tf line 678, in resource "spotinst_elastigroup_aws" "nodes-****":
 678: resource "spotinst_elastigroup_aws" "nodes-****" {

The argument "orientation" is required, but no definition was found.
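
Taken together, terraform reports the same five required arguments missing from both elastigroup resources. As a reference sketch (values illustrative, modeled on what the 1.17-generated config contains), the provider wants each resource to include:

```hcl
# The five arguments the spotinst provider requires but kops 1.18.0 omits.
# Values here are illustrative; kops normally derives them from the instance group spec.
resource "spotinst_elastigroup_aws" "example" {
  orientation             = "balanced"
  fallback_to_ondemand    = true
  instance_types_ondemand = "m5.large"
  instance_types_spot     = ["m5.large", "m5a.large"]
  security_groups         = ["sg-0123456789abcdef0"]
  # ... remaining arguments as generated by kops ...
}
```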

6. What did you expect to happen?

1.17.X generated terraform section

resource "spotinst_elastigroup_aws" "master-eu-west-1a-masters-****" {
  name                   = "master-eu-west-1a.masters.****"
  description            = "master-eu-west-1a.masters.****"
  product                = "Linux/UNIX (Amazon VPC)"
  region                 = "eu-west-1"
  subnet_ids             = ["${aws_subnet.eu-west-1a-****.id}"]
  elastic_load_balancers = ["${aws_elb.api-****.id}"]

  network_interface = {
    description                 = "eth0"
    device_index                = 0
    associate_public_ip_address = false
    delete_on_termination       = true
  }

  ebs_block_device = {
    device_name           = "/dev/xvda"
    volume_type           = "gp2"
    volume_size           = 64
    delete_on_termination = true
  }

  integration_kubernetes = {
    integration_mode   = "pod"
    cluster_identifier = "****"
  }

  tags = {
    key   = "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"
    value = "master-eu-west-1a"
  }

  tags = {
    key   = "KubernetesCluster"
    value = "****"
  }

  tags = {
    key   = "Name"
    value = "master-eu-west-1a.masters.****"
  }

  tags = {
    key   = "k8s.io/role/master"
    value = "1"
  }

  tags = {
    key   = "kops.k8s.io/instancegroup"
    value = "master-eu-west-1a"
  }

  min_size                   = 1
  max_size                   = 1
  desired_capacity           = 1
  capacity_unit              = "instance"
  spot_percentage            = 100
  orientation                = "balanced"
  fallback_to_ondemand       = true
  utilize_reserved_instances = true
  instance_types_ondemand    = "m5.large"
  instance_types_spot        = ["m5.large", "m5a.large"]
  enable_monitoring          = false
  image_id                   = "ami-09e51e3726d58e07a"
  health_check_type          = "K8S_NODE"
  security_groups            = ["${aws_security_group.masters-****.id}"]
  user_data                  = "${file("${path.module}/data/spotinst_elastigroup_aws_master-eu-west-1a.masters.****_user_data")}"
  iam_instance_profile       = "${aws_iam_instance_profile.masters-****.id}"
  key_name                   = "${aws_key_pair.kubernetes-****-d63f3e0308a183fdbfc20b1d48d3b7a0.id}"
}

1.18.X generated terraform section

resource "spotinst_elastigroup_aws" "master-eu-west-1a-masters-****" {
  description = "master-eu-west-1a.masters.****"
  ebs_block_device {
    delete_on_termination = true
    device_name           = "/dev/sda1"
    volume_size           = 64
    volume_type           = "gp2"
  }
  elastic_load_balancers = [aws_elb.api-****.id]
  integration_kubernetes {
    cluster_identifier = "****"
    integration_mode   = "pod"
  }
  name = "master-eu-west-1a.masters.****"
  network_interface {
    associate_public_ip_address = false
    delete_on_termination       = true
    description                 = "eth0"
    device_index                = 0
  }
  product    = "Linux/UNIX (Amazon VPC)"
  region     = "eu-west-1"
  subnet_ids = [aws_subnet.eu-west-1a-****.id]
  tags {
    key   = "Name"
    value = "master-eu-west-1a.masters.****"
  }
  tags {
    key   = "KubernetesCluster"
    value = "****"
  }
  tags {
    key   = "kubernetes.io/cluster/****"
    value = "owned"
  }
  tags {
    key   = "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"
    value = "master-eu-west-1a"
  }
  tags {
    key   = "k8s.io/role/master"
    value = "1"
  }
  tags {
    key   = "kops.k8s.io/instancegroup"
    value = "master-eu-west-1a"
  }
}

The config generated by kops 1.18 is invalid due to required arguments missing from the
spotinst_elastigroup_aws resources. I would expect the generated manifests to be valid so that they can be deployed, or for there to be some form of documentation acknowledging that kops v1.18.0 with the terraform target doesn't currently work with SpotInst on AWS.

9. Anything else do we need to know?

Yaml for the instance group

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-01-06T09:23:14Z"
  generation: 6
  labels:
    kops.k8s.io/cluster: ****
    spotinst.io/spot-percentage: "100"
  name: master-eu-west-1a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200716
  machineType: m5.large,m5a.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-eu-west-1a
  role: Master
  subnets:
  - eu-west-1a

Terraform v0.12.24

@rifelpet
Member

rifelpet commented Aug 4, 2020

Hi @scottrangerio, thanks for the report. I've identified the bug, which relates to the new support for Terraform 0.12. Until the fix is released in a kops 1.18.x patch, can you try the JSON output feature flag? The JSON format still works with Terraform 0.12 but I believe isn't susceptible to this bug:

export KOPS_FEATURE_FLAGS=+TerraformJSON,-Terraform-0.12
kops update cluster --target terraform ...
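
Because the JSON output is plain Terraform JSON, a generated kubernetes.tf.json can be sanity-checked for the missing arguments before running terraform plan. A minimal sketch in Python (the resource and argument names come from the errors above; the helper and toy document are illustrative):

```python
import json

# Arguments the spotinst provider reported as required (from the errors above).
REQUIRED = {
    "orientation",
    "fallback_to_ondemand",
    "instance_types_ondemand",
    "instance_types_spot",
    "security_groups",
}

def missing_args(tf_json):
    """Map each spotinst_elastigroup_aws resource to its sorted missing required args."""
    groups = tf_json.get("resource", {}).get("spotinst_elastigroup_aws", {})
    return {
        name: sorted(REQUIRED - set(body))
        for name, body in groups.items()
        if REQUIRED - set(body)
    }

# Toy document standing in for json.load(open("kubernetes.tf.json")):
doc = json.loads(
    '{"resource": {"spotinst_elastigroup_aws": {"nodes": {"orientation": "balanced"}}}}'
)
print(missing_args(doc))
```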

Also, once I have a fix in place, would you be able to test a build from master? You wouldn't need to actually apply the terraform changes, just confirm that the new fields show up correctly in kubernetes.tf. I don't have Spotinst access, so it can be difficult for me to test this area of the code.

/assign

@scottambroseio
Author

@rifelpet Hey, that's not a problem. I'll give the json flags a go, thanks!

Yeah, I'd be more than happy to test a master build and report back on the tf that's generated, just drop me a mention as and when 👍

@rifelpet
Member

rifelpet commented Aug 4, 2020

@scottrangerio I've got a fix merged in master and cherry-picked to 1.18, so it should land in kops 1.18.1 (ETA currently unknown). Are you able to build kops yourself? If not, can you run it on Linux? The prow jobs build a linux kops binary that you can pull down and run, but if you're on a mac you'll need to build it yourself.

@rifelpet
Member

rifelpet commented Aug 4, 2020

@scottambroseio
Author

👋 Thanks for this and the prompt action! I'm running Windows with WSL, so linux binaries are fine.

I can confirm that the generated tf is now valid; terraform plan succeeds without any missing-argument errors:

resource "spotinst_elastigroup_aws" "master-eu-west-1a-masters-****" {
  capacity_unit    = "instance"
  description      = "master-eu-west-1a.masters.****"
  desired_capacity = 1
  ebs_block_device {
    delete_on_termination = true
    device_name           = "/dev/sda1"
    volume_size           = 64
    volume_type           = "gp2"
  }
  elastic_load_balancers  = [aws_elb.api-****.id]
  enable_monitoring       = false
  fallback_to_ondemand    = true
  health_check_type       = "K8S_NODE"
  iam_instance_profile    = aws_iam_instance_profile.masters-****.id
  image_id                = "ami-0127d62154efde733"
  instance_types_ondemand = "m5.large"
  instance_types_spot     = ["m5.large", "m5a.large"]
  integration_kubernetes {
    cluster_identifier = "****"
    integration_mode   = "pod"
  }
  key_name = aws_key_pair.kubernetes-****.id
  max_size = 1
  min_size = 1
  name     = "master-eu-west-1a.masters.****"
  network_interface {
    associate_public_ip_address = false
    delete_on_termination       = true
    description                 = "eth0"
    device_index                = 0
  }
  orientation     = "balanced"
  product         = "Linux/UNIX (Amazon VPC)"
  region          = "eu-west-1"
  security_groups = [aws_security_group.masters-****.id]
  spot_percentage = 100
  subnet_ids      = [aws_subnet.eu-west-1a-****.id]
  tags {
    key   = "Name"
    value = "master-eu-west-1a.masters.****"
  }
  tags {
    key   = "KubernetesCluster"
    value = "****"
  }
  tags {
    key   = "kubernetes.io/cluster/****"
    value = "owned"
  }
  tags {
    key   = "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup"
    value = "master-eu-west-1a"
  }
  tags {
    key   = "k8s.io/role/master"
    value = "1"
  }
  tags {
    key   = "kops.k8s.io/instancegroup"
    value = "master-eu-west-1a"
  }
  user_data                  = file("${path.module}/data/spotinst_elastigroup_aws_master-eu-west-1a.masters.****_user_data")
  utilize_reserved_instances = true
}

Thanks 👍 👍 👍
