
Support old version HarvesterConfig #1132

Merged (1 commit, Jun 2, 2023)

Conversation

@futuretea (Contributor) commented on May 27, 2023

Issue: #1131

Problem

The previous PR forced users to switch to the new fields, which caused their guest clusters to be rebuilt.

Solution

  1. Mark the old fields as conflicting with the new fields
  2. Make the new fields optional and conflicting with the old fields
  3. Require at least one of the old or new fields to be configured
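As a hypothetical sketch of what this means for a config (field names taken from the configs used in the testing steps below, not from authoritative schema docs), either style alone plans cleanly, while mixing them is rejected:

```hcl
# Valid: old-style fields only -- pre-existing configs keep working.
harvester_config {
  image_name   = "harvester-public/focal-server"
  disk_size    = "40"
  network_name = "harvester-public/mgmt-vlan1"
}

# Valid: new-style fields only (disk_info / network_info hold JSON strings).
harvester_config {
  disk_info    = "{\"disks\": [{\"imageName\": \"harvester-public/focal-server\", \"size\": 40, \"bootOrder\": 1}]}"
  network_info = "{\"interfaces\": [{\"networkName\": \"harvester-public/mgmt-vlan1\"}]}"
}

# Invalid: mixing the two styles, e.g. image_name together with disk_info,
# now fails with a conflict error instead of silently rebuilding the cluster.
```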

Testing

Engineering Testing

Manual Testing

  1. Set up a Harvester v1.1.2 cluster; refer to https://docs.harvesterhci.io/v1.1/install/iso-install
  2. Add a cloud image harvester-public/focal-server and a VLAN network harvester-public/mgmt-vlan1 to Harvester using terraform-provider-harvester:
terraform {
  required_version = ">= 0.13"
  required_providers {
    harvester = {
      source  = "harvester/harvester"
      version = "0.6.2"
    }
  }
}

provider "harvester" {
 kubeconfig = "<the kubeconfig file path of the harvester cluster>"
}

resource "harvester_image" "focal-server" {
  name      = "focal-server"
  namespace = "harvester-public"

  display_name = "focal-server-cloudimg-amd64.img"
  source_type  = "download"
  url          = "https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img"
}

data "harvester_clusternetwork" "mgmt" {
  name = "mgmt"
}

resource "harvester_network" "mgmt-vlan1" {
  name      = "mgmt-vlan1"
  namespace = "harvester-public"

  vlan_id = 1

  route_mode           = "auto"
  route_dhcp_server_ip = ""

  cluster_network_name = data.harvester_clusternetwork.mgmt.name
}
  3. Set up a Rancher v2.6.11 server
  4. Import the Harvester cluster into Rancher via Virtualization Management, using foo-harvester as the cluster name
  5. Create a guest RKE2 cluster using the following test config:
terraform {
  required_providers {
    rancher2 = {
      source = "rancher/rancher2"
      version = "1.25.0"
    }
  }
}


provider "rancher2" {
  api_url    = "<change me>"
  access_key = "<change me>"
  secret_key = "<change me>"
  insecure = true
}


data "rancher2_cluster_v2" "foo-harvester" {
  name = "foo-harvester"
}

# Create a new Cloud Credential for an imported Harvester cluster
resource "rancher2_cloud_credential" "foo-harvester" {
  name = "foo-harvester"
  harvester_credential_config {
    cluster_id = data.rancher2_cluster_v2.foo-harvester.cluster_v1_id
    cluster_type = "imported"
    kubeconfig_content = data.rancher2_cluster_v2.foo-harvester.kube_config
  }
}

# Create a new rancher2 machine config v2 using harvester node_driver
resource "rancher2_machine_config_v2" "foo-harvester-v2" {
  generate_name = "foo-harvester-v2"
  harvester_config {
    vm_namespace = "default"
    cpu_count = "2"
    memory_size = "4"
    disk_size = "40"
    image_name = "harvester-public/focal-server"
    network_name = "harvester-public/mgmt-vlan1"
    ssh_user = "ubuntu"
    user_data = "I2Nsb3VkLWNvbmZpZwpwYWNrYWdlX3VwZGF0ZTogdHJ1ZQpwYWNrYWdlczoKICAtIHFlbXUtZ3Vlc3QtYWdlbnQKICAtIGlwdGFibGVzCnJ1bmNtZDoKICAtIC0gc3lzdGVtY3RsCiAgICAtIGVuYWJsZQogICAgLSAnLS1ub3cnCiAgICAtIHFlbXUtZ3Vlc3QtYWdlbnQuc2VydmljZQo="
  }
}

resource "rancher2_cluster_v2" "foo-harvester-v2" {
  name = "foo-harvester-v2"
  kubernetes_version = "v1.24.8+rke2r1"
  rke_config {
    machine_pools {
      name = "pool1"
      cloud_credential_secret_name = rancher2_cloud_credential.foo-harvester.id
      control_plane_role = true
      etcd_role = true
      worker_role = true
      quantity = 1
      machine_config {
        kind = rancher2_machine_config_v2.foo-harvester-v2.kind
        name = rancher2_machine_config_v2.foo-harvester-v2.name
      }
    }
    machine_selector_config {
      config = {
        cloud-provider-name = ""
      }
    }
    machine_global_config = <<EOF
cni: "calico"
disable-kube-proxy: false
etcd-expose-metrics: false
EOF
    upgrade_strategy {
      control_plane_concurrency = "10%"
      worker_concurrency = "10%"
    }
    etcd {
      snapshot_schedule_cron = "0 */5 * * *"
      snapshot_retention = 5
    }
    chart_values = ""
  }
}
terraform init
terraform apply
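The user_data value in the machine config above is base64-encoded cloud-init. Decoding it (the string below is copied verbatim from the config; `base64 -d` assumes GNU coreutils) shows what runs on first boot:

```shell
# Decode the user_data from the machine config to inspect the cloud-init payload.
USER_DATA="I2Nsb3VkLWNvbmZpZwpwYWNrYWdlX3VwZGF0ZTogdHJ1ZQpwYWNrYWdlczoKICAtIHFlbXUtZ3Vlc3QtYWdlbnQKICAtIGlwdGFibGVzCnJ1bmNtZDoKICAtIC0gc3lzdGVtY3RsCiAgICAtIGVuYWJsZQogICAgLSAnLS1ub3cnCiAgICAtIHFlbXUtZ3Vlc3QtYWdlbnQuc2VydmljZQo="
echo "$USER_DATA" | base64 -d
```

The decoded payload is a `#cloud-config` document that installs iptables and qemu-guest-agent and enables the qemu-guest-agent service.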
  6. Build terraform-provider-rancher2 from the PR branch:
make
  7. Install the custom provider as version 0.0.0-dev:
PROVIDER="rancher2"
VERSION="0.0.0-dev"
OS_PLATFORM=$(uname -sp | tr '[:upper:] ' '[:lower:]_' | sed 's/x86_64/amd64/' | sed 's/i386/amd64/' | sed 's/arm/arm64/')
PROVIDERS_DIR=$HOME/.terraform.d/plugins/terraform.local/local/${PROVIDER}
PROVIDER_DIR=${PROVIDERS_DIR}/${VERSION}/${OS_PLATFORM}
mkdir -p ${PROVIDER_DIR}
cp bin/terraform-provider-${PROVIDER} ${PROVIDER_DIR}/terraform-provider-${PROVIDER}_v${VERSION}
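The directory layout mirrors the source address used in the next step (terraform.local/local/rancher2). As a sanity-check sketch, this recomputes the plugin path the Terraform CLI will search, using the same commands as the install script but without touching any files:

```shell
# Recompute the filesystem-mirror path for the 0.0.0-dev build.
PROVIDER="rancher2"
VERSION="0.0.0-dev"
OS_PLATFORM=$(uname -sp | tr '[:upper:] ' '[:lower:]_' | sed 's/x86_64/amd64/' | sed 's/i386/amd64/' | sed 's/arm/arm64/')
PLUGIN_DIR="$HOME/.terraform.d/plugins/terraform.local/local/${PROVIDER}/${VERSION}/${OS_PLATFORM}"
echo "$PLUGIN_DIR"
```

If the binary is not at this exact path, `terraform init -upgrade` in the next step will fail to find the dev provider.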
  8. Change the provider version to 0.0.0-dev:
terraform {
  required_providers {
    rancher2 = {
      source = "terraform.local/local/rancher2"
      version = "0.0.0-dev"
    }
  }
}
terraform init -upgrade
  9. Run terraform plan:
data.rancher2_cluster.foo-harvester: Reading...
rancher2_machine_config_v2.foo-harvester-v2: Refreshing state... [id=fleet-default/nc-foo-harvester-v2-q2l6d]
data.rancher2_cluster.foo-harvester: Read complete after 1s [id=local]
rancher2_cloud_credential.foo-harvester: Refreshing state... [id=cattle-global-data:cc-fln6f]
rancher2_cluster_v2.foo-harvester-v2: Refreshing state... [id=fleet-default/foo-harvester-v2]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # rancher2_cloud_credential.foo-harvester will be updated in-place
  ~ resource "rancher2_cloud_credential" "foo-harvester" {
        id          = "cattle-global-data:cc-fln6f"
        name        = "foo-harvester"
        # (3 unchanged attributes hidden)

      ~ harvester_credential_config {
          ~ kubeconfig_content = (sensitive value)
            # (2 unchanged attributes hidden)
        }
    }

  # rancher2_machine_config_v2.foo-harvester-v2 will be updated in-place
  ~ resource "rancher2_machine_config_v2" "foo-harvester-v2" {
        id               = "fleet-default/nc-foo-harvester-v2-q2l6d"
        name             = "nc-foo-harvester-v2-q2l6d"
        # (6 unchanged attributes hidden)

      ~ harvester_config {
          - disk_bus      = "virtio" -> null
          - network_model = "virtio" -> null
            # (8 unchanged attributes hidden)
        }
    }

Plan: 0 to add, 2 to change, 0 to destroy.
╷
│ Warning: "harvester_config.0.image_name": [DEPRECATED] Use disk_info instead
│
│   with rancher2_machine_config_v2.foo-harvester-v2,
│   on main.tf line 34, in resource "rancher2_machine_config_v2" "foo-harvester-v2":
│   34: resource "rancher2_machine_config_v2" "foo-harvester-v2" {
│
│ (and one more similar warning elsewhere)
╵
╷
│ Warning: "harvester_config.0.disk_size": [DEPRECATED] Use disk_info instead
│
│   with rancher2_machine_config_v2.foo-harvester-v2,
│   on main.tf line 34, in resource "rancher2_machine_config_v2" "foo-harvester-v2":
│   34: resource "rancher2_machine_config_v2" "foo-harvester-v2" {
│
│ (and one more similar warning elsewhere)
╵
╷
│ Warning: "harvester_config.0.network_name": [DEPRECATED] Use network_info instead
│
│   with rancher2_machine_config_v2.foo-harvester-v2,
│   on main.tf line 34, in resource "rancher2_machine_config_v2" "foo-harvester-v2":
│   34: resource "rancher2_machine_config_v2" "foo-harvester-v2" {
│
│ (and one more similar warning elsewhere)
╵

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

Note that the default values of disk_bus and network_model changed from virtio to the empty string; the missing disk_bus and network_model fields must be added to avoid unnecessary updates.

  10. Add the missing fields disk_bus and network_model:
resource "rancher2_machine_config_v2" "foo-harvester-v2" {
  generate_name = "foo-harvester-v2"
  harvester_config {
    vm_namespace = "default"
    cpu_count = "2"
    memory_size = "4"
    disk_size = "40"
    image_name = "harvester-public/focal-server"
    network_name = "harvester-public/mgmt-vlan1"
    disk_bus = "virtio"
    network_model = "virtio"
    ssh_user = "ubuntu"
    user_data = "I2Nsb3VkLWNvbmZpZwpwYWNrYWdlX3VwZGF0ZTogdHJ1ZQpwYWNrYWdlczoKICAtIHFlbXUtZ3Vlc3QtYWdlbnQKICAtIGlwdGFibGVzCnJ1bmNtZDoKICAtIC0gc3lzdGVtY3RsCiAgICAtIGVuYWJsZQogICAgLSAnLS1ub3cnCiAgICAtIHFlbXUtZ3Vlc3QtYWdlbnQuc2VydmljZQo="
  }
}
  11. Check again:
terraform plan
  12. The RKE2 cluster should not be rebuilt after terraform apply:
terraform apply
data.rancher2_cluster.foo-harvester: Reading...
rancher2_machine_config_v2.foo-harvester-v2: Refreshing state... [id=fleet-default/nc-foo-harvester-v2-q2l6d]
data.rancher2_cluster.foo-harvester: Read complete after 2s [id=local]
rancher2_cloud_credential.foo-harvester: Refreshing state... [id=cattle-global-data:cc-fln6f]
rancher2_cluster_v2.foo-harvester-v2: Refreshing state... [id=fleet-default/foo-harvester-v2]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # rancher2_cloud_credential.foo-harvester will be updated in-place
  ~ resource "rancher2_cloud_credential" "foo-harvester" {
        id          = "cattle-global-data:cc-fln6f"
        name        = "foo-harvester"
        # (3 unchanged attributes hidden)

      ~ harvester_credential_config {
          ~ kubeconfig_content = (sensitive value)
            # (2 unchanged attributes hidden)
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.
╷
│ Warning: "harvester_config.0.disk_bus": [DEPRECATED] Use disk_info instead
│
│   with rancher2_machine_config_v2.foo-harvester-v2,
│   on main.tf line 34, in resource "rancher2_machine_config_v2" "foo-harvester-v2":
│   34: resource "rancher2_machine_config_v2" "foo-harvester-v2" {
│
│ (and one more similar warning elsewhere)
╵
╷
│ Warning: "harvester_config.0.image_name": [DEPRECATED] Use disk_info instead
│
│   with rancher2_machine_config_v2.foo-harvester-v2,
│   on main.tf line 34, in resource "rancher2_machine_config_v2" "foo-harvester-v2":
│   34: resource "rancher2_machine_config_v2" "foo-harvester-v2" {
│
│ (and one more similar warning elsewhere)
╵
╷
│ Warning: "harvester_config.0.network_name": [DEPRECATED] Use network_info instead
│
│   with rancher2_machine_config_v2.foo-harvester-v2,
│   on main.tf line 34, in resource "rancher2_machine_config_v2" "foo-harvester-v2":
│   34: resource "rancher2_machine_config_v2" "foo-harvester-v2" {
│
│ (and one more similar warning elsewhere)
╵
╷
│ Warning: "harvester_config.0.disk_size": [DEPRECATED] Use disk_info instead
│
│   with rancher2_machine_config_v2.foo-harvester-v2,
│   on main.tf line 34, in resource "rancher2_machine_config_v2" "foo-harvester-v2":
│   34: resource "rancher2_machine_config_v2" "foo-harvester-v2" {
│
│ (and one more similar warning elsewhere)
╵
╷
│ Warning: "harvester_config.0.network_model": [DEPRECATED] Use network_info instead
│
│   with rancher2_machine_config_v2.foo-harvester-v2,
│   on main.tf line 34, in resource "rancher2_machine_config_v2" "foo-harvester-v2":
│   34: resource "rancher2_machine_config_v2" "foo-harvester-v2" {
│
│ (and one more similar warning elsewhere)
╵

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

rancher2_cloud_credential.foo-harvester: Modifying... [id=cattle-global-data:cc-fln6f]
rancher2_cloud_credential.foo-harvester: Modifications complete after 2s [id=cattle-global-data:cc-fln6f]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
  13. Migrate the fields to the new format:
resource "rancher2_machine_config_v2" "foo-harvester-v2" {
  generate_name = "foo-harvester-v2"
  harvester_config {
    vm_namespace = "default"
    cpu_count = "2"
    memory_size = "4"
    disk_info = <<EOF
    {
        "disks": [{
            "imageName": "harvester-public/focal-server",
            "size": 40,
            "bootOrder": 1
        }]
    }
    EOF
    network_info = <<EOF
    {
        "interfaces": [{
            "networkName": "harvester-public/mgmt-vlan1"
        }]
    }
    EOF
    ssh_user = "ubuntu"
    user_data = "I2Nsb3VkLWNvbmZpZwpwYWNrYWdlX3VwZGF0ZTogdHJ1ZQpwYWNrYWdlczoKICAtIHFlbXUtZ3Vlc3QtYWdlbnQKICAtIGlwdGFibGVzCnJ1bmNtZDoKICAtIC0gc3lzdGVtY3RsCiAgICAtIGVuYWJsZQogICAgLSAnLS1ub3cnCiAgICAtIHFlbXUtZ3Vlc3QtYWdlbnQuc2VydmljZQo="
  }
}
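The disk_info and network_info values are JSON strings, so a malformed heredoc only surfaces as an error at apply time. As a quick pre-apply sketch (any JSON parser works; `python3 -m json.tool` is used here as an assumption about the local toolchain), the heredoc body can be validated on its own:

```shell
# Validate the disk_info JSON from the migrated config before applying.
cat <<'EOF' | python3 -m json.tool
{
    "disks": [{
        "imageName": "harvester-public/focal-server",
        "size": 40,
        "bootOrder": 1
    }]
}
EOF
```

The command exits non-zero and prints the parse error if the JSON is invalid; the network_info heredoc can be checked the same way.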
  14. The RKE2 cluster should be rebuilt after terraform apply:
terraform apply

Automated Testing

QA Testing Considerations

Regression Considerations

Signed-off-by: futuretea <Hang.Yu@suse.com>
@futuretea futuretea marked this pull request as ready for review May 27, 2023 09:55
@futuretea futuretea marked this pull request as draft May 27, 2023 12:03
@futuretea futuretea marked this pull request as ready for review May 27, 2023 14:37
@guangbochen left a comment
LGTM, please help to create a related issue on the Harvester side and ask our QA to do an early validation, thanks.

@lanfon72 (Member)

Tested on harvester/harvester#3997 (comment)

@Sahota1225 Sahota1225 requested a review from a team June 1, 2023 15:55
@a-blender (Contributor) left a comment

If this works in testing, LGTM

@futuretea (Contributor, Author)

Hi @a-blender, thanks for your review. Could you cut a new release that includes this PR? It is needed by a customer. Thanks!
