
Fix Harvester disk_size default value #1149

Merged · merged 1 commit into rancher:master on Jun 21, 2023

Conversation

@futuretea (Contributor) commented on Jun 15, 2023

Issue:

#1150
harvester/harvester#4097

Problem

The disk_size field is an int in the Harvester node driver, so the driver converts it to 0 when the user leaves it empty.

Refer to https://github.com/harvester/docker-machine-driver-harvester/blob/a43dbc6d7d0091a955813ae18fc907de12b8e316/harvester/flags.go#L56

Solution

Set the Harvester disk_size default value to "0" in terraform-provider-rancher2, so the node driver receives a valid integer even when the user leaves disk_size unset.

Testing

Engineering Testing

Manual Testing

  1. Set up a Harvester v1.1.2 cluster; refer to https://docs.harvesterhci.io/v1.1/install/iso-install
  2. Add a cloud image harvester-public/focal-server and a VLAN network harvester-public/mgmt-vlan1 to the Harvester cluster using terraform-provider-harvester:
terraform {
  required_version = ">= 0.13"
  required_providers {
    harvester = {
      source  = "harvester/harvester"
      version = "0.6.2"
    }
  }
}

provider "harvester" {
  kubeconfig = "<the kubeconfig file path of the harvester cluster>"
}

resource "harvester_image" "focal-server" {
  name      = "focal-server"
  namespace = "harvester-public"

  display_name = "focal-server-cloudimg-amd64.img"
  source_type  = "download"
  url          = "https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img"
}

data "harvester_clusternetwork" "mgmt" {
  name = "mgmt"
}

resource "harvester_network" "mgmt-vlan1" {
  name      = "mgmt-vlan1"
  namespace = "harvester-public"

  vlan_id = 1

  route_mode           = "auto"
  route_dhcp_server_ip = ""

  cluster_network_name = data.harvester_clusternetwork.mgmt.name
}
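
Assuming the kubeconfig placeholder above is filled in, this configuration can be applied in the usual way:

terraform init
terraform apply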
  3. Set up a Rancher 2.7.4 server (one possible install method is shown below)
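The steps here do not say how the Rancher server was installed; one common option (an assumption, not from the original steps) is the single-node Docker install:

docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:v2.7.4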
  4. Import the Harvester cluster into Rancher via Virtualization Management, using foo-harvester as the cluster name
  5. Build terraform-provider-rancher2 from the PR branch:
make
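
If the PR branch is not already checked out locally, one way to fetch and build it is through the pull request ref (the local branch name pr-1149 is illustrative):

git clone https://github.com/rancher/terraform-provider-rancher2.git
cd terraform-provider-rancher2
git fetch origin pull/1149/head:pr-1149
git checkout pr-1149
make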
  6. Install the custom provider as version 0.0.0-dev:
PROVIDER="rancher2"
VERSION="0.0.0-dev"
OS_PLATFORM=$(uname -sp | tr '[:upper:] ' '[:lower:]_' | sed 's/x86_64/amd64/' | sed 's/i386/amd64/' | sed 's/arm/arm64/')
PROVIDERS_DIR=$HOME/.terraform.d/plugins/terraform.local/local/${PROVIDER}
PROVIDER_DIR=${PROVIDERS_DIR}/${VERSION}/${OS_PLATFORM}
mkdir -p ${PROVIDER_DIR}
cp bin/terraform-provider-${PROVIDER} ${PROVIDER_DIR}/terraform-provider-${PROVIDER}_v${VERSION}
  7. Create a guest RKE2 cluster using the following test config:
terraform {
  required_providers {
    rancher2 = {
      source = "terraform.local/local/rancher2"
      version = "0.0.0-dev"
    }
  }
}


provider "rancher2" {
  api_url    = "<change me>"
  access_key = "<change me>"
  secret_key = "<change me>"
  insecure = true
}


data "rancher2_cluster_v2" "foo-harvester" {
  name = "foo-harvester"
}

# Create a new Cloud Credential for an imported Harvester cluster
resource "rancher2_cloud_credential" "foo-harvester" {
  name = "foo-harvester"
  harvester_credential_config {
    cluster_id = data.rancher2_cluster_v2.foo-harvester.cluster_v1_id
    cluster_type = "imported"
    kubeconfig_content = data.rancher2_cluster_v2.foo-harvester.kube_config
  }
}

# Create a new rancher2 machine config v2 using harvester node_driver
resource "rancher2_machine_config_v2" "foo-harvester-v2" {
  generate_name = "foo-harvester-v2"
  harvester_config {
    vm_namespace = "default"
    cpu_count = "2"
    memory_size = "4"
    disk_info = <<EOF
    {
        "disks": [{
            "imageName": "harvester-public/focal-server",
            "size": 40,
            "bootOrder": 1
        }]
    }
    EOF
    network_info = <<EOF
    {
        "interfaces": [{
            "networkName": "harvester-public/mgmt-vlan1"
        }]
    }
    EOF
    ssh_user = "ubuntu"
    user_data = "I2Nsb3VkLWNvbmZpZwpwYWNrYWdlX3VwZGF0ZTogdHJ1ZQpwYWNrYWdlczoKICAtIHFlbXUtZ3Vlc3QtYWdlbnQKICAtIGlwdGFibGVzCnJ1bmNtZDoKICAtIC0gc3lzdGVtY3RsCiAgICAtIGVuYWJsZQogICAgLSAnLS1ub3cnCiAgICAtIHFlbXUtZ3Vlc3QtYWdlbnQuc2VydmljZQo="
  }
}

resource "rancher2_cluster_v2" "foo-harvester-v2" {
  name = "foo-harvester-v2"
  kubernetes_version = "v1.25.9+rke2r1"
  rke_config {
    machine_pools {
      name = "pool1"
      cloud_credential_secret_name = rancher2_cloud_credential.foo-harvester.id
      control_plane_role = true
      etcd_role = true
      worker_role = true
      quantity = 1
      machine_config {
        kind = rancher2_machine_config_v2.foo-harvester-v2.kind
        name = rancher2_machine_config_v2.foo-harvester-v2.name
      }
    }
    machine_selector_config {
      config = {
        cloud-provider-name = ""
      }
    }
    machine_global_config = <<EOF
cni: "calico"
disable-kube-proxy: false
etcd-expose-metrics: false
EOF
    upgrade_strategy {
      control_plane_concurrency = "10%"
      worker_concurrency = "10%"
    }
    etcd {
      snapshot_schedule_cron = "0 */5 * * *"
      snapshot_retention = 5
    }
    chart_values = ""
  }
}
terraform init
terraform apply
  8. Check the plan again:
terraform apply
  9. Run terraform plan
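
One way to make this final check explicit (not part of the original steps) is Terraform's -detailed-exitcode flag, which exits 0 when the plan is empty and 2 when changes are pending:

terraform plan -detailed-exitcode && echo "no pending changes"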

Automated Testing

QA Testing Considerations

Regression Considerations

Signed-off-by: futuretea <Hang.Yu@suse.com>
@lanfon72 (Member) commented:

Tested on harvester/harvester#4097 (comment)

@guangbochen left a comment:

LGTM, thanks.

@Sahota1225 requested a review from a team on June 20, 2023
@futuretea (Contributor, Author) commented:

Hi @a-blender, thanks for your review. Could you help merge this PR and the other PR, #1152, for the release/v3.0 branch, and plan to cut a v3.0.2-rc1 release? After that, the Harvester QA team can verify v3.0.2-rc1; if those tests pass, we can cut the v3.0.2 release for the customer. Thanks!

@snasovich snasovich merged commit ad8e96e into rancher:master Jun 21, 2023
1 check passed