[BUG] The harvester_config of 1.25.0 and 3.0.0 are incompatible #1131

Closed
futuretea opened this issue May 27, 2023 · 2 comments
@futuretea (Contributor)

Describe the bug

The harvester_config of 1.25.0 and 3.0.0 are incompatible

To Reproduce

  1. Create a guest cluster using terraform rancher2 provider 1.25.0
  2. Upgrade rancher2 provider from 1.25.0 to 3.0.0
  3. terraform plan
❯ terraform plan
╷
│ Error: Missing required argument
│
│   on main.tf line 37, in resource "rancher2_machine_config_v2" "foo-harvester-v2":
│   37:   harvester_config {
│
│ The argument "disk_info" is required, but no definition was found.
╵
╷
│ Error: Missing required argument
│
│   on main.tf line 37, in resource "rancher2_machine_config_v2" "foo-harvester-v2":
│   37:   harvester_config {
│
│ The argument "network_info" is required, but no definition was found.
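
For reference, a minimal pre-3.0.0 style `harvester_config` of the kind that triggers these errors might look like the sketch below. The field values (image, network, sizes) are illustrative, not taken from the original report; only the resource name comes from the error output above.

```hcl
# Old (rancher2 provider 1.25.0) format: flat disk and network fields,
# no disk_info / network_info blocks.
resource "rancher2_machine_config_v2" "foo-harvester-v2" {
  generate_name = "foo-harvester-v2"
  harvester_config {
    vm_namespace  = "default"
    cpu_count     = "2"
    memory_size   = "4"
    disk_size     = "40"          # flat disk fields, replaced by disk_info in 3.0.0
    disk_bus      = "virtio"
    image_name    = "default/image-rand"
    network_name  = "default/vlan1"   # flat network fields, replaced by network_info
    network_model = "virtio"
    ssh_user      = "ubuntu"
  }
}
```

Running `terraform plan` against this config with provider 3.0.0 produces the "Missing required argument" errors shown above.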

If the user wants to apply, they have to migrate the configuration to the new format and then apply. At that point the Harvester guest cluster will be re-provisioned by Rancher because the machine_config fields changed.

Actual Result

The user is forced to rewrite the harvester_config fields in the new format.

Expected Result

The new fields disk_info and network_info should not be force-required.
Users should still be able to use the old fields.
terraform apply should not cause the cluster to be re-provisioned.
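
For comparison, migrating to the 3.0.0 schema means replacing the old flat disk and network fields with JSON-encoded disk_info and network_info arguments. A hedged sketch of the migrated config (values illustrative, not from the report) could look like:

```hcl
# New (rancher2 provider 3.0.0) format: disks and network interfaces are
# expressed as JSON strings via jsonencode().
resource "rancher2_machine_config_v2" "foo-harvester-v2" {
  generate_name = "foo-harvester-v2"
  harvester_config {
    vm_namespace = "default"
    cpu_count    = "2"
    memory_size  = "4"
    ssh_user     = "ubuntu"

    disk_info = jsonencode({
      disks = [{
        imageName = "default/image-rand"  # was image_name
        size      = 40                    # was disk_size
        bootOrder = 1
      }]
    })

    network_info = jsonencode({
      interfaces = [{
        networkName = "default/vlan1"     # was network_name
      }]
    })
  }
}
```

Because these fields differ structurally from the 1.25.0 ones, the resulting machine_config diff is what triggers the re-provisioning described above.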


@noahgildersleeve

Used terraform rancher2 provider v1.25.0 to provision on a Harvester v1.1.1 server with Rancher v2.6.11. Upgraded Harvester v1.1.1 -> v1.1.2 and Rancher v2.6.11 -> v2.7.4, then upgraded the provider to v3.0.1-rc1. Was able to update plugins and re-apply terraform. It looks good.

I had some slight issues in my local environment. The terraform init --upgrade worked fine and I was able to run terraform apply fine. However, my RKE clusters had some issues when spinning down nodes and replacing them with new nodes in the cluster. I don't think that's related to terraform, though. I had some resource issues in my virtual environment and now it's a little hung up. That's an issue with Harvester being in a weird state, though; I'm pretty sure it is due to the environment being a single virtualized node.

The VMs are stuck in deleting status

@noahgildersleeve

Closing as verified
