opennebula_service_template: Improve idempotence #468

Closed
sk4zuzu opened this issue Jul 13, 2023 · 2 comments · Fixed by #508

Comments


sk4zuzu commented Jul 13, 2023

Description

⚠️ I admit it's debatable whether this "bug" should be handled in the provider at all, but please bear with me... 🤗

  1. Let's consider this exact OneFlow template -> https://marketplace.opennebula.io/appliance/7c82d610-73f1-47d1-a85a-d799e00c631e
  "roles": [
    {
      "name": "vnf",
      "cardinality": 1,
      "min_vms": 1,
      "vm_template_contents": "...",
      "cooldown": 120,
      "elasticity_policies": [],
      "scheduled_policies": []
    },
  2. Running terraform apply twice causes:
  # opennebula_service_template.oneke must be replaced
-/+ resource "opennebula_service_template" "oneke" {
      ~ gid         = 0 -> (known after apply)
      ~ id          = "2" -> (known after apply)
        name        = "oneke"
      ~ template    = jsonencode(
          ~ {
              ~ TEMPLATE = {
                  ~ BODY = {
                      + description       = ""
                        name              = "OneKE 1.27"
                      ~ roles             = [
                          ~ {
                              + elasticity_policies  = []
                                name                 = "vnf"
                              + scheduled_policies   = []
                              - vm_template          = 0
                                # (4 unchanged attributes hidden)
                            },
                          ~ {
                              + elasticity_policies  = []
                                name                 = "master"
                              + scheduled_policies   = []
                              - vm_template          = 0
                                # (5 unchanged attributes hidden)
                            },
                          ~ {
                              + elasticity_policies  = []
                                name                 = "worker"
                              + scheduled_policies   = []
                              - vm_template          = 0
                                # (4 unchanged attributes hidden)
                            },
                          ~ {
                              + elasticity_policies  = []
                                name                 = "storage"
                              + scheduled_policies   = []
                              - vm_template          = 0
                                # (5 unchanged attributes hidden)
                            },
                        ]
                        # (4 unchanged attributes hidden)
                    }
                }
            } # forces replacement
        )
      ~ uid         = 0 -> (known after apply)
        # (3 unchanged attributes hidden)
    }

Plan: 1 to add, 0 to change, 1 to destroy.
  3. The question is: what do you think we should do about it? Can we handle this in some generic way in the provider? 🤔

Terraform and Provider version

Terraform v1.5.3
on linux_amd64
+ provider registry.terraform.io/opennebula/opennebula v1.2.2

Affected resources and data sources

  • opennebula_service_template

Terraform configuration

resource "opennebula_service_template" "oneke" {
  name        = "oneke"
  permissions = 642
  uname       = "oneadmin"
  gname       = "oneadmin"
  template = jsonencode({
    "TEMPLATE" = {
      "BODY" = {
        "name"        = "OneKE 1.27"
        "deployment"  = "straight"
        "description" = ""
        "roles" = [
          {
            "name"                 = "vnf"
            "cardinality"          = 1
            "min_vms"              = 1
            "vm_template_contents" = "NIC=[NAME=\"NIC0\",NETWORK_ID=\"$Public\"]\nNIC=[NAME=\"NIC1\",NETWORK_ID=\"$Private\"]\nONEAPP_VROUTER_ETH0_VIP0=\"$ONEAPP_VROUTER_ETH0_VIP0\"\nONEAPP_VROUTER_ETH1_VIP0=\"$ONEAPP_VROUTER_ETH1_VIP0\"\nONEAPP_VNF_NAT4_ENABLED=\"$ONEAPP_VNF_NAT4_ENABLED\"\nONEAPP_VNF_NAT4_INTERFACES_OUT=\"$ONEAPP_VNF_NAT4_INTERFACES_OUT\"\nONEAPP_VNF_ROUTER4_ENABLED=\"$ONEAPP_VNF_ROUTER4_ENABLED\"\nONEAPP_VNF_ROUTER4_INTERFACES=\"$ONEAPP_VNF_ROUTER4_INTERFACES\"\nONEAPP_VNF_HAPROXY_INTERFACES=\"$ONEAPP_VNF_HAPROXY_INTERFACES\"\nONEAPP_VNF_HAPROXY_REFRESH_RATE=\"$ONEAPP_VNF_HAPROXY_REFRESH_RATE\"\nONEAPP_VNF_HAPROXY_CONFIG=\"$ONEAPP_VNF_HAPROXY_CONFIG\"\nONEAPP_VNF_HAPROXY_LB0_IP=\"$ONEAPP_VROUTER_ETH0_VIP0\"\nONEAPP_VNF_HAPROXY_LB0_PORT=\"9345\"\nONEAPP_VNF_HAPROXY_LB1_IP=\"$ONEAPP_VROUTER_ETH0_VIP0\"\nONEAPP_VNF_HAPROXY_LB1_PORT=\"6443\"\nONEAPP_VNF_HAPROXY_LB2_IP=\"$ONEAPP_VROUTER_ETH0_VIP0\"\nONEAPP_VNF_HAPROXY_LB2_PORT=\"$ONEAPP_VNF_HAPROXY_LB2_PORT\"\nONEAPP_VNF_HAPROXY_LB3_IP=\"$ONEAPP_VROUTER_ETH0_VIP0\"\nONEAPP_VNF_HAPROXY_LB3_PORT=\"$ONEAPP_VNF_HAPROXY_LB3_PORT\"\nONEAPP_VNF_KEEPALIVED_VRID=\"$ONEAPP_VNF_KEEPALIVED_VRID\"\n"
            "cooldown"             = 120,
            "elasticity_policies"  = []
            "scheduled_policies"   = []
          },
          {
            "name" = "master"
            "parents" = [
              "vnf"
            ],
            "cardinality"          = 1
            "min_vms"              = 1
            "vm_template_contents" = "NIC=[NAME=\"NIC0\",NETWORK_ID=\"$Private\"]\nONEAPP_VROUTER_ETH0_VIP0=\"$ONEAPP_VROUTER_ETH0_VIP0\"\nONEAPP_VROUTER_ETH1_VIP0=\"$ONEAPP_VROUTER_ETH1_VIP0\"\nONEAPP_K8S_EXTRA_SANS=\"$ONEAPP_K8S_EXTRA_SANS\"\nONEAPP_K8S_LOADBALANCER_RANGE=\"$ONEAPP_K8S_LOADBALANCER_RANGE\"\nONEAPP_K8S_LOADBALANCER_CONFIG=\"$ONEAPP_K8S_LOADBALANCER_CONFIG\"\n"
            "cooldown"             = 120
            "elasticity_policies"  = []
            "scheduled_policies"   = []
          },
          {
            "name" = "worker"
            "parents" : [
              "vnf"
            ]
            "cardinality"          = 1
            "vm_template_contents" = "NIC=[NAME=\"NIC0\",NETWORK_ID=\"$Private\"]\nONEAPP_VROUTER_ETH0_VIP0=\"$ONEAPP_VROUTER_ETH0_VIP0\"\nONEAPP_VROUTER_ETH1_VIP0=\"$ONEAPP_VROUTER_ETH1_VIP0\"\nONEAPP_VNF_HAPROXY_LB2_IP=\"$ONEAPP_VROUTER_ETH0_VIP0\"\nONEAPP_VNF_HAPROXY_LB2_PORT=\"$ONEAPP_VNF_HAPROXY_LB2_PORT\"\nONEAPP_VNF_HAPROXY_LB3_IP=\"$ONEAPP_VROUTER_ETH0_VIP0\"\nONEAPP_VNF_HAPROXY_LB3_PORT=\"$ONEAPP_VNF_HAPROXY_LB3_PORT\"\n"
            "cooldown"             = 120
            "elasticity_policies"  = []
            "scheduled_policies"   = []
          },
          {
            "name" = "storage"
            "parents" : [
              "vnf"
            ]
            "cardinality"          = 1
            "min_vms"              = 1
            "vm_template_contents" = "NIC=[NAME=\"NIC0\",NETWORK_ID=\"$Private\"]\nONEAPP_VROUTER_ETH0_VIP0=\"$ONEAPP_VROUTER_ETH0_VIP0\"\nONEAPP_VROUTER_ETH1_VIP0=\"$ONEAPP_VROUTER_ETH1_VIP0\"\nONEAPP_STORAGE_DEVICE=\"$ONEAPP_STORAGE_DEVICE\"\nONEAPP_STORAGE_FILESYSTEM=\"$ONEAPP_STORAGE_FILESYSTEM\"\n"
            "cooldown"             = 120
            "elasticity_policies"  = []
            "scheduled_policies"   = []
          },
        ]
        "networks" = {
          "Public"  = "M|network|Public||id:"
          "Private" = "M|network|Private||id:"
        }
        "custom_attrs" = {
          "ONEAPP_VROUTER_ETH0_VIP0"        = "M|text|Control Plane Endpoint VIP (IPv4)||"
          "ONEAPP_VROUTER_ETH1_VIP0"        = "O|text|Default Gateway VIP (IPv4)||"
          "ONEAPP_K8S_EXTRA_SANS"           = "O|text|ApiServer extra certificate SANs||localhost,127.0.0.1"
          "ONEAPP_K8S_LOADBALANCER_RANGE"   = "O|text|MetalLB IP range (default none)||"
          "ONEAPP_K8S_LOADBALANCER_CONFIG"  = "O|text64|MetalLB custom config (default none)||"
          "ONEAPP_STORAGE_DEVICE"           = "M|text|Storage device path||/dev/vdb"
          "ONEAPP_STORAGE_FILESYSTEM"       = "O|text|Storage device filesystem||xfs"
          "ONEAPP_VNF_NAT4_ENABLED"         = "O|boolean|Enable NAT||YES"
          "ONEAPP_VNF_NAT4_INTERFACES_OUT"  = "O|text|NAT - Outgoing Interfaces||eth0"
          "ONEAPP_VNF_ROUTER4_ENABLED"      = "O|boolean|Enable Router||YES"
          "ONEAPP_VNF_ROUTER4_INTERFACES"   = "O|text|Router - Interfaces||eth0,eth1"
          "ONEAPP_VNF_HAPROXY_INTERFACES"   = "O|text|Interfaces to run Haproxy on||eth0"
          "ONEAPP_VNF_HAPROXY_REFRESH_RATE" = "O|number|Haproxy refresh rate||30"
          "ONEAPP_VNF_HAPROXY_CONFIG"       = "O|text|Custom Haproxy config (default none)||"
          "ONEAPP_VNF_HAPROXY_LB2_PORT"     = "O|number|HTTPS ingress port||443"
          "ONEAPP_VNF_HAPROXY_LB3_PORT"     = "O|number|HTTP ingress port||80"
          "ONEAPP_VNF_KEEPALIVED_VRID"      = "O|number|Global vrouter id (1-255)||1"
        }
        "ready_status_gate" = true
      }
    }
  })
}

Expected behavior

The second terraform apply run should be idempotent: the resource should not be recreated (at most it should be modified in place).

Actual behavior

The second terraform apply run recreates the resource even though there is no actual difference in the configuration.

Steps to Reproduce

$ terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # opennebula_service_template.oneke will be created
  + resource "opennebula_service_template" "oneke" {
      + gid         = (known after apply)
      + gname       = "oneadmin"
      + id          = (known after apply)
      + name        = "oneke"
      + permissions = "642"
      + template    = jsonencode(
            {
              + TEMPLATE = {
                  + BODY = {
                      + custom_attrs      = {
                          + ONEAPP_K8S_EXTRA_SANS           = "O|text|ApiServer extra certificate SANs||localhost,127.0.0.1"
                          + ONEAPP_K8S_LOADBALANCER_CONFIG  = "O|text64|MetalLB custom config (default none)||"
                          + ONEAPP_K8S_LOADBALANCER_RANGE   = "O|text|MetalLB IP range (default none)||"
                          + ONEAPP_STORAGE_DEVICE           = "M|text|Storage device path||/dev/vdb"
                          + ONEAPP_STORAGE_FILESYSTEM       = "O|text|Storage device filesystem||xfs"
                          + ONEAPP_VNF_HAPROXY_CONFIG       = "O|text|Custom Haproxy config (default none)||"
                          + ONEAPP_VNF_HAPROXY_INTERFACES   = "O|text|Interfaces to run Haproxy on||eth0"
                          + ONEAPP_VNF_HAPROXY_LB2_PORT     = "O|number|HTTPS ingress port||443"
                          + ONEAPP_VNF_HAPROXY_LB3_PORT     = "O|number|HTTP ingress port||80"
                          + ONEAPP_VNF_HAPROXY_REFRESH_RATE = "O|number|Haproxy refresh rate||30"
                          + ONEAPP_VNF_KEEPALIVED_VRID      = "O|number|Global vrouter id (1-255)||1"
                          + ONEAPP_VNF_NAT4_ENABLED         = "O|boolean|Enable NAT||YES"
                          + ONEAPP_VNF_NAT4_INTERFACES_OUT  = "O|text|NAT - Outgoing Interfaces||eth0"
                          + ONEAPP_VNF_ROUTER4_ENABLED      = "O|boolean|Enable Router||YES"
                          + ONEAPP_VNF_ROUTER4_INTERFACES   = "O|text|Router - Interfaces||eth0,eth1"
                          + ONEAPP_VROUTER_ETH0_VIP0        = "M|text|Control Plane Endpoint VIP (IPv4)||"
                          + ONEAPP_VROUTER_ETH1_VIP0        = "O|text|Default Gateway VIP (IPv4)||"
                        }
                      + deployment        = "straight"
                      + description       = ""
                      + name              = "OneKE 1.27"
                      + networks          = {
                          + Private = "M|network|Private||id:"
                          + Public  = "M|network|Public||id:"
                        }
                      + ready_status_gate = true
                      + roles             = [
                          + {
                              + cardinality          = 1
                              + cooldown             = 120
                              + elasticity_policies  = []
                              + min_vms              = 1
                              + name                 = "vnf"
                              + scheduled_policies   = []
                              + vm_template_contents = <<-EOT
                                    NIC=[NAME="NIC0",NETWORK_ID="$Public"]
                                    NIC=[NAME="NIC1",NETWORK_ID="$Private"]
                                    ONEAPP_VROUTER_ETH0_VIP0="$ONEAPP_VROUTER_ETH0_VIP0"
                                    ONEAPP_VROUTER_ETH1_VIP0="$ONEAPP_VROUTER_ETH1_VIP0"
                                    ONEAPP_VNF_NAT4_ENABLED="$ONEAPP_VNF_NAT4_ENABLED"
                                    ONEAPP_VNF_NAT4_INTERFACES_OUT="$ONEAPP_VNF_NAT4_INTERFACES_OUT"
                                    ONEAPP_VNF_ROUTER4_ENABLED="$ONEAPP_VNF_ROUTER4_ENABLED"
                                    ONEAPP_VNF_ROUTER4_INTERFACES="$ONEAPP_VNF_ROUTER4_INTERFACES"
                                    ONEAPP_VNF_HAPROXY_INTERFACES="$ONEAPP_VNF_HAPROXY_INTERFACES"
                                    ONEAPP_VNF_HAPROXY_REFRESH_RATE="$ONEAPP_VNF_HAPROXY_REFRESH_RATE"
                                    ONEAPP_VNF_HAPROXY_CONFIG="$ONEAPP_VNF_HAPROXY_CONFIG"
                                    ONEAPP_VNF_HAPROXY_LB0_IP="$ONEAPP_VROUTER_ETH0_VIP0"
                                    ONEAPP_VNF_HAPROXY_LB0_PORT="9345"
                                    ONEAPP_VNF_HAPROXY_LB1_IP="$ONEAPP_VROUTER_ETH0_VIP0"
                                    ONEAPP_VNF_HAPROXY_LB1_PORT="6443"
                                    ONEAPP_VNF_HAPROXY_LB2_IP="$ONEAPP_VROUTER_ETH0_VIP0"
                                    ONEAPP_VNF_HAPROXY_LB2_PORT="$ONEAPP_VNF_HAPROXY_LB2_PORT"
                                    ONEAPP_VNF_HAPROXY_LB3_IP="$ONEAPP_VROUTER_ETH0_VIP0"
                                    ONEAPP_VNF_HAPROXY_LB3_PORT="$ONEAPP_VNF_HAPROXY_LB3_PORT"
                                    ONEAPP_VNF_KEEPALIVED_VRID="$ONEAPP_VNF_KEEPALIVED_VRID"
                                EOT
                            },
                          + {
                              + cardinality          = 1
                              + cooldown             = 120
                              + elasticity_policies  = []
                              + min_vms              = 1
                              + name                 = "master"
                              + parents              = [
                                  + "vnf",
                                ]
                              + scheduled_policies   = []
                              + vm_template_contents = <<-EOT
                                    NIC=[NAME="NIC0",NETWORK_ID="$Private"]
                                    ONEAPP_VROUTER_ETH0_VIP0="$ONEAPP_VROUTER_ETH0_VIP0"
                                    ONEAPP_VROUTER_ETH1_VIP0="$ONEAPP_VROUTER_ETH1_VIP0"
                                    ONEAPP_K8S_EXTRA_SANS="$ONEAPP_K8S_EXTRA_SANS"
                                    ONEAPP_K8S_LOADBALANCER_RANGE="$ONEAPP_K8S_LOADBALANCER_RANGE"
                                    ONEAPP_K8S_LOADBALANCER_CONFIG="$ONEAPP_K8S_LOADBALANCER_CONFIG"
                                EOT
                            },
                          + {
                              + cardinality          = 1
                              + cooldown             = 120
                              + elasticity_policies  = []
                              + name                 = "worker"
                              + parents              = [
                                  + "vnf",
                                ]
                              + scheduled_policies   = []
                              + vm_template_contents = <<-EOT
                                    NIC=[NAME="NIC0",NETWORK_ID="$Private"]
                                    ONEAPP_VROUTER_ETH0_VIP0="$ONEAPP_VROUTER_ETH0_VIP0"
                                    ONEAPP_VROUTER_ETH1_VIP0="$ONEAPP_VROUTER_ETH1_VIP0"
                                    ONEAPP_VNF_HAPROXY_LB2_IP="$ONEAPP_VROUTER_ETH0_VIP0"
                                    ONEAPP_VNF_HAPROXY_LB2_PORT="$ONEAPP_VNF_HAPROXY_LB2_PORT"
                                    ONEAPP_VNF_HAPROXY_LB3_IP="$ONEAPP_VROUTER_ETH0_VIP0"
                                    ONEAPP_VNF_HAPROXY_LB3_PORT="$ONEAPP_VNF_HAPROXY_LB3_PORT"
                                EOT
                            },
                          + {
                              + cardinality          = 1
                              + cooldown             = 120
                              + elasticity_policies  = []
                              + min_vms              = 1
                              + name                 = "storage"
                              + parents              = [
                                  + "vnf",
                                ]
                              + scheduled_policies   = []
                              + vm_template_contents = <<-EOT
                                    NIC=[NAME="NIC0",NETWORK_ID="$Private"]
                                    ONEAPP_VROUTER_ETH0_VIP0="$ONEAPP_VROUTER_ETH0_VIP0"
                                    ONEAPP_VROUTER_ETH1_VIP0="$ONEAPP_VROUTER_ETH1_VIP0"
                                    ONEAPP_STORAGE_DEVICE="$ONEAPP_STORAGE_DEVICE"
                                    ONEAPP_STORAGE_FILESYSTEM="$ONEAPP_STORAGE_FILESYSTEM"
                                EOT
                            },
                        ]
                    }
                }
            }
        )
      + uid         = (known after apply)
      + uname       = "oneadmin"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

opennebula_service_template.oneke: Creating...
opennebula_service_template.oneke: Creation complete after 1s [id=2]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

$ terraform apply
opennebula_service_template.oneke: Refreshing state... [id=2]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # opennebula_service_template.oneke must be replaced
-/+ resource "opennebula_service_template" "oneke" {
      ~ gid         = 0 -> (known after apply)
      ~ id          = "2" -> (known after apply)
        name        = "oneke"
      ~ template    = jsonencode(
          ~ {
              ~ TEMPLATE = {
                  ~ BODY = {
                      + description       = ""
                        name              = "OneKE 1.27"
                      ~ roles             = [
                          ~ {
                              + elasticity_policies  = []
                                name                 = "vnf"
                              + scheduled_policies   = []
                              - vm_template          = 0
                                # (4 unchanged attributes hidden)
                            },
                          ~ {
                              + elasticity_policies  = []
                                name                 = "master"
                              + scheduled_policies   = []
                              - vm_template          = 0
                                # (5 unchanged attributes hidden)
                            },
                          ~ {
                              + elasticity_policies  = []
                                name                 = "worker"
                              + scheduled_policies   = []
                              - vm_template          = 0
                                # (4 unchanged attributes hidden)
                            },
                          ~ {
                              + elasticity_policies  = []
                                name                 = "storage"
                              + scheduled_policies   = []
                              - vm_template          = 0
                                # (5 unchanged attributes hidden)
                            },
                        ]
                        # (4 unchanged attributes hidden)
                    }
                }
            } # forces replacement
        )
      ~ uid         = 0 -> (known after apply)
        # (3 unchanged attributes hidden)
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Debug output

N/A

Panic output

N/A

Important factoids

https://pls.watch/#v=https://i.imgur.com/LXzo2h8.mp4&t=5s;8s 🤔

References

https://docs.opennebula.io/6.6/integration_and_development/system_interfaces/appflow_api.html#service-schema


sk4zuzu commented Jul 13, 2023

To clarify: adding the missing vm_template field does not change the outcome. ☝️ 😌


treywelsh commented Jul 17, 2023

Hi,

Thanks for reporting this. I'll try to give plenty of information as a starting point for working on this problem; feel free to discuss if you have better ideas or if I made a mistake somewhere :)

A bit of context:
Until now I hadn't worked on the service and service template resources, because they were added to goca and then to the Terraform provider by OpenNebula team members, and they don't work like the other resources (the others are fully managed via the XML-RPC protocol; there's no REST API involved).
I may have been the reviewer of the PR when this was submitted, but overall I didn't know enough about OneFlow, and no bugs had been submitted by users until now.

Some more details on what happens in the create and read steps of the provider for the service_template resource:

I'm able to reproduce the problem from this issue by applying this:

resource "opennebula_service_template" "service_template" {
  name        = "test-svc"
  permissions = "760"
  template    = <<EOF
{
    "TEMPLATE": {
        "BODY": {
            "name": "test-svc",
            "deployment": "straight",
            "description": "",
             "roles": [
             {
               "name": "vnf",
               "cardinality": 1,
               "min_vms": 1,
               "vm_template_contents": "...",
               "cooldown": 120,
               "elasticity_policies": [],
               "scheduled_policies": []
             }
             ]
        }    
    }
}
EOF
}

The diffs appear at the next plan.

In the provider, the content of the template field is unmarshalled into this goca structure:
https://github.com/OpenNebula/one/blob/master/src/oca/go/src/goca/schemas/service_template/service_template.go#L30

Then the service template is created from this structure:
https://github.com/OpenNebula/terraform-provider-opennebula/blob/master/opennebula/resource_opennebula_service_template.go#L130

As a side note, I don't get why this code is here: https://github.com/OpenNebula/one/blob/master/src/oca/go/src/goca/service_template.go#L117
nil is returned just after; it looks like dead code.

Now let's look at the Goca Create method for the service template resource:
https://github.com/OpenNebula/one/blob/master/src/oca/go/src/goca/service_template.go#L97

The logic is split across goca and the provider, but the JSON content is unmarshalled, then marshalled, then unmarshalled again.
This marshal/unmarshal sequence validates the JSON, removes empty fields (via the omitempty annotations on the goca structs), etc.
Then I added a line here to get some additional logs from the provider (I made the change on my machine only): https://github.com/OpenNebula/one/blob/master/src/oca/go/src/goca/service_template.go#L106

Here is the log I got:

map[string]interface {}{
    "name":       "test-svc",
    "deployment": "straight",
    "roles":      []interface {}{
        map[string]interface {}{
            "min_vms":              float64(1),
            "cooldown":             float64(120),
            "name":                 "vnf",
            "cardinality":          float64(1),
            "vm_template":          float64(0),
            "vm_template_contents": "...",
        },
    },
}

Now let's look at the read step, so I added a logging line here in the provider: https://github.com/OpenNebula/terraform-provider-opennebula/blob/master/opennebula/resource_opennebula_service_template.go#L265

Again, there's some marshal/unmarshaling in the code.
Here is the resulting log line:

{"BODY":{"name":"test-svc","deployment":"straight","roles":[{"name":"vnf","cardinality":1,"vm_template":0,"vm_template_contents":"...","min_vms":1,"cooldown":120}]}}

All the diffs shown by Terraform seem to concern absent fields, or fields holding their "empty" value (0 for an integer, etc.).
Let's consider the description field: Terraform sees it in the string inside the template field, but when reading the template back from OpenNebula, the empty description is absent.
That's expected: the goca structs have some omitempty annotations, so all empty values are removed when marshalling.
For vm_template it's the opposite: the field has no omitempty annotation, so it is serialized even when its value is 0.
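The effect is easy to reproduce with a trimmed-down sketch of a role struct (field set and names simplified for illustration, not the actual goca definition), mimicking the unmarshal-then-marshal round trip done across goca and the provider:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified sketch of a goca role struct: description carries
// omitempty, vm_template does not.
type Role struct {
	Name        string `json:"name"`
	Description string `json:"description,omitempty"`
	VMTemplate  int    `json:"vm_template"`
}

// roundTrip mimics the unmarshal/marshal sequence the provider and
// goca perform on the template content.
func roundTrip(in string) string {
	var r Role
	if err := json.Unmarshal([]byte(in), &r); err != nil {
		panic(err)
	}
	out, err := json.Marshal(r)
	if err != nil {
		panic(err)
	}
	return string(out)
}

func main() {
	// The user's template sets description to "" and never sets vm_template.
	fmt.Println(roundTrip(`{"name":"vnf","description":""}`))
	// Output: {"name":"vnf","vm_template":0}
	// The empty description vanished (omitempty) while vm_template
	// appeared as 0, which matches the +/- lines in the Terraform plan.
}
```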

Not sure how we should fix this; some quick ideas to test:

  • we could try adding ValidateFunc and DiffSuppressFunc functions to the template field of the service_template resource.
  • we could modify the goca struct fields to help the JSON marshaling process; omitempty is not fine-grained enough (it removes an int whenever its value is 0). For instance, we could replace an int field with an *int field to distinguish a field holding its "empty" value from a field that is simply not present.

But if possible, from a Terraform point of view it may be better to do some refactoring (in the provider and probably in goca) so that the service_template and service resources work the same way as the other resources.
This would allow sharing a bunch of code between the service and service_template resources in the TF provider, like what is already done between the template and virtual_machine resources. It would add a lot of new fields to the service_template resource and would allow replacing the template text field.

Would this last solution be possible? (I'm asking because in https://marketplace.opennebula.io/appliance/7c82d610-73f1-47d1-a85a-d799e00c631e I already see the JSON to pass in template.)
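Separately, the DiffSuppressFunc idea from the first bullet could compare the stored and configured template JSON semantically rather than textually. A minimal pure-Go sketch of such a comparison (a sketch only; the real suppress function would still need to normalize the absent-vs-empty fields discussed above, since {"description":""} and {} are not semantically equal):

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// jsonSemanticallyEqual reports whether two JSON documents encode the
// same data regardless of key order or formatting. A DiffSuppressFunc
// on the template field could be built around a check like this.
func jsonSemanticallyEqual(a, b string) bool {
	var va, vb interface{}
	if json.Unmarshal([]byte(a), &va) != nil {
		return false
	}
	if json.Unmarshal([]byte(b), &vb) != nil {
		return false
	}
	return reflect.DeepEqual(va, vb)
}

func main() {
	fmt.Println(jsonSemanticallyEqual(
		`{"roles":[{"name":"vnf","cooldown":120}]}`,
		`{ "roles": [ { "cooldown": 120, "name": "vnf" } ] }`,
	)) // true: same data, different key order and whitespace
}
```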

@frousselet frousselet added this to the 1.4.0 milestone Aug 15, 2023
vickmp added a commit that referenced this issue Dec 1, 2023
@vickmp vickmp linked a pull request Dec 1, 2023 that will close this issue
frousselet pushed a commit that referenced this issue Dec 11, 2023