The provider doesn't seem to understand that volumes inherit from volume.XYZ keys on the pool #43
Comments
This key could probably be added here: https://github.com/lxc/terraform-provider-incus/blob/main/internal/storage/resource_storage_pool.go#L337-L372
Isn't that the list of computed pool keys, as opposed to inherited keys on the volumes and buckets?
In other words, you first created the pool without Terraform/OpenTofu and then imported it with Terraform/OpenTofu? How can I reproduce your setup to get the same issue?
And then have Terraform create a new storage volume using that pool.
Does this same behavior happen if the pool is created with a tf resource? |
Good question, though not really relevant, as the case I was dealing with was one where Incus is acting as a cloud: the TF user doesn't have any control over the storage pools; they get an empty project and can only create project-tied resources.
So yeah, even if created by TF, it still happens. |
Yeah, I suspected it was the case either way. Inherited resources should be ignored completely since they technically come from another resource. Can they be excluded or identified in the API?
If we ever need to represent unmanaged resources, I think the idiomatic solution is to model them as Data Sources, which can still be represented on the TF side.
The provider can do GetStoragePool, then look at the Config map for anything that's volume.XYZ. For any such key, XYZ should be treated as inherited on the volume or bucket if its value matches that of the pool's volume.XYZ.
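A minimal sketch of that check, assuming the pool and volume configs are available as plain map[string]string (the Config maps returned by the Incus client); the helper name and sample values are illustrative only, not part of the provider:

```go
package main

import "fmt"

// stripInheritedKeys drops volume config entries whose values merely restate
// the pool-level "volume.<key>" defaults, so only keys the user actually set
// on the volume would be considered for diffing.
func stripInheritedKeys(poolConfig, volumeConfig map[string]string) map[string]string {
	out := make(map[string]string, len(volumeConfig))
	for key, value := range volumeConfig {
		if inherited, ok := poolConfig["volume."+key]; ok && inherited == value {
			continue // same value as the pool's volume.XYZ key: treat as inherited
		}
		out[key] = value
	}
	return out
}

func main() {
	pool := map[string]string{"volume.zfs.remove_snapshots": "true"}
	volume := map[string]string{
		"zfs.remove_snapshots": "true",  // matches the pool default: inherited
		"size":                 "10GiB", // explicitly set on the volume
	}
	fmt.Println(stripInheritedKeys(pool, volume)) // map[size:10GiB]
}
```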
@stgraber Regarding the storage pool volume, this PR fixes the problem of the volume taking the inherited configuration value into account: #66

Regarding the creation of storage pools, however, I am not sure whether a correction is really necessary here.

Test case:

```
$ incus storage create test zfs volume.zfs.remove_snapshots=true
```

```hcl
resource "incus_storage_pool" "test" {
  name    = "test"
  project = "default"
  driver  = "zfs"
}
```

```
$ terraform import incus_storage_pool.test default/test
incus_storage_pool.test: Importing from ID "default/test"...
incus_storage_pool.test: Import prepared!
  Prepared incus_storage_pool for import
incus_storage_pool.test: Refreshing state... [name=test]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
```
```
$ terraform plan
incus_storage_pool.test: Refreshing state... [name=test]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # incus_storage_pool.test will be updated in-place
  ~ resource "incus_storage_pool" "test" {
      ~ config  = {
          - "volume.zfs.remove_snapshots" = "true" -> null
        }
        name    = "test"
        # (3 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
```

So when we created the pool, the configuration was imported correctly, as we can see in the state file:

```
$ cat terraform.tfstate
{
"version": 4,
"terraform_version": "1.8.3",
"serial": 62,
"lineage": "863f8d8e-b4f8-b4cc-3bbe-5a247ae7e02e",
"outputs": {},
"resources": [
{
"mode": "managed",
"type": "incus_storage_pool",
"name": "test",
"provider": "provider[\"registry.terraform.io/lxc/incus\"]",
"instances": [
{
"schema_version": 0,
"attributes": {
"config": {
"volume.zfs.remove_snapshots": "true"
},
"description": "",
"driver": "zfs",
"name": "test",
"project": "default",
"remote": null,
"target": null
},
"sensitive_attributes": []
}
]
}
],
"check_results": null
}
```

But when we run terraform plan with the config key added to the resource:

```hcl
resource "incus_storage_pool" "test" {
  name    = "test"
  project = "default"
  driver  = "zfs"

  config = {
    "volume.zfs.remove_snapshots" = "true"
  }
}
```

```
$ terraform plan
incus_storage_pool.test: Refreshing state... [name=test]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are
needed.
```

So we could argue that this behavior is correct if you want to use the incus_storage_pool resource.
The behavior for the storage pool is fine; the problem is really with storage pool keys propagating to volumes. I don't think just piling up "computed" keys is correct here. One can create a volume and specifically set a given config key on it, but that same key can also come in through the pool. This is true of ALL volume keys, possibly including arbitrary user ones (not tested), so a storage pool having any volume.XYZ key set causes XYZ to show up on every one of its volumes. So if going with marking keys computed, every single storage volume config key would need to be treated as computed, which doesn't seem right.
Your example with that key is already covered by the hardcoded list of computed keys.
Bad example, but we have 20+ other keys which aren't covered by the hardcoded list. As mentioned, it's effectively every single volume config key, which is why treating them all as computed kinda feels wrong.
Just looking at one storage driver (ZFS):
I see, and funnily we have already implemented many of those keys in the computed keys for the storage pool, based on the driver.

So if I understand you correctly, you want to avoid hardcoding these keys and instead first do a lookup to Incus to determine which keys are already set, and ignore them? Don't you see an issue that we might end up in a situation where the storage volume resource's state cannot be trusted anymore, because we accept whatever we get from Incus instead of "forcing" the user to be very explicit about their intentions?
I don't know how flexible the terraform plugin logic is for that, but ideally, we'd make it ignore config keys that are inherited from the pool unless the user explicitly set them on the volume.
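As a rough illustration of that idea (a sketch only, assuming the user-declared config from the plan/state and the config reported by Incus are both plain map[string]string; filterToDeclaredKeys is a hypothetical helper, not the provider's actual code):

```go
package main

import "fmt"

// filterToDeclaredKeys keeps only the config keys the user explicitly
// declared in their Terraform configuration; anything else reported by Incus,
// including values inherited from the pool's volume.* defaults, is ignored
// when checking for drift.
func filterToDeclaredKeys(declared, remote map[string]string) map[string]string {
	out := make(map[string]string, len(declared))
	for key := range declared {
		if value, ok := remote[key]; ok {
			out[key] = value
		}
	}
	return out
}

func main() {
	declared := map[string]string{"size": "10GiB"}
	remote := map[string]string{
		"size":                 "10GiB",
		"zfs.remove_snapshots": "true", // inherited from the pool, never declared
	}
	fmt.Println(filterToDeclaredKeys(declared, remote)) // map[size:10GiB]
}
```

The trade-off is the one raised above: values present on the server but never declared would simply not be tracked.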
I've noticed OpenTofu getting pretty confused about a volume.zfs.remove_snapshots=true config key appearing on a volume that's defined in terraform. That's because the config key was set on the parent storage pool and so propagated to the new volume. That's normal, and we definitely don't want terraform to try to rectify this by deleting and re-creating the volume as it attempted here :)