Proposal: Versioned resources #8765
Comments
Thanks for starting this discussion, @radeksimko! I have a question about your initial example, which I'll quote some parts of and then comment inline:
If I understood the rest of the proposal correctly, I think your intent is that at this point we've recorded in the state that version 3 is "current", but we've not actually changed anything in the real system.
In which case, what exactly is […]? Normally we expect […]. It feels to me like […]. One way we could conceptualize "rolling back" is as a funny sort of plan: normally a plan produces a diff from the state to the config, and "rolling back" could be thought of as a diff from the current state to some "synthetic configuration" we obtain by reading the old version from the real system, in situations where the backend itself has a versioning concept. So building on the idea of […]
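The "synthetic configuration" framing above can be sketched abstractly. This is purely illustrative (the function and attribute names are invented, not Terraform internals): a plan is a diff from state to config, and a rollback is the same diff function applied to an older version read back from the real system.

```python
def plan_diff(state, desired):
    """Compute a minimal attribute diff from current state to a desired config."""
    return {
        key: (state.get(key), desired[key])
        for key in desired
        if state.get(key) != desired[key]
    }

# Normal plan: diff from state to the written configuration.
state = {"image": "myapp:v3", "memory": 512}
config = {"image": "myapp:v4", "memory": 512}
print(plan_diff(state, config))  # {'image': ('myapp:v3', 'myapp:v4')}

# Rollback: diff from state to a "synthetic configuration" obtained by
# reading an older version from the backend's own versioning system.
old_version = {"image": "myapp:v2", "memory": 512}
print(plan_diff(state, old_version))  # {'image': ('myapp:v3', 'myapp:v2')}
```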
Here's what I assumed: […]
Now when we later […], I think we need to think carefully about what happens to dependent resources in this situation. If we had another resource consuming the […]
This obviously depends on the implementation of […]. Whether we should call […]
It would not be undone; in fact, if the ID used is a real unique ID (version-unique), which in most cases it is, then refresh would just pull a different version of the resource, if implemented well.
By the time we get to plan generation, we'll already have the new versioned resource pulled in place (either in memory, in the case of […]).
Yes, at the moment dependent resources are planned for potential update only if […]. There's another good reason why we should expand the number of cases where dependent resources are scheduled for update, specifically #4846. So basically […]
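As a concrete example of a dependent resource consuming a computed field of a versioned resource (a sketch with invented names, using real `aws_ecs_service` arguments):

```hcl
resource "aws_ecs_task_definition" "myapp" {
  family                = "myapp"
  container_definitions = file("task-definitions/myapp.json")
}

resource "aws_ecs_service" "myapp" {
  name          = "myapp"
  cluster       = "default"
  desired_count = 1

  # The ARN embeds the revision; when the pinned version changes,
  # this service would need to be scheduled for update.
  task_definition = aws_ecs_task_definition.myapp.arn
}
```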
btw. yes - the above mechanism is kinda broken under […]
Ahh, I see where I misunderstood. You imagine the resource id as identifying the specific version, whereas I was expecting it would identify the versioned object itself, regardless of current version. It seems we need both ids in any case, since the version-agnostic id will be needed to enumerate the available versions. I am still a little confused about the interaction between `set-version` and `plan`; how does the subsequent plan know that you want to make the world match the old version, as opposed to matching what is in the config, as it would do by default?
Agreed.
The config (in HCL) is expected to be version-agnostic, so there's no conflict there (I think). The only difference would appear in 1 or more computed fields - e.g. […]. Something I didn't realise in the initial proposal, and what you were probably asking about: we would basically suppress the diff of the versioned resource itself. It may sound like a bad idea, but after all we're not changing anything; we're just referencing a different resource which already exists, so I think it's fine. All you would see in the […]
Actually, I think I now understand what you mean. That ^ is not true. 😞
But I still think we can get around that: if we flag versioned resources specifically in the tfstate and then keep two sets of fields (1st the primary, basically matching HCL; 2nd the pinned version), we may be able to compare things accordingly - we can detect changes if HCL != primary, but then do the actual comparison with the pinned version. After […], all references would first be compared against the pinned version, if one exists.
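The two-sets-of-fields idea might look something like this in the state file. This is a purely hypothetical sketch; the `primary`/`pinned` layout and all values shown are invented, not an existing tfstate schema:

```json
{
  "aws_ecs_task_definition.myapp": {
    "_comment": "hypothetical sketch, not real tfstate schema",
    "primary": {
      "id": "myapp:4",
      "attributes": { "revision": "4" }
    },
    "pinned": {
      "id": "myapp:3",
      "attributes": { "revision": "3" }
    }
  }
}
```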
Any plan to address this anytime soon?
New commands
```
terraform list-versions RESOURCE_TYPE.NAME
terraform set-version RESOURCE_TYPE.NAME [version-id]
```
Expected use case
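A hedged sketch of how the proposed flow might look (these commands do not exist today; the resource address and version id are invented for illustration):

```
# List available versions of a versioned resource
$ terraform list-versions aws_ecs_task_definition.myapp

# Pin the state to an older version; no API calls, just an ID change
$ terraform set-version aws_ecs_task_definition.myapp 3

# Subsequent plan/apply operate against the pinned version
$ terraform plan
```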
Schema changes
I'd expect `set-version` to have a very similar behaviour to `taint` - i.e. no API calls, just an ID change. The difference would then appear during `refresh`/`plan`/`apply`, as these would be working with the new version of the resource.

Resources that would benefit from this
- `aws_api_gateway_deployment`
- `aws_ecs_task_definition`
- `aws_lambda_function` w/ `publish = true`
- `aws_s3_bucket_object` in a bucket that has versioning enabled
- `aws_elastic_beanstalk_application_version`
- `docker_image`
- `fastly_service` (as @apparentlymart mentioned)

Breaking changes
The way we think about versioned resources today (in the majority of cases) is that a given resource manages the latest version, and `refresh` pulls down the latest version if it doesn't match the local config. Some resources destroy old versions (ECS TD), some don't (S3 object, Docker image, Lambda), and some don't create new versions at all (API Gateway Deployment).

This was treated as a nice way to detect changes outside of Terraform, which we'd now lose.
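For instance, today an `aws_s3_bucket_object` in a versioned bucket exposes a computed `version_id`, and refresh tracks the latest object rather than any pinned version (a minimal sketch; the bucket, key, and file names are invented):

```hcl
resource "aws_s3_bucket_object" "config" {
  bucket = "my-versioned-bucket"
  key    = "app/config.json"
  source = "config.json"
}

# `version_id` is computed by S3 on each new object version; today's
# refresh follows the latest object, which this proposal would change.
output "config_version" {
  value = aws_s3_bucket_object.config.version_id
}
```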
Impact
destroy
We would only destroy a given version of the resource rather than all versions, which could potentially cause dependency issues, as we might not be able to destroy some parent resources - e.g. `aws_api_gateway_rest_api`.

We could accept this as a feature/default behaviour and make people explicitly do something like […], which would under the hood result in something like this: […]
refresh
We would refresh data about a given version of that resource, rather than pull the latest version, but here's how the user could emulate the latter behaviour:
```
$ terraform set-version aws_ecs_task_definition.myapp $(terraform list-versions | head -1)
$ terraform refresh
```
apply

Each `apply` would (potentially) generate a new version (new resource), which in turn means it would leave some orphans behind.

Some resources may need a new field, as the cause of version generation comes from outside of the resource. We already have this in `docker_image` and we'd probably need to add a similar one to `aws_api_gateway_deployment` too.

taint

This would behave as `apply` with a changed `trigger`.
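As an illustration of the trigger-style field mentioned for `docker_image` above, a minimal sketch assuming the Docker provider's `pull_triggers` argument (the image and resource names here are invented):

```hcl
data "docker_registry_image" "myapp" {
  name = "myorg/myapp:latest"
}

resource "docker_image" "myapp" {
  name = data.docker_registry_image.myapp.name

  # A change to the registry digest (a cause external to this resource)
  # triggers a new pull, i.e. a new "version" of the image.
  pull_triggers = [data.docker_registry_image.myapp.sha256_digest]
}
```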