Community Note
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request.
If you are interested in working on this issue or have submitted a pull request, please leave a comment.
Description
When applying a new azurerm_resource_group_template_deployment resource, if the deployment fails (for example due to insufficient permissions, a referenced resource not existing, or a required resource provider not being registered, all three of which I hit last week), the id of the deployment is not committed to the Terraform state. As a consequence, a subsequent terraform plan identifies the resource for creation, but terraform apply fails with the following error message:
Error: A resource with the ID "/subscriptions/<redacted>/resourceGroups/<redacted>/providers/Microsoft.Resources/deployments/eventgrid-deployment" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group_template_deployment" for more information.
Importing the resource doesn't really help: the template contents haven't changed, so Terraform doesn't take any action to redeploy the template.
The behaviour I think I would prefer here is:
- If the apply fails while creating a deployment, the provider should still check whether the deployment resource was actually created, and record its id in the state if it was.
- The deployment state should be checked during terraform plan; if the state is "failed", that should produce a diff that causes terraform apply to redeploy the deployment.
I'm still fairly new to the Azure world, so I'm open to criticism as to whether my expected behaviour is reasonable.
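The second proposed behaviour above could be sketched as a small decision function. This is a hypothetical illustration only (the real provider logic lives in Go against the Azure SDK); the function name, dict shape, and action strings are all assumptions:

```python
# Hypothetical sketch of the proposed plan-time check. The provider's
# real Read/Plan logic is written in Go; this only models the decision.

def plan_action(remote_deployment):
    """Decide what a plan should do for a template deployment.

    remote_deployment: a dict with 'id' and 'provisioning_state' keys,
    or None if the deployment does not exist in Azure at all.
    """
    if remote_deployment is None:
        # Nothing exists remotely: plan a normal create.
        return "create"
    if remote_deployment.get("provisioning_state") == "Failed":
        # Proposed behaviour: a failed deployment triggers a diff
        # so the next apply redeploys the template.
        return "redeploy"
    # Deployment exists and succeeded: nothing to do.
    return "no-op"

print(plan_action(None))                                            # create
print(plan_action({"id": "d1", "provisioning_state": "Failed"}))    # redeploy
print(plan_action({"id": "d1", "provisioning_state": "Succeeded"})) # no-op
```

The key point is the middle branch: today the provider treats an existing-but-failed deployment the same as a succeeded one, which is what forces the manual import/delete dance described above.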
I am seeing the same problem (very infrequently).
The underlying cause for me seemed to be a failure of a single inner resource, which was left in a 'corrupt' state and left the whole deployment marked as failed.
At present, my only option has been to manually remove the deployment and the resources it did manage to provision, and then rerun the Terraform plan and apply steps.
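That manual cleanup can be sketched as a shell snippet. The resource group and deployment names are placeholders, and the commands are echoed rather than executed so they can be reviewed first (drop the `echo`s to actually run them):

```shell
#!/bin/sh
# Manual workaround sketch (hypothetical names; commands printed, not run).
RESOURCE_GROUP="example-rg"
DEPLOYMENT_NAME="eventgrid-deployment"

# Delete the failed deployment record so a fresh apply can recreate it.
echo az deployment group delete \
  --resource-group "$RESOURCE_GROUP" \
  --name "$DEPLOYMENT_NAME"

# Any inner resources the deployment did manage to provision must be
# removed by hand as well (they vary per template, so no generic command).

# Then rerun the usual workflow.
echo terraform plan
echo terraform apply
```

Note that deleting the deployment record does not delete the resources it created; those have to be cleaned up individually before rerunning.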
@sharebear @tombuildsstuff I think this would be very useful. Having to manually delete the deployments after every failed template deployment is very frustrating.
New or Affected Resource(s)
azurerm_resource_group_template_deployment
Potential Terraform Configuration
Concrete test case can be produced if desired.
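A minimal configuration of the kind that hits this issue might look as follows. The resource group reference, deployment name, and template file are illustrative assumptions, not taken from the original report:

```hcl
# Hypothetical reproduction sketch; names and template path are placeholders.
resource "azurerm_resource_group_template_deployment" "example" {
  name                = "eventgrid-deployment"
  resource_group_name = azurerm_resource_group.example.name
  deployment_mode     = "Incremental"

  # A template that fails mid-deployment (e.g. references a missing
  # resource or an unregistered provider) leaves the deployment in a
  # "Failed" state without its id being written to Terraform state.
  template_content = file("${path.module}/eventgrid.json")
}
```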
References