Rerunning terraform - can't update node #336
Comments
Please provide the repro steps, including examples of the resources in question. This will help narrow down the exact issue. You may need some logic for removing the member before the node can be re-created.
The resources in question are `bigip_ltm_node.node`, `bigip_ltm_pool.pool`, and `bigip_ltm_pool_attachment.attach_node`. So in the example above, `var.privateip` is set by a previous module creating a private endpoint in Azure. Since this value is not known beforehand, Terraform tries to recreate the node on every deploy, and on run 2+ it fails because the node already exists and is attached to a pool.
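A minimal sketch of the three resources described (the original snippet bodies were not preserved, so the names, partition, and port below are illustrative assumptions):

```hcl
# Sketch only: resource bodies are assumed, not taken from the original config.
resource "bigip_ltm_node" "node" {
  name    = "/Web-Applications/web-xyz_node"
  address = var.privateip # IP from a previous module; unknown at plan time
}

resource "bigip_ltm_pool" "pool" {
  name = "/Web-Applications/web-xyz"
}

resource "bigip_ltm_pool_attachment" "attach_node" {
  pool = bigip_ltm_pool.pool.name
  node = "${bigip_ltm_node.node.name}:80" # port 80 is an assumption
}
```

Because `var.privateip` is unknown until the upstream module has applied, the node is planned for replacement on every run, which the BIG-IP rejects while the node is still a pool member.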
@thesutex, please let me know if I missed anything. Resource snippet:
Hi @RavinderReddyF5, as I wrote above, the problem is that it's the IP address from the Azure Private Link service. Module one outputs this:
which is passed as a value to the F5 module:
and then creates the node:
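The flow described above might look like the following sketch (the output, variable, and attribute names are assumptions; only the node name and partition are taken from the error message later in the thread):

```hcl
# Module one: output the private endpoint IP (names are illustrative)
output "private_ip" {
  value = azurerm_private_endpoint.pe.private_service_connection[0].private_ip_address
}

# F5 module: receive the value...
variable "privateip" {
  type = string
}

# ...and create the node from it
resource "bigip_ltm_node" "node" {
  name    = "/Web-Applications/web-xyz_node"
  address = var.privateip
}
```

Since `var.privateip` comes from another module's apply, Terraform cannot know it at plan time and marks the node for replacement.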
Terraform outputs this during plan:
10.40.0.4 is the IP set from the first apply, and it is still the correct IP, but Terraform still forces the replacement and fails with:
If I need some logic to remove this node from the pool before rerunning, how would that work on the first run? And how does one do that?
We ran into this same issue and seem blocked. In our case the node comes up with the same name but a new IP address, and the attachment does not seem to be deleted first to clear out the existing node. Removing a node or adding a new one is not an issue; it only fails when updating an existing one, which forces a recreate of that node. Is it possible to add a recreate for the attachment as well?
Issue fixed in the 1.3.3 release.
@thesutex please use the pool attachment resource as outlined in https://registry.terraform.io/providers/F5Networks/bigip/latest/docs/resources/bigip_ltm_pool_attachment. We modified the pool attachment resource to remove its dependency on the `ltm_node` resource.
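Per the linked docs, the attachment can reference the member directly as `node:port`, without declaring a `bigip_ltm_node` resource at all. A sketch, where the pool name comes from the error message earlier in the thread and the port is an assumption:

```hcl
# Sketch: the provider creates the node implicitly from the member string.
resource "bigip_ltm_pool_attachment" "attach_node" {
  pool = "/Web-Applications/web-xyz"
  node = "${var.privateip}:80" # member as IP:port; port 80 is illustrative
}
```

With no `bigip_ltm_node` in the plan, an unknown `var.privateip` no longer forces a node replacement; only the attachment changes.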
Maybe I am wrong here, but I tested this and still see it failing. Other than removing the dependency, did anything else change with the resource?
Changing the IP address of a node results in this error. The plan shows it will recreate the node but not the attachment:
And the apply fails with:
@whume and make sure your node is disassociated from the pool.
So I tried to update to what you suggested.
I added the partition to the node as well, as I'm not working in the Common partition. I destroyed the whole VIP and tried to recreate it from scratch, and when I do I get an inconsistent result error.
OK, so I did figure out what you were saying on this, and I am still not sure this is a good solution. This change introduces what I would call a breaking bug in the provider on a minor release version. We ended up pinning a bunch of our deployments to the previous version to get around the issue.

Additionally, while this works, it creates the node in a way that is not managed in state by Terraform. So if you change the IP address, it will remove it from the pool and make a new node, but the old node still exists in the F5 and is orphaned. For ephemeral workloads like ours, that could leave hundreds if not thousands of stale entries in the F5. This could be mitigated by running cleanup scripts periodically, but I think it should be handled in Terraform.

The last issue I see is that this change requires you to kill the nodes off out of the pool, creating downtime. While that is not a huge deal and can be mitigated, there is no clear upgrade path: you can't just run terraform apply and have it delete the old nodes and replace them with the new ones.

While I am glad to see a fix come in and appreciate the quick turnaround, I think this should maybe be reverted and either released in a major version or reworked. Thanks
Thanks for the feedback @whume. Can you elaborate a bit more on the last issue mentioned above? I am not clear how the new workflow introduces it.
The last comment was mostly just the fact that it's not an in-place change. If you try to replace the node so it uses the new naming convention of IP:port on the attachment, it tries to create a node with the same IP as the old node and errors out with an already-in-use error.
It does, thanks. What I have seen in a different migration was to remove the node resources from the config, delete their state out of the state file, and then reference the IP:port of the nodes in the attachment resource. The attachment resource will find the existing nodes and use them within the pools. We are actively working on fixing the patch release issue. Thanks,
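The migration described above might look like this sketch: drop the `bigip_ltm_node` block from the config, remove it from state (e.g. `terraform state rm bigip_ltm_node.node`, with the address adjusted to your config), and point the attachment at the existing node by `IP:port`. The IP below comes from the plan output earlier in the thread; the pool name is from the error message and the port is an assumption:

```hcl
# After the node resource is removed from config and state, the
# attachment finds the existing node on the BIG-IP by IP:port.
resource "bigip_ltm_pool_attachment" "attach_node" {
  pool = "/Web-Applications/web-xyz"
  node = "10.40.0.4:80" # existing node; port is illustrative
}
```

This way the already-provisioned nodes are reused rather than recreated, avoiding the "already in use" error.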
@whume what version(s) did you pin to?
Hi
I am setting up a basic VIP-pool-node config using Terraform and Azure apps, where the node IP is fetched from previous Terraform code that sets up Private Link to Azure; this IP is an input to this Terraform script (and is not known beforehand). It works like a charm the first time, but if I rerun the deploy it fails, because Terraform wants to recreate the node since the IP is not known before the run. The recreation then fails because the node is attached to a pool:
```
{"code":400,"message":"01070110:3: Node address '/Web-Applications/web-xyz_node' is referenced by a member of pool '/Web-Applications/web-xyz'.","errorStack":[],"apiError":3}
```
Is there a way to solve this using the current module?
As for why I am rerunning: this code is part of the application deploy, which gets updated regularly.