Update replace kubeconfig token logic #1158
Merged
Issue: #841
Problem
When using TF output or data resources that change after the apply, every subsequent run of `terraform plan` creates a new kubeconfig API token until `terraform apply` is run again to pick up the output and data changes. This was seen on both Rancher 2.6.9 and 2.6.10.

Solution
After investigation, the root cause of this issue is likely that Terraform downloads/generates a kubeconfig on every run of a `terraform plan` or `apply`. TF replaces the kubeconfig token every time instead of reusing the token from the cached kubeconfig, which causes the over-generation of API tokens. The pattern looks roughly like the sketch below.
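A minimal sketch of that always-replace pattern, for illustration only. The names here (`Kubeconfig`, `newAPIToken`, the token counter) are hypothetical stand-ins, not the provider's actual code:

```go
package main

import "fmt"

// Kubeconfig is a simplified stand-in for the provider's kubeconfig
// structure; the real provider handles a full YAML kubeconfig.
type Kubeconfig struct {
	Token string
}

var tokenCounter int

// newAPIToken is a hypothetical stand-in for the Rancher API call that
// creates a kubeconfig token. Each call produces a brand-new token.
func newAPIToken() string {
	tokenCounter++
	return fmt.Sprintf("kubeconfig-user-%d", tokenCounter)
}

// getClusterKubeconfig (old behavior, per the description above):
// regenerate the kubeconfig and replace its token on every run, even
// when a cached kubeconfig with a usable token already exists.
func getClusterKubeconfig() Kubeconfig {
	return Kubeconfig{Token: newAPIToken()} // always mints a new token
}

func main() {
	// Every plan/apply invokes this path, so tokens pile up.
	for i := 0; i < 3; i++ {
		fmt.Println(getClusterKubeconfig().Token)
	}
	// Prints three distinct tokens for what should be one session.
}
```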
My solution is to update the `getClusterKubeconfig` logic explained here to use the API token from the cached kubeconfig (if it exists) instead of always replacing it, along the lines of the second sketch below.
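A minimal sketch of that cache-first approach, using the same hypothetical names as above; the real change lives in the provider's `getClusterKubeconfig`:

```go
package main

import "fmt"

// Kubeconfig is a simplified stand-in for the provider's kubeconfig structure.
type Kubeconfig struct {
	Token string
}

// cached simulates the kubeconfig persisted from a previous run; nil
// means no cached kubeconfig exists yet.
var cached *Kubeconfig

var tokenCounter int

// newAPIToken is a hypothetical stand-in for the Rancher API call that
// creates a kubeconfig token.
func newAPIToken() string {
	tokenCounter++
	return fmt.Sprintf("kubeconfig-user-%d", tokenCounter)
}

// getClusterKubeconfig (fixed behavior): reuse the token from the
// cached kubeconfig when one exists, and only mint a new API token
// when there is nothing to reuse.
func getClusterKubeconfig() Kubeconfig {
	if cached != nil && cached.Token != "" {
		return *cached // reuse the cached token instead of replacing it
	}
	kc := Kubeconfig{Token: newAPIToken()}
	cached = &kc
	return kc
}

func main() {
	// Repeated plans now converge on a single token.
	for i := 0; i < 3; i++ {
		fmt.Println(getClusterKubeconfig().Token)
	}
}
```

With this pattern, repeated `terraform plan` runs read the same token back from the cache instead of minting a new API token each time; a new token is created only when no cached kubeconfig exists.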
Testing
Engineering Testing
Manual Testing
@jakefhyde I was unable to reproduce this behavior again on 2.6.10 using the customer's lab setup, but I also did not see the token over-generation when testing on my fork.
Test plan
Automated Testing
QA Testing Considerations
Regression Considerations