add ability to update Bigtable instance cluster node count #4026

Merged (8 commits) on Aug 19, 2019

Conversation

mackenziestarr
Contributor

@mackenziestarr mackenziestarr commented Jul 15, 2019

This PR allows the Bigtable instance cluster num_nodes to be updated without forcing the instance to be re-created. I also tried to support upgrading from a DEVELOPMENT to a PRODUCTION instance, but the bigtable package only exposes that information at instance creation time. Switching to the v2 admin API would allow this via Instance.GetType, but for now I've left the instance_type schema as ForceNew. All tests under make testacc TEST=./google TESTARGS='-run=TestAccBigtableInstance*' pass for me.

fixes #2521

@ghost ghost added the size/m label Jul 15, 2019
@rileykarson rileykarson self-requested a review July 15, 2019 15:51
@rileykarson
Collaborator

Hey @mackenziestarr! Thanks for your contribution. Unfortunately, I believe that changing the type of cluster from TypeSet to TypeList will count as a breaking change (as order will suddenly matter, resulting in diffs for some users), blocking us from applying this change until the 3.0.0 release of the provider.

Assuming the functionality discussed at the end of #2521 is exposed in the client libraries, we'll be able to add this in a future 2.X series release. If that's the case, we'll supersede this PR with one exposing it there.

@mackenziestarr
Contributor Author

Hi @rileykarson, thanks for taking a look. I converted that to TypeList because of the issue where TypeSet elements with ForceNew cause the entire resource to be recreated; see hashicorp/terraform#15420 (comment):

A change in any property in the set acts as if the entire set has changed (and if any of the properties in the set are ForceNew=true, then the entire resource gets re-created)

I see your point about TypeList not being ideal because it enforces ordering where there shouldn't be any. Is there a way to get around the bug above with TypeSet, possibly using a different structure for the implementation that is transparent to users' existing configs? It doesn't seem to be a limitation of the Bigtable APIs as described in #2521, but an actual bug / unexpected property of using TypeSet.
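For readers following along, the field under discussion lives in a repeated cluster block of google_bigtable_instance. A minimal config sketch (attribute names taken from the provider schema being discussed; the zone and names are illustrative):

```hcl
resource "google_bigtable_instance" "example" {
  name = "example-instance"

  cluster {
    cluster_id   = "example-cluster" # ForceNew: changing this still recreates the instance
    zone         = "us-central1-b"
    num_nodes    = 3                 # the field this PR makes updatable in place
    storage_type = "SSD"
  }
}
```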

@rileykarson
Collaborator

I'm not aware of a workaround for the issue that wouldn't constitute a breaking change for users (I looked into the same topic a few days ago). I see two possible paths forward:

  • Convert this field to a TypeList in 3.0.0, assuming clusters are ordered when returned from the API. We have a limited ability to suppress diffs when they're returned unordered, which we've used before, but it isn't transparent: I believe that if any field has changed, Terraform returns an unintuitive diff.

  • The other option gets a little gritty. Terraform can't tell which set elements have been modified or removed at update time. When an API only lets us call addX/modifyX/removeX endpoints on subfields (effectively, this is how the Bigtable cluster behaves) instead of supporting PUT or PATCH on the parent, we can't update the subfield. Terraform can tell the final state of the field, so a PUT or PATCH endpoint, or a similar function in a client library, would let us add / remove clusters and perform size updates transparently even while we continue to use TypeSet.
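The end-state idea in the second bullet can be sketched as pure diff logic: given the full old and new states keyed by cluster_id, derive which clusters to add, remove, or resize. A hedged illustration (the struct and function names are invented here, not the provider's actual code):

```go
package main

import "fmt"

// clusterState holds the per-cluster fields this discussion cares about;
// the type is illustrative, not the provider's real representation.
type clusterState struct {
	Zone     string
	NumNodes int
}

// clusterDiff lists the operations needed to reconcile old state with new.
type clusterDiff struct {
	ToAdd, ToRemove []string
	ToResize        map[string]int
}

// diffClusters compares two end states keyed by cluster_id. With a
// PUT/PATCH-style endpoint the server performs this reconciliation
// itself, so Terraform never has to know which set element changed.
func diffClusters(prev, next map[string]clusterState) clusterDiff {
	d := clusterDiff{ToResize: map[string]int{}}
	for id, n := range next {
		p, exists := prev[id]
		switch {
		case !exists:
			d.ToAdd = append(d.ToAdd, id)
		case p.NumNodes != n.NumNodes:
			d.ToResize[id] = n.NumNodes
		}
	}
	for id := range prev {
		if _, exists := next[id]; !exists {
			d.ToRemove = append(d.ToRemove, id)
		}
	}
	return d
}

func main() {
	prev := map[string]clusterState{"a": {"us-central1-b", 3}, "b": {"us-east1-c", 3}}
	next := map[string]clusterState{"a": {"us-central1-b", 5}, "c": {"us-west1-a", 1}}
	fmt.Println(diffClusters(prev, next)) // a is resized, b removed, c added
}
```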

@mackenziestarr
Contributor Author

@rileykarson just to clarify on the second option, how would that work with fields that can't be changed like cluster_id (and therefore should be marked in the schema with ForceNew)? Would the new API endpoint need to accept that in the PUT / PATCH payload and throw a validation error? Thanks for all the great info so far, this is my first contribution.

@rileykarson
Collaborator

Ideally the client lib / API would throw a validation error if we attempted an impossible change, but we can work around it if it doesn't.

This functionality was recently added to the grpc client with https://code-review.googlesource.com/c/gocloud/+/42850. With that implementation, adding/removing clusters isn't possible (which could simplify our client code considerably 🙁 ). Regardless, it means we don't need to own much code in our Update fn and can just send our entire end state to the client lib.

Looking at this again, I think we can safely make this change with approach #1, provided we can perform diff comparisons in a CustomizeDiff to avoid a breaking change. We've treated a TypeList as unordered in the past in GCE Subnetwork (https://github.com/terraform-providers/terraform-provider-google/blob/master/google/resource_compute_subnetwork.go#L164); if we add a similar custom diff function here, we can do the same.

I believe all we need to do is make the same schema changes you've made and add the CustomizeDiff to make sure that the ordering of the list doesn't matter. CustomizeDiff functions can be pretty finicky; let me know if you'd like to take a shot at the implementation. If not, I can take over on implementing it.
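A CustomizeDiff along those lines reduces to an order-insensitive comparison of the two cluster lists. A small sketch of that core check, independent of the Terraform SDK (the field names are assumed from the resource schema, and this is only the comparison, not the full diff wiring):

```go
package main

import (
	"fmt"
	"sort"
)

// cluster mirrors the user-visible fields of a cluster block; the
// comparison treats cluster_id as the identity key.
type cluster struct {
	ClusterID   string
	Zone        string
	NumNodes    int
	StorageType string
}

// clustersEquivalent reports whether two lists describe the same set of
// clusters regardless of order. A CustomizeDiff can use this to suppress
// a diff that is pure reordering, avoiding a spurious ForceNew.
func clustersEquivalent(a, b []cluster) bool {
	if len(a) != len(b) {
		return false
	}
	as := append([]cluster(nil), a...) // copy so sorting doesn't mutate inputs
	bs := append([]cluster(nil), b...)
	sort.Slice(as, func(i, j int) bool { return as[i].ClusterID < as[j].ClusterID })
	sort.Slice(bs, func(i, j int) bool { return bs[i].ClusterID < bs[j].ClusterID })
	for i := range as {
		if as[i] != bs[i] {
			return false
		}
	}
	return true
}

func main() {
	a := []cluster{{"c1", "us-central1-b", 3, "SSD"}, {"c2", "us-east1-c", 3, "SSD"}}
	b := []cluster{{"c2", "us-east1-c", 3, "SSD"}, {"c1", "us-central1-b", 3, "SSD"}}
	fmt.Println(clustersEquivalent(a, b)) // pure reordering: no real diff
}
```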

@mackenziestarr
Contributor Author

Hey @rileykarson, thanks for all the info. With the prior art there, I think I can take a shot at implementing a CustomizeDiff function for the change; I'll try to be back with an update soon.

@rileykarson
Collaborator

Sounds good! If you're stuck, feel free to reach out here.

@mackenziestarr
Contributor Author

Hey @rileykarson when I went to implement this change I noticed this from the ResourceDiff docs

All functions in ResourceDiff, save for ForceNew, can only be used on computed fields.

Given that, should I expect to have to modify the schema similar to this PR?

@rileykarson
Collaborator

rileykarson commented Jul 24, 2019

Yep!

Because of that change, in the CustomizeDiff we'll need to verify that the new count for cluster is at least 1, to simulate it being Required: true and MinItems: 1. You can see the equivalent value being pulled into newCount on this line: https://github.com/terraform-providers/terraform-provider-google/blob/master/google/resource_compute_subnetwork.go#L169
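Since the field has to become Optional and Computed for ResourceDiff's sake, the MinItems: 1 guarantee must be re-checked by hand inside the CustomizeDiff. A tiny sketch of just that validation (the function name is hypothetical):

```go
package main

import "fmt"

// validateClusterCount re-implements the Required: true / MinItems: 1
// guarantee that is lost when the cluster field is made Optional and
// Computed so that ResourceDiff can operate on it.
func validateClusterCount(newCount int) error {
	if newCount < 1 {
		return fmt.Errorf("config is invalid: at least one cluster block is required")
	}
	return nil
}

func main() {
	fmt.Println(validateClusterCount(0)) // simulated MinItems failure
	fmt.Println(validateClusterCount(2)) // valid config: nil error
}
```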

@ghost ghost added size/l and removed size/m labels Jul 30, 2019
@mackenziestarr
Contributor Author

Hey @rileykarson I managed to get a passing test for reordering clusters when making changes but I still need to handle the reordering cases where additional clusters are added / removed. Hopefully I'll have that soon!

Collaborator

@rileykarson rileykarson left a comment


If adding / removing clusters isn't necessary for your use case, I think I'd prefer to omit it for now. If you return nil in the cases where the count doesn't match at https://github.com/terraform-providers/terraform-provider-google/pull/4026/files#diff-97a6b5054641d12f7b5a4b1aafe531b0R322-R328, I think that's sufficient. Terraform will display a (probably ugly, but workable) diff that causes the instance to recreate.

I also left some surface-level comments on a couple pieces of the customizediff.

google/resource_bigtable_instance.go (review comment, resolved)
google/resource_bigtable_instance.go (review comment, outdated, resolved)
```diff
@@ -12,6 +12,7 @@ func TestAccBigtableInstanceIamBinding(t *testing.T) {
 	t.Parallel()
 
 	instance := "tf-bigtable-iam-" + acctest.RandString(10)
+	cluster := "c-" + acctest.RandString(10)
```
Collaborator


Good catch!

@mackenziestarr
Contributor Author

@rileykarson refactored the code a bit in 376017d to make it cleaner and fixed up the diff with regard to #4156, thanks for the review!

@mackenziestarr
Contributor Author

@rileykarson just bumping this, I think it’s ready to be merged

Collaborator

@rileykarson rileykarson left a comment


Sorry! The notification that you'd updated the PR got eaten, thanks for pinging me.

Just a couple more small things commented inline.

google/resource_bigtable_instance.go (review comment, outdated, resolved)
google/resource_bigtable_instance.go (review comment, outdated, resolved)
@mackenziestarr
Contributor Author

@rileykarson no worries, that happens to me with GitHub notifications sometimes as well. Just updated with some new test cases to cover the failure conditions you pointed out.

Collaborator

@rileykarson rileykarson left a comment


LGTM, thanks for contributing @mackenziestarr!

I'll upstream these changes to Magic Modules so they're applied to the beta provider as well, and then merge this PR.

@rileykarson
Collaborator

Adding changelog metadata:

`num_nodes` can now be updated in `google_bigtable_instance`

@rileykarson rileykarson merged commit b753d89 into hashicorp:master Aug 19, 2019
@ghost

ghost commented Sep 19, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked and limited conversation to collaborators Sep 19, 2019
Successfully merging this pull request may close these issues.

Add update support for bigtable clusters (such as resizing)