
Allow in-place update of PVC's storage request #957

Merged
merged 2 commits into from
Aug 17, 2020

Conversation

dak1n1
Contributor

@dak1n1 dak1n1 commented Aug 13, 2020

Description

The Kubernetes API allows in-place updates of a PVC's spec.resources.requests.storage field.
This commit updates the PVC resource to allow the same.

A PVC's storage request can be increased in place, but the API does not allow it to be decreased.
Therefore, the provider will recreate the PVC when requests.storage decreases.
This shields the user from this specific Kubernetes API error:

`spec.resources.requests.storage: field can not be less than previous value`

It also provides an additional convenience by handling the delete and re-create automatically.
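The decision can be sketched roughly as follows. This is an illustrative stand-in, not the provider's actual code: the real implementation relies on Terraform's diff machinery and k8s.io/apimachinery's resource.Quantity, whereas `parseStorage` below is a deliberately minimal quantity parser for binary suffixes only.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseStorage converts a simplified Kubernetes quantity such as "5Gi" or
// "500Mi" into bytes. A minimal stand-in for resource.Quantity, handling
// only binary SI suffixes.
func parseStorage(q string) (int64, error) {
	suffixes := map[string]int64{
		"Ki": 1 << 10,
		"Mi": 1 << 20,
		"Gi": 1 << 30,
		"Ti": 1 << 40,
	}
	for suffix, mult := range suffixes {
		if strings.HasSuffix(q, suffix) {
			n, err := strconv.ParseInt(strings.TrimSuffix(q, suffix), 10, 64)
			if err != nil {
				return 0, err
			}
			return n * mult, nil
		}
	}
	return strconv.ParseInt(q, 10, 64) // plain byte count, no suffix
}

// needsRecreate reports whether changing the storage request from oldQ to
// newQ requires deleting and re-creating the PVC: increases are allowed in
// place by the Kubernetes API, decreases are not.
func needsRecreate(oldQ, newQ string) (bool, error) {
	oldBytes, err := parseStorage(oldQ)
	if err != nil {
		return false, err
	}
	newBytes, err := parseStorage(newQ)
	if err != nil {
		return false, err
	}
	return newBytes < oldBytes, nil
}

func main() {
	grow, _ := needsRecreate("5Gi", "10Gi")   // update in place
	shrink, _ := needsRecreate("10Gi", "5Gi") // forces recreation
	fmt.Println(grow, shrink)
}
```

A grown request passes straight through to the API server; only a shrunken one triggers the delete/re-create path.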

Acceptance tests

  • Have you added an acceptance test for the functionality being added?
  • Have you run the acceptance tests on this branch? (If so, please include the test log in a gist)

References

#602

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

The Kubernetes API allows in-place edits of a PVC's spec.resources.requests.storage field.
This change updates the PVC resource to allow the same.

To avoid surfacing this Kubernetes API error to users, the PVC will be recreated if the
specified storage value is less than the previously specified value:

`spec.resources.requests.storage: field can not be less than previous value`

This saves users the trouble of deleting and recreating the PVC themselves.
@ghost ghost added size/M labels Aug 13, 2020
@dak1n1 dak1n1 requested a review from a team August 13, 2020 03:01
@dak1n1
Contributor Author

dak1n1 commented Aug 13, 2020

I'm working on a test for allow_volume_expansion that will exercise the storage request increase. Offhand, I'm not sure whether it's possible to check that a resource has been deleted and re-created, but I'll do some research.
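One common way to detect a delete/re-create (a sketch of the general idea, not necessarily what the final acceptance tests do) is to record the object's metadata.uid before the change and compare it afterwards, since the API server assigns every created object a fresh UID:

```go
package main

import "fmt"

// objectMeta models the subset of Kubernetes object metadata relevant here.
// Every object created by the API server receives a unique metadata.uid, so
// a delete/re-create can be detected by comparing UIDs across an update.
type objectMeta struct {
	Name string
	UID  string
}

// wasRecreated reports whether the object observed after an update is a new
// instance of the same named resource.
func wasRecreated(before, after objectMeta) bool {
	return before.Name == after.Name && before.UID != after.UID
}

func main() {
	before := objectMeta{Name: "pvc-test", UID: "aaa-111"}
	afterInPlace := objectMeta{Name: "pvc-test", UID: "aaa-111"}
	afterRecreate := objectMeta{Name: "pvc-test", UID: "bbb-222"}
	fmt.Println(wasRecreated(before, afterInPlace))  // in-place update keeps the UID
	fmt.Println(wasRecreated(before, afterRecreate)) // recreation yields a new UID
}
```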

@aareet aareet linked an issue Aug 13, 2020 that may be closed by this pull request
@alexsomesan
Member

Nice solution so far! Looks good to me :)

Test the ability to set resources attribute in minikube.
Test the ability to do in-place volume expansion in GKE.
Wait for deletion, since PVC finalizer can sometimes delay deletion.
@ghost ghost added size/L and removed size/M labels Aug 15, 2020
@dak1n1
Contributor Author

dak1n1 commented Aug 15, 2020

Thanks! I got some acceptance tests committed today. I tested them locally using minikube and GKE. The tests pass. TC test is running now too.

@dak1n1
Contributor Author

dak1n1 commented Aug 15, 2020

TC test passed. This is the result from a local test run:

```
# output from minikube run

😄  minikube v1.12.1 on Fedora 32
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.12 ...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

$ TF_ACC=true go test -v -run TestAccKubernetesPersistentVolumeClaim_expansion
=== RUN   TestAccKubernetesPersistentVolumeClaim_expansion
--- PASS: TestAccKubernetesPersistentVolumeClaim_expansion (74.33s)
PASS
ok      github.com/hashicorp/terraform-provider-kubernetes/kubernetes   74.405s
/home/dakini/go/src/github.com/hashicorp/terraform-provider-kubernetes


# output from GKE run

google_container_cluster.primary: Creation complete after 3m47s [id=projects/terraform-strategic-providers/locations/us-west1-a/clusters/tf-acc-test-42668e108886b9ea993c]
data.template_file.kubeconfig: Refreshing state...
local_file.kubeconfig: Creating...
local_file.kubeconfig: Creation complete after 0s [id=f7a77c1e96d9470e25fca44f9d793abd2781ddcb]

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.


$ TF_ACC=true go test -v -run TestAccKubernetesPersistentVolumeClaim_expansion
=== RUN   TestAccKubernetesPersistentVolumeClaim_expansion
--- PASS: TestAccKubernetesPersistentVolumeClaim_expansion (67.18s)
PASS
ok      github.com/hashicorp/terraform-provider-kubernetes/kubernetes   67.267s
/home/dakini/go/src/github.com/hashicorp/terraform-provider-kubernetes
```

Member

@alexsomesan alexsomesan left a comment


LGTM!
Great work, @dak1n1! Very nicely done.

@dak1n1 dak1n1 merged commit 7a5e352 into hashicorp:master Aug 17, 2020
@JordanP
Contributor

JordanP commented Aug 31, 2020

@dak1n1 Thank you so much for this! Quick question (I could try it myself, but I'm not sure how to quickly build the provider from the master branch): would this work for dynamic volumes (volume_claim_template)?

@dak1n1
Contributor Author

dak1n1 commented Aug 31, 2020

@JordanP unfortunately volume_claim_template is part of the StatefulSet spec, which is set to ForceNew (https://github.com/hashicorp/terraform-provider-kubernetes/blob/master/kubernetes/resource_kubernetes_stateful_set.go#L37), meaning that if any field of spec is modified, the resource will be replaced rather than updated in place.
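The effect of that flag can be sketched in a self-contained way (this mirrors the behavior of the Terraform Plugin SDK's ForceNew schema flag, but uses hypothetical types rather than the SDK itself):

```go
package main

import "fmt"

// field models a Terraform schema attribute: ForceNew means any change to it
// replaces the whole resource instead of updating it in place.
type field struct {
	Name     string
	ForceNew bool
}

// planAction decides what a set of changed fields implies for the plan.
func planAction(changed []field) string {
	for _, f := range changed {
		if f.ForceNew {
			return "destroy and re-create"
		}
	}
	return "update in place"
}

func main() {
	// On kubernetes_stateful_set, the whole "spec" block is ForceNew, so a
	// changed volume_claim_template (nested under spec) forces replacement.
	fmt.Println(planAction([]field{{Name: "spec", ForceNew: true}}))
	// On kubernetes_persistent_volume_claim (after this PR), a grown storage
	// request is not ForceNew and updates in place.
	fmt.Println(planAction([]field{{Name: "spec.resources.requests.storage", ForceNew: false}}))
}
```

Lifting volume_claim_template out of that behavior would require narrowing ForceNew from the whole spec down to the fields that genuinely cannot change.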

@JordanP
Contributor

JordanP commented Aug 31, 2020

Ok, that's unfortunate for me :p I guess I can always work on a PR!

@ghost

ghost commented Oct 10, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Oct 10, 2020

Successfully merging this pull request may close these issues.

PVC increase size causes destroy