
Add some unit tests for apply on TPR #40841

Closed · mengqiy opened this issue Feb 2, 2017 · 10 comments
Labels
area/custom-resources · lifecycle/stale · sig/api-machinery · sig/apps

Comments

@mengqiy (Member) commented Feb 2, 2017

Copied from #40666 (comment)

Initial attempt: #40775

We should at least have the following unit tests (a rough skeleton follows the list):

  • create object with unknown group/version (covers TPR creation scenario)
  • changing patch of object with unknown group/version (covers TPR apply scenario)
  • no-op patch (idempotent apply) object with unknown group/version (covers TPR re-apply scenario)
  • changing patch of known object (like deployments) in an unknown version (will need to add the unknown version to the fake discovery doc - see cf74abd#diff-685443e183899c880f3c3150021d7229 as an example) - makes sure kubectl is resilient to future versions of known objects being introduced
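For concreteness, here is a rough Go skeleton of how those four cases might be enumerated. The fixture names and the runApplyTest helper are hypothetical stand-ins, not the actual kubectl test harness:

```go
package apply_test

import "testing"

// runApplyTest is a hypothetical stand-in for the real harness, which
// would mock the API server, run the apply command against the fixture,
// and assert on the requests it issues.
func runApplyTest(t *testing.T, fixture string, wantNoOp bool) {
	t.Helper()
	t.Skipf("illustrative only: apply %s, expect no-op=%v", fixture, wantNoOp)
}

func TestApplyUnknownGroupVersion(t *testing.T) {
	cases := []struct {
		name    string
		fixture string
		noOp    bool
	}{
		{"create, unknown group/version (TPR creation)", "tpr-create.yaml", false},
		{"changing patch, unknown group/version (TPR apply)", "tpr-update.yaml", false},
		{"no-op patch, unknown group/version (TPR re-apply)", "tpr-noop.yaml", true},
		{"changing patch, known kind in an unknown version", "deployment-future.yaml", false},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			runApplyTest(t, tc.fixture, tc.noOp)
		})
	}
}
```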
@mengqiy (Member, Author) commented Feb 2, 2017

@ahakanbaba This is for you.

@ahakanbaba (Contributor)

Thanks, I will follow up on this.

@mengqiy (Member, Author) commented Feb 3, 2017

@ahakanbaba You should be good to go. #40666 merged.

@ahakanbaba (Contributor)

Thanks.

@ahakanbaba (Contributor)

@GnawNom This is the issue we can collaborate on. I will mention this issue in my PRs so you can also keep track.

@ahakanbaba (Contributor)

@GnawNom FYI, I will work on "no-op patch (idempotent apply) object with unknown group/version (covers TPR re-apply scenario)" first.

ahakanbaba added a commit to ahakanbaba/kubernetes that referenced this issue Feb 8, 2017
The test in apply_test follows the general pattern of other tests.
We load from a file in test/fixtures and mock the API server in the
function closure in the HttpClient call.
The apply operation expects a last-applied-configuration annotation.
That is written verbatim in the test/fixture file.

References kubernetes#40841
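As a minimal, self-contained sketch of that pattern: the roundTripperFunc helper is hand-rolled here, the fixture is inlined rather than loaded from test/fixtures, and the fake-apiserver URL is made up; kubectl.kubernetes.io/last-applied-configuration is the real annotation key that apply reads.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

// roundTripperFunc lets a plain closure serve as the HTTP transport,
// so the "API server" lives entirely inside the test.
type roundTripperFunc func(*http.Request) (*http.Response, error)

func (f roundTripperFunc) RoundTrip(r *http.Request) (*http.Response, error) { return f(r) }

func main() {
	// Inlined stand-in for the test/fixtures file; note the verbatim
	// last-applied-configuration annotation the apply path expects.
	fixture := []byte(`{"kind":"Foo","apiVersion":"company.com/v1","metadata":` +
		`{"name":"test","annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{}"}}}`)

	client := &http.Client{Transport: roundTripperFunc(func(req *http.Request) (*http.Response, error) {
		// A real test would switch on req.Method and req.URL.Path here.
		return &http.Response{
			StatusCode: http.StatusOK,
			Header:     http.Header{"Content-Type": []string{"application/json"}},
			Body:       io.NopCloser(bytes.NewReader(fixture)),
		}, nil
	})}

	resp, err := client.Get("https://fake-apiserver/apis/company.com/v1/namespaces/default/foos/test")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```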
ahakanbaba added a commit to ahakanbaba/kubernetes that referenced this issue Feb 17, 2017
The tests in apply_test follow the general pattern of other tests.
We load from a file in test/fixtures and mock the API server in the
function closure in the HttpClient call.
In the PATCH request round-tripper we check that the kubectl apply
implementation worked as expected.

References kubernetes#40841
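The PATCH-side check could look like this sketch (same illustrative roundTripperFunc helper as in the previous sketch; the expected patch body and URL are made up):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

type roundTripperFunc func(*http.Request) (*http.Response, error)

func (f roundTripperFunc) RoundTrip(r *http.Request) (*http.Response, error) { return f(r) }

func main() {
	want := []byte(`{"someField":"modifiedField"}`)
	client := &http.Client{Transport: roundTripperFunc(func(req *http.Request) (*http.Response, error) {
		if req.Method == http.MethodPatch {
			got, _ := io.ReadAll(req.Body)
			// This is where the test verifies that the apply
			// implementation computed the patch we expect.
			if !bytes.Equal(got, want) {
				return nil, fmt.Errorf("unexpected patch body: %s", got)
			}
		}
		return &http.Response{
			StatusCode: http.StatusOK,
			Header:     http.Header{"Content-Type": []string{"application/json"}},
			Body:       io.NopCloser(bytes.NewReader([]byte(`{}`))),
		}, nil
	})}

	req, _ := http.NewRequest(http.MethodPatch,
		"https://fake-apiserver/apis/company.com/v1/namespaces/default/foos/test",
		bytes.NewReader(want))
	req.Header.Set("Content-Type", "application/merge-patch+json")
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	fmt.Println("patch accepted:", resp.StatusCode)
}
```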
ahakanbaba added a commit to ahakanbaba/kubernetes that referenced this issue Feb 24, 2017

ahakanbaba added a commit to ahakanbaba/kubernetes that referenced this issue Feb 27, 2017
k8s-github-robot pushed a commit that referenced this issue Feb 28, 2017
Automatic merge from submit-queue (batch tested with PRs 41937, 41151, 42092, 40269, 42135)

Add a unit test for idempotent applies to the TPR entries.

The test in apply_test follows the general pattern of other tests.
We load from a file in test/fixtures and mock the API server in the
function closure in the HttpClient call.
The apply operation expects a last-applied-configuration annotation.
That is written verbatim in the test/fixture file.

References #40841



**What this PR does / why we need it**:
Adds one unit test for TPRs that use apply.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
References: 
kubernetes/enhancements#95
#40841 (comment)


**Special notes for your reviewer**:

I am not super proud of the tpr-entry name, but I feel like we need to call the two objects different things: the one that has Kind: ThirdPartyResource and the one that has Kind: Foo.

Is the name "ThirdPartyResource" used interchangeably for both? I used tpr-entry for the Kind: Foo object.

Also, I assume this is testing an idempotent apply, because the last-applied-configuration annotation is the same as the object itself. This is the state I see in the kubectl logs if I do a proper idempotent apply of a third-party resource entry.

I guess I will know more once I start playing around with apply commands that change TPR objects.
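To make the idempotency point concrete: when the applied config matches the live object, the computed patch degenerates to the empty merge patch. A minimal sketch using the evanphx/json-patch library (kubectl's real diff for unknown types is a three-way JSON merge patch, so this is only an approximation):

```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	live := []byte(`{"kind":"Foo","someField":"field1"}`)
	applied := []byte(`{"kind":"Foo","someField":"field1"}`)

	// Identical inputs yield the empty patch "{}": a no-op apply.
	patch, err := jsonpatch.CreateMergePatch(live, applied)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(patch)) // prints: {}
}
```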

**Release note**:

```release-note
```
@k8s-github-robot k8s-github-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label May 31, 2017
@0xmichalis (Contributor)

/sig apps

@k8s-ci-robot k8s-ci-robot added the sig/apps Categorizes an issue or PR as relevant to SIG Apps. label Jun 21, 2017
@0xmichalis (Contributor)

/sig api-machinery

@k8s-ci-robot k8s-ci-robot added the sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. label Jun 21, 2017
@k8s-github-robot k8s-github-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jun 21, 2017
@enisoc enisoc added this to Backlog in CustomResourceDefinition Jul 13, 2017
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 29, 2017
@liggitt (Member) commented Jan 16, 2018

These exist now:

# Test that we can create single item via apply
kubectl "${kube_flags[@]}" apply -f hack/testdata/CRD/foo.yaml
# Test that we have created a foo named test
kube::test::get_object_assert foos "{{range.items}}{{$id_field}}:{{end}}" 'test:'
# Test that the field has the expected value
kube::test::get_object_assert foos/test '{{.someField}}' 'field1'
# Test that apply an empty patch doesn't change fields
kubectl "${kube_flags[@]}" apply -f hack/testdata/CRD/foo.yaml
# Test that the field has the same value after re-apply
kube::test::get_object_assert foos/test '{{.someField}}' 'field1'
# Test that the subfield has the expected initial value
kube::test::get_object_assert foos/test '{{.nestedField.someSubfield}}' 'subfield1'
# Update a subfield and then apply the change
kubectl "${kube_flags[@]}" apply -f hack/testdata/CRD/foo-updated-subfield.yaml
# Test that apply has updated the subfield
kube::test::get_object_assert foos/test '{{.nestedField.someSubfield}}' 'modifiedSubfield'
# Test that the field has the expected value
kube::test::get_object_assert foos/test '{{.nestedField.otherSubfield}}' 'subfield2'
# Delete a subfield and then apply the change
kubectl "${kube_flags[@]}" apply -f hack/testdata/CRD/foo-deleted-subfield.yaml
# Test that apply has deleted the field
kube::test::get_object_assert foos/test '{{.nestedField.otherSubfield}}' '<no value>'
# Test that the field does not exist
kube::test::get_object_assert foos/test '{{.nestedField.newSubfield}}' '<no value>'
# Add a field and then apply the change
kubectl "${kube_flags[@]}" apply -f hack/testdata/CRD/foo-added-subfield.yaml
# Test that apply has added the field
kube::test::get_object_assert foos/test '{{.nestedField.newSubfield}}' 'subfield3'
# Delete the resource
kubectl "${kube_flags[@]}" delete -f hack/testdata/CRD/foo.yaml
# Make sure it's gone
kube::test::get_object_assert foos "{{range.items}}{{$id_field}}:{{end}}" ''
# Test that we can create list via apply
kubectl "${kube_flags[@]}" apply -f hack/testdata/CRD/multi-crd-list.yaml
# Test that we have created a foo and a bar from a list
kube::test::get_object_assert foos "{{range.items}}{{$id_field}}:{{end}}" 'test-list:'
kube::test::get_object_assert bars "{{range.items}}{{$id_field}}:{{end}}" 'test-list:'
# Test that the field has the expected value
kube::test::get_object_assert foos/test-list '{{.someField}}' 'field1'
kube::test::get_object_assert bars/test-list '{{.someField}}' 'field1'
# Test that re-applying a list doesn't change anything
kubectl "${kube_flags[@]}" apply -f hack/testdata/CRD/multi-crd-list.yaml
# Test that the field has the same value after re-apply
kube::test::get_object_assert foos/test-list '{{.someField}}' 'field1'
kube::test::get_object_assert bars/test-list '{{.someField}}' 'field1'
# Test that the fields have the expected value
kube::test::get_object_assert foos/test-list '{{.someField}}' 'field1'
kube::test::get_object_assert bars/test-list '{{.someField}}' 'field1'
# Update fields and then apply the change
kubectl "${kube_flags[@]}" apply -f hack/testdata/CRD/multi-crd-list-updated-field.yaml
# Test that apply has updated the fields
kube::test::get_object_assert foos/test-list '{{.someField}}' 'modifiedField'
kube::test::get_object_assert bars/test-list '{{.someField}}' 'modifiedField'
# Test that the field has the expected value
kube::test::get_object_assert foos/test-list '{{.otherField}}' 'field2'
kube::test::get_object_assert bars/test-list '{{.otherField}}' 'field2'
# Delete fields and then apply the change
kubectl "${kube_flags[@]}" apply -f hack/testdata/CRD/multi-crd-list-deleted-field.yaml
# Test that apply has deleted the fields
kube::test::get_object_assert foos/test-list '{{.otherField}}' '<no value>'
kube::test::get_object_assert bars/test-list '{{.otherField}}' '<no value>'
# Test that the fields do not exist
kube::test::get_object_assert foos/test-list '{{.newField}}' '<no value>'
kube::test::get_object_assert bars/test-list '{{.newField}}' '<no value>'
# Add a field and then apply the change
kubectl "${kube_flags[@]}" apply -f hack/testdata/CRD/multi-crd-list-added-field.yaml
# Test that apply has added the field
kube::test::get_object_assert foos/test-list '{{.newField}}' 'field3'
kube::test::get_object_assert bars/test-list '{{.newField}}' 'field3'
# Delete the resource
kubectl "${kube_flags[@]}" delete -f hack/testdata/CRD/multi-crd-list.yaml
# Make sure it's gone
kube::test::get_object_assert foos "{{range.items}}{{$id_field}}:{{end}}" ''
kube::test::get_object_assert bars "{{range.items}}{{$id_field}}:{{end}}" ''
## kubectl apply --prune
# Test that no foo or bar exist
kube::test::get_object_assert foos "{{range.items}}{{$id_field}}:{{end}}" ''
kube::test::get_object_assert bars "{{range.items}}{{$id_field}}:{{end}}" ''
# apply --prune on foo.yaml that has foo/test
kubectl apply --prune -l pruneGroup=true -f hack/testdata/CRD/foo.yaml "${kube_flags[@]}" --prune-whitelist=company.com/v1/Foo --prune-whitelist=company.com/v1/Bar
# Check that the right custom resources exist
kube::test::get_object_assert foos "{{range.items}}{{$id_field}}:{{end}}" 'test:'
kube::test::get_object_assert bars "{{range.items}}{{$id_field}}:{{end}}" ''
# apply --prune on bar.yaml that has bar/test
kubectl apply --prune -l pruneGroup=true -f hack/testdata/CRD/bar.yaml "${kube_flags[@]}" --prune-whitelist=company.com/v1/Foo --prune-whitelist=company.com/v1/Bar
# Check that the right custom resources exist
kube::test::get_object_assert foos "{{range.items}}{{$id_field}}:{{end}}" ''
kube::test::get_object_assert bars "{{range.items}}{{$id_field}}:{{end}}" 'test:'

/close

@liggitt liggitt moved this from Backlog to Done in CustomResourceDefinition Jan 16, 2018