Proposal: SubResources for CustomResources #913

Merged
merged 1 commit into kubernetes:master on Oct 27, 2017

Conversation

@nikhita
Member

nikhita commented Aug 15, 2017

CustomResourceDefinitions (CRDs) were introduced in 1.7. The objects defined by CRDs are called CustomResources (CRs). Currently, we do not provide subresources for CRs.

However, subresource support is one of the most requested features, and this proposal seeks to add /status and /scale subresources for CustomResources.
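
For illustration, a CRD that opts into both subresources might look roughly like the sketch below (the CronTab group, kind, and JSON paths are made-up example values, and the exact field names are still subject to review in this proposal):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  subresources:
    # Opt in to the /status subresource for CronTab objects.
    status: {}
    # Opt in to the /scale subresource; the paths tell the apiserver where
    # the replica counts and label selector live inside each object.
    scale:
      specReplicasPath: .spec.replicas
      statusReplicasPath: .status.replicas
      labelSelectorPath: .status.labelSelector
```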

cc @sttts @deads2k @enisoc @bgrant0607 @erictune @lavalamp @brendandburns @philips @liggitt @mbohlool @fabxc @adohe @munnerz

@nikhita referenced this pull request Aug 18, 2017

Merged

apiextensions: validation for customresources #47263

13 of 13 tasks complete
@rjeberhard

rjeberhard commented Sep 6, 2017

We are working on an operator and a CRD. With this proposed change and our creating the necessary SubResource, would we then be able to give administrators the option of scaling with HPA?

@enisoc
Member

enisoc commented Sep 8, 2017

@rjeberhard wrote:

> With this proposed change and our creating the necessary SubResource, would we then be able to give administrators the option of scaling with HPA?

Yup, that's the plan. There is one more piece that needs to fall into place, though. Currently HPA can only work with objects in the extensions API group, but the plan is to make it fully generic so that it can work with any object that implements the Scale subresource according to the convention we're proposing to support here.
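
For example, once that generic scale support exists, an HPA targeting a custom resource could be written against the `/scale` subresource roughly like the sketch below (all names are hypothetical and assume a CronTab CRD that enables the scale subresource as proposed here):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: crontab-autoscaler
spec:
  # scaleTargetRef may name any resource that serves a /scale subresource,
  # not just the built-in workload controllers.
  scaleTargetRef:
    apiVersion: stable.example.com/v1
    kind: CronTab
    name: my-crontab
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
```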

@bgrant0607
Member

bgrant0607 commented Oct 6, 2017

@enisoc What is our current plan for this feature? Is anyone working on it in 1.9?

@nikhita
Member

nikhita commented Oct 6, 2017

@bgrant0607 I am working on this feature. It is currently blocked on the decision for polymorphic scale: kubernetes/kubernetes#49504 (comment).

@bgrant0607
Member

bgrant0607 commented Oct 6, 2017

@nikhita Thanks. I'll see what I can do to unblock it.

@deads2k
Contributor

deads2k commented Oct 26, 2017

Needs a few updates, but nothing fundamental. Go ahead and update the features repo to list this for 1.9. Update the doc for:

  1. verbs on subresources
  2. GVK for scale
  3. clarify invalid labelselector handling.

@nikhita
Member

nikhita commented Oct 26, 2017

@deads2k Updated the proposal. If this lgty, I'll squash the commits.

Also, the proposal needs to be moved to contributors/design-proposals/api-machinery since all the proposals were re-organized recently (this will not show the diffs/comments anymore, so waiting for lgtm).

@deads2k
Contributor

deads2k commented Oct 27, 2017

lgtm. squash for merge.

@nikhita
Member

nikhita commented Oct 27, 2017

@deads2k Done.

@deads2k
Contributor

deads2k commented Oct 27, 2017

/lgtm

@k8s-merge-robot
Contributor

k8s-merge-robot commented Oct 27, 2017

/test all [submit-queue is verifying that this PR is safe to merge]

@k8s-merge-robot
Contributor

k8s-merge-robot commented Oct 27, 2017

Automatic merge from submit-queue.

k8s-merge-robot merged commit ca27d20 into kubernetes:master on Oct 27, 2017

3 checks passed

Submit Queue: Running github e2e tests a second time.
cla/linuxfoundation: nikhita authorized
pull-community-verify: Job succeeded.

k8s-merge-robot added a commit to kubernetes/kubernetes that referenced this pull request Feb 22, 2018

Merge pull request #55168 from nikhita/customresources-subresources
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

apiextensions: add subresources for custom resources

Fixes #38113
Fixes #58778

**Related**:
- Proposal: kubernetes/community#913
- For custom resources to work with `kubectl scale`: #58283

**Add types**:

- Add `CustomResourceSubResources` type to CRD.
    - Fix proto generation for `CustomResourceSubResourceStatus`: #55970.
- Add feature gate for `CustomResourceSubResources`.
    - Update CRD strategy: if feature gate is disabled, this feature is dropped (i.e. set to `nil`).
- Add validation for `CustomResourceSubResources` (see the sample custom resource after this list):
    - `SpecReplicasPath` should not be empty and should be a valid json path under `.spec`. If there is no value under the given path in the CustomResource, the `/scale` subresource will return an error on GET.
    - `StatusReplicasPath` should not be empty and should be a valid json path under `.status`. If there is no value under the given path in the CustomResource, the status replica value in the /scale subresource will default to 0.
    - If present, `LabelSelectorPath` should be a valid json path. If there is no value under `LabelSelectorPath` in the CustomResource, the status label selector value in the `/scale` subresource will default to the empty string.
    - `ScaleGroupVersion` should be `autoscaling/v1`.
    - If `CustomResourceSubResources` is enabled, only `properties` is allowed under the root schema for CRD validation.
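
To make the path semantics above concrete, here is a sample custom resource for a hypothetical CronTab CRD whose scale subresource is configured with `specReplicasPath: .spec.replicas`, `statusReplicasPath: .status.replicas`, and `labelSelectorPath: .status.labelSelector`:

```yaml
apiVersion: stable.example.com/v1
kind: CronTab
metadata:
  name: my-crontab
  namespace: default
spec:
  cronSpec: "*/5 * * * *"
  # Read via specReplicasPath; GET on /scale returns an error if this is missing.
  replicas: 3
status:
  # Read via statusReplicasPath; defaults to 0 in /scale if missing.
  replicas: 3
  # Read via labelSelectorPath; defaults to "" in /scale if missing.
  labelSelector: app=my-crontab
```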

**Add status and scale subresources**:

- Use helper functions from `apimachinery/pkg/apis/meta/v1/unstructured/helpers.go`.
    - Improve error handling: #56563, #58215.
- Introduce Registry interface for storage.
- Update storage:
    - Introduce `CustomResourceStorage` which acts as storage for the custom resource and its status and scale subresources. Note: storage for status and scale is only enabled when the feature gate is enabled _and_ the respective fields are enabled in the CRD.
    - Introduce `StatusREST` and its `New()`, `Get()` and `Update()` methods.
    - Introduce `ScaleREST` and its `New()`, `Get()` and `Update()` methods.
        - Get and Update use the JSON paths from the CRD to return an `autoscaling/v1.Scale` object (see the example `Scale` object after this list).
- Update strategy:
    - In `PrepareForCreate`,
         - Clear `.status`.
         - Set `.metadata.generation` = 1
    - In `PrepareForUpdate`,
         - Do not update `.status`.
             - If both the old and new objects have `.status` and it is changed, set it back to its old value.
             - If the old object has a `.status` but the new object doesn't, set it to the old value.
             - If old object did not have a `.status` but the new object does, delete it.
         - Increment generation if spec changes i.e. in the following cases:
             - If both the old and new objects had `.spec` and it changed.
             - If the old object did not have `.spec` but the new object does.
             - If the old object had a `.spec` but the new object doesn't.
     - In `Validate` and `ValidateUpdate`,
        - ensure that values at `specReplicasPath` and `statusReplicasPath` are >=0 and < maxInt32.
        - make sure there are no errors in getting the value at all the paths.
    - Introduce `statusStrategy` with its methods.
        - In `PrepareForUpdate`:
            - Do not update `.spec`.
                - If both the old and new objects have `.spec` and it is changed, set it back to its old value.
                - If the old object has a `.spec` but the new object doesn't, set it to the old value.
                - If old object did not have a `.spec` but the new object does, delete it.
             - Do not update `.metadata`.
        - In `ValidateStatusUpdate`:
            - For CRD validation, validate only under `.status`.
            - Validate value at `statusReplicasPath` as above. If `labelSelectorPath` is a path under `.status`, then validate it as well.
- Plug into the custom resource handler:
    - Store all three storage - customResource, status and scale in `crdInfo`.
    - Use the storage as per the subresource in the request.
    - Use the validator as per the subresource (for status, only use the schema for `status`, if present).
    - Serve the endpoint as per the subresource - see `serveResource`, `serveStatus` and `serveScale`.
- Update discovery by adding the `/status` and `/scale` resources, if enabled.
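
For the sample CronTab above, a GET on `/apis/stable.example.com/v1/namespaces/default/crontabs/my-crontab/scale` would return roughly the following `autoscaling/v1` `Scale` object (a sketch; values are mapped through the JSON paths configured in the CRD):

```yaml
apiVersion: autoscaling/v1
kind: Scale
metadata:
  name: my-crontab
  namespace: default
spec:
  # Mapped from .spec.replicas via specReplicasPath.
  replicas: 3
status:
  # Mapped from .status.replicas via statusReplicasPath.
  replicas: 3
  # Mapped from .status.labelSelector via labelSelectorPath.
  selector: app=my-crontab
```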

**Add tests**:

- Add unit tests in `etcd_test.go`.
- Add integration tests.
    - In `subresources_test.go`, use the [polymorphic scale client](https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/client-go/scale) to get and update `Scale`.
    - Add a test to check that everything works fine with YAML in `yaml_test.go`.

**Release note**:

```release-note
`/status` and `/scale` subresources are added for custom resources.
```

k8s-publishing-bot added a commit to kubernetes/client-go that referenced this pull request Feb 27, 2018

Merge pull request #55168 from nikhita/customresources-subresources

Kubernetes-commit: 6e856480c05424b5cd2cfcbec692a801b856ccb2

k8s-publishing-bot added a commit to kubernetes/apiextensions-apiserver that referenced this pull request Feb 27, 2018

Merge pull request #55168 from nikhita/customresources-subresources

Kubernetes-commit: 6e856480c05424b5cd2cfcbec692a801b856ccb2

sttts pushed a commit to sttts/client-go that referenced this pull request Mar 1, 2018

Merge pull request #55168 from nikhita/customresources-subresources

Kubernetes-commit: 6e856480c05424b5cd2cfcbec692a801b856ccb2

sttts pushed a commit to sttts/apiextensions-apiserver that referenced this pull request Mar 1, 2018

Merge pull request #55168 from nikhita/customresources-subresources

Kubernetes-commit: 6e856480c05424b5cd2cfcbec692a801b856ccb2

@abbi-gaurav

abbi-gaurav commented Mar 16, 2018

Is there any related issue for the status implementation? I can see one for scale, though.

```go
// object sent as the payload for /scale. It allows transition
// to future versions easily.
// Today only autoscaling/v1 is allowed.
ScaleGroupVersion schema.GroupVersion `json:"groupVersion"`
```

@lavalamp
Member

lavalamp commented Mar 16, 2018

I thought we agreed to remove this field in another thread? Did I miss some followup?

@nikhita
Member

nikhita commented Mar 18, 2018

> I thought we agreed to remove this field in another thread? Did I miss some followup?

It is removed. 👍

The proposal included this field. It was later decided in the implementation PR to remove it and so it was removed there.

Only the proposal contains it. It needs to be updated (will do).

@nikhita
Member

nikhita commented Mar 18, 2018

> Is there any related issue for the status implementation? I can see one for scale, though.

@abbi-gaurav kubernetes/kubernetes#55168 contains the implementation for scale as well as status. You can find docs on how to use status here: https://deploy-preview-7439--kubernetes-io-vnext-staging.netlify.com/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#status-subresource.
