
Proposal: SubResources for CustomResources #913

Merged

Conversation

nikhita (Member) commented Aug 15, 2017

CustomResourceDefinitions (CRDs) were introduced in 1.7. The objects defined by CRDs are called CustomResources (CRs). Currently, we do not provide subresources for CRs.

However, it is one of the most requested features and this proposal seeks to add /status and /scale subresources for CustomResources.

cc @sttts @deads2k @enisoc @bgrant0607 @erictune @lavalamp @brendandburns @philips @liggitt @mbohlool @fabxc @adohe @munnerz


@nikhita nikhita force-pushed the nikhita:customresources-subresources branch from 4695745 to c642c92 Aug 15, 2017

## Non-Goals

1. Allow defining arbitrary subresources, i.e. subresources other than `/status` and `/scale`.
2. Unify the many `Scale` types: as long as there is no generic `Scale` object in Kubernetes, we will propose to introduce yet another `Scale` type in the `apiextensions.k8s.io` api group. If Kubernetes switches to a generic `Scale` object, `apiextensions.k8s.io` will follow.

liggitt (Member), Aug 15, 2017

Making scale operate with a different API type will prevent it from working with HPA. I'd expect it to use the same API type

sttts (Contributor), Aug 15, 2017

Same in which sense? Same GroupVersion? Extensionally equal?

enisoc (Member), Aug 15, 2017

@foxish Is the plan to move everything to autoscaling.Scale?

deads2k (Contributor), Aug 18, 2017

HPA should be able to speak multiple versions of the scale subresource exposed via discovery: kubernetes/kubernetes#49971 . I think I'd rather have autoscaling depend on apiextensions than the other way around.

bgrant0607 (Member), Oct 6, 2017

An aside:

The main advantage of subresources is that they enable distinct authorization policies for the subresource endpoints.

To enable just polymorphism of spec and status, I want to make duck typing work, more like our metadata convention. That would be both a simpler mechanism and would be compatible with client-based tools. kubectl set replicas should be able to set the replicas field of any API that has a replicas field.

// StatusPath is restricted to be ".status" and defaults to ".status".
StatusPath string `json:"statusPath,omitempty"`
// The JSON path (e.g. ".spec") of the spec of a CustomResource.
// SpecPath is restricted to be ".spec" and defaults to ".spec".

liggitt (Member), Aug 15, 2017

Is there a reason to make spec/status paths be specified if they are locked to a single path?

sttts (Contributor), Aug 15, 2017

We want the possibility to add fields later (i.e. no bool, but a struct) and we want to avoid an empty struct. Any better idea?

deads2k (Contributor), Aug 18, 2017

We want the possibility to add fields later (i.e. no bool, but a struct) and we want to avoid an empty struct. Any better idea?

If you make it a pointer (like it is), doesn't the presence or absence of the field trigger you?

sttts (Contributor), Aug 18, 2017

We can choose to have fixed paths or what we have for EmptyDir today: `StatusSubResource: {}`.

Which way do we like more? I don't like the `{}` notation, but that's probably the way to go in YAML, as ugly as it is.

deads2k (Contributor), Aug 18, 2017

Which way do we like more? I don't like the `{}` notation, but that's probably the way to go in YAML, as ugly as it is.

I prefer the {} to fields that require specific values as a marker.
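For illustration, the EmptyDir-style marker discussed here might look like this in a CRD manifest (a sketch only: field names and casing follow the proposal text, not necessarily the final API):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  version: v1
  names:
    plural: crontabs
    kind: CronTab
  subResources:
    # The presence of the (possibly empty) struct enables the
    # subresource, mirroring the EmptyDir `{}` convention.
    status: {}
```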

bgrant0607 (Member), Oct 6, 2017

Add fields to what? Going back to @liggitt's original question: I do not think the spec and status paths should be configurable.

nikhita (Author, Member), Oct 6, 2017

Updated the proposal. Status and spec paths are now hardcoded to `.status` and `.spec` respectively and are no longer configurable.

StatusReplicasPath string `json:"statusReplicasPath,omitempty"`
// optional, e.g. ".spec.labelSelector".
// Only JSON paths without the array notation are allowed.
LabelSelectorPath string `json:"labelSelectorPath,omitempty"`

liggitt (Member), Aug 15, 2017

Define behavior if the specified paths do not exist or do not contain valid data (non-integers for scale paths, non-selector data for the selector path, etc)

sttts (Contributor), Aug 15, 2017

"do not exist" is described below. Invalid data after the path has been changed in a CRD (can it be changed?) must be added.

nikhita (Author, Member), Aug 17, 2017

Added.

### Feature Gate

The `SubResources` field in `CustomResourceDefinitionSpec` will be gated under the `CustomResourceSubResources` alpha feature gate.
If the gate is not open, the value of the new field within `CustomResourceDefinitionSpec` is rejected on creation and updates of CRDs.

liggitt (Member), Aug 15, 2017

Additionally, if the gate is off, the alpha fields should be dropped from incoming requests (see #869 (comment))

sttts (Contributor), Aug 15, 2017

This was not clear from the existing implementation and the api convention text. Thanks for clarifying. This matches my understanding.

nikhita (Author, Member), Aug 17, 2017

Updated.


### Feature Gate

The `SubResources` field in `CustomResourceDefinitionSpec` will be gated under the `CustomResourceSubResources` alpha feature gate.

liggitt (Member), Aug 15, 2017

nit: I think I'd expect this to be "Subresources", since we generally say "subresources", not "sub resources" or "sub-resources"

enisoc (Member), Aug 15, 2017

I thought so too, until I found that we use SubResource in both client-go and as a field in AdmissionReviewSpec. Should we be consistent with English, or with the rest of k8s?

sttts (Contributor), Oct 26, 2017

If nobody speaks up, I propose to use what we have here now. @liggitt last chance to articulate a strong opinion.

liggitt (Member), Oct 26, 2017

no strong opinion

- The main resource endpoint will ignore all changes in the specified status subpath.
(note: it will **not** reject requests which try to change the status, following the existing semantics of other resources).

- The `.metadata.generation` field is updated if and only if the value at the specified `SpecPath` (e.g. `.spec`) changes.

liggitt (Member), Aug 15, 2017

Does it prevent changing generation if spec does not change?

sttts (Contributor), Aug 15, 2017

The understanding is that the update endpoint increases the generation iff it sees a difference in the spec. What is the expected behaviour about posting an object with modified Generation? Currently the proposal says that this value is overridden by the strategy.

kargakis (Member), Aug 16, 2017

What is the expected behaviour about posting an object with modified Generation?

For core objects, we ignore the change.

sttts (Contributor), Aug 17, 2017

@kargakis thanks for the clarification. So this means: ignore = we reset it to the old value if spec is unchanged.

In other words, metadata.Generation is completely uncontrollable for the controllers. This is probably a good thing for consistency.

nikhita (Author, Member), Aug 17, 2017

For core objects, we ignore the change.

@kargakis Maybe missing something here but I can't find any place where we reset metadata.Generation to the old value if spec is unchanged. One example: https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/core/replicationcontroller/strategy.go#L70.

kargakis (Member), Aug 17, 2017

We restore to the old generation one level prior to calling the resource-specific rest handlers in https://github.com/kubernetes/apiserver/blob/149fc2228647cea28b0670c240ec582e985e8eda/pkg/registry/rest/update.go#L100

nikhita (Author, Member), Aug 17, 2017

Thanks, @kargakis. Updated.

// label query over pods that should match the replicas count.
// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
// +optional
Selector *metav1.LabelSelector `json:"selector"`

enisoc (Member), Aug 15, 2017

In the external (versioned) structs, all of apps.ScaleStatus, extensions.ScaleStatus, and autoscaling.ScaleStatus use a string (encoded as a label query) for the selector. The internal structs vary: extensions.ScaleStatus uses *metav1.LabelSelector internally, but autoscaling.ScaleStatus keeps it as a string.

We should make this consistent, although I don't know with what.

sttts (Contributor), Aug 16, 2017

Right. We did not really specify whether we describe the internal or external types. But the way to go for the selector in the external types is to use a string. Good catch. @nikhita can you update that?

nikhita (Author, Member), Aug 16, 2017

Updated. Thanks, @enisoc.

#### Status

The status endpoint of a CustomResource receives a full CR object. Changes outside of the status path are ignored.
For validation, everything outside of the JSON path `StatusPath` can be reset to the existing values of the CustomResource. Then the JSON Schema presented in the CRD is validated against the whole object.

enisoc (Member), Aug 15, 2017

It will still pass along the ResourceVersion from the update request to be verified, right? We need to be careful not to overwrite it with the old value.

nikhita (Author, Member), Aug 16, 2017

Yes, it will use the new ResourceVersion value. 👍

sttts (Contributor), Aug 16, 2017

For the validation step the resource version is not important. It plays a role when sending the updated object to the storage layer. There the described semantics are applied, i.e. the status ResourceVersion is used in order to stay consistent with optimistic concurrency for the whole object.


@nikhita nikhita force-pushed the nikhita:customresources-subresources branch 2 times, most recently from 1de0b1d to 7c9fcde Aug 16, 2017


```go
type CustomResourceDefinitionSpec struct {
	Group string
```

kargakis (Member), Aug 16, 2017

Can you omit all existing fields?

sttts (Contributor), Aug 17, 2017

Just some ..., yes.

nikhita (Author, Member), Aug 17, 2017

Done.

@nikhita nikhita force-pushed the nikhita:customresources-subresources branch from 7c9fcde to 567081d Aug 17, 2017

The status endpoint of a CustomResource receives a full CR object. Changes outside of the status path are ignored.
For validation, everything outside of the JSON path `StatusPath` can be reset to the existing values of the CustomResource. Then the JSON Schema presented in the CRD is validated against the whole object.

Note: one could extract the status suboject, filter the JSON schema by rules applied to the status and then validate the status subobject only.

ash2k (Member), Aug 17, 2017

Typo in suboject

@nikhita nikhita force-pushed the nikhita:customresources-subresources branch from 567081d to ff5d1de Aug 17, 2017

SpecReplicasPath string `json:"specReplicasPath,omitempty"`
// optional, e.g. ".status.replicas".
// Only JSON paths without the array notation are allowed.
StatusReplicasPath string `json:"statusReplicasPath,omitempty"`

deads2k (Contributor), Aug 18, 2017

Is the belief that we are too late to be prescriptive?

sttts (Contributor), Aug 18, 2017

@enisoc ^^, you had the argument that we have plenty of paths already today in existing resources. Moreover, there might be a number of existing TPR/CRD users who have chosen their schema. By being prescriptive we make it impossible for them to opt-in.

enisoc (Member), Aug 18, 2017

My argument was that the purpose of the /scale subresource, as far as I know, is to abstract away the knowledge of which field a given object considers to represent its "scale". If we wanted to go the route of a convention for field names, like it must be .status.replicas, then there would be no need for a subresource. You could just look directly at .status.replicas.

This is in contrast to the /status subresource, which I argued ought to be restricted to the convention of .spec and .status field names. The /status endpoint returns the whole object, not a sub-field or a new type like /scale does, and so the only way the caller knows which part is the status is by agreeing on the conventional field name. Similarly, the caller needs to agree that .spec is the field name that triggers .metadata.generation to be incremented and so on.

Because /scale instead returns a separate, abstracted type, there is no need to agree on the underlying field names.

sttts (Contributor), Aug 21, 2017

@deads2k looks like your comment is placed in the wrong thread. Can you elaborate on what you consider worth fixing?

sttts (Contributor), Oct 4, 2017

As there are CRDs out in the wild which have chosen a certain replicas path, I still think we should support arbitrary JSON paths for the replicas fields. @deads2k is this something you can live with?

In contrast, I agree that for `.spec` and `.status` we should prescribe those paths. I would expect that 99% of users have chosen those anyway.

deads2k (Contributor), Oct 4, 2017

@deads2k is this something you can live with?

I can live with arbitrary json paths for the replicas fields.

bgrant0607 (Member), Oct 6, 2017

I am not ok with arbitrary paths.

sttts (Contributor), Oct 6, 2017

@bgrant0607 I don't care much about new CRDs. Forcing new users to use a certain replicas field path is fine. But by dropping the ability to use custom paths, IMO we fail on goal 4: transition of existing CRDs. If we accept that, fixed paths are fine.

}
// The following is the payload to send over the wire for /scale. It happens
// to be defined here in apiextensions.k8s.io because we don’t have a global

deads2k (Contributor), Aug 18, 2017

And I don't think that we should. If polymorphic endpoints are to succeed, they need to be able to exist outside of meta.k8s.io.

sttts (Contributor), Aug 18, 2017

Not a decision we have to make here and now. I hope this topic is solved by implementation time.

#### Status

The status endpoint of a CustomResource receives a full CR object. Changes outside of the status path are ignored.
For validation, everything outside of the JSON path `StatusPath` can be reset to the existing values of the CustomResource. Then the JSON Schema presented in the CRD is validated against the whole object.

deads2k (Contributor), Aug 18, 2017

I'd recommend skipping validation on the spec. If you tighten validation, you don't generally want to reject callers which are writing "this is the state of the world".

sttts (Contributor), Aug 18, 2017

That's the effect. It's just an implementation detail with the same semantics as "checking only the status".

enisoc (Member), Aug 18, 2017

I think @deads2k has a point. Consider:

  1. Create object.
  2. Tighten validation such that the existing object Spec no longer passes.
  3. Controller tries to update object Status, perhaps to indicate the object is invalid.

I'm not sure if this is worth fixing though, if all it would do is allow Status to be updated when the Spec is invalid. Users could still hit the same problem by tightening validation on Status itself.

sttts (Contributor), Aug 21, 2017

We discussed that before (in the proposal I think) and concluded that the user is responsible to ensure that old objects validate. There is not much we can do. I don't see this case here any different.

deads2k (Contributor), Aug 23, 2017

We discussed that before (in the proposal I think) and concluded that the user is responsible to ensure that old objects validate. There is not much we can do. I don't see this case here any different.

There's no way to identify all the validation rules under status in the jsonpath description and only run those?

sttts (Contributor), Aug 23, 2017

At the very least for OneOf in JSON Schema it's not that obvious how to cut down a schema in general. If I'm not missing anything, the other JSON Schema constructs are at least monotonic, i.e. if we remove rules from the schema it becomes weaker. But it still needs some deep thought to implement such a projection onto a JSON path. My current gut feeling is that, with the exception of OneOf, it's doable. My argument above was that it's not worth it to investigate or implement this if we can test the whole object after restoring the spec and metadata.

lpabon, Aug 29, 2017

I think to avoid confusion, the paper should provide an example use-case workflow, which shows how the data will be interpreted.

deads2k (Contributor), Oct 4, 2017

I still feel pretty strongly about this. Looks like one of my previous comments landed in the wrong thread:

I'm not sure if this is worth fixing though, if all it would do is allow Status to be updated when the Spec is invalid. Users could still hit the same problem by tightening validation on Status itself.

It's been a big deal for first-class resources during upgrades. It's worth fixing.

My argument above was that it's not worth it to investigate or implement this if we can test the whole object after restoring the spec and metadata.

That doesn't do the same thing. If validation is tightened on spec, you can still fail status updates on it. I don't even mind the validation running as long as we never report or fail on bad spec.

nikhita (Author, Member), Oct 26, 2017

@deads2k Updated the proposal. Now we validate only against the status.

- The value at the optional JSON Path `StatusReplicasPath` (e.g. `.status.replicas`) is validated to be an integer if it exists (i.e. this can be empty).

- The value at the optional JSON Path `LabelSelectorPath` (e.g. `.spec.labelSelector`) is validated to be a valid label selector if it exists (i.e. this can be empty).
If it is invalid, an empty `LabelSelector` is used.

deads2k (Contributor), Aug 18, 2017

Why does this field allow invalid values?

sttts (Contributor), Aug 18, 2017

By changing (or setting for the first time) LabelSelectorPath in an existing CRD. Then we have no chance to validate this post-creation in all CRs.

sttts (Contributor), Aug 18, 2017

Use-case: opt-in to subresources later-on.

deads2k (Contributor), Oct 26, 2017

By changing (or setting for the first time) LabelSelectorPath in an existing CRD. Then we have no chance to validate this post-creation in all CRs.

In such a case, why shouldn't we return an error from the API until the data is correct? The data isn't doing what the user creating it thought it should. An empty label selector doesn't necessarily mean the same thing.

sttts (Contributor), Oct 26, 2017

"If it is invalid, an empty LabelSelector is used." means that a label selector pointing to something not-existing in the CR maps to the default value of an empty label selector.

In general, we don't want to synchronously check all CRs in etcd of a given GV against changes in these fields. I.e. we don't want to implement a validation which takes a snapshot of all CRs of a CRD and verifies them. First, there can be many; second, to make that consistent we would have to disable updates on CRs during that time. So the choice here is to have a good enough fallback that it does not break the CRD user's neck, but to give them the responsibility to define consistent things for their CRs. Defining a CRD with subresources is nothing you do every day or even ad-hoc.

sttts (Contributor), Oct 26, 2017

We have to clarify:

  • a given path is empty. This maps to the default value (0 for the replicas and empty for the label selector)
  • a given path exists, but it does not validate (no integer or no label selector). In that case we can return an error.

The error code, as this is in fact a server-caused error, should be in the 5xx range. But 422 (and, strictly following the spec, also 403) is a good candidate too.

@nikhita nikhita referenced this pull request Aug 18, 2017


#### Status Replicas Behavior

As only the `scale.Spec.Replicas` field is to be written to by the CR user, the user-provided controller (not any generic CRD controller) counts its children and then updates the controlled object by writing to the `/status` subresource, i.e. the `scale.Status.Replicas` field is read-only.

lpabon, Aug 29, 2017

@nikhita I'm new to this part. Why is it that .spec.replicas is read-only?

nikhita (Author, Member), Aug 29, 2017

@nikhita I'm new to this part. Why is it that .spec.replicas is read-only?

We cannot write to the spec or the status of the CR using the /scale subresource. Additionally,

  • .spec.replicas can be written to by the CR user by updating the object.
  • .status.replicas can be written to by the CR controller using /status subresource.

liggitt (Member), Aug 29, 2017

typically, spec.replicas is mutable via the scale subresource. that is how the HPA controller would interact with this resource

sttts (Contributor), Aug 29, 2017

To clarify and sum up:

  • values in / .spec.replicas (the main resource) and /scale .spec.replicas are the same value. Both can be written if RBAC rules allow that.
  • values /status .status.replicas and /scale .status.replicas are the same value. The first can be written if RBAC rules allow that, the second can't.

In practice an autoscaling controller might only write /scale .spec.replicas (and RBAC will be restricted to only allow that). A user might do kubectl edit and write to / .spec.replicas, or use kubectl scale, which writes to /scale .spec.replicas. A controller will write to /status .status.replicas, but not to / .spec.replicas or /scale .spec.replicas.

rjeberhard commented Sep 6, 2017

We are working on an operator and a CRD. With this proposed change and our creating the necessary SubResource, would we then be able to give administrators the option of scaling with HPA?

enisoc (Member) commented Sep 8, 2017

@rjeberhard wrote:

With this proposed change and our creating the necessary SubResource, would we then be able to give administrators the option of scaling with HPA?

Yup, that's the plan. There is another piece in addition to this that also needs to fall into place, though. Currently HPA can only work with things in the extensions API group, but they are planning to make it fully generic so it can work with any object that implements the Scale subresource according to the convention that we're proposing to support here.

deads2k (Contributor) commented Oct 26, 2017

Needs a few updates, but nothing fundamental. Go ahead and update the features repo to list this for 1.9. Update the doc for

  1. verbs on subresources
  2. GVK for scale
  3. clarify invalid labelselector handling.
nikhita (Author, Member) commented Oct 26, 2017

@deads2k Updated the proposal. If this lgty, I'll squash the commits.

Also, the proposal needs to be moved to contributors/design-proposals/api-machinery since all the proposals were re-organized recently (this will not show the diffs/comments anymore, so waiting for lgtm).

// object sent as the payload for /scale. It allows transition
// to future versions easily.
// Today only autoscaling/v1 is allowed.
ScaleGroupVersion schema.GroupVersion `json:"groupVersion"`

deads2k (Contributor), Oct 27, 2017

this is a logical type, but don't actually embed the type here

sttts (Contributor), Oct 27, 2017

The external type will have a string, the internal the logical one.

deads2k (Contributor), Oct 27, 2017

The external type will have a string, the internal the logical one.

Ok.

deads2k (Contributor) commented Oct 27, 2017

lgtm. squash for merge.

@nikhita nikhita force-pushed the nikhita:customresources-subresources branch from 644b75c to f38bf97 Oct 27, 2017

nikhita (Author, Member) commented Oct 27, 2017

@deads2k Done.

deads2k (Contributor) commented Oct 27, 2017

/lgtm

k8s-github-robot (Contributor) commented Oct 27, 2017

/test all [submit-queue is verifying that this PR is safe to merge]

k8s-github-robot (Contributor) commented Oct 27, 2017

Automatic merge from submit-queue.

@k8s-github-robot k8s-github-robot merged commit ca27d20 into kubernetes:master Oct 27, 2017

3 checks passed

- Submit Queue: Running github e2e tests a second time.
- cla/linuxfoundation: nikhita authorized
- pull-community-verify: Job succeeded.
k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this pull request Feb 22, 2018
Kubernetes Submit Queue
Merge pull request #55168 from nikhita/customresources-subresources
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

apiextensions: add subresources for custom resources

Fixes #38113
Fixes #58778

**Related**:
- Proposal: kubernetes/community#913
- For custom resources to work with `kubectl scale`: #58283

**Add types**:

- Add `CustomResourceSubResources` type to CRD.
    - Fix proto generation for `CustomResourceSubResourceStatus`: #55970.
- Add feature gate for `CustomResourceSubResources`.
    - Update CRD strategy: if feature gate is disabled, this feature is dropped (i.e. set to `nil`).
- Add validation for `CustomResourceSubResources`:
    - `SpecReplicasPath` should not be empty and should be a valid json path under `.spec`. If there is no value under the given path in the CustomResource, the `/scale` subresource will return an error on GET.
    - `StatusReplicasPath` should not be empty and should be a valid json path under `.status`. If there is no value under the given path in the CustomResource, the status replica value in the /scale subresource will default to 0.
    - If present, `LabelSelectorPath` should be a valid json path. If there is no value under `LabelSelectorPath` in the CustomResource, the status label selector value in the `/scale` subresource will default to the empty string.
    - `ScaleGroupVersion` should be `autoscaling/v1`.
    - If `CustomResourceSubResources` is enabled, only `properties` is allowed under the root schema for CRD validation.

**Add status and scale subresources**:

- Use helper functions from `apimachinery/pkg/apis/meta/v1/unstructured/helpers.go`.
    - Improve error handling: #56563, #58215.
- Introduce Registry interface for storage.
- Update storage:
    - Introduce `CustomResourceStorage` which acts as storage for the custom resource and its status and scale subresources. Note: storage for status and scale is only enabled when the feature gate is enabled _and_ the respective fields are enabled in the CRD.
    - Introduce `StatusREST` and its `New()`, `Get()` and `Update()` methods.
    - Introduce `ScaleREST` and its `New()`, `Get()` and `Update()` methods.
        - Get and Update use the json paths from the CRD and use them to return an `autoscaling/v1.Scale` object.
- Update strategy:
    - In `PrepareForCreate`:
        - Clear `.status`.
        - Set `.metadata.generation` to 1.
    - In `PrepareForUpdate`:
        - Do not update `.status`:
            - If both the old and new objects have `.status` and it changed, set it back to its old value.
            - If the old object has a `.status` but the new object doesn't, set it to the old value.
            - If the old object did not have a `.status` but the new object does, delete it.
        - Increment the generation if the spec changes, i.e. in any of the following cases:
            - Both the old and new objects have `.spec` and it changed.
            - The old object did not have `.spec` but the new object does.
            - The old object had a `.spec` but the new object doesn't.
    - In `Validate` and `ValidateUpdate`:
        - Ensure that the values at `specReplicasPath` and `statusReplicasPath` are >= 0 and < maxInt32.
        - Ensure that the values at all the configured paths can be retrieved without error.
    - Introduce `statusStrategy` with its methods.
        - In `PrepareForUpdate`:
            - Do not update `.spec`:
                - If both the old and new objects have `.spec` and it changed, set it back to its old value.
                - If the old object has a `.spec` but the new object doesn't, set it to the old value.
                - If the old object did not have a `.spec` but the new object does, delete it.
            - Do not update `.metadata`.
        - In `ValidateStatusUpdate`:
            - For CRD validation, validate only under `.status`.
            - Validate the value at `statusReplicasPath` as above. If `labelSelectorPath` is a path under `.status`, validate it as well.
- Plug into the custom resource handler:
    - Store all three storages (custom resource, status, and scale) in `crdInfo`.
    - Select the storage according to the subresource in the request.
    - Select the validator according to the subresource (for status, only use the schema under `status`, if present).
    - Serve the endpoint according to the subresource - see `serveResource`, `serveStatus` and `serveScale`.
- Update discovery by adding the `/status` and `/scale` resources, if enabled.

**Add tests**:

- Add unit tests in `etcd_test.go`.
- Add integration tests.
    - In `subresources_test.go`, use the [polymorphic scale client](https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/client-go/scale) to get and update `Scale`.
    - Add a test in `yaml_test.go` to check that everything works with YAML.

**Release note**:

```release-note
`/status` and `/scale` subresources are added for custom resources.
```
k8s-publishing-bot added a commit to kubernetes/client-go that referenced this pull request Feb 27, 2018
Merge pull request #55168 from nikhita/customresources-subresources
Kubernetes-commit: 6e856480c05424b5cd2cfcbec692a801b856ccb2
k8s-publishing-bot added a commit to kubernetes/apiextensions-apiserver that referenced this pull request Feb 27, 2018
Merge pull request #55168 from nikhita/customresources-subresources
Kubernetes-commit: 6e856480c05424b5cd2cfcbec692a801b856ccb2
sttts pushed a commit to sttts/client-go that referenced this pull request Mar 1, 2018
Merge pull request #55168 from nikhita/customresources-subresources
Kubernetes-commit: 6e856480c05424b5cd2cfcbec692a801b856ccb2
sttts pushed a commit to sttts/client-go that referenced this pull request Mar 1, 2018
Merge pull request #55168 from nikhita/customresources-subresources
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

apiextensions: add subresources for custom resources

Fixes #38113
Fixes #58778

**Related**:
- Proposal: kubernetes/community#913
- For custom resources to work with `kubectl scale`: kubernetes/kubernetes#58283

**Add types**:

- Add `CustomResourceSubResources` type to CRD.
    - Fix proto generation for `CustomResourceSubResourceStatus`: kubernetes/kubernetes#55970.
- Add feature gate for `CustomResourceSubResources`.
    - Update CRD strategy: if feature gate is disabled, this feature is dropped (i.e. set to `nil`).
- Add validation for `CustomResourceSubResources`:
    - `SpecReplicasPath` should not be empty and should be a valid json path under `.spec`. If there is no value under the given path in the CustomResource, the `/scale` subresource will return an error on GET.
    - `StatusReplicasPath` should not be empty and should be a valid json path under `.status`. If there is no value under the given path in the CustomResource, the status replica value in the /scale subresource will default to 0.
    - If present, `LabelSelectorPath` should be a valid json path. If there is no value under `LabelSelectorPath` in the CustomResource, the status label selector value in the `/scale` subresource will default to the empty string.
    - `ScaleGroupVersion` should be `autoscaling/v1`.
    - If `CustomResourceSubResources` is enabled, only `properties` is allowed under the root schema for CRD validation.

**Add status and scale subresources**:

- Use helper functions from `apimachinery/pkg/apis/meta/v1/unstructured/helpers.go`.
    - Improve error handling: kubernetes/kubernetes#56563, kubernetes/kubernetes#58215.
- Introduce Registry interface for storage.
- Update storage:
    - Introduce `CustomResourceStorage` which acts as storage for the custom resource and its status and scale subresources. Note: storage for status and scale is only enabled when the feature gate is enabled _and_ the respective fields are enabled in the CRD.
    - Introduce `StatusREST` and its `New()`, `Get()` and `Update()` methods.
    - Introduce `ScaleREST` and its `New()`, `Get()` and `Update()` methods.
        - Get and Update use the json paths from the CRD and use it to return an `autoscaling/v1.Scale` object.
- Update strategy:
    - In `PrepareForCreate`:
         - Clear `.status`.
         - Set `.metadata.generation` to 1.
    - In `PrepareForUpdate`:
         - Do not update `.status`.
             - If both the old and new objects have `.status` and it is changed, set it back to its old value.
             - If the old object has a `.status` but the new object doesn't, set it to the old value.
             - If the old object did not have a `.status` but the new object does, delete it.
         - Increment the generation if the spec changes, i.e. in any of the following cases:
             - If both the old and new objects had a `.spec` and it changed.
             - If the old object did not have a `.spec` but the new object does.
             - If the old object had a `.spec` but the new object doesn't.
    - In `Validate` and `ValidateUpdate`:
        - Ensure that the values at `specReplicasPath` and `statusReplicasPath` are >= 0 and < maxInt32.
        - Ensure that the values at all the configured paths can be retrieved without error.
    - Introduce `statusStrategy` with its methods.
        - In `PrepareForUpdate`:
            - Do not update `.spec`.
                - If both the old and new objects have `.spec` and it is changed, set it back to its old value.
                - If the old object has a `.spec` but the new object doesn't, set it to the old value.
                - If the old object did not have a `.spec` but the new object does, delete it.
            - Do not update `.metadata`.
        - In `ValidateStatusUpdate`:
            - For CRD validation, validate only under `.status`.
            - Validate value at `statusReplicasPath` as above. If `labelSelectorPath` is a path under `.status`, then validate it as well.
- Plug into the custom resource handler:
    - Store all three storages (custom resource, status, and scale) in `crdInfo`.
    - Use the storage as per the subresource in the request.
    - Use the validator as per the subresource (for status, only use the schema for `status`, if present).
    - Serve the endpoint as per the subresource - see `serveResource`, `serveStatus` and `serveScale`.
- Update discovery by adding the `/status` and `/scale` resources, if enabled.
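The generation-bump rule in `PrepareForUpdate` can be sketched as follows. This is a hypothetical helper for illustration; the real strategy operates on unstructured custom resource content, not plain maps.

```go
package main

import (
	"fmt"
	"reflect"
)

// bumpGenerationIfSpecChanged mirrors the rule above: the generation is
// incremented exactly when .spec was added, removed, or changed between
// the old and new objects.
func bumpGenerationIfSpecChanged(oldObj, newObj map[string]interface{}, generation int64) int64 {
	oldSpec, oldHas := oldObj["spec"]
	newSpec, newHas := newObj["spec"]
	specChanged := oldHas != newHas ||
		(oldHas && newHas && !reflect.DeepEqual(oldSpec, newSpec))
	if specChanged {
		return generation + 1
	}
	return generation
}

func main() {
	oldObj := map[string]interface{}{"spec": map[string]interface{}{"replicas": 1}}
	newObj := map[string]interface{}{"spec": map[string]interface{}{"replicas": 2}}
	fmt.Println(bumpGenerationIfSpecChanged(oldObj, newObj, 1)) // spec changed: prints 2
}
```

Status-only updates leave `.spec` untouched, so by this rule they never bump the generation; controllers can compare `.metadata.generation` with an observed generation in `.status` to detect unprocessed spec changes.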

**Add tests**:

- Add unit tests in `etcd_test.go`.
- Add integration tests.
    - In `subresources_test.go`, use the [polymorphic scale client](https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/client-go/scale) to get and update `Scale`.
    - Add a test in `yaml_test.go` to check that everything works with YAML.
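The core behavior those tests exercise — resolving replica counts through the configured JSON paths on an unstructured custom resource — can be sketched with a simplified stand-in for the `NestedInt64`-style helpers from `apimachinery`. The `nestedInt64` function here is hypothetical; the real code uses the unstructured helpers package.

```go
package main

import (
	"fmt"
	"strings"
)

// nestedInt64 walks a decoded JSON object along dotted path segments,
// in the spirit of the helpers in
// apimachinery/pkg/apis/meta/v1/unstructured/helpers.go (simplified).
func nestedInt64(obj map[string]interface{}, path string) (int64, bool) {
	fields := strings.Split(strings.TrimPrefix(path, "."), ".")
	var cur interface{} = obj
	for _, f := range fields {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return 0, false
		}
		if cur, ok = m[f]; !ok {
			return 0, false
		}
	}
	switch v := cur.(type) {
	case int64:
		return v, true
	case float64: // JSON numbers decode to float64
		return int64(v), true
	}
	return 0, false
}

func main() {
	cr := map[string]interface{}{
		"spec":   map[string]interface{}{"replicas": float64(3)},
		"status": map[string]interface{}{},
	}
	// GET /scale: spec replicas must exist (error otherwise),
	// status replicas defaults to 0 when missing.
	specReplicas, ok := nestedInt64(cr, ".spec.replicas")
	if !ok {
		fmt.Println("error: no value at .spec.replicas")
		return
	}
	statusReplicas, _ := nestedInt64(cr, ".status.replicas")
	fmt.Printf("Scale{Spec.Replicas: %d, Status.Replicas: %d}\n", specReplicas, statusReplicas)
}
```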

**Release note**:

```release-note
`/status` and `/scale` subresources are added for custom resources.
```

Kubernetes-commit: 6e856480c05424b5cd2cfcbec692a801b856ccb2
@abbi-gaurav commented Mar 16, 2018

Is there any related issue for the status implementation? I can see one for the scale though.

```go
// object sent as the payload for /scale. It allows transition
// to future versions easily.
// Today only autoscaling/v1 is allowed.
ScaleGroupVersion schema.GroupVersion `json:"groupVersion"`
```

@lavalamp (Member) commented Mar 16, 2018
I thought we agreed to remove this field in another thread? Did I miss some followup?

@nikhita (Author, Member) commented Mar 18, 2018

> I thought we agreed to remove this field in another thread? Did I miss some followup?

It is removed. 👍

The proposal included this field. It was later decided in the implementation PR to remove it, and so it was removed there. Only the proposal still contains it; it needs to be updated (will do).

@nikhita commented Mar 18, 2018

> Is there any related issue for the status implementation? I can see one for the scale though.

@abbi-gaurav kubernetes/kubernetes#55168 contains the implementation for scale as well as status. You can find docs on how to use status here: https://deploy-preview-7439--kubernetes-io-vnext-staging.netlify.com/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#status-subresource.
