
Add subresource marker to workload CRD #12360

Merged · 3 commits merged into main on Mar 28, 2024

Conversation

@mateiidavid (Member) commented Mar 28, 2024

Our ExternalWorkload resource has a status field, but the status is not marked as a subresource in the object's schema. Status patches are done in client libraries through a separate interface; without marking the status as a subresource, the API Server will respond to patch requests with a 404. This makes ExternalWorkload resource statuses unpatchable from controllers.

We fix the issue by marking the status as a subresource in the v1beta1 schema. No codegen changes are necessary. The version is not bumped since this does not change the existing contract offered by an ExternalWorkload; it only allows the API Server to treat its status as a subresource when patching it (i.e. we can use the patch_status interface).
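For reference, the change amounts to adding a `subresources` stanza to the CRD version. A minimal sketch of what that looks like (fields abbreviated; this is not the exact manifest from this PR):

```yaml
# Sketch of the relevant part of the ExternalWorkload CRD (v1beta1).
# Only the `subresources` stanza is the new piece; surrounding fields
# are abbreviated for illustration.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: externalworkloads.workload.linkerd.io
spec:
  group: workload.linkerd.io
  names:
    kind: ExternalWorkload
    plural: externalworkloads
  scope: Namespaced
  versions:
    - name: v1beta1
      served: true
      storage: true
      subresources:
        # Marking status as a subresource enables the /status API path,
        # so PATCH requests against it no longer 404.
        status: {}
      schema:
        openAPIV3Schema:
          type: object
```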

Additional context:

  • In Kubernetes, each resource has its own declarative API that can be used to change its state.
  • Resources may optionally include other declarative APIs that are decoupled from the main resource's state; this includes Scale and Status subresources. They can be thought of as a set of shared interfaces that add additional information to a resource.
  • Statuses are meant to be patched through a separate interface as a result. This allows both:
    • A separation of concerns: patch either the spec or the status, but not both, to avoid overwriting or deleting fields.
    • Principle of least privilege: fine-grained RBAC can be used to isolate spec writes from status writes.
  • Subresources get their own API paths; writing to a subresource means we are effectively sending a request to a nested path (e.g. /status on a pod). The API server needs to know this path is available.
  • CRDs require that the status field be explicitly marked as a subresource; without this, the API Server will reply with a 404 Not Found when attempting to modify a status, since the path doesn't exist.
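To illustrate the least-privilege point above, a hypothetical Role granting status writes without spec writes could look like the following (the `externalworkloads/status` resource name is the standard way to address a subresource in RBAC; this Role itself is not part of the PR):

```yaml
# Hypothetical Role: read-only on the main resource,
# write access only on the status subresource.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: external-workload-status-writer
  namespace: default
rules:
  - apiGroups: ["workload.linkerd.io"]
    resources: ["externalworkloads"]
    verbs: ["get", "list", "watch"]   # read-only on spec
  - apiGroups: ["workload.linkerd.io"]
    resources: ["externalworkloads/status"]
    verbs: ["patch", "update"]        # writes allowed only on /status
```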

See:

  • Kubernetes docs: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/#scale-kubectl-patch
  • API conventions: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

Testing

I tested that we can patch the status with a toy controller; kubectl patch --subresource='status' should work as well.

I also tested that we can upgrade from an edge release to the current build:

  • Install edge
  • Upgrade CRDs
  • Restart workloads
  • Upgrade linkerd

To test patching works, apply:

apiVersion: workload.linkerd.io/v1beta1
kind: ExternalWorkload
metadata:
  name: test-external
  namespace: default
spec:
  workloadIPs:
  - ip: 192.0.2.0
  meshTLS:
    identity: test-external.default.svc.cluster.local
    serverName: test-external.default.svc.cluster.local
  ports:
  - port: 80

Before the change, patch requests receive a 404 Not Found:

:; kubectl patch externalworkloads.workload.linkerd.io test-external --subresource='status' --type='merge' -p '{"status":{"conditions": {"type": "test"}}}'
Error from server (NotFound): externalworkloads.workload.linkerd.io "test-external" not found

After the change, patch requests reach the /status path and are processed; the intentionally malformed test payload is now rejected by schema validation rather than a 404, confirming the subresource path exists:

Warning: unknown field "status.conditions.type"
The ExternalWorkload "test-external" is invalid:
* status.conditions: Invalid value: "object": status.conditions in body must be of type array: "object"
* status.conditions.status: Required value
* status.conditions.type: Required value
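For completeness, a payload that should pass this validation makes `conditions` an array and supplies the required `type` and `status` fields. A sketch following the usual Kubernetes condition shape (shown as YAML; pass the JSON equivalent to `kubectl patch -p`; the values themselves are illustrative):

```yaml
# Merge-patch body that should validate: conditions is an array,
# and each entry carries the required type and status fields.
status:
  conditions:
    - type: Ready        # required by the schema
      status: "True"     # required by the schema
```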

@mateiidavid mateiidavid requested a review from a team as a code owner March 28, 2024 13:11
@alpeb (Member) left a comment

Makes sense to me, thanks for the detailed explanation 👍
Doesn't this require finer RBAC on the externalworkloads/status resource? Or that would only apply to controllers attempting to patch that?

@mateiidavid (Member, Author)

> Doesn't this require finer RBAC on the externalworkloads/status resource? Or would that only apply to controllers attempting to patch that?

Yup, it would require RBAC but indeed only if we want to patch it. None of our controllers do that.

@alpeb alpeb merged commit 6d741cc into main Mar 28, 2024
34 checks passed
@alpeb alpeb deleted the matei/add-status-subresource branch March 28, 2024 18:57
the-wondersmith pushed a commit to the-wondersmith/linkerd2 that referenced this pull request Apr 24, 2024
* Add subresource marker to workload CRD


* Golden files

Signed-off-by: Matei David <matei@buoyant.io>

---------

Signed-off-by: Matei David <matei@buoyant.io>
Signed-off-by: Mark S <the@wondersmith.dev>