
WIP: KEP-3920 finish feature flags KEP #3920

Closed · wants to merge 5 commits

Conversation

lavalamp (Member):

  • One-line PR description:
  • Issue link:
  • Other comments:

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Mar 21, 2023
@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: lavalamp

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory approved Indicates a PR has been approved by an approver from all required OWNERS files. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Mar 21, 2023
@lavalamp lavalamp changed the title WIP: KEP-NNNN finish feature flags KEP WIP: KEP-3920 finish feature flags KEP Mar 21, 2023
* Make it possible to get the cluster back into the "off" state
* Expose the "off but dangling references / uses" state
* Make it easy to add enablement and disablement code
* Make it easy to add upgrade and downgrade code
Member:

Would this (eventually) obviate the need for storage version migrator?

lavalamp (Member, Author):

Probably not, but this would provide a convenient place to trigger it, or wait for it to complete.

@johnbelamaric (Member) left a comment:

Great idea! A few thoughts.

keps/sig-api-machinery/3920-finish-feature-flags/README.md (outdated)

Kubernetes uses feature gates for a wide variety of features. Unfortunately,
there are many problems and missing abilities with the current system (see
below). This KEP solves these problems. Since it is wide reaching, we break
Member:

It would help to give a short phrase hinting at what is to come, otherwise the stages don't make sense until a second reading. So, something like:

...with the current system, especially with respect to runtime discovery, enablement and disablement of features. This KEP solves...

lavalamp (Member, Author):

The list of problems is long, I want people to actually read it. Would "read on for the list of problems and then the list of solutions" be better?

@johnbelamaric (Member) commented Mar 24, 2023:

The problem is I have no idea what you are talking about and then you start talking about "stages" as if I should know why there should be stages, and the meaning of the contents of those stages.

I think people will still read it, you just need a hint as to where you are going - the actual problems, IMO.

lavalamp (Member, Author):

I will make changes.

lavalamp (Member, Author):

look again?


## Proposal

### High-level design
Member:

It would be a lot easier to understand if you discussed the roles that will be interacting with the Feature API - platform admin, cluster admin, feature developer, feature consumer (user), others? - and then explained the uses for those different roles.

As it reads now, I feel like I am missing context, at least on the first read.

lavalamp (Member, Author):

I was trying to do that and introduce them one at a time, but I can give a list or something?

Member:

Yeah, something to help people build a mental model or frame it in context. Even if you define them one at a time, that could be OK, but you need to explain who they are when you introduce them, not leave it as implicit. I just felt dropped in the middle of a thought process as it is now.

lavalamp (Member, Author):

I pushed some changes, is this better?

when it is constructed, because that happens before the command line is parsed /
config files are read.

Our new client will use the existing MutableFeatureGate to read the command
Member:

So the client is a binary, and using this to read its own command line for feature gate values? Or the client is the author of a feature that is included in the binary?

lavalamp (Member, Author):

The former; I will clarify. The latter is a dev, and devs write code which is a client (both today and in the future), and in the future may also write code for the server flow.

Member:

Yeah, sorry, I wasn't clear either, for the latter I meant: "the code embedded in the binary, checking if the feature is on in order to do its thing or not".

Member:

Given that one binary supports many features, the parsing of the command line is code written only once by the binary author (or maybe automatic in a lib), whereas checking the feature gate is code every feature author will need to write. So, trying to understand which flow this is, and why we start with it. I would think we would start with how feature gates are consumed by feature authors in their feature code; that's the most common use.

lavalamp (Member, Author):

The way it works today is that there's a FeatureGate implementation already written for people, the binary declares it statically and installs its flags in the command line parser, and feature authors call Enabled(f) for each feature f. The only thing in this story that will change is where the FeatureGate implementation lives. This is all part of the client flow, it seems like you're asking which half of this I'm talking about, but I'm talking about all of it.
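
A minimal sketch of that client flow, using the existing k8s.io/component-base/featuregate package (the MyFeature name and wiring are invented for illustration):

```go
package main

import (
	"fmt"

	"github.com/spf13/pflag"
	"k8s.io/component-base/featuregate"
)

// MyFeature is a made-up feature name, purely for illustration.
const MyFeature featuregate.Feature = "MyFeature"

func main() {
	// The binary declares a MutableFeatureGate statically...
	gate := featuregate.NewFeatureGate()
	_ = gate.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		MyFeature: {Default: false, PreRelease: featuregate.Alpha},
	})

	// ...installs its flag in the command line parser
	// (this registers --feature-gates)...
	gate.AddFlag(pflag.CommandLine)
	pflag.Parse()

	// ...and feature authors call Enabled(f) for each feature f.
	if gate.Enabled(MyFeature) {
		fmt.Println("MyFeature is on")
	}
}
```

Per the comment above, only where the FeatureGate implementation lives would change under this KEP; the Enabled call sites stay the same.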

lavalamp (Member, Author):

Does the description of "clients" I added above fix this or is it still unclear?


Obviously, we will add an API object, Feature (details below). Features will be
namespaced; this will permit all Kubernetes features to be in the kube-system
namespace, and third parties can use another namespace.
Member:

I love that this can be used by third parties, that is really nice. What roles will have write access to the Feature API?

As I read this, it sounds like you are saying: anyone (with the right permissions) can add features in a namespace, and then this allows third parties to use this feature API in their own namespace to manage features of their component. Upstream k8s components will put their features in the kube-system namespace.

That's a reasonable approach. However, here's a crazy thought. Could we instead create a ClusterFeature and Feature, a la ClusterRole and Role, and allow features to be enabled and disabled on a per-namespace basis?

Whether it is an upstream feature or a third-party feature would just be an issue of the name of the Feature, and namespace would be irrelevant. On a per-namespace basis, a feature could be used to override the global state (if allowed). Something like:

ClusterFeature:

  • on: on cluster wide, cannot be overridden per namespace
  • default-on: on cluster wide, can be overridden per namespace
  • off: off cluster wide, cannot be overridden per namespace
  • default-off: off cluster wide, can be overridden per namespace

Feature of the same name can then be defined in a namespace as on/off, and of course status will reflect the composition of that and ClusterFeature.

If the ClusterFeature does not exist, then it is assumed to be "default-off". This allows third-parties to manage their namespaced features without a cluster or platform admin adding the ClusterFeature.
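
A minimal sketch of the composition rule described above (types and values invented to match the bullets; nothing here is in the KEP):

```go
// ClusterFeatureState mirrors the four proposed cluster-wide values.
type ClusterFeatureState string

const (
	On         ClusterFeatureState = "on"          // cannot be overridden
	DefaultOn  ClusterFeatureState = "default-on"  // overridable per namespace
	Off        ClusterFeatureState = "off"         // cannot be overridden
	DefaultOff ClusterFeatureState = "default-off" // overridable per namespace
)

// effectiveState composes a ClusterFeature with an optional namespaced
// Feature override; a missing ClusterFeature is treated as "default-off".
func effectiveState(cluster ClusterFeatureState, nsOn *bool) bool {
	switch cluster {
	case On:
		return true
	case Off:
		return false
	case DefaultOn:
		if nsOn != nil {
			return *nsOn
		}
		return true
	default: // DefaultOff, or no ClusterFeature exists
		if nsOn != nil {
			return *nsOn
		}
		return false
	}
}
```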

Member:

Obviously for some features, only cluster-wide makes sense. But this could be a nice way to safely try features that only have impact on individual resources or can otherwise be namespace scoped.

lavalamp (Member, Author):

I do want to add canarying capabilities to this eventually, but not all features can be on/off on a namespace basis. I was going to try and avoid discussing how to make canarying work until we can agree on the rest of it :)

I was already planning on using namespaces to permission different classes of features differently, I don't think I should also use them for canarying in this way-- I have other plans that I'd like to keep vague until later :)

Member:

Ok, sounds good. I do think it's a step beyond "canarying", in the sense that namespaced feature enablement actually enables better multi-tenancy in the cluster (one less thing everyone has to agree on). I think what I describe above enables the same functionality the use of namespaces in the KEP does, plus more. But perhaps it conflicts with your thinking on the canarying, I look forward to seeing that.

@johnbelamaric (Member):

One other possible use case for this is around discovering whether a particular workload can run on a given cluster. It would be great if we could inspect a set of manifests, and determine if the K8s cluster in question has the right features enabled to allow it to be successfully deployed. For example, does the cluster support Gateway API? RBAC? Metrics? Multus? Making that discoverable would be super useful in a lot of contexts.

This goes a bit beyond "features" though into "capabilities", so maybe it's a different but related API for a different KEP. But I think both have a lot of overlap so it's worth exploring if the same API could satisfy both.

@lavalamp (Member, Author):

discovering whether a particular workload can run on a given cluster

Yes, that's intended but I forgot to mention, will add.

@pacoxu (Member) commented Mar 25, 2023:

/assign

* It is especially difficult to know if use of a feature at some point makes
cluster upgrade unsafe now; for example, an alpha object being written while
the feature was briefly on which is incompatible with the default-on beta.
* It is not easy to test the feature on -> use -> feature off lifecycle.
@pacoxu (Member) commented Mar 29, 2023:

Not sure if this is also a motivating case: we found a user with 3 apiservers but different configurations.

This is similar to the third state, off-with-dangling-references, and we may call it create-when-disabled-and-update-when-enabled or create-when-enabled-and-update-when-disabled.

@pacoxu (Member) commented Mar 29, 2023:

Another case is a kubelet feature gate that is not enabled while the apiserver has enabled it. Should this KEP cover such instances?

Kubelet configuration is exposed via configz, and this includes the --feature-gates part, so this is stable.

```
[root@paco-centos-9 ~]# curl -k --key /etc/kubernetes/pki/apiserver-kubelet-client.key --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt https://127.0.0.1:10250/configz | jq .kubeletconfig.featureGates
{
  "LocalStorageCapacityIsolationFSQuotaMonitoring": true,
  "NodeSwap": false
}
```

lavalamp (Member, Author):

The case of apiservers with differing configurations is a great consideration, I will add text explaining what happens (basically, the leader determines which state the cluster is "trying" to get into, but the uses--the client flow--prevent it from actually getting into that state until all apiservers are put into the same condition).

lavalamp (Member, Author):

Added, should cover both cases now.

The Feature object will record, per-feature, its configured desired value,
default value, uses, and current version. Most importantly, it will record the
current state of the feature. There will be at least four states: On, Off,
TurningOn, and TurningOff.
Member:

I think we would benefit a lot if you would actually provide the API structure here.

What I'm wondering about is how we represent the state (and how we generally track it) if the feature gate is used by multiple components (let's say apiserver, scheduler, and kubelets). Then technically the feature is on only when all of them actually have it enabled.
Also - what do we do in cases where 1 (or a small percentage) of kubelets (or any other node components) are not changing their flag?

Member:

So we can know the status of the feature gates via kubectl get features.

lavalamp (Member, Author):

I was going to save the API type for the detailed design section, which I haven't written yet. But you can look at the prototype here. There's some stuff about it in the prototype that is wrong and needs to change, and there's other stuff that is right but counterintuitive, so reader beware, it will take some mental effort to tell the difference if you follow that link :)
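
Pending that detailed design, the four states quoted above might be sketched as a string enum (illustrative only; not the prototype's actual type):

```go
// FeatureState is the observed lifecycle state of a feature.
type FeatureState string

const (
	FeatureOn         FeatureState = "On"         // enabled and fully in effect
	FeatureOff        FeatureState = "Off"        // disabled, no lingering uses
	FeatureTurningOn  FeatureState = "TurningOn"  // enablement in progress
	FeatureTurningOff FeatureState = "TurningOff" // disablement in progress
)
```

Presumably the "off but dangling references / uses" condition from the goals list would surface while a feature sits in TurningOff.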

@pohly (Contributor) left a comment:

Sometimes a feature depends on a specific new API group. For example, Dynamic Resource Allocation comes with resource.k8s.io/v1alpha2.

I don't see adding or removing API groups mentioned in the document (but I may have missed it). Can this be added?

Along the same line, removing fields that are feature-gated is mentioned. What would be useful is removal of entire objects that make no sense when a feature is off.

@lavalamp (Member, Author):

I don't see adding or removing API groups mentioned in the document (but I may have missed it). Can this be added?

and

Along the same line, removing fields that are feature-gated is mentioned. What would be useful is removal of entire objects that make no sense when a feature is off.

This is basically the same request, since groups and resources are added in the same startup mechanism. The way this can work is if the feature author writes the code to only add the resource and/or group if a feature flag is on.

I didn't include this as a story since it's possible (and already done?) today, but I can add that.
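
A minimal sketch of that pattern, gating group installation on the existing feature-gate helper (the DynamicResourceAllocation gate is just an example here, and installResourceAPIGroup is a hypothetical stand-in for the real registration code):

```go
import (
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/kubernetes/pkg/features"
)

// installResourceAPIGroup stands in for the code that registers the
// resource.k8s.io group and its storage.
func installResourceAPIGroup() { /* register the group's storage here */ }

func installGroups() {
	// Only add the group when the gate is on; when it is off, the group
	// simply never gets registered at startup.
	if utilfeature.DefaultFeatureGate.Enabled(features.DynamicResourceAllocation) {
		installResourceAPIGroup()
	}
}
```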

@pohly (Contributor) commented Mar 30, 2023:

The way this can work is if the feature author writes the code to only add the resource and/or group if a feature flag is on.

But that is not how it is currently done, is it? At the moment, the admin needs to enable API groups independently from feature gates. kube-apiserver has two different parameters for this (--runtime-config and --feature-gates).

My first point wasn't about adding or removing the type definitions. It was about removing the objects stored in etcd before removing the type. But I guess the part about "remove fields" also applies to that, it's just not mentioned.

keps/sig-api-machinery/3920-finish-feature-flags/README.md (outdated)
should be approved by the remaining approvers and/or the owning SIG (or
SIG Architecture for cross-cutting KEPs).
-->
# KEP-3920: Finish Feature Flags
Member:

nit: this number comes from an issue, not the initial PR #, to give a stable link destination

lavalamp (Member, Author):

PR and issue numbers are from the same sequence, I don't think it matters?

Member:

I'm pretty sure the KEP number is supposed to be the issue that tracks the feature through its lifetime and gets updated release to release. Whatever number this uses will be what https://kep.k8s.io/3920 links to, which you probably want to be the issue, not the first PR to touch the design

lavalamp (Member, Author):

They will link to each other so I don't see why it matters; I never want to look at that issue, what I want to look at is the text of the KEP anyway.

Member:

but... this PR will be an increasingly stale snapshot of the initial draft of the text :)

the issue description can link right to the merged current state

Comment on lines +181 to +184
* Expose the feature states in use in the cluster, and make it easy to
read them
* Make it easy for admins to know when all binaries in the cluster agree about
feature states.
Member:

It's a little hard to tell if this is just surfacing information, or proposing modifying how we default feature enablement based on that information.

I'm very interested in improving how we deal with scenarios like this:

  • a control plane with all nodes on 1.x+ enables features considered stable enough to default on in 1.x (up-to-date clusters get new features asap)
  • a control plane with nodes older than 1.x avoids enabling features that require 1.x nodes (slow-moving clusters prioritize safety at the expense of feature velocity)
  • a control plane with enabled features that require 1.x nodes handles an attempt to add a node older than 1.x by _____ (rejecting the node? disabling the feature? something else?)
  • a control plane with nodes older than 1.x indicates it is not safe to upgrade the control plane beyond supported node/control-plane skew

lavalamp (Member, Author):

So far this KEP describes surfacing information and the enablement/disablement flow. I need to add the version change flow also.

I think you are asking for some code to run to automatically change a feature state in a direction desired by devs if no preference has been expressed and it is safe to do so, looking at the rest of the cluster. I have not said anything about that in the KEP yet. I think that is possible and desirable but I need to write the version change flow down first.

I think the main requirement from this is that it should be possible to tell if a value is the default, was set on the command line, or was set dynamically (if permitted).

Also if there are multiple such default-changes-due-to-newly-found-safe-conditions, they should not all happen at the same time. But the issue is, what if it breaks the cluster? That will take some designing to get right.

lavalamp (Member, Author):

Adding a follow-up goal.

@lavalamp (Member, Author) commented Apr 3, 2023:

The way this can work is if the feature author writes the code to only add the resource and/or group if a feature flag is on.

But that is not how it is currently done, is it? At the moment, the admin needs to enable API groups independently from feature gates. kube-apiserver has two different parameters for this (--runtime-config and --feature-gates).

Yeah for the feature flag to be a useful way to turn things on and off, the author would have to add the group by default, or the admin has to configure it and the binary needs to not crash when the group is requested but the flag is off. This is possibly different than what we do today. I don't think it will be hard to persuade people to do it differently once feature flags are actually useful. @deads2k at least partially disagrees with me.

My first point wasn't about adding or removing the type definitions. It was about removing the objects from the etcd stored before removing the type. But I guess the part about "remove fields" also applies to that, it's just not mentioned.

Yeah, removing objects when the collection goes away (or even changes alpha to beta) should be in here somewhere, I'll double check.

@k8s-ci-robot (Contributor):

@lavalamp: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: pull-enhancements-verify · Commit: cc13a7d · Required: true · Rerun command: /test pull-enhancements-verify

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@lavalamp (Member, Author) commented Apr 4, 2023:

I have added API details & test plan, PTAL

Announcing features. The server will create Feature objects about the features
it is aware of. If a Feature object exists for a feature the server doesn't know
about -- due to an upgrade or downgrade -- it will be preserved for a number of
versions.
Member:

This makes sense as long as we have a mechanism that cleans it up eventually (say 3 releases later or sth like that).
Is your plan to actually delete it eventually? What will be the mechanism for doing this?

// `class` is the class of feature. "kube-system" indicates
// the feature is about the host cluster. Third parties may use a
// domain name if they wish to reuse this system for their own
// canarying. `class` should match `metadata.namespace`.
Member:

If it should match metadata.namespace, then what's the point of having it?

Stability StabilityLevel `json:"stability" protobuf:"bytes,3,opt,name=stability,casttype=StabilityLevel"`

// `version` declares the version of software currently providing this feature.
Version string `json:"version"`
Member:

What exactly this means if feature implementation is spread across multiple components (in particular control-plane and nodes) and these are not in the same version?

Also - why this is actually needed?

// `uses` is for clients to report their use of the feature. Clients
// should report their use only if there is not already an entry
// matching their condition; this keeps this field very low-qps no
Member:

What is the condition here? I don't see conditions in this API..

// non-desired-state uses and wait for clients to add them back, as a
// way of telling whether a state transition has completed or not. When
// that happens, `useEvaluationTime` will be set to a time in the
// future; clients have until then to record their use.
Member:

I think I'm still not fully following how this field is supposed to be used.

As an example - if the feature is implemented in the kubelet, then how will it be represented here?
(1) a single "kubelet" entry? - then how do we distinguish e.g. nodes in different versions?
(2) a per-kubelet entry - then it doesn't really scale to clusters with 5k nodes or sth...
(3) something else?

And for "doesn't really scale" - I'm not only talking about the write load on this object (for which you have some ideas, although I don't yet really follow them), but even the base object size, which can become problematic and not fit our "max object size" limits...
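
One way to read the quoted "uses" semantics is a report-if-absent pattern; a minimal sketch (FeatureUse and its fields are invented here, not the KEP's schema):

```go
// FeatureUse is a hypothetical stand-in for one entry in `uses`.
type FeatureUse struct {
	Component string // e.g. "kubelet"
	Version   string // e.g. "1.27"
}

// ensureUseRecorded appends our use only when no equivalent entry exists,
// which is what would keep writes to this field low-QPS.
func ensureUseRecorded(uses []FeatureUse, mine FeatureUse) []FeatureUse {
	for _, u := range uses {
		if u == mine {
			return uses // already recorded by some other client
		}
	}
	return append(uses, mine) // one write, by the first reporter only
}
```

Under this reading, kubelets at the same version would collapse into a single entry while each distinct version adds one more, which is one possible answer to options (1)/(2) above.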

@pacoxu (Member) commented Apr 25, 2023:

BTW, OpenFeature (https://openfeature.dev/) is a CNCF project. Do we have a plan to use it in k/k?

@sftim (Contributor) commented Jul 20, 2023:

What will it take to move this out of WIP?

should be approved by the remaining approvers and/or the owning SIG (or
SIG Architecture for cross-cutting KEPs).
-->
# KEP-3920: Finish Feature Flags
Contributor:

Try:

# KEP-XXX: Improved feature gate handling

The proposal is to make an improvement; finishing the implementation is implied in every proposal we accept.

Comment on lines +289 to +292
Selecting a leader. Since multiple apiservers are running, we will pick one to
be the leader. We will use the apiserver identity feature to know the set of
apiservers. We will use a deterministic hashing mechanism to select the leader,
so that no additional traffic is necessary.
Contributor:

How will this work for feature gates that affect leader election? (might be an unresolved question)
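
A minimal sketch of a deterministic selection consistent with the quoted text: every apiserver hashes the shared identity list the same way, so no election traffic is needed (the hashing scheme below is invented):

```go
import (
	"crypto/sha256"
	"encoding/binary"
)

// pickLeader deterministically selects one apiserver identity: hash each
// ID and take the smallest digest. Every apiserver computes the same
// answer from the same identity set, without coordinating.
func pickLeader(ids []string) string {
	leader, best := "", ^uint64(0)
	for _, id := range ids {
		sum := sha256.Sum256([]byte(id))
		if h := binary.BigEndian.Uint64(sum[:8]); h < best {
			leader, best = id, h
		}
	}
	return leader
}
```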


#### Alpha

- Feature implemented behind a feature flag
Contributor:

Let's detail exactly how this would work and any important corner cases. Even for alpha.

@k8s-triage-robot:

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 20, 2024
@k8s-triage-robot:

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 19, 2024
@k8s-triage-robot:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor):

@k8s-triage-robot: Closed this PR.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Labels
  • approved - Indicates a PR has been approved by an approver from all required OWNERS files.
  • cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
  • do-not-merge/work-in-progress - Indicates that a PR should not merge because it is a work in progress.
  • kind/kep - Categorizes KEP tracking issues and PRs modifying the KEP directory.
  • lifecycle/rotten - Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • sig/api-machinery - Categorizes an issue or PR as relevant to SIG API Machinery.
  • size/XXL - Denotes a PR that changes 1000+ lines, ignoring generated files.