
GC changes for synchronous GC support #38679

Closed
wants to merge 19 commits

Conversation

@caesarxuchao (Member) commented Dec 13, 2016

Split from #38676.
Design doc: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/synchronous-garbage-collection.md.

This PR won't compile. It's for review only.

Comments that will be addressed in follow-up PRs:

  1. Check whether an update event changes DeletionTimestamp, OwnerReferences, or Finalizers; if not, don't send it via the graphChanges channel (see the sketch after this list).
  2. In attemptToDelete, try to use the cache instead of communicating with the API server. One exception: we need to talk to the API server to check whether the object represented by a "virtual" node exists.
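
A minimal sketch of the check described in item 1; the helper name and the exact comparison are assumptions for illustration, not the PR's code:

```go
package gcsketch

import (
	"reflect"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// gcRelevantFieldsChanged reports whether any of the metadata fields the
// garbage collector cares about differ between the old and new object.
// Only when this returns true would the update be sent via graphChanges.
func gcRelevantFieldsChanged(oldMeta, newMeta metav1.Object) bool {
	return !reflect.DeepEqual(oldMeta.GetDeletionTimestamp(), newMeta.GetDeletionTimestamp()) ||
		!reflect.DeepEqual(oldMeta.GetOwnerReferences(), newMeta.GetOwnerReferences()) ||
		!reflect.DeepEqual(oldMeta.GetFinalizers(), newMeta.GetFinalizers())
}
```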

TODO:

cc @kubernetes/sig-api-machinery

@caesarxuchao added the sig/api-machinery label Dec 13, 2016
@k8s-ci-robot added the cncf-cla: yes label Dec 13, 2016
@k8s-reviewable

This change is Reviewable

@k8s-github-robot added the size/XXL and release-note-label-needed labels Dec 13, 2016
go monitor.controller.Run(stopCh)
}

wait.PollInfinite(10*time.Second, func() (bool, error) {
Member Author

Done

Register()
<-stopCh
glog.Infof("Garbage Collector: Shutting down")
gc.dirtyQueue.ShutDown()
Contributor

You should defer these

Member Author

Done.

// solid: the owner exists, and is not "waiting"
// dangling: the owner does not exist
// waiting: the owner exists, its deletionTimestamp is non-nil, and it has
// FianlizerDeletingDependents
Contributor

typo

// FianlizerDeletingDependents
// This function communicates with the server.
func (gc *GarbageCollector) classifyReferences(item *node, latestReferences []metav1.OwnerReference) (
solid []metav1.OwnerReference,
Contributor

Maybe solid, dangling, waiting []metav1.OwnerReference, ?

Member Author

Done.

continue
glog.V(6).Infof("object %s's owner %s/%s, %s is not found", item.identity.UID, reference.APIVersion, reference.Kind, reference.Name)
dangling = append(dangling, reference)
} else {
Contributor

Move this check on top:

if !errors.IsNotFound(err) {
   return solid, dangling, waiting, err
}
gc.absentOwnerCache.Add(reference.UID)
...

Member Author

Done.

// orphanFinalizer dequeues a node from the orphanQueue, then finds its dependents
// based on the graph maintained by the GC, then removes it from the
// OwnerReferences of its dependents, and finally updates the owner to remove
// the "Orphan" finalizer. The node is add back into the orphanQueue if any of
Contributor

s/add/added/

Member Author

Done.

@caesarxuchao (Member Author) left a comment

Thanks, @Kargakis. Addressed.

const ResourceResyncTime time.Duration = 0

type monitor struct {
Contributor

Reading from top to bottom, this needs doc.

Member Author

The struct is removed. I added a comment to GarbageCollector.monitors.

const ResourceResyncTime time.Duration = 0

type monitor struct {
store cache.Store
Contributor

This is a very loose type. Any chance of using GenericLister instead? It looks like it's filled via a reflector.

Member Author

The struct is removed.

// clientPool uses the regular dynamicCodec. We need it to update
// finalizers. It can be removed if we support patching finalizers.
clientPool dynamic.ClientPool
dirtyQueue *workqueue.TimedWorkQueue
Contributor

Seems like this could do with an interface.

Contributor

Actually, is this type nothing but casting on the normal WorkQueue?

Member Author

It adds the timing metrics to the workqueue. I think the normal workqueue supports timing metrics now, though it's missing some functionality. I can try to migrate to the normal workqueue in another PR.

Contributor

> It adds the timing metrics to the workqueue. I think the normal workqueue supports timing metrics now, though it's missing some functionality. I can try to migrate to the normal workqueue in another PR.

It would enhance my reading pleasure and allow you to wire up to a named rate-limited queue. What's the delta on metrics being gathered?

Member Author

Refactored to use the rate-limited queue.
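
For reference, constructing named rate-limited queues with the client-go workqueue package looks roughly like this; the queue names here are illustrative, not necessarily the ones the PR ends up with:

```go
package gcsketch

import "k8s.io/client-go/util/workqueue"

// newGCQueues builds two work queues with per-item rate limiting and a name;
// the name is what the standard workqueue metrics are registered under.
func newGCQueues() (attemptToDelete, attemptToOrphan workqueue.RateLimitingInterface) {
	attemptToDelete = workqueue.NewNamedRateLimitingQueue(
		workqueue.DefaultControllerRateLimiter(), "garbage_collector_attempt_to_delete")
	attemptToOrphan = workqueue.NewNamedRateLimitingQueue(
		workqueue.DefaultControllerRateLimiter(), "garbage_collector_attempt_to_orphan")
	return attemptToDelete, attemptToOrphan
}
```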

// clientPool uses the regular dynamicCodec. We need it to update
// finalizers. It can be removed if we support patching finalizers.
clientPool dynamic.ClientPool
dirtyQueue *workqueue.TimedWorkQueue
Contributor

Having two queues is uncommon. Doc please.

Member Author

Done.

dirtyQueue *workqueue.TimedWorkQueue
orphanQueue *workqueue.TimedWorkQueue
monitors []monitor
propagator *Propagator
Contributor

This is a dependencyGraphBuilder, right?

Member Author

dependencyGraphBuilder + workQueueBuilder

Contributor

> dependencyGraphBuilder + workQueueBuilder

After going through this on slack with you, I think I'd like to call this a dependencyGraphBuilder and give it an inbound and outbound queue (graphUpdates and attemptDeletes?)

Member Author

graphUpdate will be a renaming of eventQueue.

Currently the propagator has two outbound queues, dirtyQueue and orphanQueue. I want to merge them into one, but the changes to the logic could be complex, so I want to do that in another PR.

I think attemptDeletes will just be a renaming of dirtyQueue.

Member Author

Renamed the queues.

orphanQueue *workqueue.TimedWorkQueue
monitors []monitor
propagator *Propagator
clock clock.Clock
Contributor

just for testing?

Member Author

It's used to time how long an item spends in queues.

Contributor

> It's used to time how long an item spends in queues.

Yeah, that's a metric we gather.
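
A sketch of how the injected clock turns an enqueue timestamp into that latency metric; the metric name and wiring follow the snippets quoted in this review, but the surrounding code is assumed:

```go
package gcsketch

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"k8s.io/apimachinery/pkg/util/clock"
)

// dirtyProcessingLatency is a stand-in for the GC's latency summary.
var dirtyProcessingLatency = prometheus.NewSummary(prometheus.SummaryOpts{
	Name: "garbage_collector_dirty_processing_latency_microseconds",
	Help: "Time in microseconds an item spends in the dirty queue before processing.",
})

// sinceInMicroseconds converts the elapsed time since start into microseconds,
// using the injected clock so tests can substitute a fake clock.
func sinceInMicroseconds(c clock.Clock, start time.Time) float64 {
	return float64(c.Since(start) / time.Microsecond)
}

// observeDirtyLatency is called right after an item is dequeued.
func observeDirtyLatency(c clock.Clock, enqueuedAt time.Time) {
	dirtyProcessingLatency.Observe(sinceInMicroseconds(c, enqueuedAt))
}
```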

monitors []monitor
propagator *Propagator
clock clock.Clock
registeredRateLimiter *RegisteredRateLimiter
Contributor

These are just for registering prometheus metrics or something?

Member Author

Just for prometheus metrics.

Member Author

Comments added.

registeredRateLimiter *RegisteredRateLimiter
registeredRateLimiterForMonitors *RegisteredRateLimiter
// GC caches the owners that do not exist according to the API server.
absentOwnerCache *UIDCache
Contributor

Seems like this would be discoverable via a marker on nodes. The graph does include these, right? Otherwise edge counting won't work how most people would expect.

Member Author

The absentOwnerCache provides a slightly stronger guarantee than the graph does. If the object doesn't exist in the graph, it could be that the informer hasn't observed it yet. But if an object's UID is in the absentOwnerCache, the object couldn't exist.
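
To make that distinction concrete, here is a hypothetical, simplified version of such an absent-owner cache; the PR's actual UIDCache is presumably size-bounded, which is omitted here:

```go
package gcsketch

import (
	"sync"

	"k8s.io/apimachinery/pkg/types"
)

// absentOwnerCache remembers UIDs that the API server has confirmed do not
// exist. Membership is a stronger signal than absence from the graph: the
// graph may simply not have observed an object yet.
type absentOwnerCache struct {
	mu   sync.Mutex
	uids map[types.UID]struct{}
}

func newAbsentOwnerCache() *absentOwnerCache {
	return &absentOwnerCache{uids: map[types.UID]struct{}{}}
}

// Add records a UID only after a live lookup returned "not found".
func (c *absentOwnerCache) Add(uid types.UID) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.uids[uid] = struct{}{}
}

// Has reports whether the owner was previously confirmed absent.
func (c *absentOwnerCache) Has(uid types.UID) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	_, ok := c.uids[uid]
	return ok
}
```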


func (gc *GarbageCollector) monitorFor(resource schema.GroupVersionResource, kind schema.GroupVersionKind) (monitor, error) {
// TODO: consider store in one storage.
glog.V(6).Infof("create storage for resource %s", resource)
Contributor

logging at this level gets caught up in RESTClient logging. I'd make it a five instead.

Member Author

Done.

gc.dirtyQueue.Add(timedItem)
return
}
DirtyProcessingLatency.Observe(sinceInMicroseconds(gc.clock, timedItem.StartTime))
Contributor

Pretty sure this is now tracked by a named work queue automatically.

Member Author

If I dequeue an object, process it, but an error occurs so I enqueue it once more, the normal work queue doesn't know it's the same object, right? I can try to extend the normal work queue to support that case.

Contributor

> If I dequeue an object, process it, but an error occurs so I enqueue it once more, the normal work queue doesn't know it's the same object, right?

If you add rate limited, it knows.
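
The pattern being suggested, roughly (names are illustrative, not the PR's code):

```go
package gcsketch

import "k8s.io/client-go/util/workqueue"

// processNext shows the standard rate-limited requeue loop body: on error the
// same key is re-added and its failure count (and backoff) is tracked per key;
// on success Forget clears that key's failure history.
func processNext(queue workqueue.RateLimitingInterface, process func(item interface{}) error) bool {
	item, quit := queue.Get()
	if quit {
		return false
	}
	defer queue.Done(item)

	if err := process(item); err != nil {
		queue.AddRateLimited(item)
		return true
	}
	queue.Forget(item)
	return true
}
```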

return monitor, nil
}

var ignoredResources = map[schema.GroupVersionResource]struct{}{
Contributor

I know this is pre-existing, but make a note to pull this information from the discovery doc.

Contributor

Err, make an issue.

Member Author

It's in the umbrella issue #26120:

> Need a discovery mechanism to determine what resources GC should manage. For example, GC needs to determine if it should watch for extensions/v1beta1/job or batch/v1/job.
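
A sketch of what pulling that information from discovery could look like with client-go's discovery helpers; the verb filter is an assumption about what "requires GC's attention" would mean in practice:

```go
package gcsketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/discovery"
)

// deletableResources asks the API server which resources the GC could manage:
// it needs list/watch to build the graph and delete to collect garbage.
func deletableResources(d discovery.DiscoveryInterface) ([]*metav1.APIResourceList, error) {
	preferred, err := d.ServerPreferredResources()
	if err != nil {
		return nil, err
	}
	filtered := discovery.FilteredBy(
		discovery.SupportsAllVerbs{Verbs: []string{"list", "watch", "delete"}}, preferred)
	return filtered, nil
}
```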

}
gc.propagator.eventQueue.Add(&workqueue.TimedWorkQueueItem{StartTime: gc.clock.Now(), Object: event})
},
UpdateFunc: func(oldObj, newObj interface{}) {
Contributor

If the old object meta and new object meta have the same owner references, finalizers, and deletionTimestamp, there's no reason to issue the update to the graph builder, right?

Member Author

Right. Currently the check is done in the propagator.

Contributor

> Right. Currently the check is done in the propagator.

This is the spot to do it. Here you have one thread per watch and you can run the checks concurrently. In the dependencyGraphBuilder, you've paid an enqueue, a dequeue, and you're single-threaded on the consumption side.

Member Author

Ack.

Member Author

There were some weird cases where the graph didn't already have the node even though it was an update event, hence this line. IMO this should never happen; maybe it's caused by an informer problem. So it's not safe to ignore some of the update events here.
Let's not introduce a possible source of flakes when adding a new feature. I'll leave a TODO and try the suggested optimization in a follow-up PR.

Member Author

TODO added.

}

type concurrentUIDToNode struct {
*sync.RWMutex
Contributor

These aren't usually pointers. Also, name it please.

Member Author

Done.

go wait.Until(gb.runProcessGraphChanges, 1*time.Second, stopCh)
}

var ignoredResources = map[schema.GroupVersionResource]struct{}{
Contributor

TODO, many of these should be discoverable (cannot be deleted)

Member Author

"TODO: GC should detect if a resource requires its attention via API discovery."

}
for _, c := range changed {
if c.oldRef.BlockOwnerDeletion != nil && *c.oldRef.BlockOwnerDeletion &&
c.newRef.BlockOwnerDeletion != nil && !*c.newRef.BlockOwnerDeletion {
Contributor

c.newRef.BlockOwnerDeletion == nil isn't treated as equivalent to false?

Member Author

Nice catch. Updated.
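
A minimal sketch of the nil-safe check being discussed (helper names are hypothetical):

```go
package gcsketch

// blocks treats a nil *bool the same as false.
func blocks(b *bool) bool {
	return b != nil && *b
}

// unblocked reports whether an owner reference went from blocking owner
// deletion to no longer blocking it, counting nil as "not blocking".
func unblocked(oldBlock, newBlock *bool) bool {
	return blocks(oldBlock) && !blocks(newBlock)
}
```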

func (gb *GraphBuilder) processTransitions(oldObj interface{}, newAccessor meta.Object, n *node) {
if startsWaitingForDependentsOrphaned(oldObj, newAccessor) {
glog.V(5).Infof("add %s to the attemptToOrphan", n.identity)
gb.attemptToOrphan.Add(n)
Contributor (@deads2k, Jan 20, 2017)

return at the end of this block to eliminate the else:

if startsWaitingForDependentsOrphaned(...) {
	...
	return
}

if ... {
	...
}

Member Author

Done.

@k8s-github-robot

[APPROVALNOTIFIER] Needs approval from an approver in each of these OWNERS Files:

We suggest the following people:
cc @derekwaynecarr @fejta
You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

@caesarxuchao (Member Author)

@deads2k @Kargakis comments addressed. PTAL. Thanks.

@k8s-ci-robot (Contributor)

@caesarxuchao: The following test(s) failed:

Test name Commit Details Rerun command
Jenkins GKE smoke e2e 0fcc48f link @k8s-bot cvm gke e2e test this
Jenkins GCI GKE smoke e2e 0fcc48f link @k8s-bot gci gke e2e test this
Jenkins GCE e2e 0fcc48f link @k8s-bot cvm gce e2e test this
Jenkins verification 0fcc48f link @k8s-bot verify test this
Jenkins GCE Node e2e 0fcc48f link @k8s-bot node e2e test this
Jenkins CRI GCE Node e2e 0fcc48f link @k8s-bot cri node e2e test this
Jenkins kops AWS e2e 0fcc48f link @k8s-bot kops aws e2e test this
Jenkins GCI GCE e2e 0fcc48f link @k8s-bot gci gce e2e test this
Jenkins unit/integration 0fcc48f link @k8s-bot unit test this
Jenkins Kubemark GCE e2e 0fcc48f link @k8s-bot kubemark e2e test this
Jenkins GCE etcd3 e2e 0fcc48f link @k8s-bot gce etcd3 e2e test this
Jenkins Bazel Build 0fcc48f link @k8s-bot bazel test this

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@deads2k (Contributor) commented Feb 3, 2017

@lavalamp there's a lot here. You should probably take a pass.

@caesarxuchao (Member Author)

@deads2k do you have more comments? Or shall I just ask lavalamp to take another pass?

@deads2k (Contributor) commented Feb 17, 2017

> @deads2k do you have more comments? Or shall I just ask lavalamp to take another pass?

I think it's @lavalamp's turn. I'd like another pass through when he's done, but I'm pretty happy with the shape.

return ret
}

func (gc *GarbageCollector) attemptToDeleteItem(item *node) error {
Member Author

@lavalamp this function is the main processing logic. The input is the object added to the queue by the watcher. This function takes different actions depending on whether the object "needs to delete dependents in foreground/background", "needs to orphan dependents", "still has a valid owner", or "is waiting for dependents to be deleted".

}

// Dequeueing an event from graphChanges, updating graph, populating dirty_queue.
func (gb *GraphBuilder) processGraphChanges() bool {
Member Author

@lavalamp this function is adapted from processEvent, which builds the dependency graph and adds objects to the attemptToOrphan or attemptToDelete queue. The function is updated to handle objects that need foreground GC.

k8s-github-robot pushed a commit that referenced this pull request Mar 1, 2017
Automatic merge from submit-queue (batch tested with PRs 38676, 41765, 42103, 41833, 41702)

Add synchronous garbage collection

Fix #29891.

Split into five commits:
1. generated: doesn't need review
2. API: got reviewed in #38678, I addressed @lavalamp's comments there.
3. registry changes: @nikhiljindal could you help take a look?
4. gc changes: reviewed by @deads2k in #38679. It needs another pass.
5. tests: @lavalamp @deads2k could take a look?

TODO:
- [ ] Update doc. Note that the existing doc has been refactored in kubernetes/website#2488.
- [ ] add an admission controller to check if a user can set OwnerReference.BlockOwnerDeletion
- [ ] #38676 (comment)
- [ ] split the unit tests garbagecollector_test.go according to the components tested.
- [ ] check whether it's practically safe to use the cached object status in attemptToDeleteItem(), after the synchronous GC feature is stable. (Also see #38676 (comment))
- [ ] add blockOwnerDeletion for rs adoption #38679 (comment)
- [ ] https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/38676/pull-kubernetes-e2e-gce-etcd3/20101/ (improve the log message)

```release-note
Added foreground garbage collection: the owner object will not be deleted until all its dependents are deleted by the garbage collector. Please check out the [user doc](https://kubernetes.io/docs/concepts/abstractions/controllers/garbage-collection/) for details.
deleteOptions.orphanDependents is going to be deprecated in 1.7. Please use deleteOptions.propagationPolicy instead.
```
"garbage_collector_orphan_processing_latency_microseconds",
"garbage_collector_event_queue_latency",
"garbage_collector_dirty_queue_latency",
"garbage_collector_orhan_queue_latency",
Member

Typo: "orphan"

Member Author

It's fixed in HEAD.

akhilerm pushed a commit to akhilerm/apimachinery that referenced this pull request Sep 20, 2022

Kubernetes-commit: 3afefae02a0588654693de27ef5387c1f9ab5f01
Labels
cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
needs-rebase - Indicates a PR cannot be merged because it has merge conflicts with HEAD.
sig/api-machinery - Categorizes an issue or PR as relevant to SIG API Machinery.
size/XXL - Denotes a PR that changes 1000+ lines, ignoring generated files.