
Waiting for Job completion in InitContainer (or before main container starts) #106802

Closed
collimarco opened this issue Dec 3, 2021 · 54 comments
Labels
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.
  • sig/apps: Categorizes an issue or PR as relevant to SIG Apps.
  • sig/node: Categorizes an issue or PR as relevant to SIG Node.

Comments

@collimarco

What would you like to be added?

A common pattern is to run the release phase (e.g. database migrations) inside a Job.
Then you need to wait until the migration is complete before starting the new pods; you can use an InitContainer for that.

For example if you use Rails you can:

  1. Create a Job (e.g. migration-job) with rails db:migrate command
  2. Inside the Deployment (e.g. web app), add an InitContainer that waits for the condition rake db:abort_if_pending_migrations (this command returns success only when the migrations are complete)
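
As a concrete illustration, a minimal sketch of this two-step pattern is shown below. The image name, Job name, and polling loop are assumptions for an arbitrary Rails app, not taken from any specific project:

```yaml
# Step 1: the migration Job. The image is a placeholder for the app image.
apiVersion: batch/v1
kind: Job
metadata:
  name: migration-job
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: myapp:1.2.3                  # placeholder
          command: ["rails", "db:migrate"]
---
# Step 2: the Deployment whose initContainer polls the app-specific check
# command until the migrations are complete.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      initContainers:
        - name: wait-for-migrations
          image: myapp:1.2.3                  # same app image, so the Rake task is available
          command:
            - sh
            - -c
            # rake db:abort_if_pending_migrations exits non-zero while
            # migrations are still pending, so loop until it succeeds.
            - until rake db:abort_if_pending_migrations; do echo "waiting for migrations"; sleep 5; done
      containers:
        - name: web
          image: myapp:1.2.3                  # placeholder
```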

The problem is that not all commands (in step 1) have a corresponding "check" command (in step 2). For some commands it can be extremely hard to find a separate command for step 2 that verifies their completion.

It would be much easier if we could simply wait for a Job to become Complete before starting the pods (in step 2).

Why is this needed?

InitContainers are a great way to wait for a general condition to be met, but it is difficult to check a condition for a Kubernetes component (like a Job).

One of the most common conditions is Job completion, but currently there's no way to express that in Kubernetes (e.g. waitFor: migration-job). It could be something similar to what kubectl wait does on the client side, but expressed inside the spec.
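
To make the request concrete, here is a purely hypothetical sketch of what such a field could look like in a Pod template; neither waitFor nor its schema exists in Kubernetes today, and the condition named mirrors what `kubectl wait --for=condition=complete job/migration-job` checks on the client side:

```yaml
# Hypothetical syntax only: no such field exists in the Pod spec today.
spec:
  waitFor:
    - kind: Job
      name: migration-job
      condition: Complete      # start the containers only once the Job reports Complete
  containers:
    - name: web
      image: myapp:1.2.3       # placeholder
```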

@collimarco added the kind/feature label on Dec 3, 2021
@k8s-ci-robot added the needs-sig label on Dec 3, 2021
@k8s-ci-robot
Contributor

@collimarco: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the needs-triage label on Dec 3, 2021
@neolit123
Member

/sig apps node

@k8s-ci-robot added the sig/apps and sig/node labels and removed the needs-sig label on Dec 6, 2021
@pacoxu
Member

pacoxu commented Dec 8, 2021

The current workaround is Fine Parallel Processing Using a Work Queue.

If you use an init container, https://github.com/groundnuty/k8s-wait-for is one option for waiting for Job completion.
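
For reference, an initContainer using that image typically looks something like the sketch below; the exact tag and argument syntax should be checked against the project's README, and the pod's service account needs RBAC permission to read Jobs:

```yaml
# Sketch only: verify the image tag and argument format against the
# k8s-wait-for README. The pod's service account must be allowed to
# get/list Jobs in the namespace.
initContainers:
  - name: wait-for-migration
    image: groundnuty/k8s-wait-for:v2.0   # tag is an assumption
    args:
      - "job"            # resource type to wait for
      - "migration-job"  # name of the Job that must complete
```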

Not sure if I've misunderstood: the expected solution here is adding waitFor to jobs.specs.

If so, we should not schedule the pod before the Job completes. In that case, we should include sig-scheduling in this feature, and the scheduler would manage the waiting logic together with the job controller.

@alculquicondor
Member

This sounds rather like the opposite: holding the start of a Deployment until an existing Job finishes.

I think you could use a pipeline framework for this, like kubeflow-pipelines or Argo. I don't think the need is generic enough to fit in k8s.

@collimarco
Author

I don't think the need is generic enough to fit in k8s.

Actually, most of the InitContainer examples in the docs are already about checking or waiting for something, so I think this is a common need.

Also consider that Heroku has a "Release Phase", meaning that migrations between releases are a common need: it seems like something that Deployment (a high-level construct for apps) may want to support in the future.

Deployment already has the concept of multiple versions of the app running at the same time, so it would be interesting to have something to facilitate the transition between the two app versions (i.e. referencing a migration Job that must complete before the new pods start).

@alculquicondor
Member

Have you looked into kubeflow-pipelines, Argo or Tekton?

Let me rephrase: I think this need can be addressed at the application level, rather than being part of core Kubernetes. There are plenty of frameworks (listed above) that provide this functionality.

@collimarco
Author

@alculquicondor I have been searching for months for a clean solution (do you have any useful links?) and the best one I have found is to run a migration Job and wait for its completion in an InitContainer. Other solutions don't work very well (e.g. they require downtime, or the environment is not yet updated and the migration runs in the "previous" environment, etc.).

The Job + InitContainer solution works perfectly as long as you have an app-specific command that can verify the completion of the migration (like rake db:abort_if_pending_migrations). However, since you don't always have such a command, it would be useful to have a generic way to wait for Job completion.

Also, Deployment is part of apps/v1, so it seems like the right place for common app needs (release phases or db migrations).

@alculquicondor
Member

alculquicondor commented Dec 14, 2021

Note that waiting in an initContainer is potentially wasteful (the node resources are allocated for the pod, but nothing is actually running on them).

I think the ideal solution is for the Deployment not to be created until the Job finishes. You could achieve this from the Job itself (by using the k8s API to create the Deployment). But the frameworks I already mentioned focus on this exact problem. Argo probably fits this use case best: https://argoproj.github.io/
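
A rough sketch of that "create the Deployment from the Job itself" idea follows; the kubectl-capable image, ConfigMap, and ServiceAccount names are assumptions, and the ServiceAccount would need RBAC permission to create or update Deployments:

```yaml
# Assumed setup: an image that contains both the app and kubectl, a
# ServiceAccount allowed to manage Deployments, and a ConfigMap that
# holds the Deployment manifest.
apiVersion: batch/v1
kind: Job
metadata:
  name: migrate-then-deploy
spec:
  template:
    spec:
      serviceAccountName: deployer              # needs RBAC on apps/deployments
      restartPolicy: Never
      containers:
        - name: migrate-and-deploy
          image: myapp-with-kubectl:1.2.3       # placeholder
          command:
            - sh
            - -c
            - rails db:migrate && kubectl apply -f /manifests/deployment.yaml
          volumeMounts:
            - name: manifests
              mountPath: /manifests
      volumes:
        - name: manifests
          configMap:
            name: web-deployment-manifest       # contains deployment.yaml
```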

@collimarco
Author

@alculquicondor Waiting a few minutes for a migration to complete doesn't seem like a waste... The pods would be there in any case to run the application.

Yes, I know about Argo; I meant whether you have a specific link for database migrations. As I already said, I did extensive research on the topic, and it seems that most Rails applications use the strategy I described above.

@alculquicondor
Member

I think an Argo workflow solves the problem elegantly, from an API point of view. I'm not in the domain of database migrations, so I don't know about people doing this exact thing with Argo.

But my point is that there seems to be a solution and, thus, not enough justification to bake this into the Deployment/ReplicaSet API. But I'm not a SIG Apps approver. If you feel strongly about it, you can join the SIG meetings to ask for feedback.

@collimarco
Author

From the official K8s documentation:

Because init containers run to completion before any app containers start, init containers offer a mechanism to block or delay app container startup until a set of preconditions are met. Once preconditions are met, all of the app containers in a Pod can start in parallel.
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

That is exactly my use case: I'm using them to wait for a condition (migration completed).

I'm not in the domain of database migrations, so I don't know about people doing this exact thing with Argo.

OK, can you tag someone on your team with this specific knowledge?

@alculquicondor
Member

@kow3ns ?

@denkensk
Member

@collimarco
Job example in argo-workflows may help you.
https://github.com/argoproj/argo-workflows/blob/master/examples/k8s-jobs.yaml

@collimarco
Author

@denkensk Yes, that is a normal Job defined in Argo Workflows...

However, you are defining a sequence of imperative steps, so you can't use an entirely declarative approach. On the other hand, with kubectl apply -f my-entire-app.yml and a migration Job + InitContainer you get a declarative approach: one command, based only on Kubernetes.

For example, one drawback of the Argo Workflows approach is that the migration runs before the deploy, so it doesn't see the updated environment. With kubectl apply plus an InitContainer that waits, the Job runs in the updated environment.

@denkensk
Member

denkensk commented Dec 16, 2021

@collimarco In my personal opinion, a DAG operator like Argo Workflows is more general than an InitContainer. Migration + Deploy is just two stages, and it's easy to use an initContainer to control that order.

But based on my actual experience with Argo Workflows in a production environment, the graphs can be more complex, with at least 5 or more stages. If we chose to use InitContainers for this, we would face the issues below:

  1. Cluster resources are limited. We create all the pods at once and can't control the order in which they are scheduled. If the stage-2 and stage-4 pods are scheduled first, this can lead to a deadlock.

  2. We can't dynamically choose which jobs to run in the next stage based on the results of the previous stage.

@collimarco
Author

Cluster resources are limited

Pods are replaced by rolling updates managed by Deployments, so with my strategy you don't have more pods and you don't have resource problems... It's just like a normal rolling update.

at least 5 or more stages

Many apps can run on a simple PaaS like Heroku and don't need so many stages... For example, Heroku has only one stage/command in the release phase. It would be interesting to have a strategy inside Kubernetes - which is becoming the standard - for this common use case.

@alculquicondor
Member

I suggest you join a SIG Apps meeting to discuss your suggestion.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 4, 2022
@collimarco
Author

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Apr 4, 2022
@alculquicondor
Member

Did you ever bring this up to the SIG Apps meeting?

Also @imjasonh, this kind of sounds similar to what you were talking about at WG Batch

@patsevanton

any update?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Feb 6, 2023
@collimarco
Author

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Feb 6, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on May 7, 2023
@JuryA

JuryA commented May 20, 2023

it would be nice to have this feature native to k8s init container.

Why? What about this suggestion #106802 (comment)?

I would like to respond to your recent comments, which I find inappropriate and incorrect. I firmly believe that we need this functionality directly within Kubernetes, rather than relying on external tools like ArgoCD. It is evident that your lack of understanding and unwillingness to acknowledge the need for this functionality demonstrate a probable lack of practical experience in a production environment. The responses from other users clearly confirm this need, yet you persistently ask "why?" without grasping the concept.

DevOps engineers with extensive experience in the world of Kubernetes understand the challenges associated with waiting for the completion of dependent tasks or migrations. They recognize the importance of having this functionality directly within the core of Kubernetes. Your rejection of this idea and attempts to redirect us to other tools are out of touch with reality and do not support progress and the simplicity of application management on Kubernetes.

Implementing this feature would provide significant added value and streamline the work of many teams dealing with the complexities of dependency management and migrations. It is crucial to listen to the needs of the community and add features that are genuinely useful and necessary. Furthermore, other users have already come up with various ways to address this problem, which demonstrates a genuine demand for a native feature to wait for task completion in Kubernetes.

I assume that your inquiries regarding this topic stem from a desire to improve Kubernetes and contribute to its further development. Therefore, I kindly request that you reconsider your stance, take into account the feedback from other users, and consider the benefits this functionality would bring to the broader community.

Thank you for your understanding.

@alculquicondor
Member

It is crucial to listen to the needs of the community and add features that are genuinely useful and necessary.

This is an open source project. Everyone is welcome to come with a proposal, share, discuss it, etc. Sometimes it's not enough to say "I need this feature". What are the complete semantics of the feature? Why are alternatives not enough? This is all encompassed in the KEP process: https://github.com/kubernetes-sigs/kueue/tree/main/keps/NNNN-template

As I have already stated: if you believe this is a needed feature, share a proposal, come to a SIG meeting, and we'll guide you through the process.
If no maintainer or long-time contributor has offered to design or implement the feature, it's likely because they (myself included) don't have enough bandwidth to do so. And there are valid alternatives out there that can be built on top of k8s.

@JuryA

JuryA commented Jun 3, 2023

@alculquicondor Here is the KEP draft based on the provided template. Can you review it? I'm also very busy (and I'm more DevOps than Developer), so I cannot contribute code, but hopefully... I found some spare time to prepare this draft (it's my first KEP, so it will need more work anyway). Maybe others from this thread (e.g. @collimarco, @iocanel, @Tensho, @rahul-sharma-78) will help move it further 😉:


KEP-NNNN: Waiting for required Pod(s) to be Running/Succeeded before dependent Pod starts

Summary

This KEP proposes the addition of a new functionality in Kubernetes that allows pods to wait for specific pods to reach the Running or Succeeded state before starting. This feature is crucial for ensuring proper sequencing and dependencies between pods in a Kubernetes cluster.

Motivation

In certain scenarios, a pod must wait for one or more prerequisite pods to be in the Running or Succeeded state before it can start. This functionality is currently missing in Kubernetes, leading to challenges in managing dependencies and sequencing of pods. This KEP addresses this limitation and provides a robust mechanism for handling dependencies between pods.

Goals

  • Introduce a new functionality that enables pods to wait for required pods to reach the Running or Succeeded state.
  • Ensure proper sequencing and dependency management between pods.
  • Improve the overall reliability and stability of Kubernetes workloads.

Non-Goals

  • This proposal does not aim to introduce changes to the core scheduling or resource allocation mechanisms in Kubernetes.

Proposal

The proposed solution involves adding a new field to the pod specification that allows users to specify the pods on which a dependent pod should wait. This field can accept multiple pod selectors or pod names, allowing for flexible configuration.
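
An entirely hypothetical illustration of such a field is sketched below; the field name, its schema, and the values are invented here to make the proposal concrete and are not part of any Kubernetes API:

```yaml
# Hypothetical API sketch; neither "dependsOn" nor its schema exists today.
apiVersion: v1
kind: Pod
metadata:
  name: dependent-pod
spec:
  dependsOn:
    - podName: migration-job-pod        # wait for a specific pod by name
      requiredPhase: Succeeded
    - podSelector:
        matchLabels:
          app: cache-warmer             # or select prerequisite pods by label
      requiredPhase: Running
      timeoutSeconds: 600               # example of the timeout handling noted below
  containers:
    - name: app
      image: myapp:1.2.3                # placeholder
```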

User Stories (Optional)

Story 1

As a Kubernetes user, I want to deploy a pod that relies on the successful completion of a specific job pod. I should be able to define the dependency between these pods so that my dependent pod starts only when the job pod has been successfully completed.

Notes/Constraints/Caveats (Optional)

  • The waiting mechanism should handle various scenarios, including waiting for multiple pods or waiting for a specific condition on the prerequisite pods.
  • Error handling and timeout mechanisms should be considered to handle cases where the prerequisite pods do not reach the required state within a specified time frame.

Risks and Mitigations

  • There may be a slight increase in the complexity of pod startup and scheduling logic, which could impact the overall system performance. Proper testing and optimization will be required to mitigate this risk.

Design Details

Implementation Proposal

The proposed implementation involves modifying the Kubernetes scheduler and pod startup logic to accommodate the waiting functionality. When a dependent pod is scheduled, the scheduler checks the specified prerequisite pods' status and waits until they reach the Running or Succeeded state before starting the dependent pod.

Test Plan

Prerequisite testing updates

The existing unit tests and integration tests for pod scheduling and startup logic need to be updated to cover the new waiting functionality.

Unit Tests

  • Write unit tests to ensure proper parsing and handling of the new pod specification field.
  • Test different scenarios, such as waiting for multiple pods, waiting for a specific condition, and handling timeouts.

Integration Tests

  • Create integration tests to validate the functionality end-to-end, including pod scheduling, waiting, and startup.

Graduation Criteria

The proposed functionality will go through the Kubernetes enhancement process and will be considered graduated when it has been implemented, tested, and reviewed by the Kubernetes community.

Implementation History

  • [2023-06-03] Initial proposal drafted.
  • [pending] Community feedback gathered and incorporated.
  • [pending] Implementation work started.
  • [pending] Pull request submitted for review.

Drawbacks

  • The introduction of this new functionality may add complexity to the Kubernetes codebase and increase the learning curve for new users.
  • The waiting mechanism could introduce delays in pod startup, impacting overall workload performance.

Alternatives

Delay job start instead of workload admission

Instead of delaying the admission of dependent workloads, an alternative approach could be to delay the start of the job itself until all prerequisite pods are in the required state.

Pod Resource Reservation

Another alternative is to introduce a pod resource reservation mechanism, where pods can reserve resources and wait for them to become available before starting. This approach may require changes to the Kubernetes scheduler and resource allocation logic.

More granular configuration to enable the mechanism

Instead of a single field for specifying prerequisite pods, a more granular configuration option could be provided to enable the waiting mechanism. This would allow users to define custom conditions and dependencies based on their specific requirements.

@JuryA

JuryA commented Jun 4, 2023

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Jun 4, 2023
@kerthcet
Member

kerthcet commented Jun 5, 2023

FYI @JuryA you should post your KEP under https://github.com/kubernetes/enhancements, it's more convenient for discussion, you can follow the guide here https://github.com/kubernetes/enhancements#when-to-create-a-new-enhancement-issue
Or a google doc for predesign.

@alculquicondor
Member

For anyone looking to proceed with a KEP, I would like to see the following questions answered:

  • What does the API look like? How do we reference the Pod(s) to watch for?
  • Can it be more than one Pod?
  • What happens if no Pod is found?
  • What happens if a prerequisite Pod fails or disappears? Does the dependent Pod keep waiting or does it fail?
  • When waiting for Pods "running", what happens if the running pod fails and we have already started the dependent pod?
  • Running != Ready, so it's worth asking which state you really want to match.
  • When waiting for Pods "succeeded", how do you ensure that the prerequisite Pod still exists and is not garbage-collected?
  • And the most important question: Why does this have to be supported in core k8s, instead of a workflow controller built on top. What are the current challenges of using the existing workflow frameworks? Can we (instead) do something in core k8s to make it easier to write workflow frameworks?

Also, it looks like you abandoned the idea of waiting in an init container. Is this correct? That is probably good, but worth listing in the alternatives.
Did you also abandon the idea of waiting for Jobs to complete?

@JuryA

JuryA commented Jun 7, 2023

FYI @JuryA you should post your KEP under https://github.com/kubernetes/enhancements, it's more convenient for discussion, you can follow the guide here https://github.com/kubernetes/enhancements#when-to-create-a-new-enhancement-issue

Or a google doc for predesign.

Thank you for your guidance. I will incorporate the missing points and relocate it to the proposed location.

@marianhlavac

It would be great to see this KEP posted and implemented; I'd definitely support such a feature.

@dnhandy

dnhandy commented Dec 20, 2023

@JuryA did you get that posted to the kubernetes/enhancements project? Do you have a direct link? I'd really like to follow this, as one more person who would really like to see this done.

@alculquicondor I don't think anyone is arguing that the InitContainer approach is what we want... we're saying it's the ugly alternative we have to live with because there's nothing native. I'd much rather see the pod not even scheduled until the precondition is met (much like what happens when the pod needs a PVC). Your list of questions is valid and all of them should be ironed out in the KEP, but, as several people have pointed out, this is a near-ubiquitous need, and not having answers to all the questions doesn't mean the request isn't valid. As someone who has a CI pipeline built outside Argo, being told k8s doesn't support my extremely common use case unless I migrate my entire CI pipeline to a third-party offering really makes k8s look like an incomplete solution.
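
For readers unfamiliar with the PVC analogy above, this is the existing behavior being referenced: a Pod whose volume points at a PersistentVolumeClaim that does not exist or is not yet bound simply stays Pending and is not started (names below are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: myapp:1.2.3            # placeholder
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: web-data         # the Pod stays Pending until this PVC exists and is bound
```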

@alculquicondor
Copy link
Member

and not having answers to all the questions doesn't mean the request isn't valid

I didn't say it's not.
But in order to proceed with feature approval, you need all of those questions answered, and a commitment to implement it.

being told k8s doesn't support my extremely common use case unless I migrate my entire CI pipeline to a third-party offering really makes k8s look like an incomplete solution.

You need to think of k8s as an operating system kernel. That's why one of the questions is how we can make writing workflow managers easier, instead of implementing a full-blown workflow manager.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Mar 26, 2024
@csviri

csviri commented Apr 12, 2024

Hi,
Please take a look at this project; the Glue custom resource is meant to solve such use cases:
https://github.com/csviri/kubernetes-glue-operator?tab=readme-ov-file#the-glue-resource

@AnthonyDewhirst

Can we get a link to the KEP posted here, please?

@xeor

xeor commented May 6, 2024

Did anyone make the KEP for this? Anyone have a link? @JuryA ?

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jun 5, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot closed this as not planned on Jul 5, 2024