
Allow probes to run on a more granular timer. #76951

Open
markusthoemmes opened this issue Apr 23, 2019 · 7 comments


markusthoemmes commented Apr 23, 2019

What would you like to be added:

For posterity:

type Probe struct {
	InitialDelaySeconds int32 // delay before the first probe fires
	TimeoutSeconds      int32 // per-probe timeout
	PeriodSeconds       int32 // interval between probes
	...
}

Probes today take all their specified timeouts and delays in seconds, which arbitrarily sets a lower bound of at least one second on pod readiness. For usual Kubernetes workloads that's fine, but for serverless workloads a quick startup of new containers is key and is directly reflected in the end user's latency.

Could these be a Duration instead, so users can choose whatever granularity they need?

Why is this needed:

To make workloads with very sensitive startup latency (like serverless workloads) easier to implement on Kubernetes.


Sorry if this is a duplicate; I haven't found a similar issue.


evankanderson commented Apr 24, 2019

@kubernetes/sig-node-feature-requests

k8s-ci-robot added sig/node and removed needs-sig labels Apr 24, 2019

k8s-ci-robot (Contributor) commented Apr 24, 2019

@evankanderson: Reiterating the mentions to trigger a notification:
@kubernetes/sig-node-feature-requests

In response to this:

@kubernetes/sig-node-feature-requests

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

liggitt (Member) commented Apr 24, 2019

If a more granular duration field were added, the field's definition should include the units, rather than tying the field to golang duration parsing; c.f. https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md

fejta-bot commented Jul 24, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale


fejta-bot commented Aug 23, 2019

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

dgerd commented Sep 5, 2019

/remove-lifecycle rotten
