
StatefulSet without Service #69608

Closed
pohly opened this issue Oct 10, 2018 · 49 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. sig/apps Categorizes an issue or PR as relevant to SIG Apps.

Comments

@pohly
Contributor

pohly commented Oct 10, 2018

Is this a BUG REPORT or FEATURE REQUEST?:

/kind feature

What happened:

Developers have started to use StatefulSet without a Service definition as a way to deploy a pod on a cluster exactly once, for example here: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/blob/master/deploy/kubernetes/stable/controller.yaml

Conceptually that makes sense when the app running in the pod doesn't accept any connections from outside. In this example, it's the CSI sidecar containers which react to changes in the apiserver.

But is this a supported mode of operation for a StatefulSet? https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#statefulset-v1-apps says that

serviceName is the name of the service that governs this StatefulSet. This service must exist before the StatefulSet, and is responsible for the network identity of the set. Pods get DNS/hostnames that follow the pattern: pod-specific-string.serviceName.default.svc.cluster.local where "pod-specific-string" is managed by the StatefulSet controller.

serviceName is mandatory. The example above gets around this by setting serviceName without defining a corresponding Service.

The concern (raised by @rootfs in ceph/ceph-csi#81) is that while it currently works, future revisions of the StatefulSet controller might not support this anymore.
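The pattern under discussion can be sketched roughly as follows (a minimal illustration with hypothetical names, not taken from the linked manifest): `serviceName` is set to satisfy validation, but no matching Service object is ever created.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: csi-controller        # illustrative name
spec:
  serviceName: csi-controller # no Service with this name exists in the cluster
  replicas: 1                 # at-most-one-pod semantics via StatefulSet ordering
  selector:
    matchLabels:
      app: csi-controller
  template:
    metadata:
      labels:
        app: csi-controller
    spec:
      containers:
        - name: controller
          image: example.com/csi-controller:latest  # placeholder image
```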

What you expected to happen:

Clarify in the docs that the Service definition is not needed or (better?) make serviceName optional.

Environment:

  • Kubernetes version (use kubectl version): tested on 1.11 and 1.12
@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Oct 10, 2018
@pohly
Contributor Author

pohly commented Oct 10, 2018

/sig apps

@k8s-ci-robot k8s-ci-robot added sig/apps Categorizes an issue or PR as relevant to SIG Apps. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Oct 10, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 8, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 7, 2019
@pohly
Contributor Author

pohly commented Feb 7, 2019

/remove-lifecycle stale

Still a valid question...

@kfox1111

kfox1111 commented Feb 8, 2019

+1. I'd like to know this too. Having StatefulSets without a Service makes sense to me if they don't care about a stable network identity, just that there is never more than one replica.

@pohly
Contributor Author

pohly commented Feb 8, 2019

/remove-lifecycle rotten

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 9, 2019
@pohly
Contributor Author

pohly commented May 9, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 9, 2019
@Noah-Huppert

Would a headless service be the solution to this problem?

Kubernetes Docs:

Sometimes you don’t need load-balancing and a single Service IP. In this case, you can create what are termed “headless” Services, by explicitly specifying "None" for the cluster IP (.spec.clusterIP).
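A headless Service along those lines might look like this (a sketch with illustrative names; the Service name must match the StatefulSet's `serviceName` for the pods to get DNS records):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: csi-controller   # must match the StatefulSet's serviceName
spec:
  clusterIP: None        # headless: no virtual IP, no load-balancing
  selector:
    app: csi-controller  # must match the pod labels
  ports:
    - port: 80           # illustrative; pick the port(s) the app actually serves
```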

@pohly
Contributor Author

pohly commented Jul 14, 2019 via email

@kfox1111

What is the status of this?

@marcelloromani

@Noah-Huppert Yes, a headless service was the solution for me. From what I understand it's the only workaround at the moment.

@kfox1111

I've launched StatefulSets without Services and it seems to work OK. We're just looking for a guarantee that this will continue to be safe going forward. The docs say it's required, but the implementation does not enforce it, last I looked.

@kow3ns
Member

kow3ns commented Sep 12, 2019

We will always support this. You can close this issue. Please feel free to open an issue against the docs repository, or (if you are feeling generous) a PR against the same.

@kfox1111

From sig-apps channel:
@kowens
@kfox1111 if you don’t need DNS for the Pod it is supported

@kowens
the reason the docs are written as-is is for conceptual simplicity. You do need a Service if you want to generate CNAME and SRV records for the Pods
it is the most common use case

So, they won't break serviceless statefulsets.

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 1, 2021
@TBBle

TBBle commented Jun 2, 2021

/remove-lifecycle rotten

#69608 (comment) still seems to be the state of this issue.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 2, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 31, 2021
@TBBle

TBBle commented Aug 31, 2021

At this point the two CSI examples from the original ticket have gone away (see kubernetes-sigs/gcp-compute-persistent-disk-csi-driver#521 and ceph/ceph-csi#414 which replaced the StatefulSet singletons with Deployments using leader-election).

Are there any practical/current use-cases for StatefulSet without a backing Service? The ticket's approaching three years old and has not yet been triaged, suggesting there's not much appetite/motivation to advance the proposed change in #69608 (comment).
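For reference, the replacement pattern those projects migrated to might be sketched like this (names, image tag, and replica count are illustrative, not taken from the linked PRs; `--leader-election` is the CSI sidecars' flag for electing a single active instance):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: csi-controller
spec:
  replicas: 2                 # more than one replica is safe with leader election
  selector:
    matchLabels:
      app: csi-controller
  template:
    metadata:
      labels:
        app: csi-controller
    spec:
      containers:
        - name: csi-provisioner
          image: registry.k8s.io/sig-storage/csi-provisioner:v3.5.0  # placeholder tag
          args:
            - --leader-election  # only the elected leader acts at any time
```

With leader election, the Deployment no longer needs a StatefulSet's at-most-one guarantee, which removes the need for a `serviceName` in the first place.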

@pohly
Contributor Author

pohly commented Aug 31, 2021

This is still relevant, for example here: https://github.com/kubernetes-csi/csi-driver-host-path/blob/c480b671f63c142defd2180a6ca68f85327c331f/deploy/kubernetes-1.21/hostpath/csi-hostpath-plugin.yaml#L189-L199

The Service referenced there never gets created because this issue clarified that this is okay.

@TBBle

TBBle commented Aug 31, 2021

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 31, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 29, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 29, 2021
@kfox1111

/remove-lifecycle stale

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@TBBle

TBBle commented Jan 29, 2022

/reopen

#69608 (comment) intended to keep this ticket active, but named the wrong lifecycle, I assume by accident.

@k8s-ci-robot
Contributor

@TBBle: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

#69608 (comment) intended to keep this ticket active, but named the wrong lifecycle, I assume by accident.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@djschny

djschny commented Jan 29, 2022

/reopen

@k8s-ci-robot
Contributor

@djschny: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
