kubectl doesn't patch removing serviceAccountName from deployment #108208

Closed
mkrutik opened this issue Feb 18, 2022 · 11 comments
Assignees
Labels
  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • sig/api-machinery: Categorizes an issue or PR as relevant to SIG API Machinery.
  • sig/cli: Categorizes an issue or PR as relevant to SIG CLI.
  • triage/accepted: Indicates an issue or PR is ready to be actively worked on.
  • wg/api-expression: Categorizes an issue or PR as relevant to WG API Expression.

Comments

@mkrutik

mkrutik commented Feb 18, 2022

What happened?

kubectl apply -f does not remove the serviceAccountName field from the live Deployment after the field has been deleted from the manifest.

What did you expect to happen?

Removing serviceAccountName from the Deployment manifest should take effect when running kubectl apply -f deployment.yaml (or kubectl should at least warn that the change was not applied for some reason).

How can we reproduce it (as minimally and precisely as possible)?

  1. Create a Deployment manifest with any serviceAccountName set and apply it (a minimal sketch is shown below).
  2. Remove the serviceAccountName field from the manifest and apply it again via kubectl apply -f (use -v=9 to see request/response logs).
  3. Use kubectl describe or kubectl get on that Deployment: serviceAccountName is still set.
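
For illustration, a minimal manifest along these lines reproduces it. The names demo and demo-sa and the nginx image are placeholders chosen for this sketch, and the ServiceAccount is assumed to already exist:

```yaml
# deployment.yaml (step 1)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      serviceAccountName: demo-sa   # delete this line for step 2
      containers:
        - name: demo
          image: nginx:1.21
```

```sh
kubectl apply -f deployment.yaml        # step 1
# edit deployment.yaml to drop serviceAccountName, then:
kubectl apply -f deployment.yaml -v=9   # step 2
# step 3: still prints "demo-sa"
kubectl get deployment demo -o jsonpath='{.spec.template.spec.serviceAccountName}'
```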

Anything else we need to know?

No response

Kubernetes version

Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-02-16T12:30:48Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}

Cloud provider

-

OS version

Darwin macbook-pro.home 21.2.0 Darwin Kernel Version 21.2.0: Sun Nov 28 20:28:54 PST 2021; root:xnu-8019.61.5~1/RELEASE_X86_64 x86_64

Install tools

`kubectl` installed via `brew`

Container runtime (CRI) and version (if applicable)

-

Related plugins (CNI, CSI, ...) and versions (if applicable)

-
@mkrutik mkrutik added the kind/bug Categorizes issue or PR as related to a bug. label Feb 18, 2022
@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Feb 18, 2022
@mkrutik
Author

mkrutik commented Feb 18, 2022

/sig cli

@k8s-ci-robot k8s-ci-robot added sig/cli Categorizes an issue or PR as relevant to SIG CLI. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Feb 18, 2022
@ardaguclu
Member

/triage accepted
/sig api-machinery

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Feb 21, 2022
@knight42
Member

@mkrutik kubectl apply did remove the serviceAccountName field, but it didn't touch the similar but already deprecated serviceAccount field, so serviceAccountName appears to be unchanged. (You can verify this by running kubectl edit; only by removing the serviceAccountName and serviceAccount fields at the same time can you unset the service account.)

As a workaround, I think you could try kubectl replace, which replaces the Deployment instead of patching it.
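
To illustrate both routes, a hedged sketch (the deployment name demo is a placeholder; kubectl replace sends a full update from the manifest, while the patch variant explicitly clears both fields in one request):

```sh
# Route 1: replace the whole object from the manifest that no longer sets the field.
kubectl replace -f deployment.yaml

# Route 2: clear both the current and the deprecated field in a single JSON merge patch
# (with --type=merge, a null value deletes the key). "demo" is a placeholder name.
kubectl patch deployment demo --type=merge -p \
  '{"spec":{"template":{"spec":{"serviceAccountName":null,"serviceAccount":null}}}}'
```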

@ardaguclu
Member

@knight42 is right, this issue is due to the deprecated field overriding it. I think the best thing is to use replace in that case.

/remove-triage accepted

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. and removed triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Feb 23, 2022
@fedebongio
Contributor

/assign @apelisse @jpbetz
could either of you take a look / reassign please?
/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Feb 24, 2022
@apelisse
Member

/wg api-expression

@k8s-ci-robot k8s-ci-robot added the wg/api-expression Categorizes an issue or PR as relevant to WG API Expression. label Feb 28, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 29, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 28, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this: the /close lifecycle comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@zswanson

For anyone stumbling into this issue, it's effectively a duplicate of #72519, which was closed with a comment indicating this behavior is somewhat intentional.

serviceAccountName is defaulted from serviceAccount for backwards compatibility. If you want to remove the fields you must set both to empty explicitly. It is not possible to leave one field unset.
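
In practice that means the applied manifest has to carry both fields explicitly, e.g. set to empty strings. A hedged sketch of the relevant pod template fragment (container name and image are placeholders for this example):

```yaml
# Explicitly empty both fields so apply clears them instead of leaving one to be defaulted back.
spec:
  template:
    spec:
      serviceAccountName: ""   # current field
      serviceAccount: ""       # deprecated field; serviceAccountName is defaulted from it
      containers:
        - name: demo
          image: nginx:1.21
```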

meatballhat added a commit to rstudio/helm that referenced this issue Nov 7, 2022
which *must be done* according to an amazing thread of issues reported
by humans bitten by this intentional inconsistency in kubernetes "apply"
behavior 🎉

As referenced in the body of this change, one may trace back through
this gotcha from this issue comment:

kubernetes/kubernetes#108208 (comment)

and also see this mention in the kubernetes docs in a "Note" callout
near the end of this section:

https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-multiple-service-accounts
colearendt pushed two commits to rstudio/helm that referenced this issue Nov 28, 2022 (same commit message as above).