
kubectl apply (client-side) removes all entries when attempting to remove a single duplicated entry in a persisted object #58477

Open
tmszdmsk opened this issue Jan 18, 2018 · 32 comments · May be fixed by #125932
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. priority/backlog Higher priority than priority/awaiting-more-evidence. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery.

Comments

@tmszdmsk

tmszdmsk commented Jan 18, 2018

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

Lists in API objects can define a named property that should act as a "merge key". The value of that property is expected to be unique for each item in the list. However, gaps in API validation allow some types to be persisted with multiple items in the list sharing the same value for a mergeKey property.

The algorithm used by kubectl apply detects removals from a list based on the specified key, and communicates that removal to the server using a delete directive, specifying only the key. When duplicate items exist, that deletion directive is ambiguous, and the server implementation deletes all items with that key.
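
To make the failure concrete, here is a sketch of the kind of patch fragment client-side apply computes when the env var x is removed from the manifest (the container name is hypothetical, and the patch is shown as YAML for readability while kubectl actually sends JSON):

# The delete directive identifies the list item only by its merge key ("name"),
# so when duplicates exist the server removes every entry named "x",
# including the copy that was meant to stay.
spec:
  template:
    spec:
      containers:
        - name: app          # hypothetical container name
          env:
            - name: x
              $patch: delete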

Known API types/fields which define a mergeKey but allow duplicate items to be persisted:

  • PodSpec (affects all workload objects containing a pod template)
  • Service

Original report

===

What happened:
For deployment resource:

A container defines an environment variable named x that is duplicated (there are two env vars with the same name, and the value is also the same).

When you fix the deployment resource descriptor so that the environment variable named x appears only once and push it with kubectl apply, the resulting deployment has no environment variable named x at all, and therefore no environment variable named x is passed to the replica set and pods.

What you expected to happen:
After fixing the deployment, the environment variable named x is defined in the deployment exactly once.

How to reproduce it (as minimally and precisely as possible):

  1. create a deployment with a container that has a duplicated environment variable (see the manifest sketch below)
  2. kubectl apply it
  3. fix the deployment by removing one of the duplicated environment variable definitions
  4. kubectl apply it
  5. kubectl get deployment/your-deployment -o yaml prints the deployment without the environment variable at all
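
A minimal manifest sketch for step 1 (names, image, and values are illustrative):

# Deployment whose container defines the same env var twice (step 1).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dup-env-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dup-env-demo
  template:
    metadata:
      labels:
        app: dup-env-demo
    spec:
      containers:
        - name: app
          image: nginx
          env:
            - name: x
              value: "1"
            - name: x      # duplicate entry; deleting this line (step 3) and
              value: "1"   # re-applying (step 4) removes both copies of x

After applying the fixed manifest, kubectl get deployment/dup-env-demo -o yaml shows no env var named x at all.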

Anything else we need to know?:
nope

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T20:00:41Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
    Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:40:06Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: private Kubernetes cluster
  • OS (e.g. from /etc/os-release): N/A
  • Kernel (e.g. uname -a): N/A
  • Install tools: N/A
  • Others: N/A
@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. kind/bug Categorizes issue or PR as related to a bug. labels Jan 18, 2018
@tmszdmsk
Author

@kubernetes/sig-api-machinery-bugs

@k8s-ci-robot k8s-ci-robot added sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jan 18, 2018
@k8s-ci-robot
Contributor

@tmszdmsk: Reiterating the mentions to trigger a notification:
@kubernetes/sig-api-machinery-bugs

In response to this:

@kubernetes/sig-api-machinery-bugs

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@liggitt
Member

liggitt commented Jan 18, 2018

@mengqiy likely due to strategic patch computation sending a "remove x" patch

@liggitt
Member

liggitt commented Jan 18, 2018

the envvar name is supposed to be the unique key of items in the list, yet the apiserver allowed a duplicate to be persisted in the first place. that's likely the cause of the bug
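
For reference, an approximate excerpt of the published OpenAPI schema for io.k8s.api.core.v1.Container, which is where the merge key for env comes from:

# Approximate excerpt; the env list is declared to be merged by the "name" key.
env:
  items:
    $ref: '#/definitions/io.k8s.api.core.v1.EnvVar'
  type: array
  x-kubernetes-patch-merge-key: name
  x-kubernetes-patch-strategy: merge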

@lavalamp
Member

It sounds like the validation is inconsistent with the schema's merge key. It should either not construct SMP with the env name as a key, or it shouldn't let you specify the same env var twice.

@jennybuckley would you like to look at the validation to see if it is doing the right thing? Is it intentional that people can put the same var in the list multiple times?

@yue9944882
Member

yue9944882 commented Feb 8, 2018

I believe it's related to #59119. They might be the same issue.

This issue can be fixed by #46161.

@yue9944882
Member

Otherwise, to prevent the inconsistency proactively, how about having a GET request to decide whether to respect the last-applied-configuration annotation?

@jennybuckley

jennybuckley commented Feb 8, 2018

@yue9944882
I think that PR could fix this, but it has been unmerged for 9 months now.
Also, it isn't very clear to me why we should be allowing multiple definitions of the same environment variable anyway. I think #59593 could fix this in the short term.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 9, 2018
@tmszdmsk
Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 10, 2018
@redbaron
Contributor

This is most likely due to strategic merge patch not handling duplicated keys correctly; see #65106.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 13, 2018
@jethrogb

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 13, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 12, 2018
@redbaron
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 12, 2018
@kiyutink

Is there a workaround for this that doesn't involve deleting the deployment?

@jethrogb

Use edit, patch, or replace instead of apply.
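
For example, a JSON Patch (kubectl patch --type=json) removes list entries by index rather than by merge key, so it can target a single duplicate. A sketch, with hypothetical indexes and shown in YAML form for readability:

# Removes only the second env entry of the first container. The indexes are
# hypothetical; inspect the live object first, e.g. with
# kubectl get deployment/your-deployment -o yaml
- op: remove
  path: /spec/template/spec/containers/0/env/1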

@MikkCZ

MikkCZ commented Aug 27, 2020

@kiyutink for me there was.

The solution was to kubectl apply once more. Then everything was fixed and the env variable appeared again as defined in the manifest.

@zhangguanzhang

zhangguanzhang commented Aug 13, 2021

same issue with 1.20.6

kubectl version -o json
{
  "clientVersion": {
    "major": "1",
    "minor": "20",
    "gitVersion": "v1.20.6",
    "gitCommit": "8a62859e515889f07e3e3be6a1080413f17cf2c3",
    "gitTreeState": "clean",
    "buildDate": "2021-04-15T03:28:42Z",
    "goVersion": "go1.15.10",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "serverVersion": {
    "major": "1",
    "minor": "20",
    "gitVersion": "v1.20.6",
    "gitCommit": "8a62859e515889f07e3e3be6a1080413f17cf2c3",
    "gitTreeState": "clean",
    "buildDate": "2021-04-15T03:19:55Z",
    "goVersion": "go1.15.10",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

I found the problem: after I removed the duplicate env vars, it works.

@nadavbuc

nadavbuc commented Jul 20, 2023

This just happened to me on both 1.25 and 1.26.
A duplicated env var sourced from a config map had its key and name changed, and the duplication was removed in the same apply - the env var is completely absent after applying.

I tried editing the deployment and deleting just the duplicated var without modifying the existing one, but got the same result - both were completely removed.

ORIGINAL:

GENERAL_STATSD_HOST:   <set to the key 'host' of config map 'general.statsd'>    Optional: false
GENERAL_STATSD_HOST:   <set to the key 'host' of config map 'general.statsd'>    Optional: false

EXPECTED:

GENERAL_STATSD_HOST:   <set to the key 'yard.host' of config map 'general.statsd-k8s'>    Optional: false

ACTUAL:

(the env var is completely absent)

Resolved after re-applying

@liggitt liggitt changed the title kubectl apply removes all entries when attempting to remove a single duplicated entry in a persisted object kubectl apply (client-side) removes all entries when attempting to remove a single duplicated entry in a persisted object Nov 30, 2023
@issssu

issssu commented May 6, 2024

Same issue with 1.25.3.
Why do we need envs with the same keys? Can duplicate keys be removed automatically?
The warning may be ignored, and then the next time, when I delete the duplicate key, Kubernetes deletes all envs with the same key, which can lead to serious accidents.
