
kubectl: error: open .lock: no such file or directory #118564

Open
netikras opened this issue Jun 8, 2023 · 8 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. sig/cli Categorizes an issue or PR as relevant to SIG CLI.

Comments

@netikras

netikras commented Jun 8, 2023

What happened?

Yesterday this command was working perfectly. Today it's not:

$ kubectl config use-context app-prod
error: open .lock: no such file or directory
$ kubectl config use-context app-prod -v9
I0608 10:49:03.876823 1176967 loader.go:372] Config loaded from file:  /home/user/.kube/app-test.config.yml
I0608 10:49:03.877015 1176967 loader.go:372] Config loaded from file:  /home/user/.kube/app2-test.config.yml
I0608 10:49:03.877176 1176967 loader.go:372] Config loaded from file:  /home/user/.kube/app-prod.config.yml
I0608 10:49:03.877271 1176967 loader.go:372] Config loaded from file:  /home/user/.kube/config
F0608 10:49:03.877339 1176967 helpers.go:118] error: open .lock: no such file or directory
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1038 +0x8a
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x3080040, 0x3, 0x0, 0xc0007a2000, 0x2, {0x25f2be7, 0x10}, 0x3080f20, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:987 +0x5fd
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0xc0004283f0, 0x2c, 0x0, {0x0, 0x0}, 0xc000394300, {0xc00041cff0, 0x1, 0x1})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x1ae
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1518
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal({0xc0004283f0, 0x2c}, 0xc000b45ba0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:96 +0xc5
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr({0x1fe9a80, 0xc000b810b0}, 0x1e79770)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:191 +0x7d7
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:118
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/config.NewCmdConfigUseContext.func1(0xc000b0d400, {0xc000595480, 0x1, 0x2})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/config/use_context.go:59 +0x72
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000b0d400, {0xc000595460, 0x2, 0x2})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:860 +0x5f8
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0001e3b80)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:974 +0x3bc
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/vendor/k8s.io/component-base/cli.run(0xc0001e3b80)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/cli/run.go:146 +0x325
k8s.io/kubernetes/vendor/k8s.io/component-base/cli.RunNoErrOutput(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/cli/run.go:84
main.main()
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:30 +0x1e

goroutine 6 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1181 +0x6a
created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:420 +0xfb

goroutine 38 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0, {0x1feb900, 0xc000b80000}, 0x1, 0xc00004a480)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x13b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0, 0x12a05f200, 0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x28
created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:179 +0x85

Excerpt from strace:

[pid 1194553] newfstatat(AT_FDCWD, ".", {st_dev=makedev(0xfd, 0x2), st_ino=3261019, st_mode=S_IFDIR|0755, st_nlink=0, st_uid=1000, st_gid=1000, st_blksize=4096, st_blocks=8, st_size=0, st_atime=1686144170 /* 2023-06-07T16:22:50.175271454+0300 */, st_atime_nsec=175271454, st_mtime=1686151444 /* 2023-06-07T18:24:04.465271113+0300 */, st_mtime_nsec=465271113, st_ctime=1686151444 /* 2023-06-07T18:24:04.465271113+0300 */, st_ctime_nsec=465271113}, 0) = 0
[pid 1194553] openat(AT_FDCWD, ".lock", O_RDONLY|O_CREAT|O_EXCL|O_CLOEXEC, 000) = -1 ENOENT (No such file or directory)
[pid 1194553] write(2, "error: open .lock: no such file "..., 45error: open .lock: no such file or directory

What did you expect to happen?

I was hoping kubectl would switch contexts

How can we reproduce it (as minimally and precisely as possible)?

No clue

Anything else we need to know?

No response

Kubernetes version

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"clean", BuildDate:"2022-03-16T15:58:47Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.13", GitCommit:"a43c0904d0de10f92aa3956c74489c45e6453d6e", GitTreeState:"clean", BuildDate:"2022-08-17T18:23:45Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}

Cloud provider

N/A

OS version

# On Linux:
$ cat /etc/os-release
NAME="Linux Mint"
VERSION="20.3 (Una)"
ID=linuxmint
ID_LIKE=ubuntu
PRETTY_NAME="Linux Mint 20.3"
VERSION_ID="20.3"
HOME_URL="https://www.linuxmint.com/"
SUPPORT_URL="https://forums.linuxmint.com/"
BUG_REPORT_URL="http://linuxmint-troubleshooting-guide.readthedocs.io/en/latest/"
PRIVACY_POLICY_URL="https://www.linuxmint.com/"
VERSION_CODENAME=una
UBUNTU_CODENAME=focal
$ uname -a
Linux <HIDDEN> 5.15.6-051506-generic #202112010437 SMP Wed Dec 1 09:47:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux


Install tools

Container runtime (CRI) and version (if applicable)

Related plugins (CNI, CSI, ...) and versions (if applicable)

@netikras netikras added the kind/bug Categorizes issue or PR as related to a bug. label Jun 8, 2023
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jun 8, 2023
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Jun 8, 2023
@netikras
Author

netikras commented Jun 8, 2023

Alright, it turns out my CWD was a removed directory (I had deleted it in another shell).

openat(AT_FDCWD, ".lock"

↑ this was the clue.
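
To illustrate (a minimal Go sketch, not kubectl's code): once the working directory has been unlinked, any open of a relative path fails with ENOENT, which is exactly what the strace shows.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Create and enter a scratch directory, then remove it while it is
	// still our working directory (same effect as deleting the CWD from
	// another shell).
	dir, err := os.MkdirTemp("", "cwd-demo")
	if err != nil {
		panic(err)
	}
	if err := os.Chdir(dir); err != nil {
		panic(err)
	}
	if err := os.Remove(dir); err != nil {
		panic(err)
	}

	// A relative path is resolved against the (now deleted) CWD, so the
	// kernel returns ENOENT even for an O_CREATE|O_EXCL open.
	_, err = os.OpenFile(".lock", os.O_CREATE|os.O_EXCL, 0)
	fmt.Println(err) // open .lock: no such file or directory
}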

Why does kubectl create .lock in the CWD, though? Does that mean it would fail if my CWD were on a read-only filesystem? Is this intended?
What if the CWD were on a monitored filesystem, where any file created there triggers an event and some application then acts on it?

I don't think that's safe. Do you?

@HirazawaUi
Contributor

/sig cli

@k8s-ci-robot k8s-ci-robot added sig/cli Categorizes an issue or PR as relevant to SIG CLI. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jun 8, 2023
@ardaguclu
Member

The lock is only used when modifying the kubeconfig, to prevent concurrent writes, as you can see here:

if err := lockFile(filename); err != nil {
	return err
}
defer unlockFile(filename)

After the modification completes, the lock is removed immediately (you can see this in the unlock function).
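
For context, here is a hedged sketch of what that lock/unlock pair amounts to (paraphrasing the clientcmd helpers; the exact code may differ between kubectl versions):

package clientcmdsketch

import "os"

// The lock is just an empty "<kubeconfig>.lock" sentinel file created next to
// the kubeconfig with O_CREATE|O_EXCL (so creation is atomic and a second
// concurrent writer fails), and it is deleted as soon as the write finishes.

func lockName(filename string) string {
	return filename + ".lock"
}

func lockFile(filename string) error {
	f, err := os.OpenFile(lockName(filename), os.O_CREATE|os.O_EXCL, 0)
	if err != nil {
		return err // another writer holds the lock, or the parent dir is missing
	}
	return f.Close()
}

func unlockFile(filename string) error {
	// Releasing the lock is simply removing the sentinel file.
	return os.Remove(lockName(filename))
}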

That said, your kubectl version seems to be quite old. Could you please try with a newer one? Thanks.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 6, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 5, 2024
@netikras
Author

netikras commented Apr 6, 2024

@ardaguclu thank you for the explanation.

Regardless, I still think it is a rather bad idea to try to create any files in the CWD, whether or not it's a lock file that is removed immediately afterwards. IMO the right approach is to create such files at a predefined full path (e.g. ~/.kube/.lock) rather than wherever the user happens to be (e.g. a network filesystem, or an application's directory that is monitored for changes and could crash if an unrecognized file appears there, etc.)

@brianpursley
Member

I tried to reproduce this by deleting my cwd, and the command still succeeded (I'm using kubectl 1.28, but I also tried 1.22 and it worked with both versions):

~/projects/test1234 $ ls -la
total 0
~/projects/test1234 $ ls -la $(pwd)
ls: cannot access '/home/brian/projects/test1234': No such file or directory
~/projects/test1234 $ kubectl config use-context cluster1
Switched to context "cluster1".
~/projects/test1234 $ 

It looks like the code attempts to create a lock file relative to the config file, not the cwd:

possibleSources := configAccess.GetLoadingPrecedence()
// sort the possible kubeconfig files so we always "lock" in the same order
// to avoid deadlock (note: this can fail w/ symlinks, but... come on).
sort.Strings(possibleSources)
for _, filename := range possibleSources {
	if err := lockFile(filename); err != nil {
		return err
	}
	defer unlockFile(filename)
}

dir := filepath.Dir(filename)
if _, err := os.Stat(dir); os.IsNotExist(err) {
	if err = os.MkdirAll(dir, 0755); err != nil {
		return err
	}
}
f, err := os.OpenFile(lockName(filename), os.O_CREATE|os.O_EXCL, 0)
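
For illustration, a small sketch of how the lock path is derived from the kubeconfig path (assuming lockName simply appends ".lock", as the excerpt above suggests). An absolute kubeconfig entry gives an absolute lock path; only an empty or relative entry in the loading precedence would be resolved against the CWD, which is one possible (unconfirmed) explanation for the bare ".lock" in the stack trace:

package main

import (
	"fmt"
	"path/filepath"
)

// lockName mirrors the helper referenced above: the lock path is the
// kubeconfig path with ".lock" appended (assumption, see lead-in).
func lockName(filename string) string {
	return filename + ".lock"
}

func main() {
	// An absolute kubeconfig path yields an absolute lock path, independent
	// of the current working directory.
	fmt.Println(lockName("/home/user/.kube/config")) // /home/user/.kube/config.lock

	// A hypothetical empty or bare-relative entry (e.g. a stray empty element
	// in $KUBECONFIG) would yield a CWD-relative path such as ".lock".
	rel := lockName("")
	abs, _ := filepath.Abs(rel) // resolved against the CWD
	fmt.Println(rel, "->", abs)
}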

@netikras Am I misunderstanding the problem? Do you have a different way to reproduce it than what I attempted above?
