
docker: Error response from daemon: Mounts denied: EOF. #8832

Closed
medyagh opened this issue Jul 24, 2020 · 4 comments
Assignees
Labels
co/docker-driver: Issues related to kubernetes in container
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

medyagh (Member) commented Jul 24, 2020

This was due to me doing a lot of experiments with Docker settings; I caused this error myself, and I was able to fix it by factory resetting my Docker Desktop.

We should tell users to do the same as a solution message.

https://stackoverflow.com/questions/45122459/docker-mounts-denied-the-paths-are-not-shared-from-os-x-and-are-not-known/45123074

medya@~/workspace/minikube (kic_runner_entry) $ ./out/minikube start --driver=docker --alsologtostderr
I0724 11:35:35.184476   28388 out.go:188] Setting JSON to false
I0724 11:35:35.236034   28388 start.go:101] hostinfo: {"hostname":"medya-macbookpro3.roam.corp.google.com","uptime":222753,"bootTime":1595392982,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"","platformVersion":"10.15.6","kernelVersion":"19.6.0","virtualizationSystem":"","virtualizationRole":"","hostid":"783946ea-6f11-3647-bf90-787aea14b954"}
W0724 11:35:35.236143   28388 start.go:109] gopshost.Virtualization returned error: not implemented yet
😄  minikube v1.12.1 on Darwin 10.15.6
I0724 11:35:35.258193   28388 driver.go:287] Setting default libvirt URI to qemu:///system
I0724 11:35:35.258251   28388 notify.go:125] Checking for updates...
I0724 11:35:35.305783   28388 docker.go:87] docker version: linux-19.03.8
✨  Using the docker driver based on user configuration
I0724 11:35:35.319000   28388 start.go:217] selected driver: docker
I0724 11:35:35.319009   28388 start.go:623] validating driver "docker" against <nil>
I0724 11:35:35.319026   28388 start.go:634] status for docker: {Installed:true Healthy:true NeedsImprovement:false Error:<nil> Fix: Doc:}
I0724 11:35:35.319158   28388 cli_runner.go:109] Run: docker system info --format "{{json .}}"
I0724 11:35:35.421630   28388 start_flags.go:223] no existing cluster config was found, will generate one from the flags 
I0724 11:35:35.421683   28388 start_flags.go:240] Using suggested 3892MB memory alloc based on sys=16384MB, container=3940MB
I0724 11:35:35.421784   28388 start_flags.go:599] Wait components to verify : map[apiserver:true system_pods:true]
I0724 11:35:35.421801   28388 cni.go:74] Creating CNI manager for ""
I0724 11:35:35.421805   28388 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0724 11:35:35.421808   28388 start_flags.go:345] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:3892 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
👍  Starting control plane node minikube in cluster minikube
I0724 11:35:35.472403   28388 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 in local docker daemon, skipping pull
I0724 11:35:35.472449   28388 cache.go:113] gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 exists in daemon, skipping pull
I0724 11:35:35.472465   28388 preload.go:97] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0724 11:35:35.472558   28388 preload.go:105] Found local preload: /Users/medya/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v4-v1.18.3-docker-overlay2-amd64.tar.lz4
I0724 11:35:35.472567   28388 cache.go:51] Caching tarball of preloaded images
I0724 11:35:35.472585   28388 preload.go:131] Found /Users/medya/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v4-v1.18.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0724 11:35:35.472588   28388 cache.go:54] Finished verifying existence of preloaded tar for  v1.18.3 on docker
I0724 11:35:35.472912   28388 profile.go:150] Saving config to /Users/medya/.minikube/profiles/minikube/config.json ...
I0724 11:35:35.473099   28388 lock.go:35] WriteFile acquiring /Users/medya/.minikube/profiles/minikube/config.json: {Name:mkcfdcaaa21816d14cd9720660d7b2e91b28d741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0724 11:35:35.474360   28388 cache.go:178] Successfully downloaded all kic artifacts
I0724 11:35:35.474400   28388 start.go:241] acquiring machines lock for minikube: {Name:mk776146a90e3c3e4f2a4d11e614d78349a56d54 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0724 11:35:35.474478   28388 start.go:245] acquired machines lock for "minikube" in 65.125µs
I0724 11:35:35.474513   28388 start.go:85] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:3892 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} &{Name: IP: Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}
I0724 11:35:35.474588   28388 start.go:122] createHost starting for "" (driver="docker")
🔥  Creating docker container (CPUs=2, Memory=3892MB) ...
I0724 11:35:35.492331   28388 start.go:158] libmachine.API.Create for "minikube" (driver="docker")
I0724 11:35:35.492363   28388 client.go:161] LocalClient.Create starting
I0724 11:35:35.492464   28388 main.go:115] libmachine: Reading certificate data from /Users/medya/.minikube/certs/ca.pem
I0724 11:35:35.492793   28388 main.go:115] libmachine: Decoding PEM data...
I0724 11:35:35.492812   28388 main.go:115] libmachine: Parsing certificate...
I0724 11:35:35.493243   28388 main.go:115] libmachine: Reading certificate data from /Users/medya/.minikube/certs/cert.pem
I0724 11:35:35.493713   28388 main.go:115] libmachine: Decoding PEM data...
I0724 11:35:35.493738   28388 main.go:115] libmachine: Parsing certificate...
I0724 11:35:35.494630   28388 cli_runner.go:109] Run: docker ps -a --format {{.Names}}
I0724 11:35:35.530316   28388 cli_runner.go:109] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0724 11:35:35.567742   28388 oci.go:101] Successfully created a docker volume minikube
I0724 11:35:35.567887   28388 cli_runner.go:109] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -d /var/lib
I0724 11:35:36.024667   28388 oci.go:105] Successfully prepared a docker volume minikube
I0724 11:35:36.024746   28388 preload.go:97] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0724 11:35:36.024808   28388 preload.go:105] Found local preload: /Users/medya/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v4-v1.18.3-docker-overlay2-amd64.tar.lz4
I0724 11:35:36.024817   28388 kic.go:133] Starting extracting preloaded images to volume ...
I0724 11:35:36.024818   28388 cli_runner.go:109] Run: docker info --format "'{{json .SecurityOptions}}'"
I0724 11:35:36.024954   28388 cli_runner.go:109] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/medya/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v4-v1.18.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir
I0724 11:35:36.116349   28388 kic.go:136] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v /Users/medya/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v4-v1.18.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: exit status 125
stdout:

stderr:
docker: Error response from daemon: Mounts denied: EOF.
time="2020-07-24T11:35:36-07:00" level=error msg="error waiting for container: context canceled"
I0724 11:35:36.122408   28388 cli_runner.go:109] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 --memory=3892mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
I0724 11:35:36.197459   28388 client.go:164] LocalClient.Create took 705.081356ms
I0724 11:35:38.202524   28388 start.go:125] duration metric: createHost completed in 2.72788781s
I0724 11:35:38.202602   28388 start.go:76] releasing machines lock for "minikube", held for 2.728084294s
I0724 11:35:38.204411   28388 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
W0724 11:35:38.248954   28388 start.go:379] delete host: Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
🤦  StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 --memory=3892mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438: exit status 125
stdout:
bc1d5e677094d51da5743d362cd0a1f75b845fa6eadbc8327fefa018f16754ad

stderr:
docker: Error response from daemon: Mounts denied: EOF.

I0724 11:35:43.253077   28388 start.go:241] acquiring machines lock for minikube: {Name:mk776146a90e3c3e4f2a4d11e614d78349a56d54 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0724 11:35:43.253412   28388 start.go:245] acquired machines lock for "minikube" in 271.818µs
I0724 11:35:43.253485   28388 start.go:89] Skipping create...Using existing machine configuration
I0724 11:35:43.253518   28388 fix.go:53] fixHost starting: 
I0724 11:35:43.254413   28388 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0724 11:35:43.301537   28388 fix.go:105] recreateIfNeeded on minikube: state= err=<nil>
I0724 11:35:43.301577   28388 fix.go:110] machineExists: false. err=machine does not exist
🤷  docker "minikube" container is missing, will recreate.
I0724 11:35:43.321289   28388 delete.go:123] DEMOLISHING minikube ...
I0724 11:35:43.321438   28388 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0724 11:35:43.358512   28388 stop.go:76] host is in state 
I0724 11:35:43.358593   28388 main.go:115] libmachine: Stopping "minikube"...
I0724 11:35:43.358775   28388 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0724 11:35:43.397264   28388 kic_runner.go:93] Run: systemctl --version
I0724 11:35:43.397286   28388 kic_runner.go:114] Args: [docker exec --privileged minikube systemctl --version]
I0724 11:35:43.435697   28388 kic_runner.go:93] Run: sudo service kubelet stop
I0724 11:35:43.435716   28388 kic_runner.go:114] Args: [docker exec --privileged minikube sudo service kubelet stop]
I0724 11:35:43.474156   28388 openrc.go:134] stop output: 
** stderr ** 
Error response from daemon: Container bc1d5e677094d51da5743d362cd0a1f75b845fa6eadbc8327fefa018f16754ad is not running

** /stderr **
W0724 11:35:43.474178   28388 kic.go:341] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
stdout:

stderr:
Error response from daemon: Container bc1d5e677094d51da5743d362cd0a1f75b845fa6eadbc8327fefa018f16754ad is not running
I0724 11:35:43.474332   28388 kic_runner.go:93] Run: sudo service kubelet stop
I0724 11:35:43.474342   28388 kic_runner.go:114] Args: [docker exec --privileged minikube sudo service kubelet stop]
I0724 11:35:43.514850   28388 openrc.go:134] stop output: 
** stderr ** 
Error response from daemon: Container bc1d5e677094d51da5743d362cd0a1f75b845fa6eadbc8327fefa018f16754ad is not running

** /stderr **
W0724 11:35:43.514881   28388 kic.go:343] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
stdout:

stderr:
Error response from daemon: Container bc1d5e677094d51da5743d362cd0a1f75b845fa6eadbc8327fefa018f16754ad is not running
I0724 11:35:43.515038   28388 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
I0724 11:35:43.515048   28388 kic_runner.go:114] Args: [docker exec --privileged minikube docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
I0724 11:35:43.554305   28388 kic.go:354] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
stdout:

stderr:
Error response from daemon: Container bc1d5e677094d51da5743d362cd0a1f75b845fa6eadbc8327fefa018f16754ad is not running
I0724 11:35:43.554338   28388 kic.go:364] successfully stopped kubernetes!
I0724 11:35:43.554478   28388 kic_runner.go:93] Run: pgrep kube-apiserver
I0724 11:35:43.554486   28388 kic_runner.go:114] Args: [docker exec --privileged minikube pgrep kube-apiserver]
I0724 11:35:43.628078   28388 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0724 11:35:46.669258   28388 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
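The solution message proposed above could be implemented by matching the failed command's stderr against a known-error pattern. This is a hypothetical sketch, not minikube's actual known-issue matcher; the `adviceFor` function and the advice wording are assumptions:

```go
package main

import (
	"fmt"
	"regexp"
)

// mountsDeniedRe matches the Docker Desktop error seen in the log above.
var mountsDeniedRe = regexp.MustCompile(`Mounts denied`)

// adviceFor returns a user-facing solution message when stderr matches a
// known Docker Desktop failure, or "" when it does not. (Hypothetical
// helper for illustration only.)
func adviceFor(stderr string) string {
	if mountsDeniedRe.MatchString(stderr) {
		return "Docker Desktop denied the mount. Try resetting Docker Desktop to " +
			"factory defaults, or share the path under Preferences > Resources > File Sharing."
	}
	return ""
}

func main() {
	stderr := "docker: Error response from daemon: Mounts denied: EOF."
	fmt.Println(adviceFor(stderr)) // prints the suggested fix for this error
}
```

Keeping the regex and the advice text together in a table of known issues would let new failure modes be added without touching the driver code.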
@medyagh medyagh added co/docker-driver Issues related to kubernetes in container needs-problem-regex needs-solution-message Issues where offering a solution for an error would be helpful labels Jul 24, 2020
@sharifelgamal sharifelgamal added kind/bug Categorizes issue or PR as related to a bug. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Jul 27, 2020
@tstromberg tstromberg removed needs-problem-regex needs-solution-message Issues where offering a solution for an error would be helpful labels Sep 1, 2020
@tstromberg tstromberg self-assigned this Sep 1, 2020
This was referenced Sep 3, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 30, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 30, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
