
Starting minikube fails with docker as the container driver #13504

Closed
wer8956741 opened this issue Jan 27, 2022 · 9 comments
Labels
kind/support Categorizes issue or PR as a support question.
l/zh-CN Issues in or relating to Chinese.
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.
long-term-support Long-term support issues that can't be fixed in code.

Comments

@wer8956741

Commands needed to reproduce the issue
minikube start

Full output of the failed command


😄 minikube v1.25.1 on Darwin 12.2 (arm64)
🆕 Kubernetes 1.23.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.1
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🏃 Updating the running docker "minikube" container ...

❌ Exiting due to RUNTIME_ENABLE: sudo systemctl start docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

Output of the `minikube logs` command


==> Audit <==
|---------|------|---------|------|---------|------------|----------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------|---------|------|---------|------------|----------|
|---------|------|---------|------|---------|------------|----------|

==> Last Start <==
    Log file created at: 2022/01/27 17:03:49
    Running on machine: LXTdeMacBook-Pro
    Binary: Built with gc go1.17.5 for darwin/arm64
    Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    I0127 17:03:49.557620 98029 out.go:297] Setting OutFile to fd 1 ...
    I0127 17:03:49.558074 98029 out.go:349] isatty.IsTerminal(1) = true
    I0127 17:03:49.558076 98029 out.go:310] Setting ErrFile to fd 2...
    I0127 17:03:49.558079 98029 out.go:349] isatty.IsTerminal(2) = true
    I0127 17:03:49.558164 98029 root.go:315] Updating PATH: /Users/lxt/.minikube/bin
    W0127 17:03:49.558237 98029 root.go:293] Error reading config file at /Users/lxt/.minikube/config/config.json: open /Users/lxt/.minikube/config/config.json: no such file or directory
    I0127 17:03:49.558352 98029 out.go:304] Setting JSON to false
    I0127 17:03:49.586941 98029 start.go:112] hostinfo: {"hostname":"LXTdeMacBook-Pro.local","uptime":3817,"bootTime":1643270412,"procs":377,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.2","kernelVersion":"21.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"a9cc8b92-7a9f-5432-a592-dd79cff8e04c"}
    W0127 17:03:49.587029 98029 start.go:120] gopshost.Virtualization returned error: not implemented yet
    I0127 17:03:49.607469 98029 out.go:176] 😄 minikube v1.25.1 on Darwin 12.2 (arm64)
    I0127 17:03:49.607768 98029 notify.go:174] Checking for updates...
    W0127 17:03:49.608126 98029 preload.go:294] Failed to list preload files: open /Users/lxt/.minikube/cache/preloaded-tarball: no such file or directory
    I0127 17:03:49.608797 98029 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.3
    I0127 17:03:49.649400 98029 out.go:176] 🆕 Kubernetes 1.23.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.1
    I0127 17:03:49.650171 98029 driver.go:344] Setting default libvirt URI to qemu:///system
    I0127 17:03:49.771239 98029 docker.go:132] docker version: linux-20.10.12
    I0127 17:03:49.771560 98029 cli_runner.go:133] Run: docker system info --format "{{json .}}"
    I0127 17:03:50.585159 98029 info.go:263] docker info: {ID:5QLK:6TM7:TPVG:IJX2:XMUT:C2MD:KQH6:ZFXW:SJFI:CK42:YKB6:X2LU Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:48 SystemTime:2022-01-27 09:03:49.86526076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:2085294080 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:}}
    I0127 17:03:50.619960 98029 out.go:176] ✨ Using the docker driver based on existing profile
    I0127 17:03:50.619985 98029 start.go:280] selected driver: docker
    I0127 17:03:50.619988 98029 start.go:795] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:1988 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror:}
    I0127 17:03:50.620039 98029 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
    I0127 17:03:50.620052 98029 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
    I0127 17:03:50.620305 98029 cli_runner.go:133] Run: docker system info --format "{{json .}}"
    I0127 17:03:50.808922 98029 info.go:263] docker info: {ID:5QLK:6TM7:TPVG:IJX2:XMUT:C2MD:KQH6:ZFXW:SJFI:CK42:YKB6:X2LU Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:48 SystemTime:2022-01-27 09:03:50.721032552 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.76-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:2085294080 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.16.0]] Warnings:}}
    I0127 17:03:50.812979 98029 cni.go:93] Creating CNI manager for ""
    I0127 17:03:50.812993 98029 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
    I0127 17:03:50.813011 98029 start_flags.go:300] config:
    {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:1988 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror:}
    I0127 17:03:50.850627 98029 out.go:176] 👍 Starting control plane node minikube in cluster minikube
    I0127 17:03:50.850917 98029 cache.go:120] Beginning downloading kic base image for docker with docker
    I0127 17:03:50.868582 98029 out.go:176] 🚜 Pulling base image ...
    I0127 17:03:50.868990 98029 profile.go:147] Saving config to /Users/lxt/.minikube/profiles/minikube/config.json ...
    I0127 17:03:50.869271 98029 image.go:75] Checking for registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 in local docker daemon
    I0127 17:03:50.869746 98029 cache.go:107] acquiring lock: {Name:mk1adb1f2381b1c6e79ba34aac6811ead7b7a2a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
    I0127 17:03:50.870187 98029 cache.go:107] acquiring lock: {Name:mk3522e1e9602861f3ab66e6f79da2aaf83da512 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
    I0127 17:03:50.870337 98029 cache.go:107] acquiring lock: {Name:mkaf55e2448fa819e22eef587b8eb2b12c3bc2a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
    I0127 17:03:50.870261 98029 cache.go:107] acquiring lock: {Name:mkb4b1f79b1a30d8a91ea8ede2dc8b11e018f27c Clock:{} Delay:500ms Timeout:10m0s Cancel:}
    I0127 17:03:50.870441 98029 cache.go:107] acquiring lock: {Name:mk446134d3a83fafdba1923cf6266f393ce252d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
    I0127 17:03:50.870469 98029 cache.go:107] acquiring lock: {Name:mka1516b33fd9b2a44b1cd6d7f08c4e8532269c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
    I0127 17:03:50.870490 98029 cache.go:107] acquiring lock: {Name:mk1e8505260ba385e879a29fac4f75b8cd0f2ada Clock:{} Delay:500ms Timeout:10m0s Cancel:}
    I0127 17:03:50.870359 98029 cache.go:107] acquiring lock: {Name:mkd0a3415bb316177073bc397fa8adf067752a28 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
    I0127 17:03:50.870505 98029 cache.go:107] acquiring lock: {Name:mk14a3ec6227874d1cedb5e0658a884745a135e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
    I0127 17:03:50.870685 98029 cache.go:107] acquiring lock: {Name:mk98de2f4a708a3bcce98ed879703de0f049c5b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
    I0127 17:03:50.887221 98029 cache.go:115] /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.18.3 exists
    I0127 17:03:50.887220 98029 cache.go:115] /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns_1.6.7 exists
    I0127 17:03:50.887254 98029 cache.go:115] /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.18.3 exists
    I0127 17:03:50.887218 98029 cache.go:115] /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.18.3 exists
    I0127 17:03:50.887269 98029 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.3" -> "/Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.18.3" took 16.771584ms
    I0127 17:03:50.887257 98029 cache.go:115] /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.18.3 exists
    I0127 17:03:50.887320 98029 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.3" -> "/Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.18.3" took 17.097584ms
    I0127 17:03:50.887336 98029 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.3 -> /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.18.3 succeeded
    I0127 17:03:50.887264 98029 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7" -> "/Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns_1.6.7" took 16.903833ms
    I0127 17:03:50.887340 98029 cache.go:115] /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.4.3-0 exists
    I0127 17:03:50.887347 98029 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0" -> "/Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.4.3-0" took 16.858875ms
    I0127 17:03:50.887345 98029 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.3" -> "/Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.18.3" took 17.39475ms
    I0127 17:03:50.887351 98029 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 -> /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.4.3-0 succeeded
    I0127 17:03:50.887274 98029 image.go:134] retrieving image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper:v1.0.7
    I0127 17:03:50.887353 98029 cache.go:115] /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.2 exists
    I0127 17:03:50.887352 98029 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.3 -> /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.18.3 succeeded
    I0127 17:03:50.887356 98029 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2" -> "/Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.2" took 17.247917ms
    I0127 17:03:50.887360 98029 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 -> /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.2 succeeded
    I0127 17:03:50.887279 98029 image.go:134] retrieving image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/dashboard:v2.3.1
    I0127 17:03:50.887317 98029 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.3" -> "/Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.18.3" took 16.81025ms
    I0127 17:03:50.887347 98029 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7 -> /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns_1.6.7 succeeded
    I0127 17:03:50.887369 98029 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.3 -> /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.18.3 succeeded
    I0127 17:03:50.887375 98029 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.3 -> /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.18.3 succeeded
    I0127 17:03:50.887423 98029 image.go:134] retrieving image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5
    I0127 17:03:50.895477 98029 image.go:180] daemon lookup for registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5: Error response from daemon: reference does not exist
    I0127 17:03:50.896498 98029 image.go:180] daemon lookup for registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: reference does not exist
    I0127 17:03:50.897863 98029 image.go:180] daemon lookup for registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/dashboard:v2.3.1: Error response from daemon: reference does not exist
    I0127 17:03:50.981014 98029 image.go:79] Found registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 in local docker daemon, skipping pull
    I0127 17:03:50.981054 98029 cache.go:142] registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 exists in daemon, skipping load
    I0127 17:03:50.981071 98029 cache.go:208] Successfully downloaded all kic artifacts
    I0127 17:03:50.981137 98029 start.go:313] acquiring machines lock for minikube: {Name:mkb9fb1ef68b31a8385c3923e327f807be3e96b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
    I0127 17:03:50.981292 98029 start.go:317] acquired machines lock for "minikube" in 144.875µs
    I0127 17:03:50.981307 98029 start.go:93] Skipping create...Using existing machine configuration
    I0127 17:03:50.981321 98029 fix.go:55] fixHost starting:
    I0127 17:03:50.981532 98029 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
    I0127 17:03:51.084652 98029 fix.go:108] recreateIfNeeded on minikube: state=Stopped err=
    W0127 17:03:51.084676 98029 fix.go:134] unexpected machine state, will restart:
    I0127 17:03:51.121621 98029 out.go:176] 🔄 Restarting existing docker container for "minikube" ...
    I0127 17:03:51.121694 98029 cli_runner.go:133] Run: docker start minikube
    I0127 17:03:51.470505 98029 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
    I0127 17:03:51.582267 98029 kic.go:420] container "minikube" state is running.
    I0127 17:03:51.582704 98029 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
    I0127 17:03:51.690573 98029 profile.go:147] Saving config to /Users/lxt/.minikube/profiles/minikube/config.json ...
    I0127 17:03:51.690918 98029 machine.go:88] provisioning docker machine ...
    I0127 17:03:51.690925 98029 ubuntu.go:169] provisioning hostname "minikube"
    I0127 17:03:51.690971 98029 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
    W0127 17:03:51.743784 98029 image.go:268] image registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
    I0127 17:03:51.743814 98029 cache.go:161] opening: /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner_v5
    W0127 17:03:51.759984 98029 image.go:268] image registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard:v2.3.1 arch mismatch: want arm64 got amd64. fixing
    I0127 17:03:51.760031 98029 cache.go:161] opening: /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/dashboard_v2.3.1
    I0127 17:03:51.799437 98029 main.go:130] libmachine: Using SSH client type: native
    I0127 17:03:51.799620 98029 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x10135f500] 0x101362320 [] 0s} 127.0.0.1 50889 }
    I0127 17:03:51.799625 98029 main.go:130] libmachine: About to run SSH command:
    sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
    I0127 17:03:51.801260 98029 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
    W0127 17:03:51.835476 98029 image.go:268] image registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper:v1.0.7 arch mismatch: want arm64 got amd64. fixing
    I0127 17:03:51.835536 98029 cache.go:161] opening: /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper_v1.0.7
    I0127 17:03:52.883374 98029 cache.go:156] /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner_v5 exists
    I0127 17:03:52.883397 98029 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5" -> "/Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner_v5" took 2.013092334s
    I0127 17:03:52.883411 98029 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5 -> /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner_v5 succeeded
    I0127 17:03:54.220388 98029 cache.go:156] /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper_v1.0.7 exists
    I0127 17:03:54.220442 98029 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper_v1.0.7" took 3.351052292s
    I0127 17:03:54.220484 98029 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper:v1.0.7 -> /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper_v1.0.7 succeeded
    I0127 17:03:55.310557 98029 main.go:130] libmachine: SSH cmd err, output: : minikube

I0127 17:03:55.310686 98029 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0127 17:03:55.435424 98029 main.go:130] libmachine: Using SSH client type: native
I0127 17:03:55.435556 98029 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x10135f500] 0x101362320 [] 0s} 127.0.0.1 50889 }
I0127 17:03:55.435564 98029 main.go:130] libmachine: About to run SSH command:

	if ! grep -xq '.*\sminikube' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
		else 
			echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
		fi
	fi

I0127 17:03:55.789193 98029 main.go:130] libmachine: SSH cmd err, output: :
I0127 17:03:55.789230 98029 ubuntu.go:175] set auth options {CertDir:/Users/lxt/.minikube CaCertPath:/Users/lxt/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/lxt/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/lxt/.minikube/machines/server.pem ServerKeyPath:/Users/lxt/.minikube/machines/server-key.pem ClientKeyPath:/Users/lxt/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/lxt/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/lxt/.minikube}
I0127 17:03:55.789261 98029 ubuntu.go:177] setting up certificates
I0127 17:03:55.789269 98029 provision.go:83] configureAuth start
I0127 17:03:55.789415 98029 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0127 17:03:55.895737 98029 provision.go:138] copyHostCerts
I0127 17:03:55.897101 98029 exec_runner.go:144] found /Users/lxt/.minikube/cert.pem, removing ...
I0127 17:03:55.897120 98029 exec_runner.go:207] rm: /Users/lxt/.minikube/cert.pem
I0127 17:03:55.897226 98029 exec_runner.go:151] cp: /Users/lxt/.minikube/certs/cert.pem --> /Users/lxt/.minikube/cert.pem (1070 bytes)
I0127 17:03:55.897588 98029 exec_runner.go:144] found /Users/lxt/.minikube/key.pem, removing ...
I0127 17:03:55.897591 98029 exec_runner.go:207] rm: /Users/lxt/.minikube/key.pem
I0127 17:03:55.897639 98029 exec_runner.go:151] cp: /Users/lxt/.minikube/certs/key.pem --> /Users/lxt/.minikube/key.pem (1675 bytes)
I0127 17:03:55.897923 98029 exec_runner.go:144] found /Users/lxt/.minikube/ca.pem, removing ...
I0127 17:03:55.897925 98029 exec_runner.go:207] rm: /Users/lxt/.minikube/ca.pem
I0127 17:03:55.897974 98029 exec_runner.go:151] cp: /Users/lxt/.minikube/certs/ca.pem --> /Users/lxt/.minikube/ca.pem (1025 bytes)
I0127 17:03:55.898201 98029 provision.go:112] generating server cert: /Users/lxt/.minikube/machines/server.pem ca-key=/Users/lxt/.minikube/certs/ca.pem private-key=/Users/lxt/.minikube/certs/ca-key.pem org=lxt.minikube san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0127 17:03:55.997243 98029 provision.go:172] copyRemoteCerts
I0127 17:03:55.997846 98029 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0127 17:03:55.997901 98029 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0127 17:03:56.101782 98029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50889 SSHKeyPath:/Users/lxt/.minikube/machines/minikube/id_rsa Username:docker}
I0127 17:03:56.382559 98029 ssh_runner.go:362] scp /Users/lxt/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0127 17:03:56.661668 98029 ssh_runner.go:362] scp /Users/lxt/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1025 bytes)
I0127 17:03:56.934920 98029 ssh_runner.go:362] scp /Users/lxt/.minikube/machines/server.pem --> /etc/docker/server.pem (1143 bytes)
I0127 17:03:57.206665 98029 provision.go:86] duration metric: configureAuth took 1.417041125s
I0127 17:03:57.206684 98029 ubuntu.go:193] setting minikube options for container-runtime
I0127 17:03:57.206948 98029 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.3
I0127 17:03:57.207050 98029 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0127 17:03:57.318122 98029 main.go:130] libmachine: Using SSH client type: native
I0127 17:03:57.318248 98029 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x10135f500] 0x101362320 [] 0s} 127.0.0.1 50889 }
I0127 17:03:57.318252 98029 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0127 17:03:57.668921 98029 main.go:130] libmachine: SSH cmd err, output: : overlay

I0127 17:03:57.668946 98029 ubuntu.go:71] root file system type: overlay
I0127 17:03:57.669365 98029 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0127 17:03:57.669512 98029 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0127 17:03:57.779381 98029 main.go:130] libmachine: Using SSH client type: native
I0127 17:03:57.779526 98029 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x10135f500] 0x101362320 [] 0s} 127.0.0.1 50889 }
I0127 17:03:57.779587 98029 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.

ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.

TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes

# kill only the docker process, not all processes in the cgroup

KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0127 17:03:58.264570 98029 main.go:130] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.

ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.

TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes

# kill only the docker process, not all processes in the cgroup

KillMode=process

[Install]
WantedBy=multi-user.target
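
The template above depends on systemd's ExecStart-clearing rule: an empty `ExecStart=` discards the value inherited from the base unit, so the `ExecStart=` that follows is the only setting systemd sees. A standalone sketch of the same pattern (the drop-in path and the dockerd flags here are illustrative assumptions, not minikube's values):

```bash
# Sketch of the ExecStart-clearing pattern the unit comments describe.
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/10-execstart.conf >/dev/null <<'EOF'
[Service]
# The empty directive clears the inherited ExecStart; without it, systemd
# rejects the unit ("more than one ExecStart= setting").
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker
```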

I0127 17:03:58.264699 98029 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0127 17:03:58.374185 98029 main.go:130] libmachine: Using SSH client type: native
I0127 17:03:58.374320 98029 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x10135f500] 0x101362320 [] 0s} 127.0.0.1 50889 }
I0127 17:03:58.374328 98029 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0127 17:03:58.619062 98029 cache.go:156] /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/dashboard_v2.3.1 exists
I0127 17:03:58.619085 98029 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/dashboard:v2.3.1" -> "/Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/dashboard_v2.3.1" took 7.74876125s
I0127 17:03:58.619104 98029 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/dashboard:v2.3.1 -> /Users/lxt/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/dashboard_v2.3.1 succeeded
I0127 17:03:58.619119 98029 cache.go:87] Successfully saved all images to host disk.
I0127 17:04:00.176822 98029 main.go:130] libmachine: SSH cmd err, output: Process exited with status 1: --- /lib/systemd/system/docker.service 2022-01-27 05:04:41.520395008 +0000
+++ /lib/systemd/system/docker.service.new 2022-01-27 09:03:58.254780014 +0000
@@ -5,9 +5,12 @@
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

[Service]
Type=notify
+Restart=on-failure

@@ -23,7 +26,7 @@

# container runtimes. If left unlimited, it may result in OOM issues with MySQL.

ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
-ExecReload=/bin/kill -s HUP
+ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

I0127 17:04:00.176840 98029 ubuntu.go:195] Error setting container-runtime options during provisioning ssh command error:
command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
err : Process exited with status 1
output : --- /lib/systemd/system/docker.service 2022-01-27 05:04:41.520395008 +0000
+++ /lib/systemd/system/docker.service.new 2022-01-27 09:03:58.254780014 +0000
@@ -5,9 +5,12 @@
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

[Service]
Type=notify
+Restart=on-failure

@@ -23,7 +26,7 @@

# container runtimes. If left unlimited, it may result in OOM issues with MySQL.

ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
-ExecReload=/bin/kill -s HUP
+ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
I0127 17:04:00.176847 98029 machine.go:91] provisioned docker machine in 8.486084916s
I0127 17:04:00.177114 98029 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0127 17:04:00.177198 98029 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0127 17:04:00.304526 98029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50889 SSHKeyPath:/Users/lxt/.minikube/machines/minikube/id_rsa Username:docker}
I0127 17:04:00.548049 98029 fix.go:57] fixHost completed within 9.566904083s
I0127 17:04:00.548067 98029 start.go:80] releasing machines lock for "minikube", held for 9.566948083s
W0127 17:04:00.548381 98029 start.go:566] error starting host: provision: ssh command error:
command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
err : Process exited with status 1
output : --- /lib/systemd/system/docker.service 2022-01-27 05:04:41.520395008 +0000
+++ /lib/systemd/system/docker.service.new 2022-01-27 09:03:58.254780014 +0000
@@ -5,9 +5,12 @@
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

[Service]
Type=notify
+Restart=on-failure

@@ -23,7 +26,7 @@

# container runtimes. If left unlimited, it may result in OOM issues with MySQL.

ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
-ExecReload=/bin/kill -s HUP
+ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
W0127 17:04:00.548629 98029 out.go:241] 🤦 StartHost failed, but will try again: provision: ssh command error:
command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
err : Process exited with status 1
output : --- /lib/systemd/system/docker.service 2022-01-27 05:04:41.520395008 +0000
+++ /lib/systemd/system/docker.service.new 2022-01-27 09:03:58.254780014 +0000
@@ -5,9 +5,12 @@
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

[Service]
Type=notify
+Restart=on-failure

@@ -23,7 +26,7 @@

# container runtimes. If left unlimited, it may result in OOM issues with MySQL.

ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
-ExecReload=/bin/kill -s HUP
+ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

I0127 17:04:00.548707 98029 start.go:581] Will try again in 5 seconds ...
I0127 17:04:05.549880 98029 start.go:313] acquiring machines lock for minikube: {Name:mkb9fb1ef68b31a8385c3923e327f807be3e96b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0127 17:04:05.550405 98029 start.go:317] acquired machines lock for "minikube" in 429.25µs
I0127 17:04:05.550541 98029 start.go:93] Skipping create...Using existing machine configuration
I0127 17:04:05.550558 98029 fix.go:55] fixHost starting:
I0127 17:04:05.552516 98029 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0127 17:04:05.701987 98029 fix.go:108] recreateIfNeeded on minikube: state=Running err=
W0127 17:04:05.702018 98029 fix.go:134] unexpected machine state, will restart:
I0127 17:04:05.720937 98029 out.go:176] 🏃 Updating the running docker "minikube" container ...
I0127 17:04:05.720962 98029 machine.go:88] provisioning docker machine ...
I0127 17:04:05.720976 98029 ubuntu.go:169] provisioning hostname "minikube"
I0127 17:04:05.721068 98029 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0127 17:04:05.829178 98029 main.go:130] libmachine: Using SSH client type: native
I0127 17:04:05.829337 98029 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x10135f500] 0x101362320 [] 0s} 127.0.0.1 50889 }
I0127 17:04:05.829342 98029 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0127 17:04:06.307329 98029 main.go:130] libmachine: SSH cmd err, output: : minikube

I0127 17:04:06.307458 98029 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0127 17:04:06.418289 98029 main.go:130] libmachine: Using SSH client type: native
I0127 17:04:06.418468 98029 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x10135f500] 0x101362320 [] 0s} 127.0.0.1 50889 }
I0127 17:04:06.418476 98029 main.go:130] libmachine: About to run SSH command:

	if ! grep -xq '.*\sminikube' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
		else 
			echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
		fi
	fi

I0127 17:04:06.762559 98029 main.go:130] libmachine: SSH cmd err, output: :
I0127 17:04:06.762574 98029 ubuntu.go:175] set auth options {CertDir:/Users/lxt/.minikube CaCertPath:/Users/lxt/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/lxt/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/lxt/.minikube/machines/server.pem ServerKeyPath:/Users/lxt/.minikube/machines/server-key.pem ClientKeyPath:/Users/lxt/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/lxt/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/lxt/.minikube}
I0127 17:04:06.762590 98029 ubuntu.go:177] setting up certificates
I0127 17:04:06.762594 98029 provision.go:83] configureAuth start
I0127 17:04:06.762678 98029 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0127 17:04:06.870407 98029 provision.go:138] copyHostCerts
I0127 17:04:06.870508 98029 exec_runner.go:144] found /Users/lxt/.minikube/ca.pem, removing ...
I0127 17:04:06.870512 98029 exec_runner.go:207] rm: /Users/lxt/.minikube/ca.pem
I0127 17:04:06.870615 98029 exec_runner.go:151] cp: /Users/lxt/.minikube/certs/ca.pem --> /Users/lxt/.minikube/ca.pem (1025 bytes)
I0127 17:04:06.870769 98029 exec_runner.go:144] found /Users/lxt/.minikube/cert.pem, removing ...
I0127 17:04:06.870771 98029 exec_runner.go:207] rm: /Users/lxt/.minikube/cert.pem
I0127 17:04:06.870809 98029 exec_runner.go:151] cp: /Users/lxt/.minikube/certs/cert.pem --> /Users/lxt/.minikube/cert.pem (1070 bytes)
I0127 17:04:06.871062 98029 exec_runner.go:144] found /Users/lxt/.minikube/key.pem, removing ...
I0127 17:04:06.871065 98029 exec_runner.go:207] rm: /Users/lxt/.minikube/key.pem
I0127 17:04:06.871110 98029 exec_runner.go:151] cp: /Users/lxt/.minikube/certs/key.pem --> /Users/lxt/.minikube/key.pem (1675 bytes)
I0127 17:04:06.871338 98029 provision.go:112] generating server cert: /Users/lxt/.minikube/machines/server.pem ca-key=/Users/lxt/.minikube/certs/ca.pem private-key=/Users/lxt/.minikube/certs/ca-key.pem org=lxt.minikube san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0127 17:04:06.920005 98029 provision.go:172] copyRemoteCerts
I0127 17:04:06.920055 98029 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0127 17:04:06.920090 98029 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0127 17:04:07.020564 98029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50889 SSHKeyPath:/Users/lxt/.minikube/machines/minikube/id_rsa Username:docker}
I0127 17:04:07.290961 98029 ssh_runner.go:362] scp /Users/lxt/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1025 bytes)
I0127 17:04:07.563349 98029 ssh_runner.go:362] scp /Users/lxt/.minikube/machines/server.pem --> /etc/docker/server.pem (1143 bytes)
I0127 17:04:07.832377 98029 ssh_runner.go:362] scp /Users/lxt/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0127 17:04:08.103053 98029 provision.go:86] duration metric: configureAuth took 1.3404735s
I0127 17:04:08.103071 98029 ubuntu.go:193] setting minikube options for container-runtime
I0127 17:04:08.103451 98029 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.3
I0127 17:04:08.103565 98029 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0127 17:04:08.213678 98029 main.go:130] libmachine: Using SSH client type: native
I0127 17:04:08.213828 98029 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x10135f500] 0x101362320 [] 0s} 127.0.0.1 50889 }
I0127 17:04:08.213832 98029 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0127 17:04:08.550201 98029 main.go:130] libmachine: SSH cmd err, output: : overlay

I0127 17:04:08.550216 98029 ubuntu.go:71] root file system type: overlay
I0127 17:04:08.550520 98029 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0127 17:04:08.550650 98029 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0127 17:04:08.666739 98029 main.go:130] libmachine: Using SSH client type: native
I0127 17:04:08.666914 98029 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x10135f500] 0x101362320 [] 0s} 127.0.0.1 50889 }
I0127 17:04:08.666959 98029 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.

ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.

TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes

# kill only the docker process, not all processes in the cgroup

KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0127 17:04:09.138057 98029 main.go:130] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.

ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.

TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes

# kill only the docker process, not all processes in the cgroup

KillMode=process

[Install]
WantedBy=multi-user.target

I0127 17:04:09.138191 98029 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0127 17:04:09.248244 98029 main.go:130] libmachine: Using SSH client type: native
I0127 17:04:09.248403 98029 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x10135f500] 0x101362320 [] 0s} 127.0.0.1 50889 }
I0127 17:04:09.248412 98029 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0127 17:04:09.652850 98029 main.go:130] libmachine: SSH cmd err, output: :
I0127 17:04:09.652873 98029 machine.go:91] provisioned docker machine in 3.93197675s
I0127 17:04:09.652888 98029 start.go:267] post-start starting for "minikube" (driver="docker")
I0127 17:04:09.652899 98029 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0127 17:04:09.653116 98029 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0127 17:04:09.653234 98029 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0127 17:04:09.764702 98029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50889 SSHKeyPath:/Users/lxt/.minikube/machines/minikube/id_rsa Username:docker}
I0127 17:04:10.035086 98029 ssh_runner.go:195] Run: cat /etc/os-release
I0127 17:04:10.076624 98029 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0127 17:04:10.076671 98029 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0127 17:04:10.076686 98029 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0127 17:04:10.076691 98029 info.go:137] Remote host: Ubuntu 19.10
I0127 17:04:10.076703 98029 filesync.go:126] Scanning /Users/lxt/.minikube/addons for local assets ...
I0127 17:04:10.076914 98029 filesync.go:126] Scanning /Users/lxt/.minikube/files for local assets ...
I0127 17:04:10.076988 98029 start.go:270] post-start completed in 424.098958ms
I0127 17:04:10.077080 98029 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0127 17:04:10.077146 98029 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0127 17:04:10.192845 98029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50889 SSHKeyPath:/Users/lxt/.minikube/machines/minikube/id_rsa Username:docker}
I0127 17:04:10.426707 98029 fix.go:57] fixHost completed within 4.876237667s
I0127 17:04:10.426727 98029 start.go:80] releasing machines lock for "minikube", held for 4.876398875s
I0127 17:04:10.426883 98029 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0127 17:04:10.537686 98029 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.cn-hangzhou.aliyuncs.com/google_containers/
I0127 17:04:10.537751 98029 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0127 17:04:10.538369 98029 ssh_runner.go:195] Run: systemctl --version
I0127 17:04:10.538455 98029 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0127 17:04:10.649207 98029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50889 SSHKeyPath:/Users/lxt/.minikube/machines/minikube/id_rsa Username:docker}
I0127 17:04:10.649406 98029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50889 SSHKeyPath:/Users/lxt/.minikube/machines/minikube/id_rsa Username:docker}
I0127 17:04:10.886414 98029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0127 17:04:11.156145 98029 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0127 17:04:11.278588 98029 cruntime.go:272] skipping containerd shutdown because we are bound to it
I0127 17:04:11.278955 98029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0127 17:04:11.402497 98029 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0127 17:04:11.609961 98029 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0127 17:04:12.014242 98029 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0127 17:04:12.405874 98029 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0127 17:04:12.531525 98029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 17:04:12.918839 98029 ssh_runner.go:195] Run: sudo systemctl start docker
I0127 17:04:13.534544 98029 out.go:176]
W0127 17:04:13.534859 98029 out.go:241] ❌ Exiting due to RUNTIME_ENABLE: sudo systemctl start docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

W0127 17:04:13.535076 98029 out.go:241]
W0127 17:04:13.539487 98029 out.go:241] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
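
The two diagnostics the error message asks for can be collected from the host through minikube's SSH wrapper; a sketch using standard minikube and systemd commands (not taken from this log):

# Show the current state of the failing unit inside the minikube node:
minikube ssh -- sudo systemctl status docker
# Dump the most recent docker.service journal entries:
minikube ssh -- sudo journalctl -u docker --no-pager -n 50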

  • ==> Docker <==

  • -- Logs begin at Thu 2022-01-27 09:03:52 UTC, end at Thu 2022-01-27 09:04:48 UTC. --
    Jan 27 09:04:14 minikube dockerd[1108]: time="2022-01-27T09:04:14.138331000Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jan 27 09:04:14 minikube dockerd[1108]: time="2022-01-27T09:04:14.138941000Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jan 27 09:04:14 minikube dockerd[1108]: time="2022-01-27T09:04:14.139194000Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jan 27 09:04:14 minikube dockerd[1108]: time="2022-01-27T09:04:14.145701000Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007eeb10, CONNECTING" module=grpc
    Jan 27 09:04:14 minikube dockerd[1108]: time="2022-01-27T09:04:14.151929000Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007eeb10, READY" module=grpc
    Jan 27 09:04:14 minikube dockerd[1108]: time="2022-01-27T09:04:14.165723000Z" level=info msg="parsed scheme: "unix"" module=grpc
    Jan 27 09:04:14 minikube dockerd[1108]: time="2022-01-27T09:04:14.165786000Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jan 27 09:04:14 minikube dockerd[1108]: time="2022-01-27T09:04:14.165857000Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jan 27 09:04:14 minikube dockerd[1108]: time="2022-01-27T09:04:14.165919000Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jan 27 09:04:14 minikube dockerd[1108]: time="2022-01-27T09:04:14.166147000Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0001e2500, CONNECTING" module=grpc
    Jan 27 09:04:14 minikube dockerd[1108]: time="2022-01-27T09:04:14.166303000Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
    Jan 27 09:04:14 minikube dockerd[1108]: time="2022-01-27T09:04:14.166884000Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0001e2500, READY" module=grpc
    Jan 27 09:04:14 minikube dockerd[1108]: time="2022-01-27T09:04:14.196453000Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
    Jan 27 09:04:14 minikube dockerd[1108]: time="2022-01-27T09:04:14.215655000Z" level=warning msg="Your kernel does not support cgroup memory limit"
    Jan 27 09:04:14 minikube dockerd[1108]: time="2022-01-27T09:04:14.215751000Z" level=warning msg="Unable to find cpu cgroup in mounts"
    Jan 27 09:04:14 minikube dockerd[1108]: time="2022-01-27T09:04:14.215785000Z" level=warning msg="Unable to find blkio cgroup in mounts"
    Jan 27 09:04:14 minikube dockerd[1108]: time="2022-01-27T09:04:14.215812000Z" level=warning msg="Unable to find cpuset cgroup in mounts"
    Jan 27 09:04:14 minikube dockerd[1108]: time="2022-01-27T09:04:14.215839000Z" level=warning msg="mountpoint for pids not found"
    Jan 27 09:04:14 minikube dockerd[1108]: failed to start daemon: Devices cgroup isn't mounted
    Jan 27 09:04:14 minikube systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
    Jan 27 09:04:14 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
    Jan 27 09:04:14 minikube systemd[1]: Failed to start Docker Application Container Engine.
    Jan 27 09:04:14 minikube systemd[1]: docker.service: Consumed 604ms CPU time.
    Jan 27 09:04:14 minikube systemd[1]: docker.service: Service RestartSec=100ms expired, scheduling restart.
    Jan 27 09:04:14 minikube systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
    Jan 27 09:04:14 minikube systemd[1]: Stopped Docker Application Container Engine.
    Jan 27 09:04:14 minikube systemd[1]: docker.service: Consumed 604ms CPU time.
    Jan 27 09:04:14 minikube systemd[1]: Starting Docker Application Container Engine...
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.889706000Z" level=info msg="Starting up"
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.920412000Z" level=info msg="parsed scheme: "unix"" module=grpc
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.920695000Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.921405000Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.921668000Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.928415000Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007597a0, CONNECTING" module=grpc
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.935296000Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007597a0, READY" module=grpc
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.948004000Z" level=info msg="parsed scheme: "unix"" module=grpc
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.948072000Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.948144000Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.948249000Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.948480000Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00019ee70, CONNECTING" module=grpc
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.948561000Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.949025000Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00019ee70, READY" module=grpc
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.977850000Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.997438000Z" level=warning msg="Your kernel does not support cgroup memory limit"
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.997582000Z" level=warning msg="Unable to find cpu cgroup in mounts"
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.997616000Z" level=warning msg="Unable to find blkio cgroup in mounts"
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.997644000Z" level=warning msg="Unable to find cpuset cgroup in mounts"
    Jan 27 09:04:14 minikube dockerd[1127]: time="2022-01-27T09:04:14.997671000Z" level=warning msg="mountpoint for pids not found"
    Jan 27 09:04:15 minikube dockerd[1127]: failed to start daemon: Devices cgroup isn't mounted
    Jan 27 09:04:15 minikube systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
    Jan 27 09:04:15 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
    Jan 27 09:04:15 minikube systemd[1]: Failed to start Docker Application Container Engine.
    Jan 27 09:04:15 minikube systemd[1]: docker.service: Consumed 611ms CPU time.
    Jan 27 09:04:15 minikube systemd[1]: docker.service: Service RestartSec=100ms expired, scheduling restart.
    Jan 27 09:04:15 minikube systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
    Jan 27 09:04:15 minikube systemd[1]: Stopped Docker Application Container Engine.
    Jan 27 09:04:15 minikube systemd[1]: docker.service: Consumed 611ms CPU time.
    Jan 27 09:04:15 minikube systemd[1]: docker.service: Start request repeated too quickly.
    Jan 27 09:04:15 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
    Jan 27 09:04:15 minikube systemd[1]: Failed to start Docker Application Container Engine.

  • ==> container status <==

  • ==> describe nodes <==

  • ==> dmesg <==

  • [Jan27 09:02] cacheinfo: Unable to detect cache hierarchy for CPU 0
    [ +0.005825] the cryptoloop driver has been deprecated and will be removed in in Linux 5.16
    [ +5.016334] grpcfuse: loading out-of-tree module taints kernel.

  • ==> kernel <==

  • 09:04:51 up 1 min, 0 users, load average: 0.20, 0.06, 0.02
    Linux minikube 5.10.76-linuxkit #1 SMP PREEMPT Mon Nov 8 11:22:26 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    PRETTY_NAME="Ubuntu 19.10"

  • ==> kubelet <==

  • -- Logs begin at Thu 2022-01-27 09:03:52 UTC, end at Thu 2022-01-27 09:04:51 UTC. --
    -- No entries --

Operating system version used
macOS 12

@wer8956741 wer8956741 added the l/zh-CN Issues in or relating to Chinese label Jan 27, 2022
@RA489

RA489 commented Feb 1, 2022

/kind support

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Feb 1, 2022
@zhan9san
Contributor

zhan9san commented Mar 12, 2022

Hi @wer8956741

Exiting due to RUNTIME_ENABLE: sudo systemctl start docker: Process exited with status 1

Could it be that Docker isn't running?
If minikube's driver is docker, Docker has to be started first, and minikube after it (see the sketch below).
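
A quick way to verify that before retrying, sketched with standard Docker/minikube commands:

# Fails immediately if the Docker daemon is not reachable:
docker info --format '{{.ServerVersion}}'
# Only once that succeeds, start minikube against the docker driver:
minikube start --driver=docker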

@spowelljr spowelljr added the long-term-support Long-term support issues that can't be fixed in code label Apr 13, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 12, 2022
@RA489

RA489 commented Jul 13, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 13, 2022
@wangyalong

Has this issue been resolved?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 21, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 21, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned on Jan 20, 2023
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
