
docker-compose fails to start #101

Closed

still-L opened this issue Mar 1, 2019 · 7 comments

still-L commented Mar 1, 2019

[root@ceph-client k3s]# docker-compose up --scale node=3
Starting k3s_node_1 ...
Starting k3s_server_1 ...
Starting k3s_node_2 ...
Starting k3s_server_1 ... error
Starting k3s_node_1 ... done
Starting k3s_node_2 ... done
Starting k3s_node_3 ... done

ERROR: for server Cannot start service server: driver failed programming external connectivity on endpoint k3s_server_1 (e22cd8969a88f79716a88d6cea5de8bc6b8ca25bdc084297256cd9ab551f2164): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 6443 -j DNAT --to-destination 172.18.0.5:6443 ! -i br-b41a9cad8102: iptables: No chain/target/match by that name.
(exit status 1))
ERROR: Encountered errors while bringing up the project.
[root@ceph-client k3s]# docker ps
CONTAINER ID   IMAGE                COMMAND            CREATED         STATUS          PORTS   NAMES
4ae38a0b108e   rancher/k3s:v0.1.0   "/bin/k3s agent"   2 minutes ago   Up 43 seconds           k3s_node_2
b70e96deca73   rancher/k3s:v0.1.0   "/bin/k3s agent"   2 minutes ago   Up 43 seconds           k3s_node_1
e789f47f0e7e   rancher/k3s:v0.1.0   "/bin/k3s agent"   2 minutes ago   Up 43 seconds           k3s_node_3
[root@ceph-client k3s]# docker-compose ps
Name           Command                          State    Ports
---------------------------------------------------------------
k3s_node_1     /bin/k3s agent                   Up
k3s_node_2     /bin/k3s agent                   Up
k3s_node_3     /bin/k3s agent                   Up
k3s_server_1   /bin/k3s server --disable- ...   Exit 1
[root@ceph-client k3s]# docker ps -a
CONTAINER ID   IMAGE                COMMAND                  CREATED         STATUS                     PORTS   NAMES
4ae38a0b108e   rancher/k3s:v0.1.0   "/bin/k3s agent"         2 minutes ago   Up About a minute                  k3s_node_2
b70e96deca73   rancher/k3s:v0.1.0   "/bin/k3s agent"         2 minutes ago   Up About a minute                  k3s_node_1
e789f47f0e7e   rancher/k3s:v0.1.0   "/bin/k3s agent"         2 minutes ago   Up About a minute                  k3s_node_3
92c7cbdfd7e4   rancher/k3s:v0.1.0   "/bin/k3s server -..."   2 minutes ago   Exited (1) 2 minutes ago           k3s_server_1
[root@ceph-client k3s]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

curx (Contributor) commented Mar 1, 2019

What distro are you using, and can you list the loaded iptables modules?
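
For reference, a minimal way to do this (assuming lsmod is available on the host) is to filter the loaded modules for the netfilter families:

lsmod | grep -E 'ip_tables|iptable_|nf_nat|br_netfilter'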

superseb (Contributor) commented Mar 1, 2019

This usually happens when iptables has been flushed after Docker has been started. Can you clean up the Docker containers, restart Docker, and then run docker-compose?
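
As a rough sketch of that sequence (systemctl assumes a systemd-based host; the --scale flag is taken from the original report):

docker-compose down                # remove the half-started containers
systemctl restart docker           # restarting Docker recreates its iptables chains
docker-compose up --scale node=3   # retry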

iwaseyusuke commented

I faced a similar problem, and restarting Docker did not solve it (my environment: Ubuntu 18.04, Docker 18.09.3).
However, the log messages showed that some kernel modules could not be loaded, so I added the two options shown below, and it seemed to work.

version: '3'
services:
  server:
    # ...(snip)...

  node:
    # ...(snip)...
    volumes:
    - /lib/modules:/lib/modules   # added: expose the host's kernel modules
    cap_add:
    - ALL                         # added: allow the container to load them

volumes:
  k3s-server: {}
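
As a quick sanity check (a sketch; the service name node matches the compose file above), one can confirm that the host's modules are visible inside a node container. With --scale, exec targets the first node container by default:

docker-compose exec node ls /lib/modules/$(uname -r)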

Messages like the following:

node_1    | E0302 14:20:10.560824       1 summary_sys_containers.go:47] Failed to get system container stats for "/docker/6ec3ef89103f3b98b2b4dcf7e1f23afca9163aee330f26b34097bf83b13fbeb0/kube-proxy": failed to get cgroup stats for "/docker/6ec3ef89103f3b98b2b4dcf7e1f23afca9163aee330f26b34097bf83b13fbeb0/kube-proxy": failed to get container info for "/docker/6ec3ef89103f3b98b2b4dcf7e1f23afca9163aee330f26b34097bf83b13fbeb0/kube-proxy": unknown container "/docker/6ec3ef89103f3b98b2b4dcf7e1f23afca9163aee330f26b34097bf83b13fbeb0/kube-proxy"

are still shown, but the status of the pods in the kube-system namespace was Running.

$ kubectl --kubeconfig kubeconfig.yaml get pods --all-namespaces
NAMESPACE     NAME                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-7748f7f6df-rvz5z     1/1     Running   0          3m19s
kube-system   helm-install-traefik-nmr75   1/1     Running   0          3m18s

Is this the expected behavior?

still-L (Author) commented Mar 4, 2019

> What distro are you using, and can you list the loaded iptables modules?

Linux version 3.10.0-957.1.3.el7.x86_64 (mockbuild@kbuilder.bsys.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) ) #1 SMP

still-L (Author) commented Mar 4, 2019

> This usually happens when iptables has been flushed after Docker has been started. Can you clean up the Docker containers, restart Docker, and then run docker-compose?

Docker is not running any other containers; this is a brand-new Docker host.
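
Even on a fresh host, this error usually means Docker's nat chains were flushed after the daemon started (on CentOS 7, a firewalld restart or reload is a common culprit). A diagnostic sketch:

iptables -t nat -nL DOCKER     # fails with "No chain/target/match" if Docker's chain is gone
systemctl status firewalld     # a firewalld restart/reload flushes Docker's rules
systemctl restart docker       # restarting the daemon recreates the DOCKER chains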

still-L (Author) commented Mar 4, 2019

> I faced a similar problem, and restarting Docker did not solve it (my environment: Ubuntu 18.04, Docker 18.09.3). [...] Is this the expected behavior?

This looks normal.

brandond (Contributor) commented Dec 4, 2020

Closing due to age. Possibly a use case for k3d.
