1.13.2 HA: accessing services through the VIP is intermittently unreachable #38

Closed
zhaoli2333 opened this issue Jan 16, 2019 · 4 comments
@zhaoli2333

Installed following the steps at https://sealyun.com/post/sealos/:
1. Modified roles/etcd/templates/kubeletonce.service.j2 inside the container, adding the kubelet startup flag --cgroup-driver=systemd (an illustrative sketch of this change follows below).
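For reference, the kind of edit made in step 1. This is illustrative only; the actual template shipped in the sealos image may differ, and `{{ kubelet_args }}` is just a placeholder for whatever flags the template already renders:

```
# roles/etcd/templates/kubeletonce.service.j2 -- illustrative sketch, not the
# real template. The change is appending --cgroup-driver=systemd so kubelet
# matches Docker's systemd cgroup driver on these hosts.
[Service]
ExecStart=/usr/bin/kubelet {{ kubelet_args }} --cgroup-driver=systemd
```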
2. Modified the hosts file:
[k8s-master]
10.8.8.21 name=node01 order=1 role=master lb=MASTER lbname=lbmaster priority=100
10.8.8.22 name=node02 order=2 role=master lb=BACKUP lbname=lbbackup priority=80
10.8.8.23 name=node03 order=3 role=master

[k8s-node]
#10.1.86.207 name=node04 role=node

[k8s-all:children]
k8s-master
k8s-node

[all:vars]
vip=10.8.8.19
k8s_version=1.13.2
ip_interface=enp.*
etcd_crts=["ca-key.pem","ca.pem","client-key.pem","client.pem","member1-key.pem","member1.pem","server-key.pem","server.pem","ca.csr","client.csr","member1.csr","server.csr"]
k8s_crts=["apiserver.crt","apiserver-kubelet-client.crt","ca.crt", "front-proxy-ca.key","front-proxy-client.key","sa.pub", "apiserver.key","apiserver-kubelet-client.key", "ca.key", "front-proxy-ca.crt", "front-proxy-client.crt" , "sa.key"]

3. Started the installation:
ansible-playbook roles/install-all.yaml

4. Opened https://10.8.8.19:32000 in Firefox and kept refreshing: sometimes the page loads quickly, sometimes it takes a very long time.
Refreshing https://10.8.8.21:32000 (a master IP directly) repeatedly works without any problem. (A quick timing check is sketched below.)
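A quick way to quantify the difference between the two paths (hedged sketch; run from the same client machine):

```
# Time a request through the VIP and one directly against a master IP
# (-k skips certificate verification on the self-signed dashboard cert).
curl -k -o /dev/null -s -w 'vip:  %{time_total}s\n' https://10.8.8.19:32000
curl -k -o /dev/null -s -w 'node: %{time_total}s\n' https://10.8.8.21:32000
```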

5. kube-proxy logs:

I0116 03:04:23.878659 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 2 ActiveConn, 2 InactiveConn
I0116 03:05:19.221613 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:05:19.221667 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 1 ActiveConn, 0 InactiveConn
I0116 03:05:19.221714 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:05:19.221738 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:05:19.221770 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:05:19.221789 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:05:23.878734 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:05:23.878834 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 1 ActiveConn, 0 InactiveConn
I0116 03:06:19.332984 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:06:19.333041 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 1 ActiveConn, 0 InactiveConn
I0116 03:06:19.333105 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:06:19.333132 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:06:19.333228 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:06:19.333250 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:06:23.879766 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:06:23.880183 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 1 ActiveConn, 0 InactiveConn
I0116 03:07:23.880408 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:07:23.880681 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 1 ActiveConn, 0 InactiveConn
I0116 03:08:49.684462 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:08:49.684496 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:08:49.684525 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:08:49.684540 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 3 ActiveConn, 0 InactiveConn
I0116 03:08:49.684576 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:08:49.684592 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:09:23.881010 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:09:23.881259 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 3 ActiveConn, 0 InactiveConn

I0116 03:10:49.984058 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:10:49.984084 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:10:49.984137 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:10:49.984158 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 2 ActiveConn, 0 InactiveConn
I0116 03:10:49.984201 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:10:49.984222 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:11:50.100714 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:11:50.100741 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:11:50.100787 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:11:50.100804 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:11:50.100833 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:11:50.100854 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 3 ActiveConn, 0 InactiveConn
I0116 03:12:23.881960 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:12:23.882199 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 3 ActiveConn, 0 InactiveConn
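Since kube-proxy is running in IPVS mode here, a hedged way to watch what the VIP NodePort is actually doing (assumes ipvsadm is installed on the master holding the VIP):

```
# List the real servers and connection counters behind the VIP NodePort;
# re-run (or wrap in `watch`) while refreshing the page in the browser.
ipvsadm -Ln -t 10.8.8.19:32000
```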

@fanux (Member) commented Jan 16, 2019

This doesn't feel like a Kubernetes problem; the VIP is provided by keepalived.

@zhaoli2333 (Author)

keepalived logs:

[root@master1 cat]# kubectl logs -f keepalived-master1 -n kube-system
2019-01-16 03:02:12,946 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2019-01-16 03:02:13,098 INFO supervisord started with pid 1
2019-01-16 03:02:14,100 INFO spawned: 'keepalived' with pid 10
2019-01-16 03:02:14,180 INFO exited: keepalived (exit status 0; not expected)
2019-01-16 03:02:15,182 INFO spawned: 'keepalived' with pid 19
2019-01-16 03:02:15,186 INFO exited: keepalived (exit status 0; not expected)
2019-01-16 03:02:17,189 INFO spawned: 'keepalived' with pid 25
2019-01-16 03:02:17,195 INFO exited: keepalived (exit status 0; not expected)
2019-01-16 03:02:20,199 INFO spawned: 'keepalived' with pid 31
2019-01-16 03:02:20,207 INFO exited: keepalived (exit status 0; not expected)
2019-01-16 03:02:21,209 INFO gave up: keepalived entered FATAL state, too many start retries too quickly

[root@master1 cat]# kubectl logs -f keepalived-master2 -n kube-system
2019-01-16 03:02:14,858 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2019-01-16 03:02:14,910 INFO supervisord started with pid 1
2019-01-16 03:02:15,912 INFO spawned: 'keepalived' with pid 8
2019-01-16 03:02:15,966 INFO exited: keepalived (exit status 0; not expected)
2019-01-16 03:02:16,967 INFO spawned: 'keepalived' with pid 17
2019-01-16 03:02:17,985 INFO success: keepalived entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-01-16 03:02:17,985 INFO exited: keepalived (exit status 0; expected)
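Given that keepalived on master1 ended up in a FATAL state, a hedged check of where the VIP actually lives and how the keepalived pods look:

```
# On each master: is the VIP currently bound to a local interface?
ip addr show | grep 10.8.8.19

# Status and node placement of the keepalived pods:
kubectl get pods -n kube-system -o wide | grep keepalived
```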

@fanux (Member) commented Jan 17, 2019

It looks like keepalived has died here; the keepalived process has probably stopped.
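A hedged way to confirm that guess, assuming supervisorctl is available inside the keepalived image (the logs above show keepalived running under supervisord):

```
# Ask supervisord inside the pod for the keepalived process state.
kubectl exec -n kube-system keepalived-master1 -- supervisorctl status keepalived
```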

@fanux (Member) commented Jan 18, 2019

The keepalived image has been updated; give fanux/sealos:1.13.2 a try.
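Assuming the docker-based install flow from the sealyun post, pulling the updated image on each master before re-running the installation should pick up the fix:

```
# Fetch the updated sealos image that references the fixed keepalived image.
docker pull fanux/sealos:1.13.2
```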

fanux closed this as completed on Jan 18, 2019.