Kube_apiserver_bind_address auto-configures wrong API access endpoint for all-in-one deployments #2051

Closed
bogdando opened this issue Dec 11, 2017 · 5 comments

@bogdando (Contributor) commented Dec 11, 2017

BUG REPORT

The HA docs describe external and local LB as mutually exclusive and assume that masters and nodes run on separate hosts.

When using an all-in-one deployment (a master also running workloads) with kube_apiserver_bind_address set, the automatic evaluation of kube_apiserver_endpoint picks the wrong value. It falls back to 127.0.0.1, but nothing listens there, and the local nginx LB is not configured on masters.

As a fix for the loadbalancer_apiserver_localhost=False case, we need to switch the endpoint to use kube_apiserver_bind_address instead of 127.0.0.1.
And for the loadbalancer_apiserver_localhost=True case, we probably need to deploy nginx on masters as well?
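
For illustration only, this is roughly the kind of default I have in mind. It is a sketch, not the actual role code: it assumes kube_apiserver_endpoint is computed as a Jinja2 expression in kubespray-defaults, and that kube_apiserver_port, kube_apiserver_bind_address, apiserver_loadbalancer_domain_name and loadbalancer_apiserver are all defined.

# Sketch: on masters, prefer the configured bind address over 127.0.0.1
kube_apiserver_endpoint: |-
  {% if is_kube_master -%}
       https://{{ kube_apiserver_bind_address if kube_apiserver_bind_address != '0.0.0.0' else '127.0.0.1' }}:{{ kube_apiserver_port }}
  {%- elif loadbalancer_apiserver_localhost -%}
       https://localhost:{{ kube_apiserver_port }}
  {%- else -%}
       https://{{ apiserver_loadbalancer_domain_name }}:{{ loadbalancer_apiserver.port }}
  {%- endif %}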

Environment:

  • Cloud provider or hardware configuration: n/a

  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"): n/a

  • Version of Ansible (ansible --version): n/a

  • Kubespray version (commit) (git rev-parse --short HEAD): 2c6255d

  • Network plugin used: n/a

  • Copy of your inventory file:

apiserver_loadbalancer_domain_name: <External LB's FQDN>
access_ip: <VIP matching that FQDN>
loadbalancer_apiserver.address: "{{ access_ip }}"
loadbalancer_apiserver_localhost: false
kube_apiserver_bind_address: <private IP>

Command used to invoke ansible: as usual

@mattymo (Contributor) commented Dec 11, 2017

Masters have always pointed to localhost for apiserver. What is the bug? We have an AIO case in CI working.

@hardys (Contributor) commented Dec 11, 2017

@mattymo - we're trying to deploy with an external LB (haproxy) and a VIP (managed via keepalived). When you specify loadbalancer_apiserver_localhost: false, the generated /etc/kubernetes/*kubeconfig.yaml files all point to localhost, which only seems to work if kube_apiserver_bind_address is left unset (so the API listens on 0.0.0.0). We want to set it so that haproxy and the kube API can listen on the same port, because that fits the current TripleO loadbalancer implementation better.
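
For reference, the relevant fragment of a generated kubeconfig ends up looking roughly like this (the CA path and port here are taken from our defaults, so treat the exact values as assumptions):

clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
    server: https://localhost:6443   # nothing listens here once the API binds only to kube_apiserver_bind_address
  name: local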

@hardys (Contributor) commented Dec 11, 2017

Note it's possible we're doing something wrong, but here's what I'm trying:

(undercloud) [stack@undercloud kubespray]$ cat global_vars.yml
kubeconfig_localhost: true
artifacts_dir: '/home/stack/tripleo-VFDQm5-config/kubespray/artifacts'
ignore_assert_errors: False
kubelet_fail_swap_on: True
apiserver_loadbalancer_domain_name: overcloud.internalapi.localdomain
loadbalancer_apiserver_localhost: false
access_ip: 192.168.24.14
loadbalancer_apiserver:
  address: 192.168.24.14
  port: 6443

(undercloud) [stack@undercloud kubespray]$ cat inventory.yml
kube-master:
  hosts:
    overcloud-controller-0:
      ansible_user: heat-admin
      ansible_host: 192.168.24.15
      ansible_become: true
      ip: 192.168.24.15
      kube_apiserver_bind_address: 192.168.24.15

kube-node:
  hosts:
    overcloud-controller-0:
      ansible_user: heat-admin
      ansible_host: 192.168.24.15
      ansible_become: true
      ip: 192.168.24.15

etcd:
  children:
    kube-master: {}

k8s-cluster:
  children:
    kube-master: {}
    kube-node: {}

You can see the VIP and local bind IP via netstat:

[root@overcloud-controller-0 ~]# netstat -taupen | grep 6443
tcp 0 0 192.168.24.15:6443 0.0.0.0:* LISTEN 0 149231 697/hyperkube
tcp 0 0 192.168.24.15:6443 192.168.24.15:38108 SYN_RECV 0 0 -
tcp 0 0 192.168.24.14:6443 0.0.0.0:* LISTEN 0 49884 15597/haproxy
tcp 0 0 192.168.24.15:6443 192.168.24.15:37494 ESTABLISHED 0 149237 697/hyperkube
tcp 0 0 192.168.24.15:37494 192.168.24.15:6443 ESTABLISHED 0 149233 697/hyperkube

The haproxy config looks like this:

global
  daemon
  group haproxy
  log /dev/log local0
  maxconn 20480
  pidfile /var/run/haproxy.pid
  ssl-default-bind-ciphers !SSLv2:kEECDH:kRSA:kEDH:kPSK:+3DES:!aNULL:!eNULL:!MD5:!EXP:!RC4:!SEED:!IDEA:!DES
  ssl-default-bind-options no-sslv3
  stats socket /var/lib/haproxy/stats mode 600 level user
  stats timeout 2m
  user haproxy

defaults
  log global
  maxconn 4096
  mode tcp
  retries 3
  timeout http-request 10s
  timeout queue 2m
  timeout connect 10s
  timeout client 2m
  timeout server 2m
  timeout check 10s

listen haproxy.stats
  bind 192.168.24.14:1993 transparent
  mode http
  stats enable
  stats uri /
  stats auth admin:4VHdwgUjbmBwtY2YfYzv3NJrX

listen kubernetes-master
  bind 192.168.24.14:6443 transparent
  balance roundrobin
  server overcloud-controller-0.internalapi.localdomain 192.168.24.15:6443 check fall 5 inter 2000 rise 2

This fails like http://paste.openstack.org/show/628651/, I think because the API isn't accessible to kubeadm. Any pointers appreciated :)

@bogdando (Contributor, Author) commented Dec 12, 2017

@mattymo

Masters have always pointed to localhost for apiserver. What is the bug? We have an AIO case in CI working.

With a custom kube_apiserver_bind_address value, nothing listens on the master's localhost API port, AFAICT. There is no longer a bind to 0.0.0.0, and no nginx is deployed when is_kube_master is true.

@hardys (Contributor) commented Dec 12, 2017

Yes, as mentioned by @bogdando, the issue is that we don't want the API to listen on all interfaces. So I guess we need to either configure it to listen on both kube_apiserver_bind_address and localhost, configure haproxy to listen on both the VIP and localhost, or configure the kubeconfig files to point at the VIP.

Configuring everything to point at the VIP would be most consistent with how we handle the OpenStack services, but I'm not clear whether that would be acceptable for the k8s masters; I haven't tried it yet.
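
If pointing everything at the VIP turns out to be acceptable, the simplest thing to experiment with might be a plain override in group_vars, something like the line below. This is untested, and it assumes kube_apiserver_endpoint can just be overridden from the inventory rather than being recomputed by the role:

# hypothetical override, reusing the loadbalancer_apiserver settings above
kube_apiserver_endpoint: "https://{{ apiserver_loadbalancer_domain_name }}:{{ loadbalancer_apiserver.port }}"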

@bogdando bogdando removed the feature label Dec 12, 2017
@bogdando bogdando changed the title Kube_apiserver_bind_address auto-configures wrong API access endpoint for all-in-one deployments [WIP] Kube_apiserver_bind_address auto-configures wrong API access endpoint for all-in-one deployments Dec 15, 2017
@bogdando bogdando changed the title [WIP] Kube_apiserver_bind_address auto-configures wrong API access endpoint for all-in-one deployments Kube_apiserver_bind_address auto-configures wrong API access endpoint for all-in-one deployments Dec 15, 2017