
kube-cn

Easy Kubernetes deployment and management with Ansible from behind the GFW (Great Firewall of China).

Supported kubernetes version

Currently only Kubernetes 1.8.2 has been tested.

Submodules

This repository vendors kubespray as a git submodule (the playbook paths below, kubespray/cluster.yml and kubespray/scale.yml, point into it), so initialize submodules after cloning: git submodule update --init --recursive

System requirements:

  • Only AWS is supported at the moment
  • Tested OS and kernel: CentOS 7 with kernel >= 4.12
  • See requirements.txt

CAUTION for the AMI used for Kubernetes machines

  • If you don't want to use a proxy, do not set http_proxy, https_proxy, or no_proxy to an empty string. Comment them out instead, or an incorrect configuration file will be generated in /etc/systemd/system/docker.service.d (see the sketch after this list);
  • DO NOT enable docker.service with systemctl enable docker.service before adding the machine to the cluster, or the Docker configuration deployed by this tool will not take effect.
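For reference, a minimal sketch of the kind of drop-in that ends up in that directory; the file name http-proxy.conf and the proxy address are assumptions for illustration, not taken from this repository:

    # /etc/systemd/system/docker.service.d/http-proxy.conf (hypothetical example)
    [Service]
    # Set these only if a proxy is actually in use:
    Environment="HTTP_PROXY=http://proxy.example.com:3128"
    Environment="HTTPS_PROXY=http://proxy.example.com:3128"
    Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8"
    # If no proxy is needed, omit these lines entirely; an empty value such as
    # Environment="HTTP_PROXY=" is exactly the broken configuration warned about above.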

Deploy and config

  1. Choose a unique environment name (referred to as <env> below), since this tool supports deploying multiple environments.
  2. Export environment variables: export kenv=<env> && export KUBECONFIG=~/.kube/env-$kenv/config
  3. Install requirements: pip install -r requirements.txt
  4. Launch machines
    • etcd machines: m4.large * 3, tagged with k8s-group=etcd
    • master machines: c4.large * 2, tagged with k8s-group=kube-master
    • node machines: any instance type, at least one, tagged with k8s-group=kube-node,k8s-node-role=<role>
  5. Tag all of the above machines with: ansible-app=ansible-k8s,k8s-env=<env> (a tagging sketch with the AWS CLI follows this list)
  6. (Optional) Tag node machines with: k8s-node-role=<role>, which will cause the nodes to be labeled with role=<role>
  7. Put the apiserver (the master instances) behind a load balancer
  8. Modify vars in ans/inventory/group_vars/all.yml (optional: they can also be passed as extra vars in the next two steps; a sample snippet follows this list)
    • apiserver_loadbalancer_domain_name: address of the load balancer in front of the apiserver
    • loadbalancer_apiserver.address: same as above
    • loadbalancer_apiserver.port
    • bootstrap_os
  9. Deploy: ansible-playbook -i inventory/inv-ec2.py -u <username> -b kubespray/cluster.yml
  10. Scale: ansible-playbook -i inventory/inv-ec2.py -u <username> -b kubespray/scale.yml
  11. Copy kubeconfig: ansible-playbook -i inventory/inv-ec2.py -u <username> playbooks/kubeconfig.yml
  12. Check that the cluster is running: kubectl cluster-info && kubectl get nodes --show-labels
  13. To manage multiple environments with kubectl you have several choices, since the kubeconfig is copied to ~/.kube/env-<env> (referred to as <home> below)
    • specify the kubeconfig explicitly: kubectl --kubeconfig <home>/config ...
    • export it once, then call kubectl freely in the current terminal session: export KUBECONFIG=<home>/config
    • make symlinks manually: ln -sf <home>/config ~/.kube/config && ln -sf <home>/ssl ~/.kube/ssl
  14. To remove a node:
    1. kubectl drain <node>
    2. kubectl delete node <node>
    3. on master: calicoctl delete node <node>
    4. (Optional) tools/aws.py detach -g <group> <node>
    5. (Optional) tools/aws.py terminate <node>
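For steps 4-5, a minimal tagging sketch using the AWS CLI; the instance IDs, role, and environment name are placeholders, and any other tagging method (console, CloudFormation, Terraform) works just as well:

    # Tag an etcd instance (hypothetical instance ID and <env> value):
    aws ec2 create-tags --resources i-0123456789abcdef0 \
        --tags Key=k8s-group,Value=etcd Key=ansible-app,Value=ansible-k8s Key=k8s-env,Value=staging

    # Tag a node instance, including the optional role tag:
    aws ec2 create-tags --resources i-0fedcba9876543210 \
        --tags Key=k8s-group,Value=kube-node Key=k8s-node-role,Value=worker \
               Key=ansible-app,Value=ansible-k8s Key=k8s-env,Value=staging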
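And for step 8, a minimal sketch of ans/inventory/group_vars/all.yml; the domain name and port are assumptions for illustration (6443 is merely the conventional apiserver port):

    # ans/inventory/group_vars/all.yml (values are hypothetical)
    apiserver_loadbalancer_domain_name: "k8s-api.example.com"
    loadbalancer_apiserver:
      address: "k8s-api.example.com"
      port: 6443
    bootstrap_os: centos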

Other tasks

  1. Expand the disk if necessary (see the sketch after this list)
    • e.g. xfs_growfs /var/lib/docker
  2. Label nodes tagged with k8s-node-role with the role and other labels:
    • ansible-playbook -i inventory/inv-ec2.py -u <username> playbooks/label.yml
    • kubectl get nodes --show-labels (now nodes have more labels)
  3. To mark nodes as deployed:
    • tools/aws.py tag <list of nodes separated by spaces or commas>
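For step 1 above, a sketch of expanding an EBS-backed XFS volume; the device name, partition number, and mount point are assumptions (growpart ships in the cloud-utils-growpart package on CentOS 7):

    # Grow the partition to fill the resized EBS volume (device/partition are hypothetical):
    sudo growpart /dev/xvdb 1
    # Grow the XFS filesystem; xfs_growfs takes the mount point:
    sudo xfs_growfs /var/lib/docker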

TODO

  • The ELB may not be ready immediately after the apiserver comes up, so bootstrapping may fail partway through. Waiting until the load balancer is ready and running the playbook again solves the problem; a possible wait loop is sketched below.
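One way to wait, assuming the load balancer address and port from step 8; /healthz is the standard apiserver health endpoint, and this loop only checks that the ELB accepts connections (curl exits non-zero until then, even without -f):

    # Poll the apiserver through the load balancer until it answers (address/port are placeholders):
    until curl -ks https://k8s-api.example.com:6443/healthz >/dev/null; do
        echo "waiting for the ELB to become ready..."
        sleep 10
    done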
