one disk available to provision storage volumes.

.. todo:: Give some sizing examples

.. _MetalK8s: https://github.com/scality/metal-k8s/
.. _Kubernetes: https://kubernetes.io

Defining an Inventory
---------------------
To tell the Ansible_-based deployment system on which machines MetalK8s should
be installed, a so-called *inventory* needs to be provided. This inventory
contains a file listing all the hosts that comprise the cluster, as well as
some configuration.

.. _Ansible: https://www.ansible.com

First, create a directory, e.g. ``inventory/quickstart-cluster``, in which the
inventory will be stored. For our setup, we need to create two files. The
first lists all the hosts, and is aptly named ``hosts``:

.. code-block:: ini

    node-01 ansible_host=10.0.0.1 ansible_user=centos
    node-02 ansible_host=10.0.0.2 ansible_user=centos
    node-03 ansible_host=10.0.0.3 ansible_user=centos

    [kube-master]
    node-01
    node-02
    node-03

    [etcd]
    node-01
    node-02
    node-03

    [kube-node]
    node-01
    node-02
    node-03

    [k8s-cluster:children]
    kube-node
    kube-master

Make sure to change the IP addresses, usernames, etc. according to your
infrastructure.
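
As a quick sanity check, you can ask Ansible to *ping* every host in the
inventory before moving on, assuming the ``ansible`` CLI is already available
on your system::

    $ ansible all -i inventory/quickstart-cluster/hosts -m ping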

In a second file, called ``kube-node.yml``, in a ``group_vars`` subdirectory of
our inventory, we declare how to set up storage (in the default configuration)
on hosts in the *kube-node* group, i.e. the hosts on which Pods will be
scheduled:

.. code-block:: yaml

    metal_k8s_lvm:
      vgs:
        kubevg:
          drives: ['/dev/vdb']

In the above, we assume every *kube-node* host has a disk available as
``/dev/vdb``, which can be used to set up Kubernetes *PersistentVolumes*. For
more information about storage, see :doc:`../architecture/storage`.
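
The ``drives`` list need not contain a single device. As a purely hypothetical
variant, assuming every *kube-node* host had two spare disks ``/dev/vdb`` and
``/dev/vdc``, both could be pooled into the same volume group:

.. code-block:: yaml

    metal_k8s_lvm:
      vgs:
        kubevg:
          # Hypothetical second disk; both devices back the 'kubevg'
          # LVM volume group used for PersistentVolumes.
          drives: ['/dev/vdb', '/dev/vdc']

Running ``lsblk`` on the target hosts shows which device names actually exist
before committing them to the configuration.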

Entering the MetalK8s Shell
---------------------------
to contact the cluster *kube-master* nodes, and authenticate properly::

    (metal-k8s) $ export KUBECONFIG=`pwd`/inventory/quickstart-cluster/artifacts/admin.conf

.. todo:: The above is only correct after `#44 <https://github.com/scality/metal-k8s/pull/44>`_ lands
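
Before querying the cluster, two quick sanity checks can save time: one that
``kubectl`` picks up this configuration, and one that the API server port on
the first *kube-master* node (*node-01* at ``10.0.0.1`` in our example
inventory) is reachable at all::

    (metal-k8s) $ kubectl config current-context
    (metal-k8s) $ nc -zv 10.0.0.1 6443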

Now, assuming port *6443* on the first *kube-master* node is reachable from your
system, we can e.g. list the nodes::

    (metal-k8s) $ kubectl get nodes
    NAME      STATUS    ROLES         AGE       VERSION
    node-01   Ready     master,node   1m        v1.9.5+coreos.0
    node-02   Ready     master,node   1m        v1.9.5+coreos.0
    node-03   Ready     master,node   1m        v1.9.5+coreos.0

or list all pods::

    (metal-k8s) $ kubectl get pods --all-namespaces
    NAMESPACE      NAME                                                   READY     STATUS    RESTARTS   AGE
    kube-ingress   nginx-ingress-controller-9d8jh                         1/1       Running   0          1m
    kube-ingress   nginx-ingress-controller-d7vvg                         1/1       Running   0          1m
    kube-ingress   nginx-ingress-controller-m8jpq                         1/1       Running   0          1m
    kube-ingress   nginx-ingress-default-backend-6664bc64c9-xsws5         1/1       Running   0          1m
    kube-ops       alertmanager-kube-prometheus-0                         2/2       Running   0          2m
    kube-ops       alertmanager-kube-prometheus-1                         2/2       Running   0          2m
    kube-ops       es-client-7cf569f5d8-2z974                             1/1       Running   0          2m
    kube-ops       es-client-7cf569f5d8-qq4h2                             1/1       Running   0          2m
    kube-ops       es-data-cd5446fff-pkmhn                                1/1       Running   0          2m
    kube-ops       es-data-cd5446fff-zzd2h                                1/1       Running   0          2m
    kube-ops       es-exporter-elasticsearch-exporter-7df5bcf58b-k9fdd    1/1       Running   3          1m
    ...
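
Right after deployment, some of these Pods may still be initializing. To watch
a single namespace while things settle, the query can be narrowed, e.g.::

    (metal-k8s) $ kubectl get pods --namespace kube-ops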

Similarly, we can list all deployed Helm_ applications::
Expand All @@ -98,3 +137,28 @@ Similarly, we can list all deployed Helm_ applications::
    (metal-k8s) $ helm list
    ...
    nginx-ingress         3   Wed Apr 25 23:09:09 2018   DEPLOYED   nginx-ingress-0.11.1         kube-ingress
    prometheus-operator   3   Wed Apr 25 23:09:14 2018   DEPLOYED   prometheus-operator-0.0.15   kube-ops
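
To inspect a single release in more detail (the resources it created and their
state), point ``helm status`` at the release name, for instance::

    (metal-k8s) $ helm status nginx-ingress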

.. _Helm: https://www.helm.sh

Access to dashboard, Grafana and Kibana
---------------------------------------
Once the cluster is running, you can access the `Kubernetes dashboard`_,
Grafana_ metrics and Kibana_ logs from your browser.

The Kubernetes dashboard is available at https://master-ip:6443/ui, where
*master-ip* should be replaced by the IP address of one of the hosts in the
*kube-master* group.
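
Assuming *node-01* (``10.0.0.1``) from our example inventory is one of the
*kube-master* hosts, a quick probe tells you whether the endpoint answers at
all; the API server uses a self-signed certificate (hence ``-k``), and even an
authentication error proves the service is reachable::

    $ curl -k https://10.0.0.1:6443/ui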

Grafana can be accessed at http://node-ip/_/grafana, with *node-ip* replaced
by the IP address of one of the hosts in the *kube-node* group.

Similarly, Kibana can be accessed at http://node-ip/_/kibana. When accessing
this service for the first time, set up an *index pattern* for the
``logstash-*`` index, using the ``@timestamp`` field as *Time Filter field
name*.

See :doc:`../architecture/services` for more information about these services
and their configuration.

.. _Kubernetes dashboard: https://github.com/kubernetes/dashboard
.. _Grafana: https://grafana.com
.. _Kibana: https://www.elastic.co/products/kibana/
