docs: refactor installation validation steps
In cilium#15979, the old `k8s-install-validate.rst` and `k8s-install-connectivity-test.rst`
were refactored to use the CLI, which broke the flow of several pages:
in particular, all Helm-based installation guides were half-broken because
they referenced Cilium CLI commands without ever instructing the user to
install the CLI.

This commit moves all CLI-related operations to independent `cli-*.rst` files,
and then refactors `k8s-install-validate.rst` to offer both the new CLI-based
status check and connectivity test and the older manual status check and
connectivity test.

It then refactors CLI-based installation guides to use the `cli-*.rst` files
in the order that makes the most sense for each page.
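As a sketch, the refactored `k8s-install-validate.rst` composes the new snippets roughly as follows (structure taken from the diff in this commit; exact indentation approximate):

```rst
Validate the Installation
=========================

.. tabs::

    .. tab:: Cilium CLI

        .. include:: cli-download.rst
        .. include:: cli-status.rst
        .. include:: cli-connectivity-test.rst

    .. tab:: Manually

        .. include:: kubectl-status.rst
        .. include:: kubectl-connectivity-test.rst
```

Pages that already instruct the user to install the CLI can instead include `cli-status.rst` and `cli-connectivity-test.rst` directly, without the tabs.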

Signed-off-by: Nicolas Busseneau <nicolas@isovalent.com>
nbusseneau committed May 20, 2021
1 parent 8f1f1dc commit e304b0f
Showing 23 changed files with 137 additions and 65 deletions.
2 changes: 1 addition & 1 deletion Documentation/gettingstarted/alibabacloud-eni.rst
@@ -165,8 +165,8 @@ Deploy Cilium release via Helm:
the security groups for pod ENIs are derived from the primary ENI
(``eth0``).


.. include:: k8s-install-validate.rst

.. include:: next-steps.rst

.. _alibabacloud_eni_limitations:
@@ -1,17 +1,9 @@
Deploy the connectivity test
----------------------------

Run the following command to validate that your cluster has proper network
connectivity:

.. code-block:: shell-session

   cilium connectivity test

The output should be similar to the following:

::

   $ cilium connectivity test
   ℹ️ Monitor aggregation detected, will skip some flow validation steps
   ✨ [k8s-cluster] Creating namespace for connectivity check...
   (...)
@@ -1,6 +1,6 @@
Install the latest version of the Cilium CLI on your local machine. The Cilium
CLI can be used to install Cilium, inspect the state of a Cilium installation,
and enable/disable a variety of functionality.
Install the latest version of the Cilium CLI. The Cilium CLI can be used to
install Cilium, inspect the state of a Cilium installation, and enable/disable
various features (e.g. clustermesh, Hubble).

.. tabs::

    .. group-tab:: Linux
18 changes: 18 additions & 0 deletions Documentation/gettingstarted/cli-status.rst
@@ -0,0 +1,18 @@
To validate that Cilium has been properly installed, you can run:

.. code-block:: shell-session

   $ cilium status --wait
       /¯¯\
    /¯¯\__/¯¯\    Cilium:         OK
    \__/¯¯\__/    Operator:       OK
    /¯¯\__/¯¯\    Hubble:         disabled
    \__/¯¯\__/    ClusterMesh:    disabled
       \__/

   DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
   Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
   Containers:       cilium-operator    Running: 2
                     cilium             Running: 2
   Image versions    cilium             quay.io/cilium/cilium:v1.9.5: 2
                     cilium-operator    quay.io/cilium/operator-generic:v1.9.5: 2
2 changes: 1 addition & 1 deletion Documentation/gettingstarted/clustermesh/clustermesh.rst
@@ -29,7 +29,7 @@ Cluster Addressing Requirements
Install the Cilium CLI
======================

.. include:: ../install-cli.rst
.. include:: ../cli-download.rst

Prepare the Clusters
####################
1 change: 1 addition & 0 deletions Documentation/gettingstarted/cni-chaining-azure-cni.rst
@@ -96,6 +96,7 @@ This will create both the main cilium daemonset, as well as the cilium-node-init
existing Azure CNI plugin to run in 'transparent' mode.

.. include:: k8s-install-restart-pods.rst

.. include:: k8s-install-validate.rst

.. include:: next-steps.rst
1 change: 1 addition & 0 deletions Documentation/gettingstarted/cni-chaining-calico.rst
@@ -94,5 +94,6 @@ Deploy Cilium release via Helm:
them.

.. include:: k8s-install-validate.rst

.. include:: next-steps.rst

1 change: 1 addition & 0 deletions Documentation/gettingstarted/cni-chaining-weave.rst
@@ -83,5 +83,6 @@ Deploy Cilium release via Helm:
them.

.. include:: k8s-install-validate.rst

.. include:: next-steps.rst

9 changes: 7 additions & 2 deletions Documentation/gettingstarted/k3s.rst
@@ -58,14 +58,19 @@ On each node, run the following to mount the eBPF Filesystem:
Install Cilium
==============

.. include:: install-cli.rst
.. include:: cli-download.rst

Install Cilium by running:

.. code-block:: shell-session

   cilium install
.. include:: k8s-install-validate.rst
Validate the Installation
=========================

.. include:: cli-status.rst
.. include:: cli-connectivity-test.rst

.. include:: next-steps.rst

16 changes: 10 additions & 6 deletions Documentation/gettingstarted/k8s-install-default.rst
@@ -23,11 +23,6 @@ to the :ref:`k8s_install_advanced` guide.
Should you encounter any issues during the installation, please refer to the
:ref:`troubleshooting_k8s` section and / or seek help on the `Slack channel`.

Install the Cilium CLI
======================

.. include:: install-cli.rst

Create the Cluster
===================

@@ -104,6 +99,11 @@ to create a Kubernetes cluster locally or using a managed Kubernetes service:
minikube start --network-plugin=cni
Install the Cilium CLI
======================

.. include:: cli-download.rst

Install Cilium
==============

@@ -225,6 +225,10 @@ pods are failing to be deployed.
was deployed and the installer has automatically restarted them to ensure
all pods get networking provided by Cilium.

.. include:: k8s-install-validate.rst
Validate the Installation
=========================

.. include:: cli-status.rst
.. include:: cli-connectivity-test.rst

.. include:: next-steps.rst
1 change: 1 addition & 0 deletions Documentation/gettingstarted/k8s-install-external-etcd.rst
@@ -93,4 +93,5 @@ of http for the etcd endpoint URLs:
--set "etcd.endpoints[2]=https://etcd-endpoint3:2379"
.. include:: k8s-install-validate.rst

.. include:: next-steps.rst
2 changes: 2 additions & 0 deletions Documentation/gettingstarted/k8s-install-helm.rst
@@ -237,5 +237,7 @@ Install Cilium
--namespace $CILIUM_NAMESPACE
.. include:: k8s-install-restart-pods.rst

.. include:: k8s-install-validate.rst

.. include:: next-steps.rst
4 changes: 2 additions & 2 deletions Documentation/gettingstarted/k8s-install-kops.rst
@@ -128,7 +128,7 @@ You may be prompted to create a ssh public-private key pair.
(Please see :ref:`appendix_kops`)

.. include:: k8s-install-connectivity-test.rst
.. include:: k8s-install-validate.rst

.. _appendix_kops:

@@ -164,4 +164,4 @@ The following section explains all the flags used in create cluster command.
* ``--zones eu-west-1a,eu-west-1b,eu-west-1c`` : Zones where the worker nodes will be deployed
* ``--networking cilium`` : Networking CNI plugin to be used - cilium. You can also use ``cilium-etcd``, which will use a dedicated etcd cluster as key/value store instead of CRDs.
* ``--cloud-labels "Team=Dev,Owner=Admin"`` : Labels for your cluster that will be applied to your instances
* ``${NAME}`` : Name of the cluster. Make sure the name ends with k8s.local for a gossip based cluster
1 change: 1 addition & 0 deletions Documentation/gettingstarted/k8s-install-kubeadm.rst
@@ -59,4 +59,5 @@ Deploy Cilium release via Helm:
helm install cilium |CHART_RELEASE| --namespace kube-system
.. include:: k8s-install-validate.rst

.. include:: next-steps.rst
9 changes: 1 addition & 8 deletions Documentation/gettingstarted/k8s-install-kubespray.rst
@@ -163,14 +163,7 @@ To check if cluster is created successfully, ssh into the bastion host with the
Execute the commands below from the bastion host. If ``kubectl`` isn't installed on the bastion host, you can login to the master node to test the below commands. You may need to copy the private key to the bastion host to access the master node.
.. code:: bash

   $ kubectl get nodes
   $ kubectl get pods -n kube-system
You should see that nodes are in the ``Ready`` state and Cilium pods are in the ``Running`` state.
.. include:: k8s-install-connectivity-test.rst
.. include:: k8s-install-validate.rst
Delete Cluster
==============
11 changes: 4 additions & 7 deletions Documentation/gettingstarted/k8s-install-openshift-okd.rst
@@ -299,17 +299,14 @@ it sets only ``allowHostPorts`` and ``allowHostNetwork`` without any other privileges
groups: null
EOF
.. include:: k8s-install-connectivity-test.rst
Deploy the connectivity test
----------------------------

.. include:: kubectl-connectivity-test.rst

Cleanup after connectivity test
-------------------------------

Remove the ``cilium-test`` namespace:

.. code-block:: shell-session

   kubectl delete ns cilium-test
Remove the ``SecurityContextConstraints``:

.. code-block:: shell-session
4 changes: 3 additions & 1 deletion Documentation/gettingstarted/k8s-install-rke.rst
@@ -57,7 +57,7 @@ Deploy Cilium
.. group-tab:: Cilium CLI

.. include:: install-cli.rst
.. include:: cli-download.rst

Install Cilium by running:

@@ -66,5 +66,7 @@ Deploy Cilium
cilium install
.. include:: k8s-install-restart-pods.rst

.. include:: k8s-install-validate.rst

.. include:: next-steps.rst
31 changes: 8 additions & 23 deletions Documentation/gettingstarted/k8s-install-validate.rst
@@ -1,30 +1,15 @@
Validate the Installation
=========================

.. include:: install-cli.rst
.. tabs::

To validate that Cilium has been properly installed, you can run
.. tab:: Cilium CLI

.. code-block:: shell-session
.. include:: cli-download.rst
.. include:: cli-status.rst
.. include:: cli-connectivity-test.rst

cilium status --wait
.. tab:: Manually

The output should be similar to the following one:

::

/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Hubble: disabled
\__/¯¯\__/ ClusterMesh: disabled
\__/

DaemonSet cilium Desired: 2, Ready: 2/2, Available: 2/2
Deployment cilium-operator Desired: 2, Ready: 2/2, Available: 2/2
Containers: cilium-operator Running: 2
cilium Running: 2
Image versions cilium quay.io/cilium/cilium:v1.9.5: 2
cilium-operator quay.io/cilium/operator-generic:v1.9.5: 2

.. include:: k8s-install-connectivity-test.rst
.. include:: kubectl-status.rst
.. include:: kubectl-connectivity-test.rst
1 change: 1 addition & 0 deletions Documentation/gettingstarted/kind.rst
@@ -60,6 +60,7 @@ Then, install Cilium release via Helm:
to be disabled (e.g. by setting the kernel ``cgroup_no_v1="all"`` parameter).

.. include:: k8s-install-validate.rst

.. include:: next-steps.rst

Troubleshooting
2 changes: 1 addition & 1 deletion Documentation/gettingstarted/kube-router.rst
@@ -140,4 +140,4 @@ installed:
* ``10.2.2.0/24 dev tun-172011760 proto 17 src 172.0.50.227``
* ``10.2.3.0/24 dev tun-1720186231 proto 17 src 172.0.50.227``

.. include:: k8s-install-connectivity-test.rst
.. include:: k8s-install-validate.rst
49 changes: 49 additions & 0 deletions Documentation/gettingstarted/kubectl-connectivity-test.rst
@@ -0,0 +1,49 @@
You can deploy the "connectivity-check" to test connectivity between pods. It is
recommended to create a separate namespace for this.

.. code-block:: shell-session

   kubectl create ns cilium-test
Deploy the check with:

.. parsed-literal::

   kubectl apply -n cilium-test -f \ |SCM_WEB|\/examples/kubernetes/connectivity-check/connectivity-check.yaml
This deploys a series of deployments that use various connectivity paths to
connect to each other. The paths cover combinations with and without service
load-balancing and various network policies. The pod name indicates the
connectivity variant, and the readiness and liveness gates indicate success or
failure of the test:

.. code-block:: shell-session

   $ kubectl get pods -n cilium-test
   NAME                                                     READY   STATUS    RESTARTS   AGE
   echo-a-76c5d9bd76-q8d99                                  1/1     Running   0          66s
   echo-b-795c4b4f76-9wrrx                                  1/1     Running   0          66s
   echo-b-host-6b7fc94b7c-xtsff                             1/1     Running   0          66s
   host-to-b-multi-node-clusterip-85476cd779-bpg4b          1/1     Running   0          66s
   host-to-b-multi-node-headless-dc6c44cb5-8jdz8            1/1     Running   0          65s
   pod-to-a-79546bc469-rl2qq                                1/1     Running   0          66s
   pod-to-a-allowed-cnp-58b7f7fb8f-lkq7p                    1/1     Running   0          66s
   pod-to-a-denied-cnp-6967cb6f7f-7h9fn                     1/1     Running   0          66s
   pod-to-b-intra-node-nodeport-9b487cf89-6ptrt             1/1     Running   0          65s
   pod-to-b-multi-node-clusterip-7db5dfdcf7-jkjpw           1/1     Running   0          66s
   pod-to-b-multi-node-headless-7d44b85d69-mtscc            1/1     Running   0          66s
   pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82            1/1     Running   0          65s
   pod-to-external-1111-d56f47579-d79dz                     1/1     Running   0          66s
   pod-to-external-fqdn-allow-google-cnp-78986f4bcf-btjn7   1/1     Running   0          66s
.. note::

   If you deploy the connectivity check to a single node cluster, pods that check multi-node
   functionalities will remain in the ``Pending`` state. This is expected since these pods
   need at least 2 nodes to be scheduled successfully.

Once done with the test, remove the ``cilium-test`` namespace:

.. code-block:: shell-session

   kubectl delete ns cilium-test
19 changes: 19 additions & 0 deletions Documentation/gettingstarted/kubectl-status.rst
@@ -0,0 +1,19 @@
You can monitor the progress as Cilium and all required components are being installed:

.. code-block:: shell-session

   $ kubectl -n kube-system get pods --watch
   NAME                              READY   STATUS              RESTARTS   AGE
   cilium-operator-cb4578bc5-q52qk   0/1     Pending             0          8s
   cilium-s8w5m                      0/1     PodInitializing     0          7s
   coredns-86c58d9df4-4g7dd          0/1     ContainerCreating   0          8m57s
   coredns-86c58d9df4-4l6b2          0/1     ContainerCreating   0          8m57s
It may take a couple of minutes for all components to come up:

.. code-block:: shell-session

   cilium-operator-cb4578bc5-q52qk   1/1     Running   0          4m13s
   cilium-s8w5m                      1/1     Running   0          4m12s
   coredns-86c58d9df4-4g7dd          1/1     Running   0          13m
   coredns-86c58d9df4-4l6b2          1/1     Running   0          13m
2 changes: 1 addition & 1 deletion Documentation/operations/troubleshooting.rst
@@ -663,7 +663,7 @@ Cluster Mesh Troubleshooting
Install the Cilium CLI
----------------------

.. include:: ../gettingstarted/install-cli.rst
.. include:: ../gettingstarted/cli-download.rst

Generic
-------