docs: improve the aws-cni chaining page
Improve the AWS VPC CNI plugin chaining page. Also, make use of the
Cilium CLI to check for the status of the installation and for
performing the connectivity test.

Signed-off-by: Bruno Miguel Custódio <brunomcustodio@gmail.com>
bmcustodio authored and jrajahalme committed May 11, 2021
1 parent 437e2bb commit 3b350ce
Showing 5 changed files with 139 additions and 143 deletions.
135 changes: 103 additions & 32 deletions Documentation/gettingstarted/cni-chaining-aws-cni.rst
@@ -6,42 +6,43 @@

.. _chaining_aws_cni:

*******
AWS-CNI
*******

This guide explains how to set up Cilium in combination with aws-cni. In this
hybrid mode, the aws-cni plugin is responsible for setting up the virtual
network devices as well as address allocation (IPAM) via ENI. After the initial
networking is setup, the Cilium CNI plugin is called to attach eBPF programs to
the network devices set up by aws-cni to enforce network policies, perform
load-balancing, and encryption.
******************
AWS VPC CNI plugin
******************

This guide explains how to set up Cilium in combination with the AWS VPC CNI
plugin. In this hybrid mode, the AWS VPC CNI plugin is responsible for setting
up the virtual network devices as well as for IP address management (IPAM) via
ENIs. After the initial networking is set up for a given pod, the Cilium CNI
plugin is called to attach eBPF programs to the network devices set up by the
AWS VPC CNI plugin in order to enforce network policies, perform load-balancing,
and provide encryption.

.. include:: cni-chaining-limitations.rst

.. important::

Due to a bug in certain version of the AWS CNI, please ensure that you are
running the AWS CNI `1.7.9 <https://github.com/aws/amazon-vpc-cni-k8s/releases/tag/v1.7.9>`_
or newer to guarantee compatibility with Cilium.
Please ensure that you are running version `1.7.9 <https://github.com/aws/amazon-vpc-cni-k8s/releases/tag/v1.7.9>`_
or newer of the AWS VPC CNI plugin to guarantee compatibility with Cilium.
The official upgrade instructions can be found `here <https://docs.aws.amazon.com/eks/latest/userguide/cni-upgrades.html>`_.

.. image:: aws-cni-architecture.png


Setup Cluster on AWS
====================
Setting up a cluster on AWS
===========================

Follow the instructions in the :ref:`k8s_install_quick` guide to set up an EKS
cluster or use any other method of your preference to set up a Kubernetes
cluster.
cluster, or use any other method of your preference to set up a Kubernetes
cluster on AWS.
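For example, if you use ``eksctl``, a cluster suitable for this guide can be
created as follows (shown purely as one possible option; the cluster name and
region below are placeholders):

.. code-block:: shell-session

    # Create an EKS cluster; adjust the name and region to your environment.
    eksctl create cluster --name my-eks-cluster --region eu-west-2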

Ensure that the `aws-vpc-cni-k8s <https://github.com/aws/amazon-vpc-cni-k8s>`__
plugin is installed. If you have set up an EKS cluster, this is automatically
done.
Ensure that the `aws-vpc-cni-k8s <https://github.com/aws/amazon-vpc-cni-k8s>`_
plugin is installed, which will already be the case if you have created an EKS
cluster. Also, ensure that the version of the plugin is up to date, as noted above.
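On EKS, where the plugin runs as the ``aws-node`` DaemonSet in the
``kube-system`` namespace, the deployed version can be checked as follows (the
same check is used again in the advanced section below):

.. code-block:: shell-session

    # Print the image of the aws-node container, which encodes the plugin version.
    kubectl -n kube-system get ds/aws-node \
        -o jsonpath='{.spec.template.spec.containers[0].image}'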

.. include:: k8s-install-download-release.rst

Deploy Cilium release via Helm:
Deploy Cilium via Helm:

.. parsed-literal::
@@ -53,23 +54,93 @@ Deploy Cilium release via Helm:
--set nodeinit.enabled=true \\
--set endpointRoutes.enabled=true
This will enable chaining with the aws-cni plugin. It will also disable
tunneling. Tunneling is not required as ENI IP addresses can be directly routed
in your VPC. You can also disable masquerading for the same reason.
This will enable chaining with the AWS VPC CNI plugin. It will also disable
tunneling, as it's not required since ENI IP addresses can be directly routed
in the VPC. For the same reason, masquerading can be disabled as well.
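To confirm that chaining is configured as expected, you can inspect the
resulting Cilium configuration. This is only a quick sanity check and assumes
that the Helm chart renders ``cni.chainingMode`` into the ``cilium-config``
ConfigMap, which may differ across chart versions:

.. code-block:: shell-session

    # Look for the chaining mode in the Cilium agent configuration.
    kubectl -n kube-system get configmap cilium-config -o yaml | grep chaining
      cni-chaining-mode: aws-cni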

Restart existing pods
=====================

The new CNI chaining configuration will *not* apply to any pod that is already
running in the cluster. Existing pods will be reachable and Cilium will
load-balance to them but policy enforcement will not apply to them and
load-balancing is not performed for traffic originating from existing pods.
You must restart these pods in order to invoke the chaining configuration on
them.
The new CNI chaining configuration *will not* apply to any pod that is already
running in the cluster. Existing pods will be reachable, and Cilium will
load-balance *to* them, but not *from* them. Policy enforcement will also not
be applied. For these reasons, you must restart these pods so that the chaining
configuration can be applied to them.

The following command can be used to check which pods need to be restarted:

.. code-block:: bash
If you are unsure if a pod is managed by Cilium or not, run ``kubectl get cep``
in the respective namespace and see if the pod is listed.
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
ceps=$(kubectl -n "${ns}" get cep \
-o jsonpath='{.items[*].metadata.name}')
pods=$(kubectl -n "${ns}" get pod \
-o custom-columns=NAME:.metadata.name,NETWORK:.spec.hostNetwork \
| grep -E '\s(<none>|false)' | awk '{print $1}' | tr '\n' ' ')
ncep=$(echo "${pods} ${ceps}" | tr ' ' '\n' | sort | uniq -u | paste -s -d ' ' -)
for pod in $(echo $ncep); do
echo "${ns}/${pod}";
done
done
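Once the affected pods have been identified, they can be restarted by rolling
the workloads that own them, for example (the namespace and Deployment name
below are placeholders):

.. code-block:: shell-session

    # Trigger a rolling restart so that new pods go through the CNI chain.
    kubectl -n example-namespace rollout restart deployment example-deployment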
.. include:: k8s-install-validate.rst

Advanced
========

Enabling security groups for pods (EKS)
---------------------------------------

Cilium can be used alongside the `security groups for pods <https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html>`_
feature of EKS in supported clusters when running in chaining mode. Follow the
instructions below to enable this feature:

.. important::

The following guide requires `jq <https://stedolan.github.io/jq/>`_ and the
`AWS CLI <https://aws.amazon.com/cli/>`_ to be installed and configured.

Make sure that the ``AmazonEKSVPCResourceController`` managed policy is attached
to the IAM role associated with the EKS cluster:

.. code-block:: shell-session
export EKS_CLUSTER_NAME="my-eks-cluster" # Change accordingly
export EKS_CLUSTER_ROLE_NAME=$(aws eks describe-cluster \
--name "${EKS_CLUSTER_NAME}" \
| jq -r '.cluster.roleArn' | awk -F/ '{print $NF}')
aws iam attach-role-policy \
--policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController \
--role-name "${EKS_CLUSTER_ROLE_NAME}"
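Optionally, verify that the policy is now attached to the role. This uses the
same environment variables as above and the already required ``jq``, and the
output should include ``AmazonEKSVPCResourceController``:

.. code-block:: shell-session

    # List the policies attached to the cluster role.
    aws iam list-attached-role-policies \
        --role-name "${EKS_CLUSTER_ROLE_NAME}" \
        | jq -r '.AttachedPolicies[].PolicyName'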

Then, as mentioned above, make sure that the version of the AWS VPC CNI
plugin running in the cluster is up to date:

.. code-block:: shell-session
kubectl -n kube-system get ds/aws-node \
-o jsonpath='{.spec.template.spec.containers[0].image}'
602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.7.10
Next, patch the ``kube-system/aws-node`` DaemonSet in order to enable security
groups for pods:

.. code-block:: shell-session
kubectl -n kube-system patch ds aws-node \
-p '{"spec":{"template":{"spec":{"initContainers":[{"env":[{"name":"DISABLE_TCP_EARLY_DEMUX","value":"true"}],"name":"aws-vpc-cni-init"}],"containers":[{"env":[{"name":"ENABLE_POD_ENI","value":"true"}],"name":"aws-node"}]}}}}'
kubectl -n kube-system rollout status ds aws-node
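If you want to confirm that the patch was applied before relying on the
rollout, you can check that ``ENABLE_POD_ENI`` is now set on the ``aws-node``
container (a quick check using ``kubectl`` JSONPath filtering):

.. code-block:: shell-session

    # Print the value of the ENABLE_POD_ENI environment variable.
    kubectl -n kube-system get ds aws-node \
        -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="ENABLE_POD_ENI")].value}'
    true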
After the rollout is complete, all nodes in the cluster should have the ``vpc.amazonaws.com/has-trunk-attached`` label set to ``true``:

.. code-block:: shell-session
kubectl get nodes -L vpc.amazonaws.com/has-trunk-attached
NAME STATUS ROLES AGE VERSION HAS-TRUNK-ATTACHED
ip-192-168-111-169.eu-west-2.compute.internal Ready <none> 22m v1.19.6-eks-49a6c0 true
ip-192-168-129-175.eu-west-2.compute.internal Ready <none> 22m v1.19.6-eks-49a6c0 true

From this point on, everything should be in place. For details on how to
associate security groups with pods, please refer to the `official documentation <https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html>`_.
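For illustration only, a ``SecurityGroupPolicy`` object typically looks like the
sketch below. The names, labels, and security group ID are placeholders, and the
exact API version and fields should be confirmed against the official
documentation linked above:

.. code-block:: shell-session

    # Associate a security group with pods matching the selector (illustrative values).
    cat <<EOF | kubectl apply -f -
    apiVersion: vpcresources.k8s.aws/v1beta1
    kind: SecurityGroupPolicy
    metadata:
      name: example-sgp
      namespace: example-namespace
    spec:
      podSelector:
        matchLabels:
          app: example
      securityGroups:
        groupIds:
          - sg-0123456789abcdef0
    EOF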

.. include:: next-steps.rst
51 changes: 14 additions & 37 deletions Documentation/gettingstarted/k8s-install-connectivity-test.rst
@@ -1,46 +1,23 @@
Deploy the connectivity test
----------------------------

You can deploy the "connectivity-check" to test connectivity between pods. It is
recommended to create a separate namespace for this.
Run the following command to validate that your cluster has proper network
connectivity:

.. code:: bash
kubectl create ns cilium-test
Deploy the check with:

.. parsed-literal::
kubectl apply -n cilium-test -f \ |SCM_WEB|\/examples/kubernetes/connectivity-check/connectivity-check.yaml
.. code-block:: shell-session
It will deploy a series of deployments which will use various connectivity
paths to connect to each other. Connectivity paths include with and without
service load-balancing and various network policy combinations. The pod name
indicates the connectivity variant and the readiness and liveness gate
indicates success or failure of the test:
cilium connectivity test
.. code-block:: shell-session
The output should be similar to the following one:

$ kubectl get pods -n cilium-test
NAME READY STATUS RESTARTS AGE
echo-a-76c5d9bd76-q8d99 1/1 Running 0 66s
echo-b-795c4b4f76-9wrrx 1/1 Running 0 66s
echo-b-host-6b7fc94b7c-xtsff 1/1 Running 0 66s
host-to-b-multi-node-clusterip-85476cd779-bpg4b 1/1 Running 0 66s
host-to-b-multi-node-headless-dc6c44cb5-8jdz8 1/1 Running 0 65s
pod-to-a-79546bc469-rl2qq 1/1 Running 0 66s
pod-to-a-allowed-cnp-58b7f7fb8f-lkq7p 1/1 Running 0 66s
pod-to-a-denied-cnp-6967cb6f7f-7h9fn 1/1 Running 0 66s
pod-to-b-intra-node-nodeport-9b487cf89-6ptrt 1/1 Running 0 65s
pod-to-b-multi-node-clusterip-7db5dfdcf7-jkjpw 1/1 Running 0 66s
pod-to-b-multi-node-headless-7d44b85d69-mtscc 1/1 Running 0 66s
pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82 1/1 Running 0 65s
pod-to-external-1111-d56f47579-d79dz 1/1 Running 0 66s
pod-to-external-fqdn-allow-google-cnp-78986f4bcf-btjn7 1/1 Running 0 66s
::

.. note::
ℹ️ Monitor aggregation detected, will skip some flow validation steps
✨ [k8s-cluster] Creating namespace for connectivity check...
(...)
---------------------------------------------------------------------------------------------------------------------
📋 Test Report
---------------------------------------------------------------------------------------------------------------------
✅ 69/69 tests successful (0 warnings)

If you deploy the connectivity check to a single node cluster, pods that check multi-node
functionalities will remain in the ``Pending`` state. This is expected since these pods
need at least 2 nodes to be scheduled successfully.
Congratulations! You have a fully functional Kubernetes cluster with Cilium. 🎉
60 changes: 1 addition & 59 deletions Documentation/gettingstarted/k8s-install-default.rst
@@ -225,64 +225,6 @@ pods are failing to be deployed.
was deployed and the installer has automatically restarted them to ensure
all pods get networking provided by Cilium.

Validate Installation
=====================

Check the Status
----------------

To validate the installation, run the ``cilium status`` command:

.. code-block:: shell-session
cilium status
/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Hubble: disabled
\__/¯¯\__/ ClusterMesh: disabled
\__/
DaemonSet cilium Desired: 3, Ready: 3/3, Available: 3/3
Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1
Containers: cilium Running: 3
cilium-operator Running: 1
Image versions cilium quay.io/cilium/cilium:v1.9.4: 3
cilium-operator quay.io/cilium/operator-generic:v1.9.4: 1
Run the Connectivity Test
-------------------------

Run the ``cilium connectivity test`` to validate that your cluster has proper
network connectivity:

.. code-block:: shell-session
cilium connectivity test
✨ [gke_cilium-dev_us-west2-a_32287] Creating namespace for connectivity check...
[...]
---------------------------------------------------------------------------------------------------------------------
🔌 [pod-to-pod] Testing cilium-test/client-77bd7f48dd-5zwkw -> cilium-test/echo-other-node-86774f89b9-xjmkn...
---------------------------------------------------------------------------------------------------------------------
✅ [pod-to-pod] cilium-test/client-77bd7f48dd-5zwkw (10.0.2.188) -> cilium-test/echo-other-node-86774f89b9-xjmkn (10.0.1.125)
---------------------------------------------------------------------------------------------------------------------
🔌 [pod-to-pod] Testing cilium-test/client-77bd7f48dd-5zwkw -> cilium-test/echo-same-node-f789dd8f7-th9f7...
---------------------------------------------------------------------------------------------------------------------
✅ [pod-to-pod] cilium-test/client-77bd7f48dd-5zwkw (10.0.2.188) -> cilium-test/echo-same-node-f789dd8f7-th9f7 (10.0.2.223)
---------------------------------------------------------------------------------------------------------------------
🔌 [pod-to-service] Testing cilium-test/client-77bd7f48dd-5zwkw -> echo-other-node:8080 (ClusterIP)...
---------------------------------------------------------------------------------------------------------------------
✅ [pod-to-service] cilium-test/client-77bd7f48dd-5zwkw (10.0.2.188) -> echo-other-node:8080 (ClusterIP) (echo-other-node:8080)
[...]
---------------------------------------------------------------------------------------------------------------------
📋 Test Report
---------------------------------------------------------------------------------------------------------------------
✅ 9/9 tests successful (0 warnings)
Congratulations! You have a fully functional Kubernetes cluster with Cilium. 🎉
.. include:: k8s-install-validate.rst

.. include:: next-steps.rst
@@ -6,7 +6,7 @@

.. note::

First, make sure you have Helm 3 `installed <https://helm.sh/docs/intro/install/>`_.
Make sure you have Helm 3 `installed <https://helm.sh/docs/intro/install/>`_.
Helm 2 is `no longer supported <https://helm.sh/blog/helm-v2-deprecation-timeline/>`_.

.. only:: stable
34 changes: 20 additions & 14 deletions Documentation/gettingstarted/k8s-install-validate.rst
@@ -1,24 +1,30 @@
Validate the Installation
=========================

You can monitor as Cilium and all required components are being installed:
.. include:: install-cli.rst

.. parsed-literal::
To validate that Cilium has been properly installed, you can run

kubectl -n kube-system get pods --watch
NAME READY STATUS RESTARTS AGE
cilium-operator-cb4578bc5-q52qk 0/1 Pending 0 8s
cilium-s8w5m 0/1 PodInitializing 0 7s
coredns-86c58d9df4-4g7dd 0/1 ContainerCreating 0 8m57s
coredns-86c58d9df4-4l6b2 0/1 ContainerCreating 0 8m57s
.. code-block:: shell-session
It may take a couple of minutes for all components to come up:
cilium status --wait
.. parsed-literal::
The output should be similar to the following one:

cilium-operator-cb4578bc5-q52qk 1/1 Running 0 4m13s
cilium-s8w5m 1/1 Running 0 4m12s
coredns-86c58d9df4-4g7dd 1/1 Running 0 13m
coredns-86c58d9df4-4l6b2 1/1 Running 0 13m
::

/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Hubble: disabled
\__/¯¯\__/ ClusterMesh: disabled
\__/

DaemonSet cilium Desired: 2, Ready: 2/2, Available: 2/2
Deployment cilium-operator Desired: 2, Ready: 2/2, Available: 2/2
Containers: cilium-operator Running: 2
cilium Running: 2
Image versions cilium quay.io/cilium/cilium:v1.9.5: 2
cilium-operator quay.io/cilium/operator-generic:v1.9.5: 2

.. include:: k8s-install-connectivity-test.rst
