v1.7 backports 2020-09-03 #13065

Merged
merged 5 commits, Sep 4, 2020
79 changes: 35 additions & 44 deletions Documentation/gettingstarted/k8s-install-kops.rst
Installation using Kops
***********************

As of the kops 1.9 release, Cilium can be plugged into kops-deployed
clusters as the CNI plugin. This guide provides steps to create a Kubernetes
cluster on AWS using kops and Cilium as the CNI plugin. Note, the kops
deployment will automate several deployment features in AWS by default,
including AutoScaling, Volumes, VPCs, etc.

Kops offers several out-of-the-box configurations of Cilium including :ref:`kubeproxy-free`,
:ref:`ipam_eni`, and a dedicated etcd cluster for Cilium. This guide walks through a basic setup.


Prerequisites
=============

Setting up IAM Group and User
=============================

Assuming you have all the prerequisites, run the following commands to create
the kops user and group:

.. code:: bash

    aws iam create-access-key --user-name kops


kops requires the creation of a dedicated S3 bucket in order to store the
state and representation of the cluster. You will need to provide a unique
bucket name (for example, a reversed FQDN with a short description of the
cluster appended). Also make sure to use the region where
you will be deploying the cluster.

The above steps are sufficient for getting a working cluster installed. Please
consult `kops aws documentation
<https://kops.sigs.k8s.io/getting_started/install/>`_ for more
detailed setup instructions.


Cilium Prerequisites
====================

* Ensure the :ref:`admin_system_reqs` are met, particularly the Linux kernel
and key-value store versions.

The default AMI satisfies the minimum kernel version required by Cilium, which is
what we will use in this guide.


Creating a Cluster
==================

* Note that you will need to specify the ``--master-zones`` and ``--zones`` for
  creating the master and worker nodes. The number of master zones should be
  odd (1, 3, ...) for HA. For simplicity, you can just use one region.
* To keep things simple when following this guide, we will use a gossip-based cluster.
  This means you do not have to create a hosted zone upfront. The cluster ``NAME``
  variable must end with ``k8s.local`` to use the gossip protocol. If creating multiple
  clusters using the same kops user, then make the cluster name unique by adding a
  prefix such as ``com-company-emailid-``.


.. code:: bash

    export NAME=com-company-emailid-cilium.k8s.local
    kops create cluster --state=${KOPS_STATE_STORE} --node-count 3 --topology private --master-zones us-west-2a,us-west-2b,us-west-2c --zones us-west-2a,us-west-2b,us-west-2c --networking cilium --cloud-labels "Team=Dev,Owner=Admin" ${NAME} --yes


You may be prompted to create an SSH public-private key pair.

(Please see :ref:`appendix_kops`)

.. include:: k8s-install-connectivity-test.rst

.. _appendix_kops:


Deleting a Cluster
==================

To undo the dependencies and other deployment features in AWS from the kops
cluster creation, use kops to destroy a cluster *immediately* with the
parameter ``--yes``:

.. code:: bash

    kops delete cluster ${NAME} --yes


Further reading on using Cilium with Kops
=========================================
* See the `kops networking documentation <https://kops.sigs.k8s.io/networking/cilium/>`_ for more information on the
configuration options kops offers.
* See the `kops cluster spec documentation <https://pkg.go.dev/k8s.io/kops/pkg/apis/kops?tab=doc#CiliumNetworkingSpec>`_ for a comprehensive list of all the options.


Appendix: Details of kops flags used in cluster creation
========================================================

The following section explains all the flags used in the create cluster command.

* ``--state=${KOPS_STATE_STORE}`` : kops uses an S3 bucket to store the state and representation of your cluster
* ``--node-count 3`` : Number of worker nodes in the Kubernetes cluster
* ``--topology private`` : Cluster will be created with private topology, meaning all masters/nodes will be launched in a private subnet in the VPC
* ``--master-zones us-west-2a,us-west-2b,us-west-2c`` : Three zones ensure the HA of master nodes, each belonging to a different Availability Zone
* ``--zones us-west-2a,us-west-2b,us-west-2c`` : Zones where the worker nodes will be deployed
* ``--networking cilium`` : Networking CNI plugin to be used - cilium. You can also use ``cilium-etcd``, which will use a dedicated etcd cluster as the key/value store instead of CRDs
* ``--cloud-labels "Team=Dev,Owner=Admin"`` : Labels for your cluster that will be applied to your instances
* ``${NAME}`` : Name of the cluster. Make sure the name ends with ``k8s.local`` for a gossip-based cluster
* ``--yes`` : Apply the changes immediately instead of only previewing them
8 changes: 1 addition & 7 deletions Documentation/gettingstarted/k8s-install-kubespray.rst
Infrastructure Provisioning
===========================

We will use Terraform for provisioning AWS infrastructure.

Configure AWS credentials
-------------------------

Export the variables for your AWS credentials
export AWS_SSH_KEY_NAME="yyy"
export AWS_DEFAULT_REGION="zzz"

Configure Terraform Variables
-----------------------------

Example ``terraform.tfvars`` file:
kube_insecure_apiserver_address = "0.0.0.0"


Apply the configuration
-----------------------

Execute the commands below from the bastion host. If ``kubectl`` isn't installed

You should see that nodes are in ``Ready`` state and Cilium pods are in ``Running`` state.

.. include:: k8s-install-connectivity-test.rst

Delete Cluster
==============
11 changes: 11 additions & 0 deletions test/helpers/wrappers.go
func CurlFail(endpoint string, optionalValues ...interface{}) string {
CurlConnectTimeout, CurlMaxTimeout, endpoint, statsInfo)
}

// CurlFailNoStats does the same as CurlFail() except that it does not print
// the stats info.
func CurlFailNoStats(endpoint string, optionalValues ...interface{}) string {
	if len(optionalValues) > 0 {
		endpoint = fmt.Sprintf(endpoint, optionalValues...)
	}
	return fmt.Sprintf(
		`curl --path-as-is -s -D /dev/stderr --fail --connect-timeout %[1]d --max-time %[2]d %[3]s`,
		CurlConnectTimeout, CurlMaxTimeout, endpoint)
}

// CurlWithHTTPCode returns the string representation of the curl command which
// only outputs the HTTP code returned by its execution against the specified
// endpoint. It takes a variadic optionalValues argument. This is passed on to
48 changes: 18 additions & 30 deletions test/k8sT/Policies.go
package k8sTest

import (
	"context"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"regexp"
var _ = Describe("K8sPolicyTest", func() {
	Context("GuestBook Examples", func() {
		var (
			deployment            = "guestbook_deployment.yaml"
			redisPolicy           = "guestbook-policy-redis.json"
			redisPolicyName       = "guestbook-policy-redis"
			redisPolicyDeprecated = "guestbook-policy-redis-deprecated.json"
})

		waitforPods := func() {
			err := kubectl.WaitforPods(helpers.DefaultNamespace, "-l tier=backend", helpers.HelperTimeout)
			ExpectWithOffset(1, err).Should(BeNil(), "Backend pods are not ready after timeout")

			err = kubectl.WaitforPods(helpers.DefaultNamespace, "-l tier=frontend", helpers.HelperTimeout)
			ExpectWithOffset(1, err).Should(BeNil(), "Frontend pods are not ready after timeout")

			err = kubectl.WaitForServiceEndpoints(helpers.DefaultNamespace, "", "redis-master", helpers.HelperTimeout)
			ExpectWithOffset(1, err).Should(BeNil(), "error waiting for redis-master service to be ready")

			err = kubectl.WaitForServiceEndpoints(helpers.DefaultNamespace, "", "redis-follower", helpers.HelperTimeout)
			ExpectWithOffset(1, err).Should(BeNil(), "error waiting for redis-follower service to be ready")
		}

policyCheckStatus := func(policyCheck string) {
}

		testConnectivitytoRedis := func() {
			webPods, err := kubectl.GetPodsNodes(helpers.DefaultNamespace, "-l app=guestbook")
			ExpectWithOffset(1, err).To(BeNil(), "Error retrieving web pods")
			ExpectWithOffset(1, webPods).ShouldNot(BeEmpty(), "Cannot retrieve web pods")

			cmd := helpers.CurlFailNoStats(`"127.0.0.1/guestbook.php?cmd=set&key=messages&value=Hello"`)
			for pod := range webPods {
				res := kubectl.ExecPodCmd(helpers.DefaultNamespace, pod, cmd)
				ExpectWithOffset(1, res).Should(helpers.CMDSuccess(), "Cannot curl webhook frontend of pod %q", pod)

				var response map[string]interface{}
				err := json.Unmarshal([]byte(res.GetStdOut()), &response)
				ExpectWithOffset(1, err).To(BeNil(), fmt.Sprintf("Error parsing JSON response: %s", res.GetStdOut()))
			}
		}
It("checks policy example", func() {