adding notes to MD files
scotty-c committed Mar 25, 2019
1 parent 58b5e95 commit 7f56fb4
Showing 7 changed files with 188 additions and 30 deletions.
84 changes: 84 additions & 0 deletions advance-application-routing-with-istio/script.sh
@@ -0,0 +1,84 @@
#!/bin/bash

if [[ "$OSTYPE" == "linux-gnu" ]]; then
OS="linux"
ARCH="linux-amd64"
elif [[ "$OSTYPE" == "darwin"* ]]; then
OS="osx"
ARCH="darwin-amd64"
else
echo "Unsupported OS: $OSTYPE" >&2
exit 1
fi

ISTIO_VERSION=1.0.4
HELM_VERSION=2.11.0

check_tiller () {
POD=$(kubectl get pods --all-namespaces | grep tiller | awk '{print $2}' | head -n 1)
kubectl get pods -n kube-system "$POD" -o jsonpath="Name: {.metadata.name} Status: {.status.phase}" 2>/dev/null | grep -q Running
}

pre_reqs () {
curl -sL "https://github.com/istio/istio/releases/download/$ISTIO_VERSION/istio-$ISTIO_VERSION-$OS.tar.gz" | tar xz
if [ ! -f /usr/local/bin/istioctl ]; then
echo "Installing istioctl binary"
chmod +x ./istio-$ISTIO_VERSION/bin/istioctl
sudo mv ./istio-$ISTIO_VERSION/bin/istioctl /usr/local/bin/istioctl
fi

if [ ! -f /usr/local/bin/helm ]; then
echo "Installing helm binary"
curl -sL "https://storage.googleapis.com/kubernetes-helm/helm-v$HELM_VERSION-$ARCH.tar.gz" | tar xz
chmod +x "$ARCH/helm"
sudo mv "$ARCH/helm" /usr/local/bin/
fi
}

install_tiller () {
echo "Checking if tiller is running"
check_tiller
if [ $? -eq 0 ]; then
echo "Tiller is installed and running"
else
echo "Deploying tiller to the cluster"
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: tiller
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: tiller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: tiller
namespace: kube-system
EOF
helm init --service-account tiller
fi
until check_tiller; do
echo "Waiting for tiller to be ready"
sleep 30
done

}

install () {
echo "Deplying istio"

helm install istio-$ISTIO_VERSION/install/kubernetes/helm/istio --name istio --namespace istio-system \
--set global.controlPlaneSecurityEnabled=true \
--set grafana.enabled=true \
--set tracing.enabled=true \
--set kiali.enabled=true
}

pre_reqs
install_tiller
install
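The readiness loop in `install_tiller` is a generic poll-until-ready pattern. Here is a minimal, self-contained sketch of the same pattern with a stub standing in for `check_tiller`, so it runs without a cluster (the stub and its threshold are illustrative, not part of the original script):

```shell
#!/bin/bash
# Stub readiness check: pretends the pod reports Running on the third poll.
attempts=0
check_ready() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

# Poll until the check succeeds; the real script sleeps 30s between polls.
until check_ready; do
  echo "Waiting for tiller to be ready"
  sleep 0
done
echo "ready after $attempts polls"
```

Because `until` re-runs its condition on every iteration, the loop exits as soon as the check passes.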
9 changes: 9 additions & 0 deletions deploying-kubernetes-on-azure/code.md
@@ -1,8 +1,15 @@
# Deploying Kubernetes on Azure

In this module we are going to install Kubernetes on Azure. The first thing we
need to do is create a resource group.

Please note you do not need to use `eastus`; you can create the resource group in any region.

## Create a resource group
`az group create --name k8s --location eastus`

Next we are going to create the AKS cluster. If you are using a trial account, you will need to change the machine size to a `v1` SKU.

## Create your cluster
```
az aks create --resource-group k8s \
@@ -13,6 +20,8 @@ az aks create --resource-group k8s \
    --node-vm-size Standard_DS2_v2
```

Next, install the `kubectl` binary if you don't already have it. If you are using Cloud Shell you can skip this step, as it is already installed.

## If you don't have the kubectl binary installed
`az aks install-cli`

11 changes: 11 additions & 0 deletions ingress-controller/code.md
@@ -1,11 +1,21 @@
# Ingress controller

In this module we are going to deploy the ingress controller that Azure provides out of the box.
This is not meant for production use, as it runs as a single pod. To learn more about the restrictions
please read [here](https://docs.microsoft.com/en-us/azure/aks/http-application-routing/?WT.mc_id=aksworkshop-github-sccoulto)

To enable the add-on:

## Enable the addon
`az aks enable-addons --resource-group k8s --name k8s --addons http_application_routing`

The add-on creates a public DNS zone that the ingress rules use. To get that DNS name:

## Get DNS name
`az aks show --resource-group k8s --name k8s --query addonProfiles.httpApplicationRouting.config.HTTPApplicationRoutingZoneName -o tsv`
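The zone name this returns becomes the suffix of your application's hostname. A small sketch, using a made-up zone name in place of the real `az aks show` output, of how an ingress host would be assembled:

```shell
# Hypothetical zone name standing in for the real az aks show output.
DNS_ZONE="a1b2c3d4e5.eastus.aksapp.io"

# The ingress host is just <app name>.<zone name>.
HOST="webapp.${DNS_ZONE}"
echo "$HOST"
```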

Now we deploy our application using the ingress DNS name.

## Deploying our application
```
#!/bin/bash
@@ -31,6 +41,7 @@ spec:
ports:
- containerPort: 3000
hostPort: 3000
EOF
cat <<EOF | kubectl apply -f -
apiVersion: v1
8 changes: 8 additions & 0 deletions pods-services-deployments/code.md
@@ -1,5 +1,7 @@
# Pods, services and deployments

In this module we are going to create our first deployment; below is the code to do so.

## Our Deployment
```
cat <<EOF | kubectl apply -f -
@@ -26,6 +28,12 @@ spec:
EOF
```

Now expose the service so you can reach the application from the outside world.

## Expose our service
`kubectl expose deployment webapp-deployment --type=LoadBalancer`
`kubectl get service`


`kubectl get service` will give you the public IP address for the application. It will then be available
at `http://<Your public ip>:3000`
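If you want to script the IP lookup instead of reading the table by eye, the EXTERNAL-IP column can be parsed. A sketch against canned `kubectl get service` output (the values below are illustrative; no cluster is needed to run it):

```shell
# Canned kubectl get service output; the real output comes from your cluster.
OUTPUT='NAME                TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
webapp-deployment   LoadBalancer   10.0.12.34   52.170.1.2    3000:31234/TCP'

# Grab column 4 (EXTERNAL-IP) of the service's row.
IP=$(echo "$OUTPUT" | awk '/^webapp-deployment/ {print $4}')
echo "http://$IP:3000"
```

A more direct option against a live cluster is a jsonpath query such as `kubectl get service webapp-deployment -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`.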
8 changes: 8 additions & 0 deletions rbac-roles-service-accounts/cleanup.sh
@@ -0,0 +1,8 @@
#!/bin/bash

set -ex

kubectl delete -n webapp-namespace deployments.apps webapp-deployment
kubectl delete -n webapp-namespace rolebindings.rbac.authorization.k8s.io webapp-role-binding
kubectl delete -n webapp-namespace roles.rbac.authorization.k8s.io webapp-role
kubectl delete -n webapp-namespace serviceaccounts webapp-service-account
87 changes: 57 additions & 30 deletions rbac-roles-service-accounts/code.md
@@ -1,42 +1,56 @@
# Rbac, roles and service accounts
# Rbac roles and service accounts

## Create a namespace
## Namespaces

Let's look at the default namespaces available to us.
We do this by issuing `kubectl get namespaces`.
In the last lab we deployed to the default namespace, as we did not specify one.
Kubernetes will place any pods in the default namespace unless another one is specified.

For the rest of the lab we will use a new namespace, which we create by issuing
```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: webapp-namespace
name: webapp-namespace
EOF
```

## Create a service account
Then if we check our namespaces again via `kubectl get namespaces`, we should see the new namespace.
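If you want to check for the namespace from a script, you can grep the listing. A minimal sketch against canned output (the listing below is illustrative; no cluster needed):

```shell
# Canned kubectl get namespaces output for illustration.
NAMESPACES='NAME               STATUS   AGE
default            Active   10d
kube-system        Active   10d
webapp-namespace   Active   1m'

# grep -q succeeds silently if the namespace row is present.
if echo "$NAMESPACES" | grep -q '^webapp-namespace'; then
  RESULT="namespace exists"
fi
echo "$RESULT"
```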

## Cluster roles, Service accounts and Role bindings

Now that we have our namespace set up, we are going to create a service account for it and give it access to that namespace only.

```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: webapp-service-account
  namespace: webapp-namespace
name: webapp-service-account
namespace: webapp-namespace
EOF
```
Then we will create a role granting read access (get, list, watch) to pods in the namespace

## Create a role
```
cat <<EOF | kubectl apply -f -
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: webapp-role
  namespace: webapp-namespace
name: webapp-role
namespace: webapp-namespace
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods", "pods/log"]
verbs: ["get", "list", "watch"]
EOF
```
Then we will create a role binding to tie it all together

## Create a role binding
```
cat <<EOF | kubectl apply -f -
kind: RoleBinding
@@ -55,10 +69,11 @@ roleRef:
EOF
```

## Deployment
Now let's deploy our application into our new namespace.

```
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: webapp-deployment
@@ -67,7 +82,7 @@ spec:
selector:
matchLabels:
app: webapp
replicas: 3
replicas: 1
template:
metadata:
labels:
@@ -81,27 +96,39 @@ spec:
hostPort: 3000
EOF
```
## Get credentials
```
#!/bin/bash

set -e
Then we can check our pods while simulating the privileges of the service account. To do that we will set up a kubeconfig that uses only our service account.
We will first get the secret for that service account:
`SECRET_NAME=$(kubectl get sa webapp-service-account --namespace webapp-namespace -o json | jq -r .secrets[].name)`

Then extract the CA certificate:
`kubectl get secret --namespace webapp-namespace "${SECRET_NAME}" -o json | jq -r '.data["ca.crt"]' | base64 --decode > ca.crt`

SERVICE_ACCOUNT_NAME="webapp-service-account"
NAMESPACE="webapp-namespace"
KUBECFG_FILE_NAME="admin.conf"
Then get the user token from our secret:
`USER_TOKEN=$(kubectl get secret --namespace webapp-namespace "${SECRET_NAME}" -o json | jq -r '.data["token"]' | base64 --decode)`
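As an aside, the `base64 --decode` step simply turns the secret's stored data back into the raw token. A tiny self-contained illustration, with a made-up encoded value in place of the real secret:

```shell
# "c2VjcmV0LXRva2Vu" is base64 for the made-up token "secret-token".
TOKEN_B64="c2VjcmV0LXRva2Vu"
USER_TOKEN=$(echo "$TOKEN_B64" | base64 --decode)
echo "$USER_TOKEN"
```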

SECRET_NAME=$(kubectl get sa "${SERVICE_ACCOUNT_NAME}" --namespace="${NAMESPACE}" -o json | jq -r .secrets[].name)
kubectl get secret --namespace="${NAMESPACE}" "${SECRET_NAME}" -o json | jq -r '.data["ca.crt"]' | base64 --decode > ca.crt
USER_TOKEN=$(kubectl get secret --namespace webapp-namespace "${SECRET_NAME}" -o json | jq -r '.data["token"]' | base64 --decode)
Now we will set up our kubeconfig file:
```
context=$(kubectl config current-context)
CLUSTER_NAME=$(kubectl config get-contexts "$context" | awk '{print $3}' | tail -n 1)
ENDPOINT=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"${CLUSTER_NAME}\")].cluster.server}")
kubectl config set-cluster "${CLUSTER_NAME}" --kubeconfig="${KUBECFG_FILE_NAME}" --server="${ENDPOINT}" --certificate-authority=ca.crt --embed-certs=true
kubectl config set-credentials "${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-${CLUSTER_NAME}" --kubeconfig="${KUBECFG_FILE_NAME}" --token="${USER_TOKEN}"
kubectl config set-context "${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-${CLUSTER_NAME}" --kubeconfig="${KUBECFG_FILE_NAME}" --cluster="${CLUSTER_NAME}" --user="${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-${CLUSTER_NAME}" --namespace="${NAMESPACE}"
kubectl config use-context "${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-${CLUSTER_NAME}" --kubeconfig="${KUBECFG_FILE_NAME}"
kubectl config set-cluster "${CLUSTER_NAME}" --kubeconfig=admin.conf --server="${ENDPOINT}" --certificate-authority=ca.crt --embed-certs=true
kubectl config set-credentials "webapp-service-account-webapp-namespace-${CLUSTER_NAME}" --kubeconfig=admin.conf --token="${USER_TOKEN}"
kubectl config set-context "webapp-service-account-webapp-namespace-${CLUSTER_NAME}" --kubeconfig=admin.conf --cluster="${CLUSTER_NAME}" --user="webapp-service-account-webapp-namespace-${CLUSTER_NAME}" --namespace webapp-namespace
kubectl config use-context "webapp-service-account-webapp-namespace-${CLUSTER_NAME}" --kubeconfig="${KUBECFG_FILE_NAME}"
```
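The long context and credential names in those commands are just the service account, namespace, and cluster name joined with dashes. A sketch with a placeholder cluster name (the real script derives it from `kubectl config get-contexts`):

```shell
SERVICE_ACCOUNT_NAME="webapp-service-account"
NAMESPACE="webapp-namespace"
CLUSTER_NAME="my-aks-cluster"   # placeholder for the derived cluster name

# Same naming scheme the set-credentials/set-context commands use.
CONTEXT_NAME="${SERVICE_ACCOUNT_NAME}-${NAMESPACE}-${CLUSTER_NAME}"
echo "$CONTEXT_NAME"
```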
Note: if you want to cheat, there is a shell script [here](script.sh).

We will then load the file in our terminal
`export KUBECONFIG=admin.conf`

Now let's check our permissions by seeing if we can list pods in the default namespace
`kubectl get pods`

Now let's check our namespace
`kubectl get pods --namespace=webapp-namespace`

(Check [here](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-subjects) for more info about rbac subjects)

Now we have limited the blast radius of our application to the namespace it resides in,
so we cannot leak ConfigMaps or Secrets from applications outside this namespace.
11 changes: 11 additions & 0 deletions statefull-sets/code.md
@@ -1,5 +1,13 @@
# Static claims

In this module we are going to look at creating a stateful set in Kubernetes.
A stateful set in Kubernetes can attach a cloud disk to a pod; in this case we are using an Azure disk.

Azure ships with two disk types for stateful sets out of the box. You can see these by issuing the command
`kubectl get sc`

The next thing we need to do is create a PVC (persistent volume claim).

## Creating a static claim
```
cat <<EOF | kubectl apply -f -
@@ -16,6 +24,9 @@ spec:
EOF
```

Once the PVC is created we can bind a pod to use it.
Below we mount the volume into the pod at `/usr/share/nginx/html`.

## Using the claim
```
cat <<EOF | kubectl apply -f -
