Delete entire pods in lifecycle environment #44

Merged · 1 commit · May 24, 2018
19 changes: 15 additions & 4 deletions lifecycle/README.md
@@ -10,27 +10,37 @@ of time in a dynamically-scheduled environment in order to exercise:
- Service discovery lifecycle (i.e. updates are honored correctly, doesn't get
out of sync).

## First time setup

[`lifecycle.yml`](lifecycle.yml) creates a `ClusterRole`, which requires your user to have
permission to create cluster-wide RBAC resources.

```bash
kubectl create clusterrolebinding cluster-admin-binding-$USER \
--clusterrole=cluster-admin --user=$(gcloud config get-value account)
```
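
The binding name embeds the local username so each operator gets a distinct binding on a shared cluster. A minimal sketch of the naming alone (`alice` is a hypothetical stand-in for the shell's `$USER`):

```shell
# Hypothetical username standing in for the shell's $USER.
USER=alice
BINDING="cluster-admin-binding-${USER}"
echo "${BINDING}"
```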

## Deploy

Install Conduit service mesh:

```bash
-conduit install | kubectl apply -f -
-conduit dashboard
+conduit install --conduit-namespace conduit-lifecycle | kubectl apply -f -
+conduit dashboard --conduit-namespace conduit-lifecycle
```
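
Every `conduit` command in this walkthrough must agree on the non-default control-plane namespace. A sketch of keeping it in one variable (`echo` stands in for actually running `conduit` here, so nothing touches a cluster):

```shell
# Sketch: one variable so install, dashboard, and inject all agree on the
# namespace. echo stands in for the real conduit binary.
CONDUIT_NS=conduit-lifecycle
for sub in install dashboard; do
  echo "conduit ${sub} --conduit-namespace ${CONDUIT_NS}"
done
```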

Deploy test framework to `lifecycle` namespace:

```bash
-cat lifecycle.yml | conduit inject - | kubectl apply -f -
+cat lifecycle.yml | conduit inject --conduit-namespace conduit-lifecycle - | kubectl apply -f -
```
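
The trailing `-` tells `conduit inject` to read the manifest from stdin. The same pipeline shape with a stand-in filter (`sed` here, purely illustrative of the stdin-to-stdout flow):

```shell
# sed stands in for conduit inject: read YAML on stdin, write modified
# YAML to stdout. The real command adds sidecar containers instead.
echo "kind: Deployment" | sed 's/$/  # sidecar would be injected/'
```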

## Observe

Browse to Grafana:

```bash
-conduit dashboard --show grafana
+conduit dashboard --conduit-namespace conduit-lifecycle --show grafana
```

Tail slow-cooker logs:
@@ -50,4 +60,5 @@ Relevant Grafana dashboards to observe

```bash
kubectl delete ns lifecycle
+kubectl delete ns conduit-lifecycle
```
96 changes: 94 additions & 2 deletions lifecycle/lifecycle.yml
@@ -141,8 +141,100 @@ spec:
            exec \
            /out/bb terminus \
            --grpc-server-port=9090 \
-           --response-text=BANANA \
-           --terminate-after=$(shuf -i 550-650 -n1) # 10 qps * 10 concurrency * 60 seconds / 10 replicas == 600
+           --response-text=BANANA
+           #--terminate-after=$(shuf -i 550-650 -n1) # 10 qps * 10 concurrency * 60 seconds / 10 replicas == 600
        ports:
        - containerPort: 9090
          name: bb-term-grpc
---

#
# Redeploy via kubectl
#
kind: ServiceAccount
apiVersion: v1
metadata:
  name: redeployer
  namespace: lifecycle
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: lifecycle:redeployer
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["delete", "get", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: lifecycle:redeployer
  namespace: lifecycle
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: lifecycle:redeployer
subjects:
- kind: ServiceAccount
  name: redeployer
  namespace: lifecycle
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redeployer
  namespace: lifecycle
data:
  redeployer: |-
    #!/bin/sh

    # give deployment time to fully roll out
    sleep 60

    while true; do
      PODS=$(kubectl -n lifecycle get po --selector=app=bb-terminus -o jsonpath='{.items[*].metadata.name}')

      SPACES=$(echo "${PODS}" | awk -F" " '{print NF-1}')
      POD_COUNT=$(($SPACES+1))
      echo "found ${POD_COUNT} pods"

      # restart each pod every minute
      SLEEP_TIME=$(( 60 / $POD_COUNT ))

      for POD in ${PODS}; do
        kubectl -n lifecycle delete po $POD
        echo "sleeping for ${SLEEP_TIME} seconds..."
        sleep $SLEEP_TIME
      done
    done
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: redeployer
  name: redeployer
  namespace: lifecycle
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redeployer
    spec:
      serviceAccount: redeployer
      containers:
      - image: lachlanevenson/k8s-kubectl:v1.10.3
        imagePullPolicy: IfNotPresent
        name: redeployer
        command:
        - "/data/redeployer"
        volumeMounts:
        - name: redeployer
          mountPath: /data
      volumes:
      - name: redeployer
        configMap:
          name: redeployer
          defaultMode: 0744
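
The redeployer script spreads one full round of pod deletions evenly across a minute. Its pacing arithmetic in isolation (the pod names are stand-ins; in the real script the list comes from `kubectl`):

```shell
# Stand-in pod list; the real script gets this from kubectl with a
# jsonpath query over the bb-terminus pods.
PODS="bb-terminus-0 bb-terminus-1 bb-terminus-2"

# Count pods by counting the spaces between names, as the script does.
SPACES=$(echo "${PODS}" | awk -F" " '{print NF-1}')
POD_COUNT=$((SPACES + 1))

# One full round of deletions per minute, evenly spaced.
SLEEP_TIME=$(( 60 / POD_COUNT ))

echo "${POD_COUNT} pods, ${SLEEP_TIME}s between deletes"
```

Note that if the selector ever matched no pods, `POD_COUNT` would work out to 0 and the division would fail, so a guard would be worthwhile in a hardened version.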