Introduce lifecycle test environment #41

Merged
merged 1 commit on May 23, 2018
6 changes: 6 additions & 0 deletions README.md
@@ -14,4 +14,10 @@ This repo contains subdirectories with examples of how to use the
A microservice application that allows users to vote for their favorite emoji,
and tracks votes received on a leaderboard. May the best emoji win.

## Lifecycle

* [`lifecycle/`](lifecycle/)

Production testing of the proxy's service discovery & caching.

[conduit-logo]: https://user-images.githubusercontent.com/9226/33585569-c620a100-d919-11e7-83b6-a78f6e2683ec.png
53 changes: 53 additions & 0 deletions lifecycle/README.md
@@ -0,0 +1,53 @@
# Conduit lifecycle test configuration

Production testing of the proxy's service discovery & caching.

The goal of this test suite is to run an outbound proxy for a prolonged period
of time in a dynamically-scheduled environment in order to exercise:
- Route resource lifecycle (i.e. routes are properly evicted)
- Telemetry resource lifecycle (i.e. Prometheus can run steadily for a long
  time, and the proxy doesn't leak memory in its exporter)
- Service discovery lifecycle (i.e. updates are honored correctly, and the
  proxy doesn't get out of sync)

## Deploy

Install the Conduit service mesh:

```bash
conduit install | kubectl apply -f -
conduit dashboard
```
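
Optionally, confirm the control plane came up before continuing. This is a
quick sanity check, assuming Conduit's default control-plane namespace,
`conduit`:

```bash
# Assumes the control plane was installed into the default `conduit` namespace.
kubectl -n conduit get pods
```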

Deploy the test framework to the `lifecycle` namespace:

```bash
cat lifecycle.yml | conduit inject - | kubectl apply -f -
```
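
As a quick, optional sanity check, verify that the test workloads are running
and that each pod carries the injected Conduit proxy sidecar next to the app
container:

```bash
# Pod status in the lifecycle namespace.
kubectl -n lifecycle get pods

# One line per pod: pod name, then the names of its containers
# (the injected proxy should appear alongside the app container).
kubectl -n lifecycle get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].name}{"\n"}{end}'
```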

## Observe

Browse to Grafana:

```bash
conduit dashboard --show grafana
```

Tail slow-cooker logs:

```bash
kubectl -n lifecycle logs -f $(
kubectl -n lifecycle get po --selector=job-name=slow-cooker -o jsonpath='{.items[*].metadata.name}'
) slow-cooker
```
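
The slow-cooker job also exposes its own metrics on port 9990 (via
`-metric-addr`). Assuming slow_cooker serves them at `/metrics` in the
Prometheus text format, you can peek at them directly:

```bash
# Forward the slow-cooker metrics port locally, then sample the output.
kubectl -n lifecycle port-forward $(
kubectl -n lifecycle get po --selector=job-name=slow-cooker -o jsonpath='{.items[0].metadata.name}'
) 9990:9990 &
sleep 2 # give the port-forward a moment to establish
curl -s http://localhost:9990/metrics | head
```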

Relevant Grafana dashboards to observe:
- `Conduit Deployment`, for route lifecycle and service discovery lifecycle
- `Prometheus 2.0 Stats`, for telemetry resource lifecycle
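
For a dashboard-free spot check of memory usage (for example, watching for a
slow leak in the proxy container), `kubectl top` also works, assuming your
cluster has Heapster or metrics-server available:

```bash
# Per-container CPU/memory for the lifecycle pods; requires cluster metrics support.
kubectl -n lifecycle top pod --containers
```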


## Teardown

```bash
kubectl delete ns lifecycle
```
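
If you also want to remove the Conduit control plane installed earlier, one
option is to pipe the same install manifest to `kubectl delete` (only do this
if nothing else on the cluster uses Conduit):

```bash
# Removes the entire Conduit control plane.
conduit install | kubectl delete -f -
```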
148 changes: 148 additions & 0 deletions lifecycle/lifecycle.yml
@@ -0,0 +1,148 @@
#
# Conduit lifecycle test configuration
#
# slow_cooker ->
#   HTTP 1.1 ->
#     bb point-to-point ->
#       gRPC ->
#         bb terminus
#

kind: Namespace
apiVersion: v1
metadata:
  name: lifecycle

#
# slow_cooker
#
---
apiVersion: batch/v1
kind: Job
metadata:
  name: slow-cooker
  namespace: lifecycle
spec:
  template:
    metadata:
      name: slow-cooker
    spec:
      containers:
      - image: buoyantio/slow_cooker:1.1.0
        imagePullPolicy: IfNotPresent
        name: slow-cooker
        command:
        - "/bin/bash"
        args:
        - "-c"
        - |
          sleep 30 # wait for pods to start
          slow_cooker \
            -qps 10 \
            -concurrency 10 \
            -interval 30s \
            -metric-addr 0.0.0.0:9990 \
            http://bb-p2p.lifecycle.svc.cluster.local:8080
        ports:
        - name: slow-cooker
          containerPort: 9990
      restartPolicy: OnFailure
---

#
# bb point-to-point
#
kind: Service
apiVersion: v1
metadata:
  name: bb-p2p
  namespace: lifecycle
spec:
  clusterIP: None
  selector:
    app: bb-p2p
  ports:
  - name: bb-p2p-http1
    port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: bb-p2p
  name: bb-p2p
  namespace: lifecycle
spec:
  replicas: 10
  template:
    metadata:
      labels:
        app: bb-p2p
    spec:
      containers:
      - image: buoyantio/bb:v0.0.3
        imagePullPolicy: IfNotPresent
        name: bb-p2p
        command:
        - "/bin/bash"
        args:
        - "-c"
        - |
          exec \
            /out/bb point-to-point-channel \
            --grpc-downstream-server=bb-terminus.lifecycle.svc.cluster.local:9090 \
            --h1-server-port=8080
        ports:
        - containerPort: 8080
          name: bb-p2p-http1
---

#
# bb terminus
#
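#
# Note: each bb-terminus process exits after serving roughly 550-650 requests
# (see --terminate-after below), so terminus containers restart continuously;
# this ongoing churn is intended to exercise the proxy's discovery & caching
# over a long run.
#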
kind: Service
apiVersion: v1
metadata:
  name: bb-terminus
  namespace: lifecycle
spec:
  clusterIP: None
  selector:
    app: bb-terminus
  ports:
  - name: bb-term-grpc
    port: 9090
    targetPort: 9090
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: bb-terminus
  name: bb-terminus
  namespace: lifecycle
spec:
  replicas: 10
  template:
    metadata:
      labels:
        app: bb-terminus
    spec:
      containers:
      - image: buoyantio/bb:v0.0.3
        imagePullPolicy: IfNotPresent
        name: bb-terminus
        command:
        - "/bin/bash"
        args:
        - "-c"
        - |
          exec \
            /out/bb terminus \
            --grpc-server-port=9090 \
            --response-text=BANANA \
            --terminate-after=$(shuf -i 550-650 -n1) # 10 qps * 10 concurrency * 60 seconds / 10 replicas == 600
        ports:
        - containerPort: 9090
          name: bb-term-grpc