Provide example to test end to end events by tag (akka#748)
* Provide example to test end to end events by tag

IntelliJ run configurations for local and kubernetes deployments
for testing in Amazon/Google.

* WIP

* Last steps, working in minikube
chbatey committed May 13, 2020
1 parent 72fe38f commit d35ccd9
Showing 25 changed files with 987 additions and 2 deletions.
16 changes: 16 additions & 0 deletions .idea/runConfigurations/Main_2551_write.xml

Some generated files are not rendered by default.

16 changes: 16 additions & 0 deletions .idea/runConfigurations/Main_2552_write.xml


16 changes: 16 additions & 0 deletions .idea/runConfigurations/Main_2553_load_.xml


16 changes: 16 additions & 0 deletions .idea/runConfigurations/Main_2554_read.xml


37 changes: 37 additions & 0 deletions build.sbt
@@ -1,8 +1,13 @@
import com.typesafe.sbt.packager.docker._

ThisBuild / resolvers ++= {
if (System.getProperty("override.akka.version") != null) Seq("Akka Snapshots".at("https://repo.akka.io/snapshots/"))
else Seq.empty
}

// make version compatible with docker for publishing example project
ThisBuild / dynverSeparator := "-"

lazy val root = (project in file("."))
.enablePlugins(Common, ScalaUnidocPlugin)
.disablePlugins(SitePlugin)
@@ -43,6 +48,38 @@ lazy val cassandraBundle = (project in file("cassandra-bundle"))
target in assembly := target.value / "bundle" / "akka" / "persistence" / "cassandra" / "launcher",
assemblyJarName in assembly := "cassandra-bundle.jar")

// Used for testing events by tag in various environments
lazy val endToEndExample = (project in file("example"))
.dependsOn(core)
.settings(libraryDependencies ++= Dependencies.exampleDependencies, publish / skip := true)
.settings(
dockerBaseImage := "openjdk:8-jre-alpine",
dockerCommands :=
dockerCommands.value.flatMap {
case ExecCmd("ENTRYPOINT", args @ _*) => Seq(Cmd("ENTRYPOINT", args.mkString(" ")))
case v => Seq(v)
},
dockerExposedPorts := Seq(8080, 8558, 2552),
dockerUsername := Some("kubakka"),
dockerUpdateLatest := true,
// update if deploying to somewhere that can't see Docker Hub
//dockerRepository := Some("some-registry"),
dockerCommands ++= Seq(
Cmd("USER", "root"),
Cmd("RUN", "/sbin/apk", "add", "--no-cache", "bash", "bind-tools", "busybox-extras", "curl", "iptables"),
Cmd(
"RUN",
"/sbin/apk",
"add",
"--no-cache",
"jattach",
"--repository",
"http://dl-cdn.alpinelinux.org/alpine/edge/community/"),
Cmd("RUN", "chgrp -R 0 . && chmod -R g=u .")),
// Docker image is only for running in k8s
javaOptions in Universal ++= Seq("-J-Dconfig.resource=kubernetes.conf"))
.enablePlugins(DockerPlugin, JavaAppPackaging)

lazy val dseTest =
(project in file("dse-test"))
.dependsOn(core % "test->test")
101 changes: 101 additions & 0 deletions example/README.md
@@ -0,0 +1,101 @@
# End to end example

This is for testing events by tag end to end.

All events are tagged with a configurable number of tags: `tag-1`, `tag-2`, etc.

There are N processors, each processing a configured number of tags.

The write side uses the number of processors multiplied by the tags per processor as the total number of tags.
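
The tag layout described above can be sketched as follows. The object and method names here are hypothetical, chosen for illustration rather than taken from the example's code:

```scala
// Hypothetical sketch of the tag layout: total tags = processors * tags per
// processor, and each processor handles a contiguous slice of the tag space.
object TagLayout {

  // All tag names the write side will use, e.g. tag-1 .. tag-6
  def tags(processors: Int, tagsPerProcessor: Int): Vector[String] = {
    val total = processors * tagsPerProcessor
    (1 to total).map(i => s"tag-$i").toVector
  }

  // The slice of tags a given processor (0-based) is responsible for
  def tagsFor(processor: Int, processors: Int, tagsPerProcessor: Int): Vector[String] =
    tags(processors, tagsPerProcessor)
      .slice(processor * tagsPerProcessor, (processor + 1) * tagsPerProcessor)
}
```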

There are three roles:

* write - run the persistent actors in sharding
* load - generate load to the persistent actors
* read - run the sharded daemon set to read the tagged events

The read side periodically publishes latency statistics to distributed pub-sub. These are currently just logged.

## Validation

When everything is running you should see logs on the read side with the estimated latency.
The latency is calculated from the event timestamp, so if you restart without a saved offset the latency will look very high!
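
A minimal sketch of that latency estimate, assuming the event carries the wall-clock timestamp taken when it was persisted (the event type and field name are illustrative, not the example's actual classes):

```scala
// Hypothetical event type carrying the write-side timestamp
final case class TaggedEvent(timestampMillis: Long)

object Latency {
  // Estimated end-to-end latency: read-side clock minus write-side timestamp.
  // Replaying old events (e.g. after restarting without an offset) yields old
  // timestamps and therefore very large apparent latencies.
  def latencyMillis(event: TaggedEvent, nowMillis: Long): Long =
    nowMillis - event.timestampMillis
}
```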


## Running locally

A Cassandra cluster must be available on `localhost:9042`.

The first node can be run with the default ports, but you must provide a role and use the local configuration, e.g.:

`sbt -Dconfig.resource=local.conf -Dakka.cluster.roles.0=write run`

Each subsequent node needs its Akka Management and remoting ports overridden. The second node should use port 2552 for remoting
and port 8552 for management, as these are configured in bootstrap.

`sbt -Dakka.remote.artery.canonical.port=2552 -Dakka.management.http.port=8552 -Dconfig.resource=local.conf -Dakka.cluster.roles.0=write run`

Then add at least one load node:

`sbt -Dakka.remote.artery.canonical.port=2553 -Dakka.management.http.port=8553 -Dconfig.resource=local.conf -Dakka.cluster.roles.0=load run`

And finally at least one read node:

`sbt -Dakka.remote.artery.canonical.port=2554 -Dakka.management.http.port=8554 -Dconfig.resource=local.conf -Dakka.cluster.roles.0=read -Dakka.cluster.roles.1=report run`

IntelliJ run configurations are included.

## Running inside a Kubernetes Cluster

Configuration and sbt-native-packager support are included for running in Kubernetes for larger-scale tests.

There are three deployments in the `kubernetes` folder, one for each role. They all join the same cluster
and use the same image. They have `imagePullPolicy: Never` set for minikube; remove this for a real cluster.


### Running in minikube

* Create the Cassandra cluster; this also includes a service:

`kubectl apply -f kubernetes/cassandra.yaml`

* Create the write side:

`kubectl apply -f kubernetes/write.yaml`

* Generate some load:

`kubectl apply -f kubernetes/load.yaml`

* Start the event processors; these also log the aggregated latency every 10s:

`kubectl apply -f kubernetes/read.yaml`

If everything is working the read side should get logs like this:

```
Read side Count: 1 Max: 100 p99: 100 p50: 100
Read side Count: 1 Max: 97 p99: 97 p50: 97
Read side Count: 3 Max: 101 p99: 101 p50: 100
```

To increase the load, edit the values in `common.conf`, e.g. adjust the load-tick duration.
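
A hypothetical `common.conf` fragment showing the kind of knob involved; the exact key names here are assumptions, so check `common.conf` in this folder for the real settings:

```hocon
# Illustrative only: key names are assumptions, see common.conf for the real ones
example {
  # how often each load node sends a batch; a shorter tick means more load
  load-tick-duration = 1s
}
```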


### Running in a real Kubernetes cluster

#### Publish to a registry the cluster can access, e.g. Docker Hub with the `kubakka` user

The app image must be in a registry the cluster can see.
The `build.sbt` uses Docker Hub and the `kubakka` user. Update this if your cluster can't
access Docker Hub.

To push an image to Docker Hub, run:

`sbt docker:publish`

Then remove `imagePullPolicy: Never` from the deployments.

#### Configure Cassandra

Update `kubernetes.conf` to point to a Cassandra cluster, e.g. via a Kubernetes service.
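
For example, assuming the DataStax Java driver's standard contact-point settings are what `kubernetes.conf` exposes (check the file for the keys this example actually uses), pointing at the `cassandra` service from `cassandra.yaml` might look like:

```hocon
# Illustrative fragment, assuming driver-level settings; verify against kubernetes.conf
datastax-java-driver {
  basic.contact-points = ["cassandra.default.svc.cluster.local:9042"]
  basic.load-balancing-policy.local-datacenter = "datacenter1"
}
```
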
44 changes: 44 additions & 0 deletions example/kubernetes/cassandra.yaml
@@ -0,0 +1,44 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: cassandra
name: cassandra
spec:
replicas: 1
selector:
matchLabels:
app: cassandra

template:
metadata:
labels:
app: cassandra
spec:
containers:
- name: cassandra
image: cassandra:3.11
ports:
- name: native
containerPort: 9042
protocol: TCP
volumeMounts:
- mountPath: /var/lib/cassandra/
name: data
volumes:
- name: data
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: cassandra
spec:
clusterIP: None
selector:
app: cassandra
ports:
- protocol: TCP
port: 9042
targetPort: 9042

47 changes: 47 additions & 0 deletions example/kubernetes/load.yaml
@@ -0,0 +1,47 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: apc-load
cluster: apc-example
name: apc-load
spec:
replicas: 1
selector:
matchLabels:
app: apc-load
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate

template:
metadata:
labels:
app: apc-load
cluster: apc-example
spec:
containers:
- name: apc-load
# remove for real clusters, useful for minikube
imagePullPolicy: Never
image: kubakka/endtoendexample:latest
ports:
- name: management
containerPort: 8558
protocol: TCP
- name: http
containerPort: 8080
protocol: TCP
env:
- name: ROLE
value: load
readinessProbe:
httpGet:
path: /ready
port: management
livenessProbe:
httpGet:
path: /alive
port: management
23 changes: 23 additions & 0 deletions example/kubernetes/rbac.yaml
@@ -0,0 +1,23 @@
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods"]
verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: read-pods
subjects:
# Uses the default service account.
# Consider creating a dedicated service account to run your
# Akka Cluster services and binding the role to that one.
- kind: ServiceAccount
name: default
roleRef:
kind: Role
name: pod-reader
apiGroup: rbac.authorization.k8s.io
47 changes: 47 additions & 0 deletions example/kubernetes/read.yaml
@@ -0,0 +1,47 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: apc-read
cluster: apc-example
name: apc-read
spec:
replicas: 1
selector:
matchLabels:
app: apc-read
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate

template:
metadata:
labels:
app: apc-read
cluster: apc-example
spec:
containers:
- name: apc-read
# remove for real clusters, useful for minikube
imagePullPolicy: Never
image: kubakka/endtoendexample:latest
ports:
- name: management
containerPort: 8558
protocol: TCP
- name: http
containerPort: 8080
protocol: TCP
env:
- name: ROLE
value: read
readinessProbe:
httpGet:
path: /ready
port: management
livenessProbe:
httpGet:
path: /alive
port: management
