docs/eventing/samples/kafka/README.md

The following examples will help you understand how to use the different Apache
Kafka components for Knative.

## Prerequisites

All examples require:

### Setting up Apache Kafka

If you want to run the Apache Kafka cluster on Kubernetes, the simplest option
is to install it by using [Strimzi](https://strimzi.io).

1. Create a namespace for your Apache Kafka installation, like `kafka`:
```shell
| sed 's/namespace: .*/namespace: kafka/' \
| kubectl -n kafka apply -f -
```
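
The full pattern here is: download the Strimzi cluster operator install file, rewrite its namespace to `kafka`, and apply it. A sketch follows, assuming a particular Strimzi release (the version in the URL is an assumption; check strimzi.io for the current one):

```shell
# Download the Strimzi cluster operator install file, rewrite its namespace to
# kafka, and apply it (the release version below is an assumption).
curl -L "https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.16.2/strimzi-cluster-operator-0.16.2.yaml" \
  | sed 's/namespace: .*/namespace: kafka/' \
  | kubectl -n kafka apply -f -
```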
1. Describe the size of your Apache Kafka installation in `kafka.yaml`, like:
```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
# ... remainder of the Kafka cluster spec elided ...
```

1. Deploy the Apache Kafka cluster described by `kafka.yaml`:

```shell
$ kubectl apply -n kafka -f kafka.yaml
```

Contributor: They have to first download this file? Wonder if it would be better to have this use a here document? Or maybe add a link to this file? Also, the step says this will create a namespace, but it actually installs Kafka?

Contributor (author): In the previous step, which displays the YAML, I have added "Describe the size of your Apache Kafka installation in `kafka.yaml`"; in the next step we apply this YAML.

This will install a small, non-production Apache Kafka cluster. To verify your
installation, check that the Strimzi pods are all up in the `kafka` namespace:

```shell
$ kubectl get pods -n kafka
NAME                                          READY   STATUS    RESTARTS   AGE
my-cluster-entity-operator-65995cf856-ld2zp   3/3     Running   0          102s
my-cluster-kafka-0                            2/2     Running   0          2m8s
my-cluster-zookeeper-0                        2/2     Running   0          2m39s
my-cluster-zookeeper-1                        2/2     Running   0          2m49s
my-cluster-zookeeper-2                        2/2     Running   0          2m59s
strimzi-cluster-operator-77555d4b69-sbrt4     1/1     Running   0          3m14s
```

> NOTE: For production-ready installs, check [Strimzi](https://strimzi.io).

### Installation script

If you want to install the latest version of Strimzi in just one step, we have
a [script](./kafka_setup.sh) for your convenience, which performs exactly the
steps listed above:

```shell
$ ./kafka_setup.sh
```

## Examples of Apache Kafka and Knative

A number of different examples showing the `KafkaSource`, the `KafkaChannel`,
and the `KafkaBinding` can be found here:

- [`KafkaSource` to `Service`](./source/README.md)
- [`KafkaChannel` and Broker](./channel/README.md)
- [`KafkaBinding`](./binding/README.md)
docs/eventing/samples/kafka/binding/README.md

KafkaBinding is responsible for injecting Kafka bootstrap connection information
into a Kubernetes resource that embeds a PodSpec (as `spec.template.spec`). This
enables easy bootstrapping of a Kafka client.
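
As a rough illustration of what the binding provides, the sketch below assumes the bootstrap servers are surfaced inside the bound Pod as a `KAFKA_BOOTSTRAP_SERVERS` environment variable, so a client container can read them instead of hard-coding broker addresses (the variable name and the pod name are assumptions for illustration):

```shell
# Minimal sketch: inspect the injected bootstrap servers from inside a bound Pod.
# <bound-pod-name> is a placeholder for a Pod created from the bound PodSpec.
kubectl exec <bound-pod-name> -- sh -c 'echo "bootstrap servers: $KAFKA_BOOTSTRAP_SERVERS"'
```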

## Create a Job that uses KafkaBinding

In the example below, a Kubernetes Job uses the KafkaBinding to produce messages
on a Kafka topic, which are then received by the Event Display service via a
Kafka Source.

### Prerequisites

1. You must ensure that you meet the
[prerequisites listed in the Apache Kafka overview](../README.md).
2. This feature is available in Knative Eventing 0.15 and later.
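
If you are unsure which Eventing version is installed, one way to check is to list the deployments in the `knative-eventing` namespace together with their labels and look for a release label (label names can vary between installs, so treat this as a sketch):

```shell
# List the Eventing control-plane deployments with their labels; a label such as
# eventing.knative.dev/release usually carries the installed version.
kubectl get deployments -n knative-eventing --show-labels
```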

### Creating a `KafkaSource` source CRD

1. Install the `KafkaSource` sub-component to your Knative cluster:

```
kubectl apply -f https://storage.googleapis.com/knative-releases/eventing-contrib/latest/kafka-source.yaml
```

1. Check that the `kafka-controller-manager-0` pod is running.
```
kubectl get pods --namespace knative-sources
NAME                         READY   STATUS    RESTARTS   AGE
kafka-controller-manager-0   1/1     Running   0          42m
```

### Create the Event Display service

1. (Optional) Source code for Event Display service

Get the source code of the Event Display container image from
[here](https://github.com/knative/eventing-contrib/blob/master/cmd/event_display/main.go).

1. Deploy the Event Display Service via kubectl, using a file named
   `event-display.yaml` with the following content:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-releases/github.com/knative/eventing-contrib/cmd/event_display
```

```
$ kubectl apply --filename event-display.yaml
...
service.serving.knative.dev/event-display created
```

1. (Optional) Deploy the Event Display Service via the `kn` CLI:

Alternatively, you can create the Knative service using the `kn` CLI, as shown
below:

```
kn service create event-display --image=gcr.io/knative-releases/github.com/knative/eventing-contrib/cmd/event_display
```

1. Ensure that the Service pod is running. The pod name will be prefixed with
`event-display`.
```
$ kubectl get pods
NAME                                            READY   STATUS    RESTARTS   AGE
event-display-00001-deployment-5d5df6c7-gv2j4   2/2     Running   0          72s
...
```

### Apache Kafka Event Source

1. Modify `event-source.yaml` with your bootstrap servers, topics, and so on:

```yaml
apiVersion: sources.knative.dev/v1alpha1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  consumerGroup: knative-group
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092 # note the kafka namespace
  topics:
    - logs
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
```

1. Deploy the event source.
```
$ kubectl apply -f event-source.yaml
...
kafkasource.sources.knative.dev/kafka-source created
```
1. Check that the event source pod is running. The pod name will be prefixed
with `kafka-source`.
```
$ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
kafka-source-xlnhq-5544766765-dnl5s   1/1     Running   0          40m
```

### Kafka Binding Resource

Create the KafkaBinding that will inject Kafka bootstrap information into
selected `Job`s:

1. Modify `kafka-binding.yaml` with your bootstrap servers and so on:

```yaml
apiVersion: bindings.knative.dev/v1alpha1
kind: KafkaBinding
metadata:
  name: kafka-binding-test
spec:
  subject:
    apiVersion: batch/v1
    kind: Job
    selector:
      matchLabels:
        kafka.topic: "logs"
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
```

In this case, the binding will select any `Job` with the label `kafka.topic: "logs"`.
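
A minimal sketch for creating the binding, assuming the manifest above is saved as `kafka-binding.yaml` (the file name is an assumption):

```shell
# Apply the KafkaBinding and confirm the object exists.
# (The short resource name is assumed; use the full CRD name if kubectl cannot resolve it.)
kubectl apply -f kafka-binding.yaml
kubectl get kafkabinding kafka-binding-test
```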

### Create Kubernetes Job

1. Source code for the kafka-publisher service

Get the source code of the kafka-publisher container image from
[here](https://github.com/knative/eventing-contrib/blob/master/test/test_images/kafka-publisher/main.go).

1. Now we will use the kafka-publisher container to send events to the Kafka
   topic when the Job runs. Apply the manifest below; a sketch of the commands
   to run it follows the manifest.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    kafka.topic: "logs"
  name: kafka-publisher-job
  namespace: test-alpha
spec:
  backoffLimit: 1
  completions: 1
  parallelism: 1
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      restartPolicy: Never
      containers:
        - image: docker.io/murugappans/kafka-publisher-1974f83e2ff7c8994707b5e8731528e8@sha256:fd79490514053c643617dc72a43097251fed139c966fd5d131134a0e424882de
          name: kafka-publisher
          env:
            - name: KAFKA_TOPIC
              value: "logs"
            - name: KAFKA_KEY
              value: "0"
            - name: KAFKA_HEADERS
              value: "content-type:application/json"
            - name: KAFKA_VALUE
              value: '{"msg":"This is a test!"}'
```
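
A minimal sketch for running it, assuming the manifest above is saved as `kafka-publisher-job.yaml` and the `test-alpha` namespace already exists (both are assumptions):

```shell
# Create the Job and wait for it to finish publishing the test message.
kubectl apply -f kafka-publisher-job.yaml
kubectl wait --for=condition=complete job/kafka-publisher-job -n test-alpha --timeout=120s
```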

### Verify

1. Ensure the Event Display received the message sent to it by the Event Source.

```
$ kubectl logs --selector='serving.knative.dev/service=event-display' -c user-container

☁️  cloudevents.Event
Validation: valid
Context Attributes,
  specversion: 1.0
  type: dev.knative.kafka.event
  source: /apis/v1/namespaces/default/kafkasources/kafka-source#logs
  subject: partition:0#1
  id: partition:0/offset:1
  time: 2020-05-17T19:45:02.7Z
  datacontenttype: application/json
Extensions,
  kafkaheadercontenttype: application/json
  key: 0
  traceparent: 00-f383b779f512358b24ffbf6556a6d6da-cacdbe78ef9b5ad3-00
Data,
  {
    "msg": "This is a test!"
  }
```

## Connecting to a TLS-enabled Kafka broker

The KafkaBinding supports the TLS and SASL authentication methods. For injecting
TLS authentication, you need the following files:

- CA Certificate
- Client Certificate and Key

These files are expected to be in PEM format. If they are in another format,
such as JKS, convert them to PEM; a conversion sketch follows.
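
For example, a sketch of converting a JKS keystore and truststore to PEM with `keytool` and `openssl` (the file names, the `CARoot` alias, and the store passwords you will be prompted for are assumptions about your keystores):

```shell
# 1. Convert the client JKS keystore to PKCS#12.
keytool -importkeystore -srckeystore client.jks \
  -destkeystore client.p12 -deststoretype PKCS12

# 2. Extract the client certificate and private key as PEM.
openssl pkcs12 -in client.p12 -nokeys -out certificate.pem
openssl pkcs12 -in client.p12 -nocerts -nodes -out key.pem

# 3. Export the CA certificate from the truststore as PEM.
keytool -exportcert -rfc -keystore truststore.jks -alias CARoot -file caroot.pem
```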

1. Create the certificate files as secrets in the namespace where the
   KafkaBinding is going to be set up:

```
$ kubectl create secret generic cacert --from-file=caroot.pem
secret/cacert created

$ kubectl create secret tls kafka-secret --cert=certificate.pem --key=key.pem
secret/kafka-secret created
```

2. Apply `kafkabinding-tls.yaml`, changing `bootstrapServers` accordingly (a
   sketch of the apply command follows the manifest):
```yaml
apiVersion: bindings.knative.dev/v1alpha1
kind: KafkaBinding
metadata:
  name: kafka-source-with-tls
spec:
  subject:
    apiVersion: batch/v1
    kind: Job
    selector:
      matchLabels:
        kafka.topic: "logs"
  net:
    tls:
      enable: true
      cert:
        secretKeyRef:
          key: tls.crt
          name: kafka-secret
      key:
        secretKeyRef:
          key: tls.key
          name: kafka-secret
      caCert:
        secretKeyRef:
          key: caroot.pem
          name: cacert
  consumerGroup: knative-group
  bootstrapServers:
    - my-secure-kafka-bootstrap.kafka:443
```
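
A minimal sketch for applying it, assuming the manifest above is saved as `kafkabinding-tls.yaml` (the file name is an assumption):

```shell
# Create the TLS-enabled binding and confirm it exists.
kubectl apply -f kafkabinding-tls.yaml
kubectl get kafkabinding kafka-source-with-tls
```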