Using Broker and Trigger with Kafka

In this example, we will set up an API source that listens for Kubernetes events and sinks them to a broker. We will then add a trigger that configures delivery of these events to a serverless application.

Prerequisites

  • Knative Serving and Knative Eventing are deployed on your cluster using the OpenShift Serverless Operator
  • The Red Hat AMQ Streams Operator is installed, and a Kafka cluster named my-cluster is deployed and running in the kafka namespace
  • You have a Knative service deployed. If not, add one by running kn service create msgtxr-sl --image=image-registry.openshift-image-registry.svc:5000/kn-demo/msgtxr -l app.openshift.io/runtime=quarkus --env format=none (a quick verification is shown after this list)
  • You will need a cluster administrator to help with a cluster role and cluster role binding
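To confirm the prerequisite service is in place, you can list your Knative services and check that msgtxr-sl reports as ready (a minimal check; the exact output columns vary by kn version):

kn service list
kn service describe msgtxr-sl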

Create a Service Account with Event Watcher Role

In order for our Knative API event source to watch for events, it needs appropriate permissions. We will assign these permissions to a service account and run the API source with this service account.

  • Add a Service Account named events-sa
% oc create serviceaccount events-sa

serviceaccount/events-sa created
  • Your cluster administrator should create a cluster role to watch events, as shown below
% oc create clusterrole event-watcher --verb=get,list,watch --resource=events

clusterrole.rbac.authorization.k8s.io/event-watcher created
  • Your cluster administrator should create a cluster role binding to assign the event-watcher cluster role to the events-sa service account.
% oc adm policy add-cluster-role-to-user event-watcher -z events-sa

clusterrole.rbac.authorization.k8s.io/event-watcher added: "events-sa"
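To confirm the binding took effect, a user with impersonation rights (for example, the cluster admin) can ask the API server whether the service account is allowed to list events. This assumes the service account lives in the kn-demo namespace used throughout this tutorial:

oc auth can-i list events --as=system:serviceaccount:kn-demo:events-sa

The command should print yes.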

Cluster Administrator should change the default channel at broker creation to KafkaChannel

For this exercise, we want the channel created for a broker to be a Kafka channel.

So the cluster administrator should change the ConfigMap config-br-default-channel to create a KafkaChannel when a broker is added to your namespace.

The cluster admin can edit the data using the administration console or directly from the CLI, as below.

Edit the ConfigMap

oc edit cm config-br-default-channel --as system:admin

Change data from

data:
  channelTemplateSpec: |
    apiVersion: messaging.knative.dev/v1alpha1
    kind: InMemoryChannel

to

data:
  channelTemplateSpec: |
    apiVersion: messaging.knative.dev/v1alpha1
    kind: KafkaChannel 

so that InMemoryChannel is replaced with KafkaChannel as the default
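If the cluster admin prefers a non-interactive change, the same edit can be applied with oc patch. This is a sketch, assuming the ConfigMap lives in the knative-eventing namespace on your installation:

oc patch cm config-br-default-channel -n knative-eventing --type merge \
  -p '{"data":{"channelTemplateSpec":"apiVersion: messaging.knative.dev/v1alpha1\nkind: KafkaChannel\n"}}'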

Add a Trigger and Broker

Now let us add a trigger that sinks to our Knative service. The --inject-broker option will also create the broker at the same time.

kn trigger create testevents-trigger \
--inject-broker --broker default \
--sink svc:msgtxr-sl

Verify that the broker pods are running

% oc get po | grep Running   
default-broker-filter-7d89b8d949-nqfrr          1/1     Running     0          3m59s
default-broker-ingress-6b5d8cf558-jttvp         1/1     Running     0          3m58s

and check the trigger list

% kn trigger list
NAME                 BROKER    SINK            AGE   CONDITIONS   READY   REASON
testevents-trigger   default   svc:msgtxr-sl   43s   5 OK / 5     True 
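You can also confirm the broker resource itself is ready, using the same broker CRD that is deleted during cleanup at the end of this chapter (output columns vary by Knative version):

oc get brokers.eventing.knative.dev default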

A KafkaChannel should also have been created, since we configured that as the default channel for brokers.

% oc get channel 
NAME                                                     READY   REASON   URL                                                               AGE
kafkachannel.messaging.knative.dev/default-kne-trigger   True             http://default-kne-trigger-kn-channel.kn-demo.svc.cluster.local   46s

and you should see a topic named knative-messaging-kafka.kn-demo.default-kne-trigger added to the Kafka cluster when you run oc get kafkatopics -n kafka.

Add API event source

Now let us add an API event source that watches for Kubernetes events and sends them to the default broker.

kn source apiserver create testevents-kafka --resource Event:v1 --service-account events-sa --sink broker:default

Now you should see the API event source pod running, and as it starts sending events, you will see the serverless pod come up within a few seconds.

% oc get po | grep Running
apiserversource-testevents-90c1477d-9c5d-40ae-91c2-9d71d3696jf6   1/1     Running     0          12s
default-broker-filter-7d89b8d949-6w4wp                            1/1     Running     0          5m23s
default-broker-ingress-6b5d8cf558-lm889                           1/1     Running     0          5m23s
msgtxr-sl-hjcsm-3-deployment-78db8688f8-h4zmw                     2/2     Running     0          7s
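If the source pod does not appear, you can inspect the source's readiness with the kn CLI (a quick check; the describe output format depends on your kn version):

kn source apiserver describe testevents-kafka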

Once the event consumer (your Knative service pod) is up, watch the logs with oc logs <podname> to see the events coming in.
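If you prefer not to look up the pod name, you can select the pod by label instead. This is a sketch that assumes the standard serving.knative.dev/service label and the default user-container container name on Knative service pods:

oc logs -l serving.knative.dev/service=msgtxr-sl -c user-container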

Conclusion

In this chapter, we have configured an API event source to send Kubernetes events to a Knative service via Broker and Trigger.

Cleanup

  • Remove trigger
kn trigger delete testevents-trigger
  • Remove API event source
kn source apiserver delete testevents-kafka
  • Cluster admin should relabel the namespace to disable broker injection
oc label namespace kn-demo knative-eventing-injection=disabled  --overwrite=true    
  • Delete the broker
oc delete brokers.eventing.knative.dev --all
  • Cluster admin should remove the cluster role binding and cluster role
oc adm policy remove-cluster-role-from-user event-watcher -z events-sa
oc delete clusterrole event-watcher
  • Delete service account
oc delete sa events-sa
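To verify the cleanup, you can confirm that no triggers or API server sources remain; both commands should report that nothing is found:

kn trigger list
kn source apiserver list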