DEVELOPMENT.md


Development

This doc explains how to set up a development environment so you can get started contributing to Knative Eventing.

Getting started

  1. Create and checkout a repo fork
  2. Install a channel implementation

Once you meet these requirements, you can start the eventing-controller.

ℹ️ If you intend to use event sinks based on Knative Services as described in some of our examples, consider installing Knative Serving. A few Knative Sandbox projects also have a dependency on Serving.

Before submitting a PR, see also contribution guidelines.

Requirements

You must install these tools:

  1. go: The language Knative Eventing is developed in (version 1.15 or higher)
  2. git: For source control
  3. ko: For building and deploying container images to Kubernetes in a single command.
  4. kubectl: For managing development environments.
  5. bash: v4 or higher, for running some automation such as dependency updates and code generators. On macOS the default bash is too old; you can use Homebrew to install a later version.

Create a cluster and a repo

  1. Set up a Kubernetes cluster
    • Follow an install guide up through "Creating a Kubernetes Cluster".
    • You do not need to install Istio or Knative using the instructions in the guide. Simply create the cluster and come back here.
    • If you did install Istio/Knative following those instructions, that's fine too; you'll just redeploy over them below.
  2. Set up a container image repository for pushing images. You can use any container image registry by adjusting the authentication methods and repository paths mentioned in the sections below.
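If you just need a local cluster, one convenient option (an assumption, not the only supported path) is kind:

```shell
# Hypothetical example: create a local development cluster with kind.
# Any conformant Kubernetes cluster works equally well.
kind create cluster --name knative-eventing-dev

# Verify kubectl is pointed at the new cluster.
kubectl cluster-info --context kind-knative-eventing-dev
```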

ℹ️ You'll need to be authenticated with your KO_DOCKER_REPO before pushing images. Run gcloud auth configure-docker if you are using Google Container Registry or docker login if you are using Docker Hub.
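For example:

```shell
# Authenticate so ko can push to your KO_DOCKER_REPO.
gcloud auth configure-docker   # if using Google Container Registry
docker login                   # if using Docker Hub
```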

Setup your environment

To start your environment you'll need to set these environment variables (we recommend adding them to your .bashrc):

  1. GOPATH: If you don't have one, simply pick a directory and add export GOPATH=...
  2. $GOPATH/bin on PATH: This is so that tooling installed via go get will work properly.
  3. KO_DOCKER_REPO: The docker repository to which developer images should be pushed (e.g. gcr.io/[gcloud-project]).

ℹ️ If you are using Docker Hub to store your images, your KO_DOCKER_REPO variable should have the format docker.io/<username>. Currently, Docker Hub doesn't let you create subdirs under your username (e.g. <username>/knative).

.bashrc example:

export GOPATH="$HOME/go"
export PATH="${PATH}:${GOPATH}/bin"
export KO_DOCKER_REPO='gcr.io/my-gcloud-project-id'

Checkout your fork

The Go tools require that you clone the repository to the src/knative.dev/eventing directory in your GOPATH.

To check out this repository:

  1. Create your own fork of this repo
  2. Clone it to your machine:
mkdir -p ${GOPATH}/src/knative.dev
cd ${GOPATH}/src/knative.dev
git clone git@github.com:${YOUR_GITHUB_USERNAME}/eventing.git
cd eventing
git remote add upstream https://github.com/knative/eventing.git
git remote set-url --push upstream no_push

Adding the upstream remote sets you up nicely for regularly syncing your fork.
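With the upstream remote configured, a typical sync looks like this (a sketch; assumes your default branch is named main):

```shell
# Bring your fork's main branch up to date with upstream.
git fetch upstream
git checkout main
git rebase upstream/main
git push origin main
```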

Once you reach this point you are ready to do a full build and deploy as follows.

Starting Eventing Controller

Once you've set up your development environment, stand up Knative Eventing with:

ko apply -f config/

You can see things running with:

$ kubectl -n knative-eventing get pods
NAME                                   READY     STATUS    RESTARTS   AGE
eventing-controller-59f7969778-4dt7l   1/1       Running   0          2h

You can access the Eventing Controller's logs with:

kubectl -n knative-eventing logs $(kubectl -n knative-eventing get pods -l app=eventing-controller -o name)

Install Channels

Install the In-Memory-Channel since this is the default channel.

ko apply -f config/channels/in-memory-channel/

Depending on your needs you might want to install other channel implementations.

Install Broker

Install the MT Channel Broker or any of the other Brokers available inside the config/brokers/ directory.

ko apply -f config/brokers/mt-channel-broker/

Depending on your needs you might want to install other Broker implementations.

(Optional) Install Sugar controller

If you are running the full set of e2e tests, you will need to install the sugar controller.

ko apply -f config/sugar/

Iterating

As you make changes to the code-base, there are two special cases to be aware of: regenerating code (codegen) and updating dependencies.

Both operations are idempotent, and running them at HEAD is expected to produce no diffs.
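For example, you can run the update scripts from the repository's hack/ directory and confirm a clean tree (the script names follow the usual Knative conventions; check hack/ in your checkout):

```shell
# Regenerate code and update dependency metadata, then verify that
# running them at HEAD produced no diffs.
./hack/update-codegen.sh
./hack/update-deps.sh
git status --porcelain   # empty output means no diffs
```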

Once the codegen and dependency information is correct, redeploying the controller is simply:

ko apply -f config/controller.yaml

Or you can clean it up completely and start again.

Tests

Running tests as you make changes to the code-base is pretty simple. See the test docs.

Contributing

Please check contribution guidelines.

Clean up

You can delete Knative Eventing with:

ko delete -f config/

Telemetry

To access Telemetry see:

Packet sniffing

While debugging an Eventing component, it could be useful to perform packet sniffing on a container to analyze the traffic.

Note: this debugging method should not be used in production.

In order to do packet sniffing, you need the ksniff kubectl plugin (which provides the kubectl sniff command used below) and Wireshark.

After you have installed these tools, change the base image ko uses to build Eventing component images by editing .ko.yaml. You need an image that has the tar tool installed, for example:

defaultBaseImage: docker.io/debian:latest

Now redeploy the component you want to sniff with ko, as explained in the paragraphs above.

When the container is running, run:

kubectl sniff <POD_NAME> -n knative-eventing -o out.dump

Replace <POD_NAME> with the pod name of the component you wish to test, for example imc-dispatcher-85797b44c8-gllnx. This command dumps the tcpdump output, with all the sniffed packets, to out.dump. You can then open this file with Wireshark using:

wireshark out.dump

If you run kubectl sniff without an output file name, it will open Wireshark directly:

kubectl sniff <POD_NAME> -n knative-eventing
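If you don't remember the exact pod name, you can look it up by label first. The app=imc-dispatcher selector below is only an illustration; use the label of the component you are debugging:

```shell
# Hypothetical helper: resolve the pod name via a label selector,
# then sniff that pod.
POD_NAME=$(kubectl -n knative-eventing get pods -l app=imc-dispatcher \
  -o jsonpath='{.items[0].metadata.name}')
kubectl sniff "$POD_NAME" -n knative-eventing -o out.dump
```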

Debugging Knative controllers and friends locally with IntelliJ IDEA

Telepresence can be leveraged to debug Knative controllers, webhooks and similar components.

Telepresence allows you to keep using your local process, IDE, debugger, etc., while Kubernetes service calls get redirected to the process on your local machine. Similarly, calls made by the local process go to the actual services running in Kubernetes.

  • Install Telepresence

  • Deploy Knative Eventing on your Kubernetes cluster.

  • Install the EnvFile plugin in your IntelliJ IDEA

  • Run the following command to swap the controller deployment with a proxy for the local controller that we will start later.

telepresence --namespace knative-eventing --swap-deployment eventing-controller --env-json eventing-controller-local-env.json

For debugging applications that receive traffic, such as webhooks, you also need to pass the --expose parameter.

For example:

telepresence --swap-deployment kafka-controller-manager --namespace knative-eventing --env-json kafka-controller-manager.json --expose 8443

This will replace the eventing-controller deployment on the cluster with a proxy.

It will also create an eventing-controller-local-env.json file which we will use later on. The content of this env file looks like this:

{
    "CONFIG_LOGGING_NAME": "config-logging",
    "EVENTING_WEBHOOK_PORT": "tcp://10.105.47.10:443",
    "EVENTING_WEBHOOK_PORT_443_TCP": "tcp://10.105.47.10:443",
    "EVENTING_WEBHOOK_PORT_443_TCP_ADDR": "10.105.47.10",
    ...
}

We need to pass these environment variables when we are starting our controller.
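If you prefer starting the controller from a terminal instead of the IDE, here is a minimal sketch (not part of the official workflow) that exports the pairs from the generated file, assuming it is a flat JSON object of plain string values with no escaped quotes:

```shell
# Export every key/value pair from a flat, string-valued JSON file
# (such as the one Telepresence generates) into the current shell.
load_env_json() {
  local line key value
  while IFS= read -r line; do
    [ -n "$line" ] || continue
    key=${line%%\":*}   # drop everything from the key's closing quote on
    key=${key#\"}       # drop the key's opening quote
    value=${line#*: }   # drop the key and the ": " separator
    value=${value#\"}   # drop the value's opening quote
    value=${value%\"}   # drop the value's closing quote
    export "$key=$value"
  done <<EOF
$(grep -o '"[^"]*": *"[^"]*"' "$1")
EOF
}

# Usage (after Telepresence has generated the file):
#   load_env_json eventing-controller-local-env.json
#   go run ./cmd/controller
```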

  • Create a run configuration in IntelliJ IDEA for cmd/controller/main.go.

  • Configure the run configuration to use the generated env file via the EnvFile plugin.

Now, use the run configuration and start the local controller in debug mode. You will see that execution pauses at your breakpoints.

  • Clean up is easy: kill your local controller process, then hit Ctrl+C in the terminal window where you originally ran Telepresence. Telepresence will delete the proxy and revert the deployment on the cluster back to its original state.

Notes:

  • Networking works fine, but volumes (i.e. accessing Kubernetes volumes from the local controller) are not tested
  • This method can also be used in production, but proceed with caution.