Local development with Kubernetes
This page describes how to build and deploy Corda 5 to a local Kubernetes cluster for the purposes of Corda development.
The following instructions assume that you have a single-node Kubernetes cluster running with a Docker daemon. Two options that meet these requirements and have been tested with these instructions are Docker Desktop and minikube.
Docker Desktop provides a simpler user experience but commercial use in larger enterprises requires a paid subscription. See Do I need to pay to use Docker Desktop? for more details.
- Install Docker Desktop.
- Enable Kubernetes in Preferences.
- Configure your Kubernetes cluster with at least 6 CPU and 8 GB RAM.
- For macOS or Linux, configure the resources in the Docker Desktop Preferences.
- For Windows, configure the WSL settings in the .wslconfig file.
- Install minikube.
- Start minikube with at least 8 GB memory and 6 CPUs:
minikube start --memory 8000 --cpus 6
- If you don't already have the kubectl CLI installed, set up the following alias:
alias kubectl="minikube kubectl --"
- Activate CLI completion using the appropriate command for your shell.
Bash:
# helm completion --help
source <(helm completion bash)
Zsh:
# helm completion --help
source <(helm completion zsh)
- If you have multiple Kubernetes clusters, ensure that you are targeting the kubectl context for the correct cluster. You can list the contexts you have defined with:
kubectl config get-contexts
The current context is marked with an asterisk. You can switch context, for example:
kubectl config use-context docker-desktop
If you are using Docker Desktop, you can also switch context via the Kubernetes sub-menu.
- Create a namespace to contain your Corda deployment. For example, to create a namespace called corda, run the command:
kubectl create namespace corda
The commands that follow all explicitly specify the namespace to use. However, you can reduce the length of your commands by switching the Kubernetes context to use the newly created namespace:
kubectl config set-context --current --namespace=corda
Install the kubectx and kubens tools for an easy way to switch context and namespace from the command line.
Corda requires PostgreSQL and Kafka instances as prerequisites. One option to obtain these is via the corda-dev-prereqs Helm chart.
Note: this Helm chart is not designed for availability or scalability and should only be used for development purposes.
The packaged Helm chart can be installed directly from Docker Hub, or in source form from GitHub.
The corda-dev-prereqs Helm chart is available packaged on Docker Hub.
- Install the Helm chart:
helm install prereqs -n corda oci://registry-1.docker.io/corda/corda-dev-prereqs --timeout 10m --wait
The --wait option ensures that all of the pods are ready before returning. The timeout is set to 10 minutes to allow time to pull the images from Docker Hub. The process should take significantly less time on subsequent installs.
The corda-dev-prereqs Helm chart is available in source form in the corda/corda-dev-prereqs GitHub repository.
- Clone the GitHub repository:
git clone https://github.com/corda/corda-dev-prereqs.git
cd corda-dev-prereqs
- Install the Helm chart:
helm install prereqs -n corda charts/corda-dev-prereqs --timeout 10m --wait
The --wait option ensures that all of the pods are ready before returning. The timeout is set to 10 minutes to allow time to pull the images from Docker Hub. The process should take significantly less time on subsequent installs.
- Clone the corda/corda-cli-plugin-host repository:
git clone https://github.com/corda/corda-cli-plugin-host.git
- Clone the corda/corda-api repository:
git clone https://github.com/corda/corda-api.git
- Clone the corda/corda-runtime-os repository:
git clone https://github.com/corda/corda-runtime-os.git
- If you're using minikube, configure your shell to use the Docker daemon inside minikube so that built images are available directly to the cluster:
Bash:
eval $(minikube docker-env)
PowerShell:
minikube docker-env --shell=powershell | Invoke-Expression
The next step must be run in the shell where this command is executed.
- Using Java 17, build all of the Corda Docker images with Gradle in the corda-runtime-os repository:
cd corda-runtime-os
./gradlew clean publishOSGiImage -PcompositeBuild=true
There is a values-prereqs.yaml file at the root of the corda-runtime-os repository that overrides the default values in the Corda Helm chart. These values configure the chart to use the images you just built and specify the location of the Kafka and PostgreSQL instances created by the corda-dev-prereqs Helm chart. They also set the initial admin user password to admin.
- Build the Helm chart dependencies by running the following from the root of the corda-runtime-os repository:
helm dependency build charts/corda
- Install the chart as follows:
helm install corda -n corda charts/corda --values values-prereqs.yaml --wait
When the command completes, the REST endpoint should be ready to access.
If the install times out, it indicates that not all of the worker pods reached the ready state. Use the following command to list the pods and their current state:
kubectl get pods -n corda
If a particular pod is failing to start, run the following command to get more details using the name of the pod from the previous output:
kubectl describe pod -n corda corda-rest-worker-8f9f5565-wkzgq
If the pod is continually restarting, it is likely that Kubernetes is killing it because it does not reach a healthy state. Check the pod logs, for example:
kubectl logs -n corda corda-rest-worker-8f9f5565-wkzgq
For more information about these commands, see View worker logs.
To follow the logs for a specific worker pod:
kubectl logs -f -n corda corda-rest-worker-69f9dbcc97-ndllq
To retrieve a list of the pods:
kubectl get pods -n corda
To enable command completion and allow tab-completion of the pod name:
kubectl completion -h
You can also view the logs for all pods for a deployment. This has the advantage that the name does not change from one release to the next. For example:
kubectl logs -f -n corda deploy/corda-rest-worker
To get a list of all deployments:
kubectl get deployments -n corda
To follow the logs for all pods in the release, use labels:
kubectl logs -f -n corda -l app.kubernetes.io/instance=corda --prefix=true
For more power (and color), install stern.
If you are using minikube, you can use the following command to display the Kubernetes dashboard and then navigate to the logs via Namespaces > Pods > Pod logs:
minikube dashboard --url
- To access the REST endpoint, forward the port to localhost:8888 by running one of these commands:
Bash:
kubectl port-forward -n corda deploy/corda-rest-worker 8888 &
PowerShell:
Start-Job -ScriptBlock {kubectl port-forward -n corda deploy/corda-rest-worker 8888}
NOTE: Certain tests can be disruptive and cause port forwarding to break. It is recommended to set up the port forwarding in a way that repeatedly re-establishes the port forward. This could be a crude while(true) loop, or something more sophisticated - see this solution for an example.
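The crude loop mentioned above can be sketched as a small wrapper that simply restarts its command each time it exits. The `keep_alive` function name and the `MAX_ATTEMPTS`/`RETRY_DELAY` knobs are assumptions added for illustration (the cap lets the loop terminate); they are not part of any Corda or Kubernetes tooling:

```shell
# Restart the given command whenever it exits, so a broken port forward
# is re-established automatically. MAX_ATTEMPTS and RETRY_DELAY are
# illustrative knobs, not flags of kubectl or Corda.
keep_alive() {
  local attempts=0
  while [ "${attempts}" -lt "${MAX_ATTEMPTS:-1000}" ]; do
    "$@" || true                # ignore a failed run; we just restart it
    attempts=$((attempts + 1))
    sleep "${RETRY_DELAY:-1}"
  done
}

# Example usage (run in the background), using the port forward from this page:
# keep_alive kubectl port-forward -n corda deploy/corda-rest-worker 8888 &
```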
- Retrieve the password for the initial admin user as follows:
Bash:
kubectl get secret corda-initial-admin-user -n corda \
  -o go-template='{{ .data.password | base64decode }}'
PowerShell:
kubectl get secret corda-initial-admin-user -n corda `
  -o go-template='{{ .data.password | base64decode }}'
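The `base64decode` in the go-template above does the same job as piping the raw secret field through `base64 -d`; a cluster-free illustration with a placeholder value (`hunter2` is a stand-in, not a real Corda password):

```shell
# Kubernetes stores secret data base64-encoded, exactly like this.
ENCODED=$(printf 'hunter2' | base64)            # stand-in for .data.password
PASSWORD=$(printf '%s' "$ENCODED" | base64 -d)  # what base64decode does
echo "$PASSWORD"   # → hunter2
```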
- From the root directory of the corda/corda-runtime-os repository, run this Gradle task to execute the E2E tests:
./gradlew :applications:workers:release:rest-worker:e2eTest
The deployment of Corda in CI for the E2E tests uses multiple Kafka users. If you need to replicate this behaviour:
- Deploy the prerequisites with the overrides in .ci/e2eTests/prereqs.yaml in the corda-runtime-os repository.
- Run the following bash commands to copy the comma-separated list of generated passwords into separate fields in a new secret:
KAFKA_PASSWORDS=$(kubectl get secret prereqs-kafka-jaas -n "${NAMESPACE}" -o go-template='{{ index .data "client-passwords" | base64decode }}')
IFS=',' read -r -a KAFKA_PASSWORDS_ARRAY <<< "$KAFKA_PASSWORDS"
kubectl create secret generic kafka-credentials -n "${NAMESPACE}" \
  --from-literal=bootstrap="${KAFKA_PASSWORDS_ARRAY[0]}" \
  --from-literal=crypto="${KAFKA_PASSWORDS_ARRAY[1]}" \
  --from-literal=db="${KAFKA_PASSWORDS_ARRAY[2]}" \
  --from-literal=flow="${KAFKA_PASSWORDS_ARRAY[3]}" \
  --from-literal=flowMapper="${KAFKA_PASSWORDS_ARRAY[4]}" \
  --from-literal=verification="${KAFKA_PASSWORDS_ARRAY[5]}" \
  --from-literal=membership="${KAFKA_PASSWORDS_ARRAY[6]}" \
  --from-literal=p2pGateway="${KAFKA_PASSWORDS_ARRAY[7]}" \
  --from-literal=p2pLinkManager="${KAFKA_PASSWORDS_ARRAY[8]}" \
  --from-literal=persistence="${KAFKA_PASSWORDS_ARRAY[9]}" \
  --from-literal=rest="${KAFKA_PASSWORDS_ARRAY[10]}" \
  --from-literal=uniqueness="${KAFKA_PASSWORDS_ARRAY[11]}"
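The `IFS=',' read -r -a` line is what splits the decoded password list into array slots. A cluster-free sketch with placeholder values (`pw-bootstrap,pw-crypto,pw-db` stands in for the real decoded client-passwords field) shows how each comma-separated entry lands in its own slot:

```shell
# Stand-in for the decoded client-passwords value from the
# prereqs-kafka-jaas secret; real values are generated by the chart.
SAMPLE="pw-bootstrap,pw-crypto,pw-db"
IFS=',' read -r -a PARTS <<< "$SAMPLE"   # split on commas into an array
echo "${PARTS[0]}"   # → pw-bootstrap
echo "${PARTS[2]}"   # → pw-db
```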
- Deploy Corda with the overrides specified in .ci/e2eTests/corda.yaml in the corda-runtime-os repository.
To make a change to a single worker image, you can redeploy the worker without recreating the entire installation. For example, to rebuild the REST worker image:
- Run this command:
./gradlew :applications:workers:release:rest-worker:publishOSGiImage -PcompositeBuild=true
- List the pods (as described in View worker logs) and then use the name of the current REST worker pod to kill it. For example:
kubectl delete pod -n corda corda-rest-worker-69f9dbcc97-ndllq
When Kubernetes restarts the pod, it picks up the newly built Docker image.
This example shows how to connect the IntelliJ debugger to the corda-rest-worker pod.
By default, debug is not enabled for any of the pods. You must also configure Corda to only create a single replica of the worker to guarantee that work is handled by the pod you are attached to.
- There is a debug.yaml file in the root of the corda-runtime-os repository. Uncomment the lines to enable debugging for the worker you are interested in. For example:
workers:
  rest:
    replicaCount: 1
    debug:
      enabled: true
- (Re)install the Helm chart, specifying both values-prereqs.yaml and debug.yaml, as follows:
Bash:
helm upgrade --install corda -n corda \
  charts/corda \
  --values values-prereqs.yaml \
  --values debug.yaml \
  --wait
PowerShell:
helm upgrade --install corda -n corda `
  charts/corda `
  --values values-prereqs.yaml `
  --values debug.yaml `
  --wait
- Expose port 5005 from the pod to localhost:
Bash:
kubectl port-forward -n corda deploy/corda-rest-worker 5005 &
PowerShell:
Start-Job -ScriptBlock {kubectl port-forward -n corda deploy/corda-rest-worker 5005}
This command uses the name of the deployment because, unlike the pod name, it stays the same from one Helm release to the next. It does, however, just pick one pod in the deployment at random and attach the debugger to that pod. That is not an issue in this example, as we have configured the number of replicas as 1.
- To connect IntelliJ to the debug port:
a. Click Run > Edit Configurations.
The Run/Debug configurations window is displayed.
b. Click the plus (+) symbol and select Remote JVM Debug.
c. Enter a Name and Port Number.
d. Click OK.
Note: To permit debugging without restarting the process, startup, liveness, and readiness probes are disabled when debug is enabled.
IntelliJ users may also be interested in the Cloud Code plugin, which enables you to interact with Kubernetes without leaving your IDE.
Use the debug.yaml file in the root of the corda-runtime-os repository when installing the Helm chart:
Bash:
helm upgrade --install corda -n corda \
charts/corda \
--values values-prereqs.yaml \
--values debug.yaml \
--wait
PowerShell:
helm upgrade --install corda -n corda `
charts/corda `
--values values-prereqs.yaml `
--values debug.yaml `
--wait
Note: The verification impacts performance and can be turned off, while still using the content of debug.yaml, by setting the flow.verifyInstrumentation property to false or removing it entirely.
To connect to the cluster DB from tooling on your local environment, do the following:
- Port forward the PostgreSQL service. For example:
Bash:
kubectl port-forward -n corda svc/prereqs-postgres 5434:5432 &
PowerShell:
Start-Job -ScriptBlock {kubectl port-forward -n corda svc/prereqs-postgres 5434:5432}
- Fetch the superuser's password from the Kubernetes secret:
Bash:
kubectl get secret prereqs-postgres -n corda \
  -o go-template='{{ index .data "postgres-password" | base64decode }}'
PowerShell:
kubectl get secret prereqs-postgres -n corda `
  -o go-template='{{ index .data \"postgres-password\" | base64decode }}'
- Connect to the DB using your preferred database administration tool with the following properties:
  - Host: localhost
  - Port: 5434
  - Database: cordacluster
  - User: postgres
  - Password: as determined above
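As a sketch, the properties above can also be assembled into a standard libpq-style connection URI for command-line tools. The URI format is standard PostgreSQL; the psql invocation in the final comment is an assumption about your local tooling, and `PG_PASSWORD` stands in for the password fetched in the previous step:

```shell
# Build a connection URI from the forwarded port and cluster DB properties.
DB_HOST=localhost
DB_PORT=5434
DB_NAME=cordacluster
DB_USER=postgres
DB_URI="postgresql://${DB_USER}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$DB_URI"   # → postgresql://postgres@localhost:5434/cordacluster
# Then, for example: PGPASSWORD="$PG_PASSWORD" psql "$DB_URI"
```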
If you are using Telepresence, you do not require port forwarding; simply connect using the hostname prereqs-postgres.corda.
This example connects a Kafka client from outside the cluster to Kafka running under Kubernetes.
- Retrieve the password for the admin user:
export KAFKA_PASSWORD=$(kubectl get secret -n corda prereqs-kafka -o go-template='{{ index .data "admin-password" | base64decode }}')
- Generate the Kafka client properties file:
echo "security.protocol=SASL_PLAINTEXT" > client.properties
echo "sasl.mechanism=PLAIN" >> client.properties
echo "sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"$KAFKA_PASSWORD\";" >> client.properties
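Equivalently, the same client.properties can be written in one go with a heredoc; the file name and contents match the echo commands above, and this is just a sketch assuming KAFKA_PASSWORD was already exported in the previous step:

```shell
# KAFKA_PASSWORD must already hold the admin password retrieved earlier;
# it is expanded when the heredoc is written.
cat > client.properties <<EOF
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="$KAFKA_PASSWORD";
EOF
```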
- Forward the Kafka port:
kubectl port-forward -n corda $(kubectl get pods -n corda --selector=app.kubernetes.io/component=kafka,app.kubernetes.io/instance=prereqs -o=name) 9094 &
- Commands can then be run against the cluster, for example:
kafka-topics --list --bootstrap-server localhost:9094 --command-config client.properties
Kafdrop provides an (insecure) web UI for browsing the contents of a Kafka cluster. To deploy Kafdrop, clone the Kafdrop repository, change into that directory, and make corda your default namespace before running the command to deploy the container:
git clone https://github.com/obsidiandynamics/kafdrop && cd kafdrop
kubectl config set-context --current --namespace=corda
export KAFKA_PASSWORD=$(kubectl get secret -n corda prereqs-kafka -o go-template='{{ index .data "admin-password" | base64decode }}')
export KAFKA_PROPERTIES=$(echo -e "security.protocol=SASL_SSL\nssl.truststore.type=PEM\nsasl.mechanism=PLAIN\nsasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"$KAFKA_PASSWORD\";" | base64)
export KAFKA_TRUSTSTORE=$(kubectl get secret -n corda prereqs-kafka -o go-template='{{ index .data "ca.crt" }}')
helm upgrade --install kafdrop chart --set kafka.brokerConnect=prereqs-kafka:9092 --set kafka.properties="$KAFKA_PROPERTIES" --set kafka.truststore="$KAFKA_TRUSTSTORE" -n corda
Now port forward that container to be able to connect to Kafdrop on localhost. If you are using Telepresence, you do not need this step.
kubectl port-forward -n corda svc/kafdrop 9000:9000 &
You should now be able to connect to Kafdrop on http://localhost:9000/.
An alternative to Kafdrop that appears to be better at displaying consumer groups is AKHQ.
Retrieve the Kafka password:
export KAFKA_PASSWORD=$(kubectl get secret -n corda prereqs-kafka -o go-template='{{ index .data "admin-password" | base64decode }}')
cat <<EOF > akhq.yaml
extraVolumes:
- name: tls
secret:
secretName: prereqs-kafka
extraVolumeMounts:
- name: tls
mountPath: /certs
configuration:
akhq:
connections:
kafka-prereqs:
properties:
bootstrap.servers: "prereqs-kafka:9092"
security.protocol: "SASL_SSL"
sasl.mechanism: "PLAIN"
ssl.truststore.type: "PEM"
sasl.jaas.config: "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"$KAFKA_PASSWORD\";"
ssl.truststore.location: "/certs/ca.crt"
EOF
helm repo add akhq https://akhq.io/
helm upgrade --install akhq akhq/akhq -f akhq.yaml
Forward the AKHQ port to localhost:
kubectl port-forward deploy/akhq 8080 &
You should now be able to connect to AKHQ on http://localhost:8080/.
The quickest route to clean up is to delete the entire Kubernetes namespace:
kubectl delete ns corda
Alternatively, you can clean up the Helm releases, pre-install jobs, and the persistent volume claims created by the prerequisites as follows:
helm delete corda -n corda
helm delete prereqs -n corda
kubectl delete job --all -n corda
kubectl delete pvc --all -n corda
Usually, the above delete pvc command also deletes the persistent volumes, but not always. You can check with:
kubectl get pv
You may have to delete some volumes explicitly. Assuming that this is the only K8S cluster you are running, you can delete all persistent volumes with this command. Only run this command if you are sure you want to delete all volumes.
kubectl delete pv --all
- Cloud Code plugin for Kubernetes in IntelliJ
- stern for following logs in multiple containers
- kubectx and kubens for switching Kubernetes context and namespace
- Lens for a shiny UI for interacting with your cluster
- k9s for a shiny CLI for interacting with your cluster