Simple DevSecOps Demo on Kind

This tutorial shows how to build a simple DevSecOps pipeline using Tanzu Build Service, Harbor, Carvel and Concourse.


Clone this repository and change to that directory.

git clone https://github.com/tanzu-japan/devsecops-demo.git
cd devsecops-demo

This tutorial has been tested on macOS. It should also work on Linux with minor changes to a few steps. It does not work on Windows.

Generate certificates

First of all, generate a self-signed certificate that is used throughout this tutorial.

Run the following command.

docker run --rm \
 -v ${PWD}/certs:/certs \
 hitch \
 sh /certs/generate-certs.sh sslip.io

Let your laptop trust the generated CA certificate.

# https://blog.container-solutions.com/adding-self-signed-registry-certs-docker-mac
sudo security add-trusted-cert -d -r trustRoot -k ~/Library/Keychains/login.keychain certs/ca.crt

Don't forget to restart Docker after running the above command.

Setup kind cluster

Install Kind.

brew install kind

This tutorial has been confirmed to work with the following version:

$ kind version
kind v0.11.1 go1.16.4 darwin/amd64

Create a Kubernetes cluster on Docker using Kind.

kind create cluster --config kind.yaml

Install Carvel tools

Install Carvel tools.

brew tap vmware-tanzu/carvel
brew install ytt kbld kapp imgpkg kwt vendir

or

curl -L https://carvel.dev/install.sh | bash

Install Kapp Controller

Install Kapp Controller. Add a ConfigMap for the Kapp Controller to trust the CA certificate generated above.

ytt -f https://github.com/vmware-tanzu/carvel-kapp-controller/releases/download/v0.20.0/release.yml \
  -f apps/kapp-controller-config.yaml \
  -v namespace=kapp-controller \
  --data-value-file ca_crt=./certs/ca.crt \
  | kubectl apply -f -

If you are running this tutorial using Tanzu Kubernetes Grid instead of Kind, run the following command instead.

ytt -f apps/kapp-controller-config.yaml \
-v namespace=tkg-system \
--data-value-file ca_crt=./certs/ca.crt \
| kubectl apply -f -

However, it has not been verified whether this tutorial works with the version of Kapp Controller included in TKG.

Install Cert Manager

Install Cert Manager.

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.16.1/cert-manager.yaml

Install Contour

Install Contour.

kubectl apply -f apps/contour.yaml

Run the following command and wait until Succeeded is output.

kubectl get app -n tanzu-system-ingress contour -o template='{{.status.deploy.stdout}}' -w

Run the following command and confirm that Reconcile succeeded is output.

$ kubectl get app -n tanzu-system-ingress contour 
NAME      DESCRIPTION           SINCE-DEPLOY   AGE
contour   Reconcile succeeded   30s            91s

Check Envoy's Cluster IP.

$ kubectl get service -n tanzu-system-ingress envoy                                                       
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
envoy   NodePort   10.96.163.153   <none>        80:32676/TCP,443:31721/TCP   5m25s

Set this IP to a variable named ENVOY_CLUSTER_IP for later use.

ENVOY_CLUSTER_IP=$(kubectl get service -n tanzu-system-ingress envoy -o template='{{.spec.clusterIP}}')

Install Harbor

Set the Hostname to route requests to Harbor as follows:

HARBOR_HOST=harbor-$(echo $ENVOY_CLUSTER_IP | sed 's/\./-/g').sslip.io
# harbor-10-96-163-153.sslip.io
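The sed expression simply replaces the dots in the Cluster IP with dashes; sslip.io then resolves any hostname that embeds an IP this way back to that IP, so no DNS setup is needed. A quick sanity check of the transform (the IP below is just an example, not necessarily your cluster's):

```shell
# Illustrative only: any cluster IP is transformed the same way
ip=10.96.163.153
host="harbor-$(echo "$ip" | sed 's/\./-/g').sslip.io"
echo "$host"   # harbor-10-96-163-153.sslip.io
```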

Install Harbor with the following command:

ytt -f apps/harbor.yaml \
    -v harbor_host=${HARBOR_HOST} \
    --data-value-file=harbor_tls_crt=./certs/server.crt \
    --data-value-file=harbor_tls_key=./certs/server.key \
    | kubectl apply -f -

Run the following command and wait until Succeeded is output.

kubectl get app -n tanzu-system-registry harbor -o template='{{.status.deploy.stdout}}' -w

Run the following command and confirm that Reconcile succeeded is output.

$ kubectl get app -n tanzu-system-registry harbor                                       
NAME     DESCRIPTION           SINCE-DEPLOY   AGE
harbor   Reconcile succeeded   39s            3m33s

Check if the FQDN of the HTTPProxy object matches HARBOR_HOST.

$ kubectl get httpproxy -n tanzu-system-registry
NAME                      FQDN                                   TLS SECRET   STATUS   STATUS DESCRIPTION
harbor-httpproxy          harbor-10-96-163-153.sslip.io          harbor-tls   valid    Valid HTTPProxy
harbor-httpproxy-notary   notary.harbor-10-96-163-153.sslip.io   harbor-tls   valid    Valid HTTPProxy

Change Kind's Containerd config.toml so that it uses the CA certificate generated above for this HARBOR_HOST.

docker exec kind-control-plane /etc/containerd/add-tls-containerd-config.sh ${HARBOR_HOST} /etc/containerd/certs.d/sslip.io.crt

Make sure that the change is reflected.

$ docker exec kind-control-plane crictl info | jq .config.registry.configs
{
  "harbor-10-96-163-153.sslip.io": {
    "auth": null,
    "tls": {
      "insecure_skip_verify": false,
      "caFile": "/etc/containerd/certs.d/sslip.io.crt",
      "certFile": "",
      "keyFile": ""
    }
  }
}

Then use kwt to make services inside the k8s cluster directly accessible from your laptop. Run the following command in another terminal.

sudo -E kwt net start

Make sure you can access Harbor with curl.

curl -v --cacert certs/ca.crt https://${HARBOR_HOST} 

Log in to Harbor.

docker login ${HARBOR_HOST} -u admin -p admin

Restart Docker if you hit Error response from daemon: Get https://${HARBOR_HOST}/v2/: x509: certificate signed by unknown authority

Install Tanzu Build Service

Log in to Tanzu Network

docker login registry.pivotal.io

Create a project to store images for Tanzu Build Service in Harbor.

curl -u admin:admin --cacert ./certs/ca.crt  -XPOST "https://${HARBOR_HOST}/api/v2.0/projects" -H "Content-Type: application/json" -d "{ \"project_name\": \"tanzu-build-service\"}"

Run the following command to copy the Tanzu Build Service images from Tanzu Network to Harbor.

imgpkg copy -b registry.pivotal.io/build-service/bundle:1.2.1 --to-repo ${HARBOR_HOST}/tanzu-build-service/build-service --registry-ca-cert-path certs/ca.crt

Go to https://${HARBOR_HOST}/harbor/projects/2/repositories and make sure Tanzu Build Service images have been uploaded.


Then install Tanzu Build Service with the following command.

ytt -f apps/build-service.yaml \
    -v harbor_host=${HARBOR_HOST} \
    -v tanzunet_username="" \
    -v tanzunet_password="" \
    --data-value-file=ca_crt=./certs/ca.crt \
    | kubectl apply -f-

Run the following command and wait until Succeeded is output.

kubectl get app -n build-service build-service -o template='{{.status.deploy.stdout}}' -w

Run the following command and confirm that Reconcile succeeded is output.

$ kubectl get app -n build-service build-service 
NAME            DESCRIPTION           SINCE-DEPLOY   AGE
build-service   Reconcile succeeded   11s            2m10s

Create a Secret in default namespace that Tanzu Build Service uses to push built images to Harbor.

REGISTRY_PASSWORD=admin kp secret create harbor --registry ${HARBOR_HOST} --registry-user admin  

Create a project in Harbor to store the demo application's images, and enable automatic vulnerability scanning. To reduce upload time, only the Builder for Java will be used.

curl -u admin:admin --cacert ./certs/ca.crt  -XPOST "https://${HARBOR_HOST}/api/v2.0/projects" -H "Content-Type: application/json" -d "{ \"project_name\": \"demo\"}"
PROJECT_ID=$(curl -s -u admin:admin --cacert ./certs/ca.crt "https://${HARBOR_HOST}/api/v2.0/projects?name=demo" | jq '.[0].project_id')
curl -u admin:admin --cacert ./certs/ca.crt  -XPUT "https://${HARBOR_HOST}/api/v2.0/projects/${PROJECT_ID}" -H "Content-Type: application/json" -d "{ \"metadata\": { \"auto_scan\" : \"true\" } }"

Upload ClusterBuilder / ClusterStore / ClusterStack to Tanzu Build Service. Here we intentionally upload an older version.

kp import -f descriptors/descriptor-100.0.69-java-only.yaml --registry-ca-cert-path certs/ca.crt 

To check the operation, we will build a simple application.

kp image save hello-servlet --tag ${HARBOR_HOST}/demo/hello-servlet --git https://github.com/making/hello-servlet.git --git-revision master --wait

Make sure the Build is successful.

$ kp build list

BUILD    STATUS     IMAGE                                                                                                                       REASON
1        SUCCESS    harbor-10-96-163-153.sslip.io/demo/hello-servlet@sha256:d278bc8511cff9553f2f08142766b4bfe12f58ba774a1c4e7c27b69afc3d0d79    CONFIG

Delete the image after checking the operation.

kp image delete hello-servlet

Install Concourse

Install Concourse with the following command.

ytt -f apps/concourse.yaml \
    --data-value-file=ca_crt=./certs/ca.crt \
    --data-value-file=ca_key=./certs/ca.key \
    | kubectl apply -f-

Run the following command and wait until Succeeded is output.

kubectl get app -n concourse concourse -o template='{{.status.deploy.stdout}}' -w

Run the following command and confirm that Reconcile succeeded is output.

$ kubectl get app -n concourse concourse    
NAME        DESCRIPTION           SINCE-DEPLOY   AGE
concourse   Reconcile succeeded   22s            101s

Check the ingress for Concourse.

$ kubectl get ing -n concourse
NAME            CLASS    HOSTS                          ADDRESS   PORTS     AGE
concourse-web   <none>   concourse-127-0-0-1.sslip.io             80, 443   117s

Install fly CLI as follows:

curl --cacert ./certs/ca.crt -sL "https://concourse-127-0-0-1.sslip.io/api/v1/cli?arch=amd64&platform=darwin" > fly
install fly /usr/local/bin/fly
rm -f fly

Log in to Concourse.

fly -t demo login --ca-cert ./certs/ca.crt -c https://concourse-127-0-0-1.sslip.io -u admin -p admin

To check the operation, set a simple pipeline and execute the job.

curl -sL https://gist.github.com/making/6e8443f091fef615e60ea6733f62b5db/raw/2d26d962d36ab8639f0a9e8dccb100f57f610d9d/unit-test.yml > unit-test.yml 
fly -t demo set-pipeline -p unit-test -c unit-test.yml --non-interactive
fly -t demo unpause-pipeline -p unit-test
fly -t demo trigger-job -j unit-test/unit-test --watch
fly -t demo destroy-pipeline -p unit-test --non-interactive

DevSecOps pipeline

Generate an SSH key for use with GitOps.

ssh-keygen -t rsa -b 4096 -f ${HOME}/.ssh/devsecops

Fork https://github.com/tanzu-japan/hello-tanzu-config to your account.


Go to https://github.com/<YOUR_ACCOUNT>/hello-tanzu-config/settings/keys and configure $HOME/.ssh/devsecops.pub generated above as a deploy key.

Don't forget to check "Allow write access".


The following command creates a set of variables to pass to the Concourse pipeline.

cat <<EOF > pipeline-values.yaml
kubeconfig: |
$(kind get kubeconfig | sed -e 's/^/  /g' -e 's/127.0.0.1:.*$/kubernetes.default.svc.cluster.local/')
registry_host: ${HARBOR_HOST}
registry_project: demo
registry_username: admin
registry_password: admin
registry_ca: |
$(cat ./certs/ca.crt | sed -e 's/^/  /g')
app_name: hello-tanzu
app_source_uri: https://github.com/tanzu-japan/hello-tanzu.git
app_source_branch: main
app_config_uri: git@github.com:making/hello-tanzu-config.git # <--- CHANGEME
app_config_branch: main
app_config_private_key: |
$(cat ${HOME}/.ssh/devsecops | sed -e 's/^/  /g')
app_external_url: https://hello-tanzu-$(echo $ENVOY_CLUSTER_IP | sed 's/\./-/g').sslip.io
git_email: makingx+bot@gmail.com
git_name: making-bot
EOF
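The sed expressions in the heredoc above do two things: indent each embedded line by two spaces so it fits the YAML block scalar, and rewrite the kubeconfig's API server address, which kind points at 127.0.0.1 on the host, to the in-cluster DNS name so pipeline tasks running inside the cluster can reach it. A standalone sketch of the server rewrite (the port is hypothetical):

```shell
# Hypothetical kubeconfig server line as written by kind
line='server: https://127.0.0.1:54321'
# Replace the host-local endpoint with the in-cluster API server DNS name
rewritten=$(echo "$line" | sed -e 's/127.0.0.1:.*$/kubernetes.default.svc.cluster.local/')
echo "$rewritten"   # server: https://kubernetes.default.svc.cluster.local
```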

Change app_source_uri and app_config_uri according to your environment.

Set up the pipeline for DevSecOps with the following command:

fly -t demo set-pipeline -p devsecops -c devsecops.yaml -l pipeline-values.yaml --non-interactive
fly -t demo unpause-pipeline -p devsecops

The following jobs will be automatically triggered within 1 minute.


Make sure the unit-test job is successful and green.


The deploy-to-k8s job should fail with the following message:

ytt: Error: Checking file 'app-config/demo/values.yaml': lstat app-config/demo/values.yaml: no such file or directory

This is as expected, so don't worry.

After a while kpack-build job will succeed and turn green.


You can check the kpack log at build time by checking kpack-build job.


Since the image was created by Tanzu Build Service (kpack), the vulnerability-scan job will be triggered automatically after a while.


vulnerability-scan job should fail at this stage as it contains vulnerable dependencies.


You can find out why this job failed and where the unresolved vulnerabilities are by looking at the details of the vulnerability-scan job.


Upload newer ClusterBuilder / ClusterStore / ClusterStack to Tanzu Build Service.

kp import -f descriptors/descriptor-100.0.110-java-only.yaml --registry-ca-cert-path certs/ca.crt 

When the upload is complete, Tanzu Build Service will detect the change and automatically rebuild the image with newer dependencies. This will automatically trigger vulnerability-scan job again.


This time vulnerability-scan job will succeed and the changes are pushed to the forked hello-tanzu-config git repository.


By the time you run this tutorial, these dependencies may be outdated and the vulnerability-scan job may fail.

Make sure the following file is pushed to GitHub.


Finally, all the jobs should succeed and turn green.


Go to app_external_url configured in pipeline-values.yaml with a browser.


Yeah, it works 👍.

The deploy-to-k8s job runs periodically every 5 minutes, so the manifests managed in git and the state on k8s are kept in sync by kapp (a.k.a. GitOps).
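In Concourse, such a periodic trigger is typically modeled with the built-in time resource. A minimal sketch of what that part of a pipeline might look like (the resource name here is illustrative, not taken from devsecops.yaml):

```yaml
resources:
- name: every-5m
  type: time        # Concourse's built-in time resource
  source:
    interval: 5m    # emits a new version roughly every 5 minutes

jobs:
- name: deploy-to-k8s
  plan:
  - get: every-5m
    trigger: true   # re-runs the job each time the interval elapses
```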

Automatically update Tanzu Build Service dependencies

Since Tanzu Build Service 1.2, the TanzuNetDependencyUpdater allows your Tanzu Build Service cluster to automatically update its dependencies when new dependency descriptors are published to Tanzu Network.

Update Tanzu Build Service by configuring your Tanzu Network credentials.

TANZUNET_USERNAME=****
TANZUNET_PASSWORD=****

ytt -f apps/build-service.yaml \
    -v harbor_host=${HARBOR_HOST} \
    -v tanzunet_username="${TANZUNET_USERNAME}" \
    -v tanzunet_password="${TANZUNET_PASSWORD}" \
    --data-value-file=ca_crt=./certs/ca.crt \
    | kubectl apply -f-

Run the following command and wait until Succeeded is output. It will take a little longer.

kubectl get app -n build-service build-service -o template='{{.status.deploy.stdout}}' -w

Get the TanzuNetDependencyUpdater to make sure the descriptor version is up to date.

$ kubectl get tanzunetdependencyupdater -n build-service 
NAME                 DESCRIPTORVERSION   READY
dependency-updater   100.0.122           True

Tanzu Build Service will detect the change and automatically rebuild the image with newer dependencies. This will automatically trigger vulnerability-scan job again.


Then update-config job will also be triggered automatically.


You can check the changed contents of the image on Github.


The updated image will be deployed to k8s.


With this pipeline, the image is automatically updated and shipped to k8s every time a new Stack, Store or Builder is released 🙌.

You can also prevent changes from being automatically deployed to k8s by sending a pull request instead of pushing them directly to the main branch.

(Bonus) Detects the use of vulnerable libraries

Fork https://github.com/tanzu-japan/hello-tanzu to your account.


Change app_source_uri in pipeline-values.yaml to the forked URI and update the pipeline:

fly -t demo set-pipeline -p devsecops -c devsecops.yaml -l pipeline-values.yaml --non-interactive

Edit pom.xml in the forked repository and add the vulnerable dependency below inside <dependencies>:

		<dependency>
			<groupId>org.apache.commons</groupId>
			<artifactId>commons-collections4</artifactId>
			<version>4.0</version>
		</dependency>


Then commit the change.


The unit-test job will be triggered within a minute.


Then the kpack-build job will follow.


After the new image is pushed to Harbor, the vulnerability-scan job will start.


The job should fail.


This is because we intentionally used the vulnerable commons-collections4 4.0, as reported 😈


Let's update the library and fix the vulnerability as follows:

		<dependency>
			<groupId>org.apache.commons</groupId>
			<artifactId>commons-collections4</artifactId>
			<version>4.4</version>
		</dependency>

Edit pom.xml and commit the change:


After unit-test and kpack-build succeed again, the vulnerability-scan job will resume.


Since the vulnerability has been fixed, the job will succeed and the update-config job will start.


And the "safe image" will be shipped to k8s.



You've built a simple DevSecOps pipeline. Congratulations 🎉.
