This example directory contains a setup for deploying HashiCorp Vault in combination with SPIRE in a Kubernetes cluster. Using this infrastructure we can deploy our `spiffe-vault` workload, which allows us to interact with HashiCorp Vault using SPIFFE SVIDs.
- Kubernetes cluster (Docker Desktop, or any cloud-provider-hosted distribution such as EKS, AKS or GKE)
- Helm
- Terraform
In the `k8s` folder you will find the Kubernetes deployments for SPIRE and HashiCorp Vault. In the `vault` folder you will find the Terraform scripts to provision Vault with some initial configuration.
To get started we first have to deploy the core infrastructure to run our components. We make use of some existing Helm charts; to do so, add their repositories:
```shell
helm repo add philips-labs https://philips-labs.github.io/helm-charts/
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo add traefik https://helm.traefik.io/traefik
helm repo update
```
Now we will deploy the Helm charts to our Kubernetes cluster. If you run Rancher Desktop, Traefik is already installed; the script below checks for that.
```shell
helm -n spire-system upgrade spire philips-labs/spire --version 0.7.0 --create-namespace --install -f k8s/spire-values.yaml
kubectl describe ingressclasses.networking.k8s.io traefik ||
  helm -n traefik-system upgrade traefik traefik/traefik --version 20.1.1 --create-namespace --install -f k8s/traefik-values.yaml
helm -n vault-system upgrade vault hashicorp/vault --version 0.22.1 --create-namespace --install -f k8s/vault-values.yaml
```
Note: Add `vault.localhost` to your hosts file (`/etc/hosts`). As we deployed Vault in development mode, you can navigate to http://vault.localhost and log in on the UI using the token `root` (you should never deploy Vault in development mode to production environments).
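Adding the hosts entry can be done in one line; this is a sketch assuming a standard `/etc/hosts` (on Windows, edit `C:\Windows\System32\drivers\etc\hosts` instead):

```shell
# Map vault.localhost to the local machine so the browser and CLI can reach the ingress
echo '127.0.0.1 vault.localhost' | sudo tee -a /etc/hosts
```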
Once the core infrastructure is deployed, we have to provision the authentication method in Vault. Terraform also provisions a transit engine, which is used in the example below. Note that the Vault policy prevents any operations other than those it explicitly allows; this gives us fine-grained access to different resources in Vault.
```shell
cd vault/environments/local
terraform init
terraform plan
terraform apply -auto-approve
```
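To check the Terraform run did what we expect, you can inspect Vault directly. A sketch, assuming `VAULT_ADDR` points at the dev-mode Vault deployed above and you log in with the dev root token:

```shell
export VAULT_ADDR=http://vault.localhost
vault login root
# The JWT auth method provisioned for SPIFFE-based authentication should be listed
vault auth list
# The transit engine used for signing later in this example should be listed
vault secrets list
```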
Within Kubernetes our SPIRE Helm chart also deploys the spire-k8s-workload-registrar. This SPIRE component takes care of registering workloads/pods with the SPIRE server. Once a workload is registered with the SPIRE server, it is given a SPIFFE ID.
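To see which registration entries the workload registrar created, you can query the SPIRE server. A sketch, assuming the server runs as a StatefulSet pod named `spire-server-0` in the `spire-system` namespace with the binary at `/opt/spire/bin` (names and paths may differ depending on the chart values):

```shell
# List all workload registration entries known to the SPIRE server,
# including the SPIFFE ID assigned to each registered pod
kubectl exec -n spire-system spire-server-0 -c spire-server -- \
  /opt/spire/bin/spire-server entry show
```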
In `k8s/spiffe-vault.yaml` we defined that we want to use the `ghcr.io/philips-labs/spiffe-vault-cosign` image, which also adds the Cosign binary to the image, so we can play with Cosign later in this example.
Let's build this custom image now and then deploy our workload to Kubernetes.
```shell
# from the example folder
docker build -t ghcr.io/philips-labs/spiffe-vault-cosign:latest spiffe-vault-cosign
helm -n my-app upgrade my-app ../charts/spiffe-vault --create-namespace --install -f k8s/spiffe-vault.yaml
```
Using our `spiffe-vault` workload, which at this stage has a SPIFFE ID, we can now authenticate to HashiCorp Vault. Vault was configured to allow authentication via a JWT token whose subject matches the SPIFFE ID.
The flow below performs the following steps:

- Open a shell to the `spiffe-vault` container in Kubernetes.
- Configure `VAULT_ADDR` to point to our Vault deployment.
- Use the `spiffe-vault` CLI to authenticate to Vault using a SPIRE-issued JWT and export `VAULT_TOKEN` in the current shell.
- Interact with Vault using the Vault CLI.
```shell
$ kubectl exec -n my-app -i -t \
  $(kubectl -n my-app get pods -l app.kubernetes.io/name=spiffe-vault -o jsonpath="{.items[0].metadata.name}") \
  -c spiffe-vault -- sh
$ eval "$(spiffe-vault auth -role local)"
$ vault list transit/keys
Keys
----
cosign
$ vault read transit/keys/cosign
Key                       Value
---                       -----
allow_plaintext_backup    false
deletion_allowed          false
derived                   false
exportable                false
keys                      map[1:map[creation_time:2021-09-27T12:28:54.878899344Z name:P-256 public_key:-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAERyHSkgCB+QrOLQEFU3W16Ir4pkir
YXNU+PgP2vEce1Klq0LfG792iLCNIODa/Jt3fw4Uu9dS7KVqM8XNsAlU1A==
-----END PUBLIC KEY-----
]]
latest_version            1
min_available_version     0
min_decryption_version    1
min_encryption_version    0
name                      cosign
supports_decryption       false
supports_derivation       false
supports_encryption       false
supports_signing          true
type                      ecdsa-p256
```
Please note that we configured Vault with a token lifetime of only 600 seconds. Before the token expires you will have to renew it or retrieve a new one using `spiffe-vault`.
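From within the container shell, either option looks roughly like this (a sketch, assuming the token is still valid when renewing and the `local` role from the Terraform configuration):

```shell
# Extend the lease on the current VAULT_TOKEN before the 600s TTL runs out
vault token renew

# ...or simply authenticate again to obtain a fresh token
eval "$(spiffe-vault auth -role local)"
```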
A practical use case for the transit engine is, for example, in combination with Cosign: we can create a signature without downloading a signing key to our local system. We used a custom build of our `spiffe-vault` image including Cosign when deploying our app. In the following workflow you might want to use your personal Docker Hub account, so replace my username with your own.
```shell
$ kubectl exec -n my-app -i -t \
  $(kubectl -n my-app get pods -l app.kubernetes.io/name=spiffe-vault -o jsonpath="{.items[0].metadata.name}") \
  -c spiffe-vault -- sh
$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: marcofranssen
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
$ docker pull busybox
$ docker tag busybox marcofranssen/busybox:latest
$ docker push marcofranssen/busybox:latest
Using default tag: latest
The push refers to repository [docker.io/marcofranssen/busybox]
cfd97936a580: Mounted from library/busybox
latest: digest: sha256:febcf61cd6e1ac9628f6ac14fa40836d16f3c6ddef3b303ff0321606e55ddd0b size: 527
$ eval "$(spiffe-vault auth -role local)"
$ cosign sign --key hashivault://cosign marcofranssen/busybox:latest
WARNING: Image reference marcofranssen/busybox:latest uses a tag, not a digest, to identify the image to sign.
This can lead you to sign a different image than the intended one. Please use a
digest (example.com/ubuntu@sha256:abc123...) rather than tag
(example.com/ubuntu:latest) for the input to cosign. The ability to refer to
images by tag will be removed in a future release.
Pushing signature to: index.docker.io/marcofranssen/busybox
$ cosign verify --key hashivault://cosign marcofranssen/busybox:latest

Verification for index.docker.io/marcofranssen/busybox:latest --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - The signatures were verified against the specified public key

[{"critical":{"identity":{"docker-reference":"index.docker.io/marcofranssen/busybox"},"image":{"docker-manifest-digest":"sha256:dacd1aa51e0b27c0e36c4981a7a8d9d8ec2c4a74bf125c0a44d0709497a522e9"},"type":"cosign container image signature"},"optional":null}]
```
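As the warning above suggests, signing by digest rather than by tag avoids signing a different image than intended. A sketch using the digest reported by the earlier `docker push`:

```shell
# Sign the image by its immutable digest instead of the mutable :latest tag
cosign sign --key hashivault://cosign \
  marcofranssen/busybox@sha256:febcf61cd6e1ac9628f6ac14fa40836d16f3c6ddef3b303ff0321606e55ddd0b
```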