Helm Chart Playground

Pipeline

Prerequisites

minikube, Skaffold, Helm and ct (chart-testing):

brew install minikube skaffold helm chart-testing

Optional, for a Lefthook-based Git hooks setup:

brew install golangci-lint hadolint lefthook prettier yamllint && lefthook install

Development

The implementation was built on Kubernetes 1.26.3. Start a minikube cluster:

minikube start --kubernetes-version 1.26.3

Then start Skaffold in continuous watch mode:

skaffold dev

Testing

Terratest-based tests (unit + integration):

eval $(minikube docker-env)
cd test
go test "./..."

Smoke test Helm chart:

ct install --chart-dirs . --charts charts/housekeeping

Note: the integration and smoke tests currently assume a running Kubernetes cluster.
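
To check that the cluster from the Development section is actually up before running them, something like the following should do:

# standard minikube/kubectl status checks, assuming the minikube cluster started above
minikube status
kubectl cluster-info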

Observe results

kubectl logs -l service=housekeeping -n housekeeping -f

Deploy a non-compliant workload:

kubectl apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: bad-nginx
spec:
  containers:
    - image: nginx
      name: nginx
EOF

Deploy a compliant workload:

kubectl apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: good-nginx
  labels:
    team: test
spec:
  containers:
    - image: bitnami/nginx
      name: nginx
EOF

Output (evaluated once per minute):

{"pod":"bad-nginx","rule_evaluation":[{"name":"image_prefix","valid":false},{"name":"team_label_present","valid":false},{"name":"recent_start_time","valid":true}],"evaluated_at":"2023-05-25T15:09:45.398228881Z"}
{"pod":"good-nginx","rule_evaluation":[{"name":"image_prefix","valid":true},{"name":"team_label_present","valid":true},{"name":"recent_start_time","valid":true}],"evaluated_at":"2023-05-25T15:09:45.398848381Z"}

Installation

Install the chart from the GitHub Container Registry (ghcr.io):

helm upgrade --install --wait housekeeping oci://ghcr.io/carhartl/charts/housekeeping
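
One way to confirm the release afterwards (a sketch, assuming the workload ends up in the housekeeping namespace, as in the logs command above):

# assumes the chart deploys its pods into the "housekeeping" namespace
helm list
kubectl get pods -n housekeeping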

Releases

Releases are published as a signed container image, along with in-toto attestations for a successful vulnerability scan and an SBOM (all of it using Cosign keyless signing).

Verify the vulnerability scan attestation:

cosign verify-attestation ghcr.io/carhartl/helm-chart-playground/housekeeping:af82115d5e3d54039de0d1d086aaec0e452e7969 --certificate-oidc-issuer=https://token.actions.githubusercontent.com --certificate-identity-regexp=carhartl --type vuln

Verify the SBOM attestation:

cosign verify-attestation ghcr.io/carhartl/helm-chart-playground/housekeeping:af82115d5e3d54039de0d1d086aaec0e452e7969 --certificate-oidc-issuer=https://token.actions.githubusercontent.com --certificate-identity-regexp=carhartl --type spdxjson
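
Since the image itself is signed, its signature can presumably be verified the same way, using the same identity constraints:

# keyless verification of the image signature (same issuer/identity as above)
cosign verify ghcr.io/carhartl/helm-chart-playground/housekeeping:af82115d5e3d54039de0d1d086aaec0e452e7969 --certificate-oidc-issuer=https://token.actions.githubusercontent.com --certificate-identity-regexp=carhartl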

Notes

Implementation

Assumptions:

  • We're not supposed to evaluate pods in the kube-system namespace or in the housekeeping namespace, where the service is deployed.
  • We're not supposed to evaluate init containers.

Alternatives

The Kyverno policy engine allows such rules to be written declaratively, similar to other Kubernetes resources. Rules can either be executed in audit mode or be strictly enforced via an admission controller, with results available as custom resources to inspect in the cluster (when in audit mode), as well as via Prometheus metrics.

To give an idea, here are the image prefix, team label and recent start time rules implemented as Kyverno policies (audit mode):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-bitnami-image
spec:
  validationFailureAction: Audit
  background: true
  rules:
    - name: validate-image
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Only Bitnami images are allowed."
        pattern:
          spec:
            containers:
              - image: "bitnami/* | docker.io/bitnami/*"
---
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Audit
  background: true
  rules:
    - name: check-for-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label `team` is required."
        pattern:
          metadata:
            labels:
              team: "?*"
---
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-recent-start-time
spec:
  validationFailureAction: Audit
  background: true
  rules:
    - name: check-for-time-of-pod-creation
      match:
        any:
          - resources:
              kinds:
                - Pod
      preconditions:
        any:
          - key: "{{ request.object.status.phase || '' }}"
            operator: Equals
            value: Running
      validate:
        message: "Pods running for more than a 1 week are prohibited."
        deny:
          conditions:
            all:
              - key: "{{ time_since('', '{{request.object.metadata.creationTimestamp}}', '') }}"
                operator: GreaterThan
                value: 168h
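
In audit mode, Kyverno makes the results of these policies available as PolicyReport custom resources, which can be inspected with kubectl (assuming Kyverno is installed with its default report generation enabled):

# list policy reports across namespaces, then inspect one in detail
kubectl get policyreports --all-namespaces
kubectl get policyreport -n default -o yaml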

Not sure how to export the results as JSON log lines though :)