WIP status: Search for `?)` in this text to find pending decisions. Search for TODO to find pending work.
Kubernetes Assert is our re-think of build-contract, for Kubernetes. As usual we don't write our own tooling; we combine mainstream tools that we find lean enough.
Summary:
- If you don't have a monitoring stack already, use the one from this repo. It's the Prometheus setup we use in dev clusters.
- Arrange your specs like in our example with a skaffold.yaml and kustomization.yaml.
- Our near-term roadmap is to support "just put your spec files here", but we're not there yet: you still need a Dockerfile.
- You need a way to build with Skaffold. Our examples use y-stack, which is completely local, but any build method is fine.
- Run `skaffold dev`.
- Make sure Prometheus will scrape `assertions_failed`.
- Watch for alerts using, for example,
  `kubectl -n monitoring exec alertmanager-main-0 -c alertmanager -- wget -qO- http://127.0.0.1:9093/api/v2/alerts | jq`
  or the web interface, or a pager.
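The alertmanager query above returns JSON that jq can reduce to just the firing alerts. A minimal sketch, using a canned payload in place of the in-cluster response (the alert name `AssertionsFailed` and the namespaces are hypothetical; the payload shape follows the Alertmanager v2 API):

```shell
# Canned stand-in for the wget output above; in-cluster you would pipe
# the kubectl exec command into the same jq filter.
alerts='[
  {"labels":{"alertname":"AssertionsFailed","namespace":"my-tests"},"status":{"state":"active"}},
  {"labels":{"alertname":"Watchdog","namespace":"monitoring"},"status":{"state":"suppressed"}}
]'
# Keep only alerts that are actively firing and print their names
echo "$alerts" | jq -r '.[] | select(.status.state == "active") | .labels.alertname'
```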
The idea is that any container that exports an `assertions_failed` counter (OR gauge?) is a test.
Obviously the metric needs to be scraped, and your team must be alerted about any(?) non-zeroness.
Integration test results are inherently less binary than unit test results. You may want to tweak the alerting parameters, maybe on a per-test basis.
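Such an alert could be declared as a prometheus-operator `PrometheusRule`. This is a sketch only, with hypothetical names, and the `expr`/`for` values are exactly the parameters you may want to tweak per test:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kubernetes-assert   # hypothetical name
  labels:
    # must match the ruleSelector of your Prometheus resource,
    # i.e. the label(s) your kustomization adds
    role: alert-rules
spec:
  groups:
  - name: kubernetes-assert
    rules:
    - alert: AssertionsFailed
      expr: assertions_failed > 0
      for: 1m
      labels:
        severity: warning
      annotations:
        summary: 'Pod {{ $labels.pod }} reports failed assertions'
```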
There won't be any tests if the dev loop is unpleasant. Our idea of a good dev loop is that while you're editing your specs, they re-run on save and you'll see the test output.
The goal however is that tests run unattended, and reliably call for your attention when they fail. At that point you'll want to know which test suite and test run failed, and see the output.
Assumptions:
- Tests must run in-cluster. That's what they'll do during CI.
- Running them locally while developing would cause "works for me" issues.
- Skaffold is a good enough dev loop tool.
- Because Skaffold is the dev loop tool, `skaffold run` is the CI tool.
- The test environment is one of:
  - ... so as the test author you should simply assume that whatever is in your skaffold.yaml's `deploy` section has been applied (you'll do the waiting), and the given context's current namespace is all yours.
- TODO There'll be RBAC for read access, including access to logs.
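Under those assumptions a spec project's skaffold.yaml can stay small. A sketch only (the image name is hypothetical, and the apiVersion depends on your Skaffold version):

```yaml
apiVersion: skaffold/v1
kind: Config
build:
  artifacts:
  - image: my-registry/my-specs   # hypothetical image name
deploy:
  kustomize:
    path: .
```

`skaffold dev` then rebuilds and redeploys on save, and `skaffold run` does the same once for CI.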
We recommend using a runtime for tests to a) minimize boilerplate and b) aid "common code ownership" of specs by enforcing a structure.
Our first runtime for Kubernetes Assert is based on Jest:
- BDD style
- Specs don't need to be compiled, we can copy source to the runtime.
- The test runner will translate Jest results to `assertions_failed`. Specs are free to export other metrics of any kind.
- By design we always run in watch mode. If specs are static that equals one test run, but the process stays alive to keep exporting metrics.
- Success criteria and cleanup are implemented by the pipeline.
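On the runtime's metrics endpoint the translated result might look like this exposition-format sketch (the `suite` label is hypothetical, and counter vs gauge is the open question noted above):

```shell
# Sample of what a test pod's /metrics endpoint could expose
metrics='# HELP assertions_failed Number of failed assertions in the latest run
# TYPE assertions_failed gauge
assertions_failed{suite="example-small"} 0'
echo "$metrics"
```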
How are specs delivered to the runtime?
- If your specs are a "project", i.e. have a package.json, you need a proper build step. Use the runtime as base image. Install to `/usr/src/specs` (?). See example TODO.
  - Note that Jest `--watch` (i.e. the runtime's `skaffold dev`) requires source to be in a git repo. A `git init` with no commits is fine, but remember to include your `.gitignore`.
- If your specs are fine with the runtime's dependencies (feel free to have utility .js files alongside specs) they need to be mounted or copied to `/usr/src/specs/src` (?)
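For the "project" case, the Dockerfile can be a thin layer on the runtime image. A sketch, assuming the runtime is published as `solsson/kubernetes-assert` (see the build hooks below) and that `/usr/src/specs` stays the install path:

```dockerfile
FROM solsson/kubernetes-assert:latest
WORKDIR /usr/src/specs
# install spec dependencies on top of the runtime's
COPY package.json ./
RUN npm install
# copy specs; keep .gitignore (and the git repo) in the build context
# so Jest --watch works, as noted above
COPY . .
```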
How to avoid boilerplate?
- You still need yaml, but the actual workflow definition can be inherited from the runtime's Kustomize base `github.com/Yolean/kubernetes-assert/runtime-nodejs/kustomize/?ref=[your choice]`.
- Create your kustomization.yaml, then run `skaffold init`. See example TODO.
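With the inherited base, the kustomization.yaml can be as small as this sketch (pick a ref as noted above; your own resources come on top):

```yaml
bases:
- github.com/Yolean/kubernetes-assert/runtime-nodejs/kustomize/?ref=[your choice]
# add your spec project's own resources and labels here
```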
Assuming that `github.com/coreos/prometheus-operator/?ref=[a recent revision]` is already installed, start from the example kustomize base:
kubectl apply -k example-small
kubectl -n monitoring create -k kubernetes-mixin-dashboards
kubectl apply -k grafana
Note how Prometheus will match rules and monitors using the label(s) that the kustomization.yaml adds.
A real stack might start from example-small and then:
- Change the replicas count for prometheus and alertmanager
- Change the current retention for prometheus to a bit longer
- Add aggregation and long-term storage, presumably using Thanos
This repo needs to have some generated content where upstream kustomize bases could not be found:
docker-compose -f docker-compose.test.yml build --no-cache kubernetes-mixin
docker-compose -f docker-compose.test.yml up --no-build kubernetes-mixin
Build only:
NOPUSH=true IMAGE_NAME=solsson/kubernetes-assert:latest ./hooks/build
Integration test:
docker volume rm kubernetes-monitoring_admin 2> /dev/null || true
./test.sh
WIP. We tend to use y-stack, but when working with tests (in particular failing ones) it might help to reuse the CI stack.
compose='docker-compose -f docker-compose.test.yml -f docker-compose.dev-overrides.yml'
$compose down \
;docker volume rm kubernetes-monitoring_admin kubernetes-monitoring_k3s-server 2>/dev/null || true
sudo rm test/.kube/kubeconfig.yaml
$compose up -d sut
export KUBECONFIG=$PWD/test/.kube/kubeconfig.yaml
git push
# wait for https://hub.docker.com/r/solsson/kubernetes-assert, then
YOLEAN_PROMOTE=true IMAGE_NAME=solsson/kubernetes-assert:latest ./hooks/build
# update examples so that people get started from refs
grep -A 1 bases: runtime-nodejs/example-project/kustomization.yaml
grep FROM runtime-nodejs/example-project/Dockerfile
# validate examples