Context
A fresh `operator-sdk init` + `create api` project scaffolds two test surfaces:
- `internal/controller/<kind>_controller_test.go` — a Ginkgo skeleton on envtest + the controller-runtime client, stopping at a single `Reconcile` call with a `// TODO(user): Add more specific assertions` comment.
- `test/e2e/e2e_test.go` — a Ginkgo suite driving everything through `exec.Command("kubectl", ...)` with stdout parsing.
Sawchain is a Go library (built on Chainsaw) with Gomega integration that works equally well for both: low-level integration tests against envtest and full e2e tests against a live cluster, using the same API on top of the same controller-runtime client. It offers:
- Readable, declarative assertions — YAML describes the expected resource state instead of repetitive, imperative `Get` + field-by-field checks.
- Partial matching with JMESPath — assert only the fields you care about, and bind values across assertions.
- Ergonomic lifecycle helpers — `CreateAndWait`, `UpdateAndWait`, `DeleteAndWait` bundle each write with an observability wait, so tests don't race against client-cache sync.
- Mix-and-match — YAML-driven and struct-based logic coexist in the same test; pick whichever fits each assertion.
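To make the partial-matching idea concrete, here is a standalone sketch in plain Go — illustrative only, not Sawchain's implementation — of a recursive subset check over decoded YAML/JSON-like maps: only the fields named in the expectation are compared, and everything else on the live object is ignored.

```go
package main

import "fmt"

// subsetMatch reports whether every field in want also appears in got
// with the same value, recursing into nested maps. Fields present in
// got but absent from want are ignored — the "assert only the fields
// you care about" idea.
func subsetMatch(want, got map[string]any) bool {
	for key, w := range want {
		g, ok := got[key]
		if !ok {
			return false
		}
		wm, wIsMap := w.(map[string]any)
		gm, gIsMap := g.(map[string]any)
		if wIsMap && gIsMap {
			if !subsetMatch(wm, gm) {
				return false
			}
			continue
		}
		if w != g {
			return false
		}
	}
	return true
}

func main() {
	// A decoded Pod-like object with many fields we don't care about...
	got := map[string]any{
		"kind":     "Pod",
		"metadata": map[string]any{"name": "curl-metrics", "uid": "abc-123"},
		"status":   map[string]any{"phase": "Succeeded", "hostIP": "10.0.0.1"},
	}
	// ...matched against only the fields the test actually asserts.
	want := map[string]any{
		"kind":   "Pod",
		"status": map[string]any{"phase": "Succeeded"},
	}
	fmt.Println(subsetMatch(want, got)) // true

	want["status"].(map[string]any)["phase"] = "Running"
	fmt.Println(subsetMatch(want, got)) // false
}
```

The real library adds JMESPath expressions and value bindings on top of this basic shape, but the subset semantics are what make the YAML assertions below so much shorter than field-by-field `Get` checks.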
I wanted to share it as something that could simplify and enhance both scaffolded test surfaces.
Controller integration tests (`internal/controller/...`)
Any realistic test past the scaffolded `// TODO` quickly grows into repetitive `client.Get` + field-by-field assertion loops.
Traditional pattern (simplified from a PodSet controller example):
```go
podSet := &v1.PodSet{
	ObjectMeta: metav1.ObjectMeta{Name: "test-podset", Namespace: "default"},
	Spec: v1.PodSetSpec{
		Replicas: ptr.To(2),
		Template: v1.Template{
			Name:       "test-pod",
			Containers: []v1.Container{{Name: "test-app", Image: "test/app:v1"}},
		},
	},
}
Expect(k8sClient.Create(ctx, podSet)).To(Succeed())

Eventually(func() error {
	if err := k8sClient.Get(ctx, client.ObjectKeyFromObject(podSet), podSet); err != nil {
		return err
	}
	if len(podSet.Status.Pods) != 2 {
		return fmt.Errorf("expected 2 pods, got %d", len(podSet.Status.Pods))
	}
	return nil
}).Should(Succeed())

for _, podName := range podSet.Status.Pods {
	pod := &corev1.Pod{}
	Expect(k8sClient.Get(ctx, client.ObjectKey{Name: podName, Namespace: "default"}, pod)).To(Succeed())
	Expect(pod.Spec.Containers).To(HaveLen(1))
	Expect(pod.Spec.Containers[0].Name).To(Equal("test-app"))
	Expect(pod.Spec.Containers[0].Image).To(Equal("test/app:v1"))
}
```
With Sawchain:
```go
podSet := &v1.PodSet{}
sc.CreateAndWait(ctx, podSet, `
  apiVersion: apps.example.com/v1
  kind: PodSet
  metadata:
    name: test-podset
    namespace: ($namespace)
  spec:
    replicas: 2
    template:
      name: test-pod
      containers:
        - name: test-app
          image: test/app:v1
`)

Eventually(sc.FetchSingleFunc(ctx, podSet)).Should(HaveField("Status.Pods", ConsistOf(
	"test-pod-0",
	"test-pod-1",
)))

for _, podName := range podSet.Status.Pods {
	Eventually(sc.CheckFunc(ctx, `
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: ($name)
	    namespace: ($namespace)
	  spec:
	    containers:
	      - name: test-app
	        image: test/app:v1
	`, map[string]any{"name": podName})).Should(Succeed())
}
```
`CreateAndWait` handles lifecycle, `CheckFunc` expresses declarative assertions with partial matching and JMESPath, and `FetchSingleFunc` polls state. There are also helpers like `HaveStatusCondition`, `MatchYAML`, and many more — check out the design overview for the full capabilities.
E2E tests (`test/e2e/...`)
The scaffolded e2e suite verifies cluster state by shelling out to `kubectl` and parsing the output: fragile, deeply coupled to `kubectl`'s output shape, and hard to extend.
Scaffolded pattern (from `e2e_test.go`):
```go
verifyCurlUp := func(g Gomega) {
	cmd := exec.Command("kubectl", "get", "pods", "curl-metrics",
		"-o", "jsonpath={.status.phase}",
		"-n", namespace)
	output, err := utils.Run(cmd)
	g.Expect(err).NotTo(HaveOccurred())
	g.Expect(output).To(Equal("Succeeded"), "curl pod in wrong status")
}
Eventually(verifyCurlUp, 5*time.Minute).Should(Succeed())
```
With Sawchain (same controller-runtime client, pointed at the live cluster):
```go
Eventually(sc.CheckFunc(ctx, `
  apiVersion: v1
  kind: Pod
  metadata:
    name: curl-metrics
    namespace: ($namespace)
  status:
    phase: Succeeded
`, map[string]any{"namespace": namespace})).Should(Succeed())
```
The same pattern replaces the rest of the scaffold's kubectl-shell-out sprawl: namespace setup, pod polling, endpoint checks, token minting. No `exec.Command`, no stdout parsing, no JSONPath templates.
More
I would love to hear whether this kind of integration could be valuable. Happy to discuss further! 😄