
docs(conformance): explain parity testing and how it differs from E2E #386

Merged
vieiralucas merged 5 commits into main from docs/conformance-parity-section on Apr 14, 2026

Conversation

vieiralucas (Member) commented Apr 14, 2026

Summary

Document every test layer that contributes to fakecloud's conformance story, including tfacc and parity testing, both of which are new this week.

What changed

Rewrote the "layers of tests" intro on /docs/about/conformance/ from three categories to four, and added full sections on the two that didn't exist on the page yet.

The four layers

  • Conformance — "does fakecloud match AWS's API contract?" Generated from AWS's Smithy models. Already documented.
  • E2E — "does fakecloud work?" Exercises fakecloud's own surface (introspection, persistence, tick processors, warm containers) plus cross-service wiring. Already documented.
  • Parity — "does fakecloud behave the same as real AWS on the things they both do?" New section, covers the fakecloud-parity crate, the dual-backend harness (a sketch of the idea follows this list), the pass/fail comparison rule of thumb, current 7-service coverage, the weekly cadence, and the full security posture (protected environment, OIDC, CODEOWNERS, scoped IAM).
  • tfacc — "does fakecloud pass HashiCorp's own Terraform provider acceptance tests?" New section, covers the fakecloud-tfacc crate, why it's the strongest single conformance signal (the tests were written by the Terraform team against real AWS, not by us), current 12-service coverage, the pinned v5.97.0 provider tag, the allow-list model vs. bblommers/localstack-terraform-test's deny-list, and the hard-fail-on-missing-toolchain behavior.
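
To make the dual-backend harness concrete, here is a minimal Rust sketch of the idea. The `Backend`, `Outcome`, and `run_against` names are invented for illustration; they are not the real fakecloud-parity API.

```rust
/// The two backends a parity test can target.
#[derive(Debug, Clone, Copy)]
enum Backend {
    Fakecloud, // local emulator, exercised on every push
    RealAws,   // live account, exercised on the weekly cadence
}

/// One API call's outcome, reduced to what parity compares.
#[derive(Debug, PartialEq)]
enum Outcome {
    Success,                // call succeeded with the expected fields
    Error { code: String }, // service error code, e.g. "NoSuchBucket"
}

/// Hypothetical helper: issue the same operation against one backend.
/// A real harness would dispatch to an SDK client configured either
/// with fakecloud's endpoint URL or with real-AWS credentials.
fn run_against(backend: Backend, op: &str) -> Outcome {
    unimplemented!("sketch only: {op} against {backend:?}")
}

#[test]
fn head_missing_bucket_parity() {
    let fake = run_against(Backend::Fakecloud, "HeadBucket missing-bucket");
    let real = run_against(Backend::RealAws, "HeadBucket missing-bucket");
    // The rule of thumb: both backends must succeed together or fail
    // with the same error code; volatile fields never enter into it.
    assert_eq!(fake, real);
}
```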

Why this matters

HN commenters and serious evaluators will want to understand on what basis fakecloud claims 100% conformance. The three non-trivial questions are:

  1. "Why doesn't E2E run against AWS?" — answered by the philosophical distinction (E2E tests fakecloud's own surface, a lot of which doesn't exist on real AWS) plus the practical cost/speed argument.
  2. "Why is parity coverage only 7 services and weekly?" — answered by the budget and cadence explanation.
  3. "How do you know fakecloud matches AWS when you wrote the tests yourself?" — answered by tfacc, which runs tests written by HashiCorp, not by us.

The purpose of this PR is to give all three questions honest, specific, engineering-grounded answers in one place.

Test plan

  • Zola builds clean locally (44 pages, 0 orphans)
  • CI green

Three changes:

- Rewrite the "layers of tests" intro to name three categories
  (conformance, E2E, parity) and say what question each one answers.
  Conformance asks "does fakecloud match AWS's contract"; E2E asks
  "does fakecloud work"; parity asks "does fakecloud behave the same
  as real AWS on the things they both do".

- New "Parity testing against real AWS" section documenting the
  fakecloud-parity crate: how the dual-backend harness works, the
  pass/fail comparison rule of thumb, current coverage (7 services),
  the cadence (weekly for real-AWS), and the security posture
  (environment gate, OIDC, CODEOWNERS, scoped IAM).

- New "Why E2E isn't parity" subsection explaining why running the
  E2E suite against real AWS doesn't make sense: a lot of what E2E
  exercises is fakecloud-native surface (introspection, persistence,
  tick processors) that doesn't exist on AWS, and the parts that
  could run against AWS would be prohibitively expensive at
  every-push cadence.
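
For concreteness, a minimal sketch of that comparison rule, assuming
responses flattened to string maps and an invented VOLATILE field list
rather than fakecloud-parity's real types:

```rust
use std::collections::BTreeMap;

/// Fields that may legitimately differ between the two backends.
const VOLATILE: &[&str] = &["RequestId", "HostId", "Date"];

/// Drop volatile fields so only contract-level fields remain.
fn normalize(resp: &BTreeMap<String, String>) -> BTreeMap<String, String> {
    resp.iter()
        .filter(|(k, _)| !VOLATILE.contains(&k.as_str()))
        .map(|(k, v)| (k.clone(), v.clone()))
        .collect()
}

/// The comparison rule: after normalization, responses must match.
fn outcomes_match(
    fake: &BTreeMap<String, String>,
    real: &BTreeMap<String, String>,
) -> bool {
    normalize(fake) == normalize(real)
}

fn main() {
    let fake = BTreeMap::from([
        ("Location".to_string(), "/my-bucket".to_string()),
        ("RequestId".to_string(), "fake-0001".to_string()),
    ]);
    let real = BTreeMap::from([
        ("Location".to_string(), "/my-bucket".to_string()),
        ("RequestId".to_string(), "8X2AEXAMPLE".to_string()),
    ]);
    assert!(outcomes_match(&fake, &real)); // request IDs are ignored
}
```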

The purpose is to let readers understand the engineering reasoning
behind keeping parity coverage narrow and weekly rather than broad
and continuous.

cubic-dev-ai (bot) left a comment


No issues found across 1 file

Missed this earlier. fakecloud-tfacc runs HashiCorp's own Terraform
provider acceptance tests against fakecloud — the upstream TestAcc*
functions from terraform-provider-aws, pinned to v5.97.0, covering 12
services today. This is actually the strongest single conformance
signal fakecloud has because the tests were written by the Terraform
team against real AWS, not by us.

Rewrite the layers intro from three categories to four, and add:

- A "Terraform provider acceptance tests (tfacc)" section explaining
  the harness, why it's the strongest conformance signal (tests aren't
  written by us and can't be gamed), current coverage, the allow-list
  model (inverted from bblommers/localstack-terraform-test's deny-list
  approach to match the parity-per-implemented-service invariant), and
  the hard-fail-on-missing-toolchain behavior.

- A final "four layers together" closer naming what each catches.
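
A minimal sketch of the allow-list and hard-fail behavior, with
invented names (the ALLOWED table and its prefixes) rather than
fakecloud-tfacc's real configuration:

```rust
use std::process::Command;

/// Hypothetical allow-list: one upstream TestAcc* prefix per service
/// fakecloud implements. A new upstream test is skipped by default
/// until its service is deliberately added here -- the inverse of a
/// deny-list, which runs everything not explicitly excluded.
const ALLOWED: &[(&str, &str)] = &[
    ("s3", "TestAccS3Bucket"),
    ("sqs", "TestAccSQSQueue"),
    // ... one entry per implemented service ...
];

fn main() {
    // Hard-fail when the Go toolchain is missing, rather than letting
    // "zero tests ran" masquerade as a green run.
    Command::new("go")
        .arg("version")
        .output()
        .expect("go toolchain not found; tfacc cannot run");

    // Build the `go test -run` pattern from the allow-list.
    let pattern = ALLOWED
        .iter()
        .map(|(_, prefix)| *prefix)
        .collect::<Vec<_>>()
        .join("|");
    println!("go test -run '^({pattern})' ./internal/service/...");
}
```
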
"Strongest single conformance signal" and "the test layer we pay the
most attention to" read as self-congratulatory. The interesting fact
is that we don't write the tests — state it, let the reader rank it.

- Conformance assertions aren't really "ours" — they're extracted from
  the Smithy model. The real difference between conformance and tfacc
  isn't authorship, it's kind: conformance checks shape, tfacc checks
  end-to-end resource-lifecycle behavior. Rewrite "What makes tfacc
  different" to say that honestly instead of framing it as ours-vs-theirs.
- Drop the v5.97.0 version pin detail and the "bumping the tag is a
  deliberate edit" paragraph. Readers don't need the implementation
  detail; they need the what and why.

Previous wording diminished conformance to make tfacc look different:
"exhaustive but about shape", "not are the shapes right? but does
fakecloud behave well enough", "conformance keeps fakecloud honest
about what it claims to return, tfacc about whether that's enough to
be useful". All three imply conformance is a weaker check.

Both conformance and tfacc are grounded in external authoritative
sources (AWS's Smithy models and HashiCorp's acceptance suite,
respectively) — they catch different classes of bug, but neither is
"weaker" than the other. Rewrite the "how they fit together" section
and the closing summary to present them as equally valuable.
vieiralucas merged commit 09fc655 into main on Apr 14, 2026
33 checks passed
vieiralucas deleted the docs/conformance-parity-section branch on April 14, 2026 at 18:15