🌱 test: Improve registry+v1 render regression test #2103
Conversation
Codecov Report

❌ Patch coverage is …

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #2103      +/-   ##
==========================================
- Coverage   73.59%   72.84%    -0.75%
==========================================
  Files          78       79        +1
  Lines        7260     7340       +80
==========================================
+ Hits         5343     5347        +4
- Misses       1567     1644       +77
+ Partials      350      349        -1

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
@@ -1,7 +0,0 @@
- ## registry+v1 bundle generation regression tests
We do not need the README anymore. The code is now documented, so we can clarify things directly there, which keeps it simpler.
Makefile (outdated)

@@ -252,6 +246,10 @@ test-unit: $(SETUP_ENVTEST) envtest-k8s-bins #HELP Run the unit tests
	$(UNIT_TEST_DIRS) \
	-test.gocoverdir=$(COVERAGE_UNIT_DIR)

.PHONY: test-regression
test-regression: #HELP Run regression test
	go test -count=1 -v ./test/regression/...
We now have a dedicated directory for regression tests, so it's easy to add more cases in the future.
It also makes it very clear what this test does and why it's there.
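For illustration, a minimal sketch of how a dedicated regression directory can host table-driven cases; `renderBundle` and the exact testdata layout here are hypothetical, not the code from this PR:

```go
package regression

import (
	"os"
	"path/filepath"
	"testing"
)

// renderBundle is a hypothetical stand-in for the registry+v1 renderer
// under test; the real helper in this PR may differ.
func renderBundle(t *testing.T, bundleDir string) []byte {
	t.Helper()
	// ... invoke the renderer on bundleDir (elided in this sketch).
	return nil
}

func TestRegistryV1RenderRegression(t *testing.T) {
	entries, err := os.ReadDir(filepath.Join("testdata", "bundles"))
	if err != nil {
		t.Fatalf("reading testdata: %v", err)
	}
	// One subtest per bundle directory: adding a new regression case is
	// mostly a matter of dropping in new testdata, not new Go code.
	for _, e := range entries {
		t.Run(e.Name(), func(t *testing.T) {
			got := renderBundle(t, filepath.Join("testdata", "bundles", e.Name()))
			want, err := os.ReadFile(filepath.Join("testdata", "expected-manifests", e.Name()+".yaml"))
			if err != nil {
				t.Fatalf("reading expected manifests: %v", err)
			}
			if string(got) != string(want) {
				t.Errorf("rendered output drifted from expected manifests for %s", e.Name())
			}
		})
	}
}
```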
This does not appear to be invoked in any CI; specifically, it is only included in `make test`, which is something we only run locally.
I did not realise that we did not call `make test` in the CI :-(
Good catch 💯
Since we have a specific action for unit tests and can, in the future, expand the regression tests to cover other scenarios, I added a new action to call this one.
@tmshort could we move forward with this one now?
@@ -172,15 +172,9 @@ generate: $(CONTROLLER_GEN) #EXHELP Generate code containing DeepCopy, DeepCopyI
	$(CONTROLLER_GEN) --load-build-tags=$(GO_BUILD_TAGS) object:headerFile="hack/boilerplate.go.txt" paths="./..."

.PHONY: verify
verify: k8s-pin kind-verify-versions fmt generate manifests crd-ref-docs generate-test-data #HELP Verify all generated code is up-to-date. Runs k8s-pin instead of just tidy.
If `make verify` failed in CI, most people would just run it locally and push the changes; that's normal, since `verify` usually runs things like `go fmt`, `vet`, and `controller-gen` (as in all OF projects). But it was also re-generating the regression test output, which should never change, or is at least very unlikely to change. Then:
- If that output changed because of a bug, we could accidentally miss it and merge a regression.
- It also wasn't clear to reviewers that files in `testdata/expected-manifests/` should never be edited or auto-regenerated.
- Expecting everyone to remember to check that manually is risky and easy to miss.
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: perdasilva

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
cmd.Stdout = os.Stdout

err := cmd.Run()
require.NoError(t, err, "failed to generate manifests")
Spawning `go run generate-manifests.go` inside your test will recompile the whole helper and is fairly slow. You could factor the logic in generate-manifests.go into a package and call that from the test binary. This will surface compile-time errors directly in the test binary.
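A rough sketch of that suggestion, assuming the helper's logic is factored into an importable function; `writeManifests` is hypothetical, standing in for whatever generate-manifests.go's main() does today:

```go
package regression

import "testing"

// writeManifests is a hypothetical stand-in for the logic currently in
// generate-manifests.go's main(); once factored into a package-level
// function, the test can call it in-process.
func writeManifests(outDir string) error {
	// ... render the bundles and write the YAML under outDir (elided).
	return nil
}

func TestGenerateManifestsInProcess(t *testing.T) {
	outDir := t.TempDir() // cleaned up automatically by the test framework
	// No `go run` subprocess: compile errors in the generator now fail
	// this test binary's build, and each run skips the recompile cost.
	if err := writeManifests(outDir); err != nil {
		t.Fatalf("failed to generate manifests: %v", err)
	}
}
```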
Thanks for the feedback!
You're right; using `go run generate-manifests.go` is a bit slower since it recompiles the helper. I timed it, and calling the logic directly would be around 500ms to 2 seconds faster, but `go run` feels fine for this test since it only runs once, not per test case.
We now have a stronger check in place for renderer output, and I think it's solid enough to move forward with this version. I do agree that there may be room for improvement, but it might be more effort than it's worth at the moment.
That said, I'd be happy if someone wants to improve it later!
WDYT, okay to move forward for now?
-	if err := os.RemoveAll(outputRootDir); err != nil {
+	if err := os.RemoveAll(*outputRootDir); err != nil {
 		fmt.Printf("error removing output directory: %v\n", err)
Should this go to stderr so it doesn't mix with stdout at some point? Or maybe even import `log` and use `log.Fatalf`?
The test was failing because it couldn't remove a temporary directory.
I don't think that should be a hard failure: if this happens in CI, the container will be cleaned up anyway when the job ends. It's not critical to the test result itself.
I also added the directory to `.gitignore` to make sure we don't accidentally commit it in any case.
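For illustration, a small sketch combining both points: warning on stderr as suggested above while keeping the failure non-fatal. The function name and call site are assumptions, not the PR's exact code.

```go
package main

import (
	"fmt"
	"os"
)

// cleanupOutputDir removes the generated output directory. A failure is
// warned on stderr rather than treated as fatal: CI workspaces are
// discarded when the job ends, and the directory is gitignored.
func cleanupOutputDir(outputRootDir string) {
	if err := os.RemoveAll(outputRootDir); err != nil {
		// stderr keeps the warning separate from generated manifests on stdout.
		fmt.Fprintf(os.Stderr, "warning: error removing output directory: %v\n", err)
	}
}

func main() {
	// Hypothetical usage at the end of the generator's run.
	cleanupOutputDir("test/regression/output")
}
```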
- uses: codecov/codecov-action@v5.4.3
  with:
    disable_search: true
    files: coverage/unit.out
    flags: unit
    token: ${{ secrets.CODECOV_TOKEN }}
I don't think we need the codecov action here. This could inadvertently trigger codecov to think it's received all its data before the (real) tests complete. If you want this here, you should increase the value in operator-controller/codecov.yml (line 3 in e9c5b6e): after_n_builds: 3
The number 3 corresponds to test-unit, test-e2e and test-experimental-e2e.
Interesting; now I think I understand better why codecov instead of the old coveralls :-)
Actually interesting: the experimental-e2e isn't creating its own directory, it's sharing it with e2e, so there are still 3 files, which I'm not sure corresponds to three pushes...
However, I'm not sure it's a good idea to code-cover the test harness, as that gives a false impression of code coverage (either positive or negative).
Regardless, I'll let this pass.
If we add more tests here, won't we be covering more scenarios and more areas of the code each time?
I think we need to fix the experimental-e2e. I can look at that in a follow-up.
Could you LGTM this one so we can get it merged?
/lgtm

Merged 1e513ca into operator-framework:main
Simplify and fix render regression test

`make verify` is used for codegen (`go fmt`, `controller-gen`, etc), not for regenerating testdata that should never change. See context.

This PR keeps the regression test's expected output under `testdata/`, following Go test conventions. Safer, clearer, and avoids mistakes in reviews.