Revisiting finalizers #64
(finally chiming in on this / #57 for the first time) I would very much like to have a way to always run a finalizer/teardown/etc. script on every test. I don't have a strong opinion on whether this is a tweak to the existing finalizer functionality or whether a new script (e.g. Teardown) is introduced to provide it; for brevity, I'll simply say "finalizer" for the rest of this comment.

Our tests often have significant impacts on the state of the system, including stopping/starting daemons, and it can take some time for these to settle after a test completes (sometimes 30 seconds or more). In particular, I would like the ability to run a finalizer script in btest that always runs, no matter the outcome of the test itself. For me, that script would wait for the system to stabilize and, if that doesn't happen within a timeout, output some debugging information and fail the test.

I have a test suite that currently consists of 659 btest scripts, a significant fraction of which might impact the system in this way, and I'd like to avoid adding boilerplate to each and every test (in fact, it was @awelzel seeing that boilerplate that led him to propose #57 initially). My key requirements:
This sounds good to me, incl. @deejgregor's requirements, except that I suggest we use a new keyword instead of repurposing `Finalizer`.
This is a great suggestion. #57 tried that as a command-line parameter, but `TEST_FAILED=0|1` in the environment is much cleaner.
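For illustration, a teardown script along these lines could look like the sketch below. The `TEST_FAILED` variable is the suggestion above, not yet an established btest interface, and the debug/settle steps are placeholders:

```python
#!/usr/bin/env python3
# Hypothetical teardown script; assumes btest exports TEST_FAILED=0|1
# as suggested above (a sketch, not established btest behavior).
import os
import sys

def main() -> int:
    if os.environ.get("TEST_FAILED", "0") == "1":
        # Site-specific debugging would go here (daemon status, logs, ...).
        print("test failed; collecting debug state", file=sys.stderr)
    # Always let the system settle here, success or failure; returning 0
    # keeps the teardown itself from flipping the test result.
    return 0

main()
```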
Great stuff, looks like we have a way forward then.
* topic/christian/gh-64:
  - Documentation for PartInitializer, Teardown, and PartTeardown.
  - Add testcases for new functionality.
  - Add PartInitializer option.
  - Support for global and test-part teardowns.
  - Remove Python 2.7 from manifest.
FYI, I just released this as 0.71, so there's a minimum version to require when using the teardown. It'll be good to test this now to see whether it works for everyone as planned.
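For reference, wiring the new option into a project's configuration might look like the fragment below. This is a sketch: the option spellings follow the merge above, and `MinVersion` mirrors btest's version-pinning support, but both should be verified against the 0.71 documentation. Paths are illustrative.

```ini
# Hypothetical btest.cfg fragment; verify option names against btest 0.71.
[btest]
TestDirs    = tests
TmpDir      = %(testbase)s/.tmp
MinVersion  = 0.71
Initializer = %(testbase)s/scripts/setup
Teardown    = %(testbase)s/scripts/teardown
```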
I'd still like to have a way to run finalizers reliably after any test, failing or succeeding. My use case would not be to check the environment for unexpected properties, but to shut down a setup created by the initializer.
We previously discussed this in #57 and pulled the plug at the time. I just looked at this for a bit and here's my take:
I think it would be acceptable to change the semantics of `Finalizer` (and `PartFinalizer`!) to behave in this way, instead of adding a new option. I've looked over our Zeek projects, Corelight, and the Zeek package source, and I've only found actual finalizer use in three public packages, all containing the same single script, which checks the status of the environment after a test. Running this after a failing test wouldn't change the overall result.

We can introduce a deprecation period to make this palatable for current finalizer users: for now, we simply flag, when the test parser comes upon finalizers, that finalizer semantics will change in an upcoming version. (This output won't affect test outcomes via baselines etc., since it's not part of the command execution.)
I think it is fine to keep the semantics of the current finalizers, i.e., the fact that they can fail the test if they want to. It seems reasonable functionality to me, and I don't have to use it if I just want to run some activity. I just need to ensure I return success in any case.
We could start tracking the failure state of the current test via an environment variable, alongside the other `TEST_*` ones. `TEST_FAILED` could become 1, say, whenever the `failures` counter becomes non-zero. That way, you can build finalizers that are sensitive to the outcome of the previous test sequence.

As an aside, I think we should release 1.0.0 in the not-too-distant future and adopt semantic versioning. That could then be the time when we switch the finalizers to the new behavior.
@awelzel @deejgregor @bbannier @rsmmr (whoa :-) ... thoughts?