Test suite deficiencies #324
Hi @KyleOndy, great summary.
Hi, what if we cluster the tests? This would allow us to stop those containers after we are done testing that specific configuration. We would not have to totally abandon the suite and rewrite it using a different technology.
@mwlczk What do we gain from clustering? Could you explain the pros?
@johansmitsnl By clustering/grouping the tests inside(!) the bats file, we could run only the needed instance (env-combination) at a time. We would not have to run 15 different instances at the same time and put pressure on the build machines.
In addition, we could define a "small" set of standard tests and run them against every test instance, as stated by @KyleOndy, just to verify the standard behaviour works for every combination of ENV variables.
Just so I understand: you would like to loop over the different env-combinations and, on each env, run the "default" tests plus some env-specific tests?
Yes, I want to loop through the different env-combinations. This would increase test coverage as well as lower the impact on the build machines, and thus make the tests more stable.
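The loop described above could be sketched roughly as follows. This is a hypothetical illustration only: the `COMBINATIONS` entries, container commands, and test file paths are placeholders, not the project's real scripts or variable matrix.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Each entry is one hypothetical env-combination to test in isolation.
COMBINATIONS=(
  "SSL_TYPE="
  "SSL_TYPE=letsencrypt"
  "SSL_TYPE=self-signed"
)

for combo in "${COMBINATIONS[@]}"; do
  echo "=== testing combination: ${combo} ==="
  # Placeholder steps (not real project commands):
  #   docker run -d --name mail-test -e "${combo}" mail-image
  #   bats test/default.bats              # shared "default" tests
  #   bats "test/${combo%%=*}.bats"       # env-specific tests, if any
  #   docker rm -f mail-test              # tear down before the next combo
done
```

Because only one container is up at a time, the build machine never carries the full matrix of instances simultaneously, which is the stability gain being discussed.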
Seeing that #1206 and #1459 are being worked on, and since during my (many) hours of testing I did not get unstable results (except the one mentioned in #1459 a few times), I will close this issue and mark it as frozen. This issue was closed due to one or more of the following reasons:
If you think this happened by accident, or feel like this issue was not actually resolved, please feel free to re-open it. If there is an issue you could resolve in the meantime, please open a PR based on the current master branch so we can review it.
As I started looking back into #256 and #250, some maintainability issues have become apparent within the test suite. Given the current set of tests, coverage is low since we are only testing the ability to send mail on the `mail` docker image. For instance, in #250 the test passes because we only test that the service is not running; the ability to send an email is never tested, allowing the bug to be introduced.
As I see it, an ideal solution that would avoid the bugs above is to run through the basic functions of the mail server for each option that is configured.
Proposal
Possible implementation (would need to be explored):
- A set of core tests that are common across all configurations. This is a mail server, so always test that SMTP, IMAP, log configs, accounts, and any other core services are correctly working no matter what combination of environment variables is set.
- Each environment variable would have a set of core tests associated with each possible value.
Example: SSL_TYPE would have 5 sets of tests associated with it:
- SSL_TYPE= (unset) => verify no SSL is configured
- SSL_TYPE=letsencrypt => verify Let's Encrypt certificates
- SSL_TYPE=custom => verify custom certificates
- SSL_TYPE=manual => verify custom locations of your SSL certificates for non-standard cases
- SSL_TYPE=self-signed => verify self-signed certificates work
- Some kind of preprocessor for BATS that generates all test cases
Once this information is defined, it would be possible to generate complete test coverage for every configuration (all combinations of environmental variables) by using the associated set of tests for each option being tested.
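Generating "all combinations of environmental variables" is a cartesian product over each variable's value set. A minimal sketch, assuming just two variables (the second variable name and both value lists are illustrative, not the project's full matrix):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical value sets; a real generator would read these from a
# definition file covering every documented variable.
SSL_TYPES=("" letsencrypt custom manual self-signed)
SPAM_OPTS=(0 1)

count=0
for ssl in "${SSL_TYPES[@]}"; do
  for spam in "${SPAM_OPTS[@]}"; do
    # Each emitted line would become one generated test case.
    echo "case: SSL_TYPE='${ssl}' ENABLE_SPAMASSASSIN=${spam}"
    count=$((count + 1))
  done
done
echo "generated ${count} cases"
```

Even this two-variable toy produces 10 cases, which illustrates why the full product grows quickly and why the messiness concern below is warranted.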
Possible Issues
The more I think about this particular solution, the messier it seems. Ultimately, I still think there are coverage issues that should be solved.
Thoughts?