
Test suite deficiencies #324

Closed
KyleOndy opened this issue Sep 12, 2016 · 8 comments

Comments

@KyleOndy
Contributor

KyleOndy commented Sep 12, 2016

As I started looking back into #256 and #250, some maintainability issues have become apparent within the test suite. Given the current set of tests, coverage is low, since we are only testing the ability to send mail on the mail Docker image.

For instance, in #250, the test passes because we only test that the service is not running; however, the ability to send an email is never tested, which allowed the bug to be introduced.

As I see it, an ideal solution that would avoid the bugs above is to run through the basic functions of the mail server for each option that is configured.
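
For illustration, a check along these lines could have caught a regression like the one in #250, since it verifies that SMTP actually answers instead of only checking process state. This is only a rough sketch: the container name `mail` and the availability of `nc` inside the image are assumptions, not the project's actual test fixtures.

```bash
#!/usr/bin/env bats
# Sketch only: "mail" is a placeholder container name, and nc is assumed to
# be installed in the image. The point is to exercise the protocol itself.

@test "checking smtp: server answers an EHLO on port 25" {
  run docker exec mail /bin/sh -c "printf 'EHLO test.local\r\nQUIT\r\n' | nc -w 5 localhost 25"
  [ "$status" -eq 0 ]
  [[ "$output" == *"250"* ]]
}
```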


Proposal

A possible implementation would need to be explored.

A set of core tests that are common across all configurations

This is a mail server, so always verify that SMTP, IMAP, logging configuration, accounts, and any other core services work correctly, no matter what combination of environment variables is set.

Each environment variable would have a set of core tests associated with each of its possible values.

Example: SSL_TYPE would have five sets of tests associated with it (a sketch of two of them follows the list below).

  1. SSL_TYPE= => verify no SSL is configured
  2. SSL_TYPE=letsencrypt => verify Let's Encrypt certificates
  3. SSL_TYPE=custom => verify custom certificates
  4. SSL_TYPE=manual => verify custom locations of your SSL certificates for non-standard cases
  5. SSL_TYPE=self-signed => verify self-signed certificates work
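
As a rough illustration, two of those five sets could look like the following. The container name `mail_ssl` is a placeholder, and the checks lean on Postfix's `postconf` output; these are assumptions for illustration, not the project's actual test conventions.

```bash
# Sketch only: mail_ssl is a placeholder container started with the
# corresponding SSL_TYPE value.

@test "checking ssl: SSL_TYPE= does not advertise STARTTLS" {
  run docker exec mail_ssl /bin/sh -c "printf 'EHLO test.local\r\nQUIT\r\n' | nc -w 5 localhost 25"
  [[ "$output" != *"STARTTLS"* ]]
}

@test "checking ssl: SSL_TYPE=self-signed points postfix at an existing certificate" {
  run docker exec mail_ssl postconf -h smtpd_tls_cert_file
  [ -n "$output" ]
  # check that the configured certificate path actually exists in the container
  run docker exec mail_ssl test -f "$output"
  [ "$status" -eq 0 ]
}
```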

Some kind of preprocessor for BATS that generates all test cases

Once this information is defined, it would be possible to generate complete test coverage for every configuration (all combinations of environment variables) by using the associated set of tests for each option being tested.
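
A minimal sketch of what such a generator could look like, assuming a shared template of core tests with an @SSL_TYPE@ placeholder; the paths, the placeholder name, and the output directory are made up for illustration.

```bash
#!/bin/bash
# Sketch of a "preprocessor": expand a matrix of ENV values into one
# generated .bats file per combination.
set -euo pipefail

SSL_TYPES=("" "letsencrypt" "custom" "manual" "self-signed")
OUT_DIR="test/generated"
mkdir -p "${OUT_DIR}"

for ssl_type in "${SSL_TYPES[@]}"; do
  target="${OUT_DIR}/ssl_${ssl_type:-none}.bats"
  # substitute the chosen value into the shared template of core tests
  sed "s/@SSL_TYPE@/${ssl_type}/g" test/templates/core.bats.in > "${target}"
done
```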

Possible Issues

  1. Complexity in creating / maintaining a BATS preprocessor.
  2. The number of combinations grows exponentially, making testing all combinations infeasible.

The more I think about this particular solution, the messier it seems. Ultimately, I still think there are coverage issues that should be solved.

Thoughts?

@tomav
Contributor

tomav commented Sep 12, 2016

Hi @KyleOndy, great summary.
Same conclusion here, but a few points to move forward:

@mwlczk
Contributor

mwlczk commented Jan 29, 2018

Hi, what if we cluster the tests? This would allow us to stop those containers after we are done testing that specific configuration. We would not have to totally abandon the current setup and rewrite it using a different technology.

@johansmitsnl
Contributor

@mwlczk What do we gain from clustering? Could you explain the pros?

@mwlczk
Contributor

mwlczk commented Jan 31, 2018

@johansmitsnl By clustering/grouping the tests inside(!) the bats-file we could run only the needed instance (env-combination) at a time. We would not have to "run" 15 different instances at the same time and put pressure on the build machines.
It is not a beautiful approach to solve the problem, but it is doable.
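
A minimal sketch of that grouping, assuming bats-core's setup_file/teardown_file hooks (available since 1.1.0); the container name, the image variable, and the letsencrypt check are placeholders.

```bash
# Sketch: one .bats file per env-combination. The container only exists while
# this file's tests run, so a single instance is up at a time.

setup_file() {
  docker run -d --name mail_letsencrypt \
    -e SSL_TYPE=letsencrypt \
    "${IMAGE_NAME:?set IMAGE_NAME to the image under test}"
}

teardown_file() {
  docker rm -f mail_letsencrypt
}

@test "checking ssl: letsencrypt certificate is configured in postfix" {
  run docker exec mail_letsencrypt postconf -h smtpd_tls_cert_file
  [[ "$output" == *"letsencrypt"* ]]
}
```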

@mwlczk
Contributor

mwlczk commented Jan 31, 2018

In addition, we could define a "small" set of standard tests and run them against every test instance, as stated by @KyleOndy, just to verify the basics work for every combination of ENV variables.
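
One way to share that small set is a helper file that every per-configuration suite loads. The path, the helper name, and the supervisorctl-based check below are assumptions for illustration only.

```bash
# test/common_checks.bash (assumed path), pulled in from each per-configuration
# .bats file via: load 'common_checks'

assert_core_services_running() {
  local container="$1"
  # assumes the image runs its services under supervisord
  docker exec "${container}" supervisorctl status postfix | grep -q RUNNING
  docker exec "${container}" supervisorctl status dovecot | grep -q RUNNING
}
```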

@johansmitsnl
Contributor

Just so I understand: you would like to loop over the different env-combinations and run the "default" tests plus some env-specific tests on each one?
That would indeed improve test coverage across multiple env-combinations, and it looks like a good start.

@mwlczk
Contributor

mwlczk commented Feb 1, 2018

Yes, I want to loop through the different env-combinations. This would increase test coverage as well as lower the impact on the build machines and thus make the tests more stable.
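
A sketch of such a loop as a plain driver script; the suite file names are placeholders, and each suite is assumed to start and stop its own container as discussed above.

```bash
#!/bin/bash
# Run the env-combinations one after another, so only one container is up
# at any point during the build.
set -euo pipefail

SUITES=(
  "ssl_none.bats"
  "ssl_letsencrypt.bats"
  "ssl_self_signed.bats"
)

for suite in "${SUITES[@]}"; do
  bats "test/generated/${suite}"
done
```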

@georglauterbach
Member

georglauterbach commented Sep 10, 2020

Seeing that #1206 and #1459 are being worked on, and having not gotten unstable results during my (many) hours of testing (except the one mentioned in #1459 a few times), I will close this issue and mark it as frozen.

This issue was closed due to one or more of the following reasons:

  1. Age
  2. Contributor inactivity
  3. The issue seems to be resolved

If you think this happened by accident, or feel like this issue was not actually resolved, please feel free to re-open it. If there is an issue you could resolve in the meantime, please open a PR based on the current master branch so we can review it.
