
Conditional testing #27

Closed
benjaoming opened this issue Feb 4, 2016 · 9 comments

@benjaoming
Contributor

Sphinx link checking fails because of network timeouts (even with no docs changes, the whole test run can still break). I think we all know the story of random BDD / Saucelabs failures. I'm suggesting we consider conditional testing based on git commit changesets; I've mentioned it before.

This can speed up testing.

Of course we should still do full testing once in a while.

Something based on this command:

git show -s --no-notes --oneline 41dcb5e1720929f015b1e04c59fa30f78661daa4
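
A sketch of the changeset side (going one step beyond the command above, which only prints a one-line summary of the commit): the file list that a conditional test step would actually branch on can be listed with git diff-tree:

    # Files touched by the commit above; the input for a changeset-based condition.
    git diff-tree --no-commit-id --name-only -r 41dcb5e1720929f015b1e04c59fa30f78661daa4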
@benjaoming benjaoming self-assigned this Feb 4, 2016
@MCGallaspy
Contributor

Is there a graceful way to handle failed link checks, without failing the whole test suite?

@MCGallaspy
Contributor

I'd rather keep running the whole test suite and just remove link checking, but it raises the question of when we will run "occasional" tests.

@benjaoming
Contributor Author

@MCGallaspy why do you wanna remove it? It's extremely useful for avoiding dead links in the docs; that's why Sphinx has that command built in ;)

it raises the question of when we will run "occasional" tests

Yup, the conditional thing is conditional :D My suggestion is to base it on git changesets, meaning that if you change something in docs/, then we run the link checker.
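
A minimal sketch of that condition (assuming the docs live in docs/ with the standard Sphinx Makefile, and that the CI exposes a commit range; COMMIT_RANGE here is a hypothetical variable, not something the proposal specifies):

    # Run the Sphinx link checker only when the changeset touches docs/.
    # COMMIT_RANGE is whatever range the CI provides, e.g. "origin/master...HEAD".
    if git diff --name-only "$COMMIT_RANGE" | grep -q '^docs/'; then
        make -C docs linkcheck
    else
        echo "No docs changes detected, skipping link check."
    fi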

@MCGallaspy
Contributor

It's useful, but if it occasionally times out and fails the whole suite, its utility goes down in my eyes. I just want to ensure that all the unit tests/integration tests/etc are run regardless of the changeset, as opposed to just a specific app label. However if you're proposing that we only run the link checker when the docs are in the changeset, then I think that's a fine idea.

@benjaoming
Contributor Author

It's useful, but if it occasionally times out and fails the whole suite, its utility goes down in my eyes.

But that's the point of this issue?

I just want to ensure that all the unit tests/integration tests/etc are run regardless of the changeset

If you have good heuristics, you don't need to bloat the CI :)

if you're proposing that we only run the link checker when the docs are in the changeset, then I think that's a fine idea.

That's the first proposal. But I hope we expand on it so we can make tests faster. I'd personally like a [skip-bdd] tag for commit messages :)
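
A sketch of how such a tag could be honored (both the tag and the test wrapper are hypothetical at this point, nothing here exists yet):

    # Skip the BDD suite when the latest commit message carries the tag.
    if git log -1 --pretty=%B | grep -qF '[skip-bdd]'; then
        echo "[skip-bdd] found in commit message, skipping BDD tests."
    else
        run_bdd_tests    # hypothetical wrapper around the BDD suite
    fi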

@MCGallaspy
Contributor

Just a few points:

  • My opinion is that running all the tests for every PR is not bloat, but rather using a test server precisely as intended. If anything, we should run a limited set of tests locally before pushing (if the test suite gets large enough to warrant it) and let the test server do the heavy work.
  • IMO link checking is of marginal utility, which is why I'm happy to run it only occasionally. But I'm opposed to running tests only occasionally; it creates the possibility of catching bugs too late.
  • Wanting to skip tests is a symptom of a poor test suite -- but Kolibri doesn't have a poor test suite. In fact, it has only a handful of tests so far, and they run really fast and aren't brittle. When we introduce tests with a large number of possible failure points (like browser tests with Selenium), let's be conscientious enough to design easily testable systems. (One thing that makes KA Lite's browser tests brittle is that the design is not loosely coupled, making it testable only with difficulty.)

Remember: this is not KA Lite.

@benjaoming
Contributor Author

Comments from the meeting:

  • JS tests are expected to be very fast
  • We shouldn't skip tests when doing so affects coverage negatively
  • We can possibly run tests based on which branch is pushed to (see the sketch below)
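
For the branch-based idea, a sketch assuming a CI service that exposes the branch name in an environment variable (Travis CI, for instance, sets TRAVIS_BRANCH; the make targets here are hypothetical):

    # Run the full suite on master and release branches, a faster subset elsewhere.
    case "$TRAVIS_BRANCH" in
        master|release*)
            make test-all     # hypothetical full-suite target
            ;;
        *)
            make test-fast    # hypothetical fast-subset target
            ;;
    esac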

@benjaoming
Contributor Author

My opinion is that running all the tests for every PR is not bloat, but rather using a test server precisely as intended. If anything, we should run a limited set of tests locally before pushing (if the test suite gets large enough to warrant it) and let the test server do the heavy work.

I really agree with this; it's a matter of adjusting it with very explicit exceptions. And if we cannot identify an exception to the rule (test everything), then we test everything :)

Reality has been quite cruel to KA Lite. Even though we tested things locally to avoid waiting 20-40 minutes for the CI, a lot of tears were spilled over shaky test frameworks, because they created communication overhead around false negatives and some traumatizing debugging :) The first case of this same phenomenon in Kolibri is the Sphinx link checker, which IMO is really useful but should only be executed when a full test run is explicitly requested or when .rst files change.

Btw, I maintain an open source project in which only some of the tests were running, but all lights were green... when we finally fixed the test discovery, we found at least a couple of severe issues :) -> https://github.com/django-money/django-money/pull/162/files ... With that said, I'm alert to putting too much complexity into the testing framework, in case we unknowingly let our software become Real Media Player while our test framework says otherwise :)

@benjaoming
Contributor Author

It's done!

aronasorman added a commit that referenced this issue Aug 10, 2018: …-logly (Add extra logging when we raise exceptions)
indirectlylit added a commit that referenced this issue Oct 6, 2018
Aypak pushed a commit to edulution/kolibri that referenced this issue Jul 22, 2019