
Our test result reporting does not take into account fails after test done #691

vjrantal opened this issue Mar 31, 2016 · 5 comments



@vjrantal vjrantal commented Mar 31, 2016

Currently, when you run the full coordinated run on desktop, you might see the end report suggesting that everything passed, but the logs might contain output like:

DEBUG createPeerListener: error on incoming stream - Error: Channel destroyed

ok 331 response body should match testData

ok 332 must be started

not ok 333 .end() called twice

    operator: fail

    at: end Thali_CordovaPlugin/test/www/jxcore/bv_tests/testThaliMobileNativeWrapper.js:157:35)


The tape test report counts these as failures, but we don't, because we report to the coordination server after each test whether there were failures, and don't take into account the case where a failure occurs after the test itself has completed.

This is not super critical, and often the issue is just not cleaning up all event handlers at the end of each test. However, it results in the tape test report suggesting failures while our coordination server thinks all went fine.


@vjrantal vjrantal commented Mar 31, 2016

@yaronyg: From my perspective, this can be left to icebox, but wanted to create this issue so that you and others become aware of it.

@yaronyg yaronyg added the 0 - Icebox label Mar 31, 2016

@yaronyg yaronyg commented Mar 31, 2016

Yeah, I saw this yesterday when I was running tests. I actually found and fixed the tests, but this particular bug would be a real pain to fix since it requires us to go back to the coordinator and undo a success. Certainly something for the icebox.

@yaronyg yaronyg added this to the V1 milestone Aug 3, 2016
@yaronyg yaronyg added 2 - Ready and removed 0 - Icebox labels Aug 3, 2016

@yaronyg yaronyg commented Aug 3, 2016

@artemjackson This is a relatively easy first bug for you. The issue is that when we run a test in node there are three parts to each and every test: setup, the test itself, and teardown. When we run a 'coordinated' test, our coordinated test framework, which lives under test/TestServer, waits for each device to connect to it and say "I'm ready to run setup". The test server waits until everyone it has been told to expect says "I'm ready to run setup", and once it gets an o.k. from all of them it sends out a 'run setup' command. Once everyone says 'I ran setup' it says 'run test', and the same thing happens with 'run teardown'. It keeps going until the tests are done. All of this logic, btw, is defined in test/TestServer/UnitTestFramework.js.
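The handshake described above can be sketched as follows. This is a simplified illustration, not the actual UnitTestFramework.js code; the function names and stub callbacks are assumptions:

```javascript
// Sketch of the coordination handshake. The real logic lives in
// test/TestServer/UnitTestFramework.js; names here are illustrative only.

// waitForAll conceptually blocks until every expected device has
// acknowledged the current phase; broadcast sends a command to all devices.
function runCoordinatedTest(devices, waitForAll, broadcast) {
  ['setup', 'test', 'teardown'].forEach(function (phase) {
    waitForAll(devices, phase);   // "I'm ready" / "I ran <phase>" from everyone
    broadcast('run ' + phase);    // then tell every device to run the phase
  });
}

// Example with stub callbacks that just record the command order:
const sent = [];
runCoordinatedTest(
  ['device-a', 'device-b'],
  function waitForAll() { /* pretend all devices acknowledged */ },
  function broadcast(cmd) { sent.push(cmd); }
);
console.log(sent); // [ 'run setup', 'run test', 'run teardown' ]
```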

The problem is that we have had failures in teardown that don't seem to be noticed by CI. Looking here it seems like we should notice when a test fails. And when I look here it looks like the client is sending information on setup and teardown success or failure. So what's going on?

Did the bug get fixed and we didn't notice?

So the immediate work item is to add tests to testUnitTestFramework that confirm that if we throw an exception or fail a test (for example, inside of setup or teardown), then the unit test framework will properly notice this.

Then you need to set up a local test of the coordinator (e.g. the code in test/TestServer) to make sure that if we get an error in setup or teardown, it will be properly reflected in the test results.

@yaronyg yaronyg added 3 - Working and removed 2 - Ready labels Aug 3, 2016

@artemjackson artemjackson commented Aug 5, 2016

@yaronyg I've added a test file that fires test failures during the setup and teardown.

According to the logs from manually running the coordinated tests, all failures were properly handled by UnitTestFramework.

So, there are some questions.

  1. Should I add a unit test that checks the proper flow of error handling in the UnitTestFramework?
  2. If so, there should be an ability to inject a testServer instead of hard-coding it. With injection it would be possible to mock the testServer for testing the UnitTestFramework's error handling.
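The injection idea in point 2 could look roughly like this. The constructor and method names below are assumptions for illustration, not the actual API of test/TestServer/UnitTestFramework.js:

```javascript
// Hypothetical sketch of injecting the test server so it can be mocked.
// Names are assumptions, not the real UnitTestFramework API.

function UnitTestFramework(testServer) {
  this.testServer = testServer;   // injected instead of hard-coded
  this.failures = [];
}

// Report one test's outcome, remembering failures locally as well.
UnitTestFramework.prototype.reportResult = function (testName, ok) {
  if (!ok) { this.failures.push(testName); }
  this.testServer.send({ test: testName, ok: ok });
};

// A mock server for unit tests just records what would have been sent.
function MockTestServer() { this.sent = []; }
MockTestServer.prototype.send = function (msg) { this.sent.push(msg); };

const mock = new MockTestServer();
const framework = new UnitTestFramework(mock);
framework.reportResult('teardown failure', false);
console.log(mock.sent[0].ok, framework.failures.length); // false 1
```

With the mock injected, a unit test can assert directly on `mock.sent` to verify that setup and teardown failures reach the server.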
@yaronyg yaronyg self-assigned this Aug 5, 2016
@yaronyg yaronyg added 1 - Backlog and removed 3 - Working labels Aug 5, 2016

@yaronyg yaronyg commented Aug 5, 2016

We really need to write a test that validates that we properly detect failures in setup and teardown, but it's not a high enough priority at the moment given that manual testing shows we handle it properly.

@yaronyg yaronyg removed their assignment Aug 9, 2016
@yaronyg yaronyg self-assigned this Aug 23, 2016
@yaronyg yaronyg added bug Node labels Sep 26, 2016