
GPII-2493: Added an `npm test` script, including code coverage report… #131

Merged: 15 commits into GPII:master on Apr 19, 2018

Conversation

the-t-in-rtf (Member) commented Jul 17, 2017

See GPII-2493 for details.

the-t-in-rtf (Member, Author) commented Jul 17, 2017

Looking at the code coverage report here, it seems to have the same shortcoming as the early work on universal, i.e. it excludes the contents of gpii/node_modules from the reports. Looking into that now.
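
For reference, nyc excludes anything under node_modules by default, so re-including the nested gpii/node_modules tree means overriding that default. A minimal sketch of a .nycrc that might do so, assuming an nyc version recent enough to support the exclude-node-modules option; the patterns here are illustrative, not the project's actual configuration:

    {
        "all": true,
        "exclude-node-modules": false,
        "include": ["gpii/node_modules/**/*.js"],
        "exclude": ["**/test/**", "**/tests/**"],
        "reporter": ["html", "text-summary"]
    }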

gpii-bot (Collaborator) commented Jul 17, 2017

CI job failed. Please visit http://lists.gpii.net/pipermail/ci/ for more details.

the-t-in-rtf added some commits Jul 17, 2017

gpii-bot (Collaborator) commented Jul 17, 2017

CI job failed. Please visit http://lists.gpii.net/pipermail/ci/ for more details.

the-t-in-rtf (Member, Author) commented Jul 17, 2017

@avtar @gtirloni and Alfredo: although this pull should be testable with the existing commands, IMO we should update the CI configuration right before we merge this pull, as we did with universal. This will give us consistent runs of the acceptance tests as well as code coverage reports.

gpii-bot (Collaborator) commented Jul 17, 2017

CI job failed. Please visit http://lists.gpii.net/pipermail/ci/ for more details.

gpii-bot (Collaborator) commented Jul 18, 2017

CI job passed.

the-t-in-rtf (Member, Author) commented Jul 19, 2017

In working on the related infusion pull, I discovered a timing bug in gpii-testem that might also affect this pull: GPII-2507

Please hold off on reviewing until I can address this.

the-t-in-rtf (Member, Author) commented Aug 2, 2017

Please disregard; the Windows tests don't use Testem at all. I did update to pick up the version of Istanbul with line numbers. Anyway, this is ready for review.

amb26 (Member) commented Oct 9, 2017

Could you please update this to current master? There are a few spurious diffs showing in this pull view.

the-t-in-rtf (Member, Author) commented Oct 10, 2017

Unfortunately after merging I now get a few test failures in the latest VM:

05:30:59.553:  jq: FAIL: Module "Processes Bridge for Windows module" Test name "Test findProcessByPid() against the 'process' object." - Message: Node process 'argv' length'
05:30:59.553:  jq: Expected: 3
05:30:59.553:  jq: Actual: 15
05:30:59.553:  jq: Source:     at Object.assertEquals (V:\node_modules\infusion\tests\test-core\jqUnit\js\jqUnit.js:137:19)
    at Object.<anonymous> (V:\gpii\node_modules\processReporter\test\processesBridge_tests.js:9:3069)
    at Object.run (V:\node_modules\infusion\tests\lib\qunit\js\qunit.js:207:18)
    at V:\node_modules\infusion\tests\lib\qunit\js\qunit.js:365:10
    at process (V:\node_modules\infusion\tests\lib\qunit\js\qunit.js:1457:24)
    at Timeout._onTimeout (V:\node_modules\infusion\tests\lib\qunit\js\qunit.js:483:5)
    at ontimeout (timers.js:365:14)
    at tryOnTimeout (timers.js:237:5)
    at Timer.listOnTimeout (timers.js:207:5)
05:30:59.553:  jq: FAIL: Module "Processes Bridge for Windows module" Test name "Test findProcessByPid() against the 'process' object." - Message: Node process argv[0]
05:30:59.553:  jq: Expected: node
05:30:59.553:  jq: Actual: node.exe
05:30:59.553:  jq: Source:     at Object.assertEquals (V:\node_modules\infusion\tests\test-core\jqUnit\js\jqUnit.js:137:19)
    at Object.<anonymous> (V:\gpii\node_modules\processReporter\test\processesBridge_tests.js:9:3465)
    at Object.run (V:\node_modules\infusion\tests\lib\qunit\js\qunit.js:207:18)
    at V:\node_modules\infusion\tests\lib\qunit\js\qunit.js:365:10
    at process (V:\node_modules\infusion\tests\lib\qunit\js\qunit.js:1457:24)
    at Timeout._onTimeout (V:\node_modules\infusion\tests\lib\qunit\js\qunit.js:483:5)
    at ontimeout (timers.js:365:14)
    at tryOnTimeout (timers.js:237:5)
    at Timer.listOnTimeout (timers.js:207:5)
Commit: GPII-2493: Merged in upstream changes and devised a new strategy for successfully running tests with coverage reports.
the-t-in-rtf (Member, Author) commented Dec 7, 2017

@amb26, after lots of trial and lots more error, I figured out a strategy that allows us to:

  1. Generate coverage reports for both unit and acceptance tests using nyc.
  2. Run the same tests with or without nyc.

You can see a sample report on my GitHub pages.

Please note that running refreshenv alone is no longer enough to ensure that the tests will run repeatedly. At least on the Windows 10 VM, test runs only complete if I run the unit tests and then the acceptance tests, and then only in a new shell. I'm copying in @stegru to help investigate.
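
A sketch of the package.json scripts shape that points 1 and 2 above imply; the test:acceptance entry matches the invocation that appears later in this transcript, while the other script names are illustrative:

    "scripts": {
        "test": "npm run test:unit && npm run test:acceptance",
        "test:unit": "nyc node ./tests/UnitTests.js",
        "test:acceptance": "nyc node ./tests/AcceptanceTests.js builtIn",
        "test:no-coverage": "node ./tests/UnitTests.js && node ./tests/AcceptanceTests.js builtIn"
    }

The key point is that the same entry-point scripts run unchanged with or without the nyc wrapper in front of them.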

the-t-in-rtf (Member, Author) commented Dec 7, 2017

Here are the unit test failures that occur if I've already run the acceptance tests in the same shell:

03:10:54.153:  SPI call has failed
03:10:54.153:  jq: FAIL: Module "SpiSettingsHandler Module" Test name "SpiSettingsHandler test - testNonClientMetrics" - Message: Timed out waiting for the SPI call to complete (see GPII-2001)
03:10:54.153:  jq: Source:     at pok (V:\node_modules\infusion\tests\test-core\jqUnit\js\jqUnit.js:112:15)
    at Object.fail (V:\node_modules\infusion\tests\test-core\jqUnit\js\jqUnit.js:129:13)
    at Array.<anonymous> (V:\gpii\node_modules\spiSettingsHandler\test\testSpiSettingsHandler.js:15:6699)
    at Object.that.complete (V:\node_modules\infusion\src\framework\core\js\FluidPromises.js:70:25)
    at Array.that.reject (V:\node_modules\infusion\src\framework\core\js\FluidPromises.js:61:22)
    at Object.that.complete (V:\node_modules\infusion\src\framework\core\js\FluidPromises.js:70:25)
    at Array.that.reject (V:\node_modules\infusion\src\framework\core\js\FluidPromises.js:61:22)
    at Object.that.complete (V:\node_modules\infusion\src\framework\core\js\FluidPromises.js:70:25)
    at Object.that.reject (V:\node_modules\infusion\src\framework\core\js\FluidPromises.js:61:22)
    at ChildProcess.<anonymous> (V:\gpii\node_modules\spiSettingsHandler\src\SpiSettingsHandler.js:64:65)
    at emitTwo (events.js:106:13)
    at C .... [output suppressed at 1024 chars - for more output, increase fluid.logObjectRenderChars]
03:10:54.184:  jq: FAIL: Module "SpiSettingsHandler Module" Test name "SpiSettingsHandler test - testNonClientMetrics" - Message: Expected 7 assertions, but 1 were run
03:10:54.184:  jq: Source:     at Object.asyncTest (V:\node_modules\infusion\tests\lib\qunit\js\qunit.js:405:9)
    at V:\gpii\node_modules\spiSettingsHandler\test\testSpiSettingsHandler.js:15:5518
    at Object.fluid.each (V:\node_modules\infusion\src\framework\core\js\Fluid.js:513:17)
    at Object.<anonymous> (V:\gpii\node_modules\spiSettingsHandler\test\testSpiSettingsHandler.js:15:5436)
    at Module._compile (module.js:570:32)
    at Module.replacementCompile (V:\node_modules\nyc\node_modules\append-transform\index.js:58:13)
    at module.exports (V:\node_modules\nyc\node_modules\default-require-extensions\js.js:8:9)
    at Object.<anonymous> (V:\node_modules\nyc\node_modules\append-transform\index.js:62:4)
    at Module.load (module.js:487:32)
    at tryModuleLoad (module.js:446:12)
    at Function.Module._load (module.js:438:3)
    at Module.require (module.js:497:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (V:\tests\UnitTests.js:19:1)
    at Module._compile (module.js:570:32)
    at Module.r .... [output suppressed at 1024 chars - for more output, increase fluid.logObjectRenderChars]
03:10:54.184:  jq: Test concluded - Module "SpiSettingsHandler Module" Test name "SpiSettingsHandler test - testNonClientMetrics": 0/2 passed - FAIL
03:10:54.184:  SPI settings handler SET returning results {
    "StickyKeysAvailable": {
        "oldValue": {
            "value": true,
            "path": "pvParam.dwFlags.SKF_AVAILABLE"
        },
        "newValue": {
            "value": true,
            "path": "pvParam.dwFlags.SKF_AVAILABLE"
        }
    },
    "HotkeyActive": {
        "oldValue": {
            "value": true,
            "path": "pvParam.dwFlags.SKF_HOTKEYACTIVE"
        },
        "newValue": {
            "value": false,
            "path": "pvParam.dwFlags.SKF_HOTKEYACTIVE"
        }
    }
}
03:10:54.199:  SPI settings handler SET returning results {
    "StickyKeysAvailable": {
        "oldValue": {
            "value": true,
            "path": "pvParam.dwFlags.SKF_AVAILABLE"
        },
        "newValue": {
            "value": true,
            "path": "pvParam.dwFlags.SKF_AVAILABLE"
        }
    },
    "HotkeyActive": {
        "oldValue": {
            "value": false,
            "path": "pvParam.dwFlags.SKF_HOTKEYACTIVE"
        },
        "newValue": {
            "value": true,
            "path": "pvParam.dwFlags.SKF_HOTKEYACTIVE"
        }
    }
}
stegru (Member) commented Dec 7, 2017

ok to test

the-t-in-rtf (Member, Author) commented Dec 7, 2017

@stegru, there wasn't an npm test command before, but in theory the previous commands should run, just without coverage. Let's see!

the-t-in-rtf (Member, Author) commented Dec 7, 2017

Is anyone else seeing failures like the ones in the CI environment? I see nothing like that in Vagrant on my local machine.

the-t-in-rtf (Member, Author) commented Dec 13, 2017

Can I get another "ok to test" from @stegru or someone? I made no changes that should have affected anything between the commit where the build passed and the latest failure; I just took material out of the coverage report. The same tests should have been run in both.

the-t-in-rtf (Member, Author) commented Dec 13, 2017

Hmm, apparently I can also trigger one. Neat.

the-t-in-rtf (Member, Author) commented Jan 19, 2018

I have made no changes; the previous failures seem to be unique to the CI environment. This should be ok to test again.

the-t-in-rtf (Member, Author) commented Mar 23, 2018

@amb26, I was able to run the tests successfully after merging. I started by cleaning out the host's node_modules directory and then launched a clean Vagrant VM and ran the tests there.

the-t-in-rtf (Member, Author) commented Mar 23, 2018

Goofy merge of the package.json; fixing it now.

amb26 (Member) commented Mar 23, 2018

ok to test

amb26 (Member) commented Mar 23, 2018

When running these tests in my own Windows 10 Vagrant box, I saw the following failure in the transcript:

08:04:13.042:  jq: Test concluded - Module "SpiSettingsHandler Module" Test name "SpiSettingsHandler API test - ": 2/2 passed - PASS
08:04:13.250:  jq: Test concluded - Module "gpii.tests.windows.processHandling" Test name "Testing findProcessByName": 7/7 passed - PASS
08:04:13.373:  jq: Test concluded - Module "gpii.tests.windows.processHandling" Test name "Testing isProcessRunning": 5/5 passed - PASS
08:04:14.000:  jq: Test concluded - Module "gpii.tests.windows.processHandling" Test name "Testing timeout waiting for processes start": 1/1 passed - PASS
08:04:14.718:  jq: Test concluded - Module "gpii.tests.windows.processHandling" Test name "Testing timeout waiting for processes termination": 1/1 passed - PASS
08:04:16.061:  jq: Test concluded - Module "gpii.tests.windows.processHandling" Test name "Testing waiting for processes start and end": 3/3 passed - PASS
08:04:16.093:  Executing C:\Users\vagrant\AppData\Local\Temp\gpii-process-handling-test.exe killProcessByNameTest /T 30
08:04:16.752:  Exit code:9
08:04:16.752:  Program output:""
08:04:16.767:  Program stderr:""
08:04:16.813:  jq: Test concluded - Module "gpii.tests.windows.processHandling" Test name "Testing Killing Processes": 3/3 passed - PASS
08:04:16.828:  Executing C:\gpii-windows\gpii\node_modules\processHandling\test\test-window.exe -window
08:04:16.887:  Exit code:1
08:04:16.887:  jq: FAIL: Module "gpii.tests.windows.processHandling" Test name "Testing closeProcessByName" - Message: Exit code
08:04:16.887:  jq: Expected: 5
08:04:16.887:  jq: Actual: 1
08:04:16.887:  jq: Source:     at Object.assertEquals (C:\gpii-windows\node_modules\infusion\tests\test-core\jqUnit\js\jqUnit.js:137:19)
    at C:\gpii-windows\gpii\node_modules\processHandling\test\testProcessHandling.js:190:16
    at ChildProcess.exithandler (child_process.js:213:5)
    at emitTwo (events.js:106:13)
    at ChildProcess.emit (events.js:191:7)
    at maybeClose (internal/child_process.js:877:16)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:226:5)

> gpii-windows@0.3.0 test:acceptance C:\gpii-windows
> nyc node.exe ./tests/AcceptanceTests.js builtIn
etc.

but the overall run did not finish with an error. This suggests that we may have i) a possibly intermittent or configuration-dependent test failure, and ii) some problem propagating exit codes to the top level in the case of a test failure.

amb26 (Member) commented Mar 23, 2018

@stegru - any idea what kinds of causes might produce the processHandling test failure above?

stegru (Member) commented Mar 23, 2018

Looks like something happened trying to execute a test process, test-window.exe.

It should be built during the provisioning - perhaps it wasn't?

You might want to check if it exists: C:\gpii-windows\gpii\node_modules\processHandling\test\test-window.exe

amb26 (Member) commented Mar 26, 2018

@stegru - this is absolutely right. I managed to find the voodoo to invoke "Build.ps1" from the command line and the tests now pass.
@the-t-in-rtf - so we have a few problems -
a) This actually represents a great mechanism for demonstrating the fault in the current npm script invocation: if you delete the file "test-window.exe" created by the provisioning script, this should cause a test failure, which in turn should prevent the acceptance tests from even starting, since a nonzero exit code would short-circuit the && operator. However, for some reason, the nyc invocation doesn't seem to be passing this on.
b) Now that we have a standard "npm test" script, we should ensure that the standard expectations of it are met - i.e. that when the module is checked out on an appropriate platform (Windows 10 with suitable dev tools) and we run "npm install" and "npm test", the tests actually pass (correctly). This means we should try to get the prerequisite script for building "test-window.exe" invoked as a suitable npm script, either as postinstall or pretest (see the sketch after this list). It is currently held within provisioning/Build.ps1.
c) The merge is still goofy in that the whole contents of the "listeners" directory appear to have come back.
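
A sketch of the pretest wiring suggested in b), assuming provisioning/Build.ps1 can be invoked standalone and that powershell is on the PATH; the flags shown are illustrative rather than the project's actual invocation:

    "scripts": {
        "pretest": "powershell -ExecutionPolicy Bypass -File provisioning/Build.ps1",
        "test": "npm run test:unit && npm run test:acceptance"
    }

npm runs a "pretest" script automatically before "test", so a plain "npm test" on a fresh checkout would then build test-window.exe first.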

stegru (Member) commented Mar 26, 2018

I've moved it into npm install, as part of another PR: c6115a7

the-t-in-rtf (Member, Author) commented Mar 27, 2018

Thanks, @stegru. Just to track back from that commit, here's the relevant PR: #167

@amb26, I have tidied up the extraneous listener content, and to help prevent anyone else from mistakenly checking it in, I have added that directory to the .gitignore file.

As to the missing build step, I absolutely agree that this should be part of the npm scripts. I will comment on that pull, as I noticed that the script is run twice there when working with Vagrant.

So, as this pull stands, the tests will pass in Vagrant but not on bare iron unless you run the build steps yourself. Given that this is also broken in master, I would suggest merging this pull separately and reviewing Steve's pull in short order.

the-t-in-rtf (Member, Author) commented Mar 27, 2018

The test failure 30+ minutes ago was at the provisioning stage, where I have not made any changes. A subsequent test run passed.

amb26 (Member) commented Mar 27, 2018

Thanks @the-t-in-rtf for addressing problem c) above. I see that problem a) is still there - if I manually delete the test-window.exe file from the image, the tests will still pass, which shows that the failure code is somehow not being propagated correctly.

the-t-in-rtf (Member, Author) commented Mar 28, 2018

Yes, I suspect an exit code is not being returned correctly. I will narrow it down and report back.

the-t-in-rtf (Member, Author) commented Mar 28, 2018

OK, I figured out that by "Acceptance Tests", you meant "Unit Tests". I created a test command like the following:

"shouldfail": "node ./tests/UnitTests.js && node node_modules/nyc/bin/nyc.js --help"

npm run shouldfail displays the help message from nyc even when the tests fail, which suggests that the fault lies either within npm itself (which seems unlikely) or in one of the scripts in the chain failing to return the correct exit code. I will run the specific test that should fail and work my way up from there.
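
For reference, && only short-circuits on a nonzero exit status, which a trivial probe (hypothetical file name) makes easy to verify:

    // fail-probe.js: a stand-in for a failing test script.
    // "node fail-probe.js && node node_modules/nyc/bin/nyc.js --help" should
    // print nothing from nyc, because the nonzero status short-circuits &&.
    console.error("simulated test failure");
    process.exitCode = 1;

If the shouldfail chain still reaches nyc with UnitTests.js in place of fail-probe.js, then UnitTests.js itself must be exiting with code 0.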

the-t-in-rtf (Member, Author) commented Mar 28, 2018

OK, the gpii/node_modules/processHandling/test/testProcessHandling script itself doesn't seem to return an exit code that would prevent npm from continuing.

the-t-in-rtf (Member, Author) commented Mar 28, 2018

I just confirmed that the problem is that node ./gpii/node_modules/processHandling/test/testProcessHandling.js returns exit code 0 even if the "closeProcessByName" tests fail. Steps to reproduce:

  1. Delete or rename the gpii/node_modules/processHandling/test/test-window.exe executable.
  2. Run node ./gpii/node_modules/processHandling/test/testProcessHandling.js in PowerShell.
  3. Again, in PowerShell, run echo $LastExitCode

If the tests fail, the exit code is 0. If I add a fluid.fail call high in the same file, the exit code is 1.
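
One plausible reading (an assumption, not confirmed in this thread) is that nothing maps QUnit's pass/fail tally onto the process exit code when assertions fail, whereas fluid.fail throws and so crashes the process. A sketch of the kind of run-completion hook that would close that gap, using QUnit's standard done callback:

    // Register a run-completion hook; details.failed is the number of
    // failed assertions across the run. Surfacing it as a nonzero exit
    // code lets npm and &&-chained commands see the failure.
    QUnit.done(function (details) {
        if (details.failed > 0) {
            process.exitCode = 1;
        }
    });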

the-t-in-rtf (Member, Author) commented Mar 28, 2018

If I convert that asyncTest to a test with manual calls to jqUnit.stop and jqUnit.start, the exit code works as expected. Testing with a non-failure now to confirm.

the-t-in-rtf (Member, Author) commented Mar 28, 2018

OK, I'm not terribly happy with the fix, but after long experimentation, the only way I could get the tests to pass with exit code 0 and fail with exit code 1 was to explicitly call jqUnit.start if the exit code is not as expected. @amb26, please review; I hope this will suffice.
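
To illustrate the shape of that fix: this is a guess at the relevant fragment, not the actual diff, with the command and expected exit code taken from the failure transcript earlier in this conversation:

    jqUnit.test("Testing closeProcessByName", function () {
        jqUnit.expect(1);
        jqUnit.stop(); // suspend the synchronous test until the exit handler fires
        var child = require("child_process").exec("test-window.exe -window");
        child.on("exit", function (code) {
            jqUnit.assertEquals("Exit code", 5, code);
            if (code !== 5) {
                // Explicitly conclude the failed test: without this, the
                // missed assertion leaves the run dangling and the process
                // still exits with code 0 despite the failure.
                jqUnit.start();
                return;
            }
            jqUnit.start(); // success path (simplified here): conclude normally
        });
    });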

amb26 merged commit 16c8296 into GPII:master Apr 19, 2018

1 check passed

default: Build finished.