Merge pull request #1863 from kbase/DATAUP-233-wdio-travis
DATAUP-280 Migrate tests to github actions
briehl committed Oct 29, 2020
2 parents fd0a226 + 1350103 commit abd2004
Showing 16 changed files with 4,085 additions and 814 deletions.
60 changes: 60 additions & 0 deletions .github/workflows/test.yml
@@ -0,0 +1,60 @@
name: CI-testing

on:
  [push, pull_request]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: goanpeca/setup-miniconda@v1
        with:
          miniconda-version: 'latest'
          activate-environment: test-environment
          python-version: 3.6.10
          auto-activate-base: false
          auto-update-conda: true
          condarc-file: test/condarc.yml

      - name: Use Node JS 10.x
        uses: actions/setup-node@v1
        with:
          node-version: 10.x

      - name: Install JS dependencies
        run: |
          npm ci
          npm install bower
          ./node_modules/bower/bin/bower install
      - name: Install Narrative Application
        shell: bash -l {0}
        run: |
          bash ./scripts/install_narrative.sh
          grunt minify
          sed <src/config.json.templ >src/config.json "s/{{ .Env.CONFIG_ENV }}/dev/"
          sed -i 's/{{ if ne .Env.CONFIG_ENV "prod" }} true {{- else }} false {{- end }}/true/' src/config.json
          jupyter notebook --version
      - name: Run Narrative Backend Tests
        shell: bash -l {0}
        run: make test-backend

      - name: Run Narrative Frontend Unit Tests
        shell: bash -l {0}
        run: make test-frontend-unit

      - name: Run Narrative Frontend Integration Tests
        shell: bash -l {0}
        env:
          KBASE_TEST_TOKEN: ${{ secrets.NARRATIVE_TEST_TOKEN }}
        run: make test-integration

      - name: Send to Codecov
        uses: codecov/codecov-action@v1
        with:
          file: |
            ./coverage.xml
            ./js-coverage/lcov/lcov.info
          fail_ci_if_error: true
1 change: 1 addition & 0 deletions .travis.yml
@@ -13,6 +13,7 @@ env:
- TRAVIS_NODE_VERSION="10.10.0"
- secure: "JPdkdywgQbUUixuDuATMXZHWpWImziRoGKd1tCjYd3+0lamLBlldmjhsZ+Bp7ZabKg2ExQZareGDp34cJdOwWBGb2gg0/emjy4UkWkbDH28zyoWNK1SFp6OTY8AeY/icPXKw3MBgKoXG+hemfcxhYc8rUNgUt7V2fA0JSAkCi4w="
- secure: "BrGlbfGLxZYCynh7LeIFyQeiTx4YX6fdMYK2UceVLoAFjOml9mvtJY5i4Oafc2iBDPQ0wAP/eIzZOTcddwc+12o1S0EAq3zZb5z+MOwRKjsiC+BGUXNzAGLaeJqFTC5Sn+vvqsaDGaR3jrdQ8APUL7XapuLUAkof7vqc1SPsn3I="
- secure: "YH4s6huZYW34XFlKDAg7yC8VGUjIiird3gDKOmHyhlPVoQa75tll3iqVBBxNFcj+5CfzlCU+ZZK/JTvRoLJ7+QBPrRm7eZ6+s44JaJK+eAz12AJKdEfZyp+blMQ26Y8uOz5AZQhnRsoYfbqDK3p/T8FW1MAKdvE+8nHDCdAh26k="

cache:
directories:
16 changes: 8 additions & 8 deletions Makefile
@@ -35,7 +35,7 @@ build-travis-narrative:
	sed -i 's/{{ if ne .Env.CONFIG_ENV "prod" }} true {{- else }} false {{- end }}/true/' src/config.json && \
	jupyter notebook --version

test: test-backend test-frontend-unit test-frontend-e2e
test: test-backend test-frontend
	@echo "done running backend and frontend test scripts"

# test-backend should use nose, or the like, to test our
@@ -49,20 +49,20 @@ test-backend:
	sh $(BACKEND_TEST_SCRIPT)
	@echo "done"

test-frontend:
	python test/unit/run_tests.py -u -i

# test-frontend-unit should use karma and jasmine to test
# each of the Javascript components of the Narrative.
# This is achieved through the grunt test invocation
test-frontend-unit:
	@echo "running frontend unit tests"
	python test/unit/run_tests.py
	python test/unit/run_tests.py -u
	@echo "done"

# test-frontend-e2e should use Selenium to perform an end-
# to-end test of the front end components, with a running
# Narrative system.
test-frontend-e2e:
	@echo "running frontend end-to-end tests"
	cd $(FRONTEND_TEST_DIR)
test-integration:
	@echo "running integration tests"
	python test/unit/run_tests.py -i
	@echo "done"

build-docs:
20 changes: 10 additions & 10 deletions docs/adrs/0002-integration-testing-library.md
@@ -1,6 +1,6 @@
# Integration Testing Framework

Date: 2020-09-21
Date: 2020-10-12

Integration testing and end-to-end (e2e) testing is not actively used in this repo. To implement integration/e2e testing, a testing framework needs to be selected and implemented.

@@ -10,29 +10,30 @@ Integration testing and end-to-end (e2e) testing is not actively used in this re

## Status

Pending
Accepted

## Alternatives Considered

* Selenium WebDriver
* WebDriver.io
* Cypress
* No integration testing

## Decision Outcome

Cypress will be used for integration and e2e testing.
Selenium WebDriver will be used for integration and e2e testing.

## Consequences

KBase developers will have to learn a new testing frame work. There will also be overhead to create and maintain additional tests.
There will be overhead to create and maintain additional tests.

## Pros and Cons of the Alternatives

### Selenium Webdriver
### Webdriver.io

* `+` Other repos in KBase use it
* `+` Has cross browser testing (Chrome, Firefox, Edge, IE, Opera, Safari)
* `+` Supports multiple languages
* `+` Supports iframes
* `-` Steep learning curve for new developers
* `-` Challenging to implement
* `-` Test execution is slow
@@ -49,7 +50,7 @@ KBase developers will have to learn a new testing frame work. There will also be
* `-` Can only test using JavaScript
* `-` Has limited cross browser testing (Chrome, Edge, Electron, Firefox - Beta)
* `-` KBase developers will have to learn another testing framework
* `-` Limited iframe support
* `-` Limited iframe support (the narrative uses iframes in several locations)

### No integration/e2e testing

@@ -63,6 +64,5 @@ KBase developers will have to learn a new testing frame work. There will also be
## References

* [Cypress.io](https://www.cypress.io/)
* [Selenium WebDriver](https://www.selenium.dev/documentation/en/webdriver/)
* [Applitools: Cypress vs Selenium WebDriver](https://applitools.com/blog/cypress-vs-selenium-webdriver-better-or-just-different/)
* [BrowserStack: Cypress vs Selenium](https://www.browserstack.com/guide/cypress-vs-selenium)
* [WebDriver.io](https://webdriver.io/)
* [Applitools: Cypress vs Selenium WebDriver](https://applitools.com/blog/cypress-vs-selenium-webdriver-better-or-just-different/)
57 changes: 46 additions & 11 deletions docs/testing.md
@@ -42,6 +42,8 @@ Then, simply run (from the narrative root directory) `make test`.
This calls a few subcommands, and those can be run independently for specific uses:

- `make test-frontend-unit` will run only the unit tests on the frontend (i.e. those with the Karma runner)
- `make test-integration` will run the frontend integration tests, which use webdriver.io to drive a browser against a locally instantiated Narrative that talks to live KBase services. Note that this currently requires an authentication token.
- `make test-frontend` will run both the frontend unit tests and integration tests as above.
- `make test-frontend-e2e` will run only the frontend tests that make use of Selenium to simulate a browser on the real Narrative site.
- `make test-backend` will run only the backend Python tests.

Expand All @@ -53,37 +55,70 @@ You can store auth token files in test/. These are single line files, containing

Next, these credentials need to be referenced for both the back and front end. This version requires two configs - one for Python, and one for JavaScript.

***Python***
#### ***Python***

`src/biokbase/narrative/tests/test.cfg`
In the `[users]` and `[token_files]` blocks, two sets of values are needed: test_user and private_user. They don't need any special permissions; they just need to be different users.
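
For illustration, a minimal `test.cfg` fragment might look like the following sketch; the usernames and token file paths are hypothetical placeholders, not real accounts:

```ini
[users]
test_user = narrativetest
private_user = narrativeprivate

[token_files]
test_user = test/narrativetest.tok
private_user = test/narrativeprivate.tok
```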

***JavaScript***
#### ***JavaScript***

`test/unit/testConfig.json`
This just needs the path to the token file (with pre-pended slash), such as `"/test/narrativetest.tok"`
`test/testConfig.json`
This just needs the path to the token file, which should be kept in the `test` directory, for example `"test/narrativetest.tok"`.
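
A minimal sketch of what `test/testConfig.json` might contain, assuming the token file name used above; the `token` key name here is an assumption, not confirmed from the real file:

```json
{
    "token": "test/narrativetest.tok"
}
```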

*TODO (10/24/2017): Unify these token configs!*

### Testing with Travis-CI and Coveralls
#### ***Frontend Integration Tests***

These tests are run (without credentials) automatically on a pull request to the Narrative Github repo. These are currently run through [Travis-CI](https://travis-ci.org/) and the coverage reported with [Coveralls](https://coveralls.io/). There should be nothing you need to do to make this work.
There are currently two options here.
1. Set your token in the `KBASE_TEST_TOKEN` environment variable before running integration tests.
2. Use the same token file as described above.

These are checked in that order. That is, if there's a `KBASE_TEST_TOKEN` variable, that gets used. Otherwise, it checks for the token file referenced in `test/testConfig.json`. If both of those are absent, a fake test token is used, which might cause failures if your tests include authenticated services.
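
For example, running the integration tests locally with the first option might look like this (the token value is a placeholder):

```bash
export KBASE_TEST_TOKEN="<paste your KBase auth token here>"
make test-integration
```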

### Testing with Github Actions and Codecov

These tests are run automatically on a pull request to the Narrative Github repo. These are currently run through [Github Actions](https://docs.github.com/en/free-pro-team@latest/actions) and the coverage reported with [Codecov](https://codecov.io/).

Unit tests are automatically run without credentials, skipping various tests that are really more like integration tests.

The integration tests that run with webdriver.io do require an authentication token. This is the `NARRATIVE_TEST_TOKEN` Github secret in the Narrative repo. It will become available in the test environment as `KBASE_TEST_TOKEN`, which is the variable that the `wdio.conf.js` file looks for.
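
As a rough sketch of the lookup order described above — this is not the actual `wdio.conf.js`, and the `token` key in `test/testConfig.json` is an assumption:

```javascript
const fs = require('fs');

function getTestToken() {
    // 1. Prefer the environment variable (set from the Github secret in CI).
    if (process.env.KBASE_TEST_TOKEN) {
        return process.env.KBASE_TEST_TOKEN;
    }
    // 2. Fall back to the token file referenced by test/testConfig.json.
    const config = JSON.parse(fs.readFileSync('test/testConfig.json', 'utf8'));
    if (config.token && fs.existsSync(config.token)) {
        return fs.readFileSync(config.token, 'utf8').trim();
    }
    // 3. Last resort: a fake token, which will fail against authenticated services.
    return 'fake_test_token';
}
```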

### Adding Your Own Tests

***Python***
#### ***Python***

Python tests should be organized per module, and should all be added under `src/biokbase/narrative/tests`. The `test.cfg` file there is in INI format; extend it as necessary.

There are some service client Mocks available using the `mock` library. Check out `test_appmanager.py` for some examples of how these can be used.
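
As a rough sketch of the pattern — the patch target and client method below are illustrative, not the actual mocks from `test_appmanager.py`:

```python
import unittest
from unittest import mock


class AppManagerTestCase(unittest.TestCase):
    # The patch target is hypothetical; point it at the real client factory.
    @mock.patch('biokbase.narrative.clients.get')
    def test_job_submission_uses_mocked_client(self, mock_get):
        # The mocked client returns a canned job id instead of calling a live service.
        mock_get.return_value.run_job.return_value = 'fake_job_id'
        client = mock_get('execution_engine2')
        self.assertEqual(client.run_job({'method': 'fake_app'}), 'fake_job_id')
```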

***JavaScript***
#### ***JavaScript***

JavaScript tests follow the common Test Spec idiom. Here, we create a new spec file for each JavaScript module. These all live under `test/unit/spec` in roughly the same subdirectory layout as found under `kbase-extension/static/kbase/js`. There's an example spec in `test/unit/specTemplate.js` - you can copy this for a new module and modify it to fit your needs.
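
A new spec usually looks something like this minimal sketch (the module path is hypothetical; `test/unit/specTemplate.js` is the authoritative starting point):

```javascript
define(['widgets/myWidget'], (MyWidget) => {
    'use strict';

    describe('The MyWidget module', () => {
        it('loads and is defined', () => {
            expect(MyWidget).toBeDefined();
        });
    });
});
```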

#### ***Frontend Integration Tests***

Integration tests are done using [webdriver.io](https://webdriver.io). The test scripts are written in JavaScript and all resemble the common Mocha style. These tests are all under `test/integration/spec`. It's helpful for each of these files to include the `wdioUtils.js` module in `test/integration`. For each view that requires authentication (i.e. most of them), be sure to start your test with the async `login` function provided by that module. An example spec file might look like:

```javascript
const Utils = require('../wdioUtils');

describe('Simple test runner', () => {
beforeEach(async () => await Utils.login());

it('opens a narrative', async () => {
await browser.url(Utils.makeURL('narrative/31932'));
const loadingBlocker = await $('#kb-loading-blocker');
const loadingText = await loadingBlocker.getText();
expect(loadingText).toContain('Connecting to KBase services...');
});
});
```

When run locally, these tests require an auth token in either the `KBASE_TEST_TOKEN` environment variable or in the file referenced by `test/testConfig.json` (see the [Add Credentials for Tests - JavaScript](#javascript) section above).

### Manual Testing and Debugging

***Python***
#### ***Python***

For Python changes, you will need to shut down the notebook, run `scripts/install_narrative.sh -u`, and then start the notebook server again with `kbase-narrative`. You can print messages to the terminal using

@@ -92,9 +127,9 @@ log = logging.getLogger("tornado.application")
log.info("Your Logs Go Here")
```

***JavaScript***
#### ***JavaScript***

It can be useful to immediately see your changes in the narrative. For javascript changes, you will just have to reload the page. You can print messages to the console with `console.log`
It can be useful to immediately see your changes in the narrative. For JavaScript changes, you will just have to reload the page. You can print messages to the console with `console.log`.

To debug using the Karma Debugger, complete the following steps:
