Testing

There are three testing suites at work in the Sefaria code:

  • pytest for unit tests
  • standard UnitTest (Python's unittest), with some Django tools, for request/response-level API tests
  • front-end tests, in a bespoke framework built on Selenium and Sauce Labs / BrowserStack

The shell script test.sh runs the first two suites, and currently takes about 5 minutes to do its work. A deeper set of tests can also be run from the command line, and takes closer to 15 minutes to finish.

Test Database

Tests run against a separate Mongo database, named by appending the string '_test' to the current database name. If that database isn't present, it will be created. The test database is not updated automatically after its initial creation. To refresh it from your development database:

>>> import sefaria.system.database as d
>>> d.refresh_test()

Pytest

We're using pytest for unit testing.

Running Tests

To run tests: from the Sefaria-Project directory, run py.test -m 'not deep'. Pytest will discover tests in any subdirectory of the application, run them, and give detailed information about any failures. A handful of tests are particularly long-running. These are marked as 'deep' tests, and are skipped unless py.test is invoked with -m 'deep', or with no -m 'not deep' option at all.
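
In summary, from the Sefaria-Project root:

py.test -m 'not deep'    # the standard suite
py.test -m 'deep'        # only the long-running 'deep' tests
py.test                  # everything, deep tests included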

Where to put tests

Tests should be put in a directory called 'tests', a subdirectory of the area being tested. The Sefaria-Project/sefaria directory has tests in Sefaria-Project/sefaria/tests.

Writing Tests

Ideally, every unit of code should have tests written against it. Some say you should write the tests first, as a sort of documentation, and then write code to satisfy them. We're in a gradual process of writing tests: as we change areas of code, we should write tests to make sure our changes don't break current behavior. If a test takes a long time to run, and much of its functionality is covered by other tests, the test function or class should be decorated with the 'deep' marker: @pytest.mark.deep.

Basic pytest tests use assert to check expected values against returned values.

See the tests in Sefaria-Project/sefaria/tests, and use them as examples.
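
As a self-contained sketch (the function under test here is a stand-in, not real Sefaria code), a basic test and a 'deep'-marked test might look like:

import pytest

def title_case(s):
    # stand-in for the unit under test; illustrative only
    return " ".join(w.capitalize() for w in s.split())

def test_title_case():
    # basic pytest style: a plain assert of the returned value against the expected one
    assert title_case("genesis rabbah") == "Genesis Rabbah"

@pytest.mark.deep
def test_title_case_long_running():
    # marked 'deep': only runs when py.test is invoked with -m 'deep'
    for _ in range(10**6):
        assert title_case("a b") == "A B"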

UnitTest

We use UnitTest with some Django classes, particularly TestCase and Client, for API-level testing.

Running Tests

API tests are currently all located in the Reader app, in sefaria/reader/tests.py. They are invoked from the command line with python manage.py test reader.
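
A minimal sketch of the pattern (the endpoint path is illustrative; see the real cases in reader/tests.py):

from django.test import TestCase

class ReaderApiTest(TestCase):
    def test_texts_api(self):
        # Django's TestCase provides self.client, a test Client instance
        # (the path below is illustrative, not a guaranteed endpoint)
        response = self.client.get("/api/texts/Genesis.1")
        self.assertEqual(response.status_code, 200)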

Frontend Tests

Located in /reader/browsertest

You can write your test class in /reader/browsertest/basic_tests.py and then use the run_one_local.py script to run it locally (in Chrome). The script takes the class name as its only argument. For instance: python run_one_local.py ClickVersionedSearchResultDesktop
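
The framework's own base classes live in /reader/browsertest; purely as a raw-Selenium illustration of the kind of interaction such a test drives (the URL and element id are hypothetical), a test body might do something like:

from selenium import webdriver

def run_click_search_result():
    # raw Selenium, shown only to illustrate the style of interaction;
    # real tests subclass the bespoke framework in /reader/browsertest
    driver = webdriver.Chrome()
    try:
        driver.get("http://localhost:8000/")          # hypothetical local server
        box = driver.find_element_by_id("searchInput") # hypothetical element id
        box.send_keys("love")
        box.submit()
        assert "Sefaria" in driver.title
    finally:
        driver.quit()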

Remote tests are run with the run_tests.py script, and will soon run automatically as part of the deploy flow, from test.sefaria.org.

See also: Some notes on Debugging Selenium Tests.

Next Steps

  • Integrate testing into our checkin and deployment flows.

Testing Standards

Pure Functions

(and functions that behave like pure functions, given the stability of our data set)

Unit tests (py.test)

Tests should cover all categories of potential input
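
pytest's parametrize decorator is one convenient way to enumerate those categories, one case per category (the function under test is a stand-in):

import pytest

def is_int_string(s):
    # stand-in pure function; illustrative only
    s = s.strip()
    return s.lstrip("-").isdigit() if s else False

@pytest.mark.parametrize("value, expected", [
    ("42", True),       # plain case
    (" 42 ", True),     # surrounding whitespace
    ("-7", True),       # negative
    ("", False),        # empty input
    ("4.2", False),     # non-integer
])
def test_is_int_string(value, expected):
    assert is_int_string(value) == expected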

Object Models

Model consistency tests (py.test)

For each model, on each of these actions, test for these results (a sketch follows the list):

  • Create
    • no error
    • normalization
    • validation
    • consistency
    • cascade
  • Load/Read
    • no error
    • consistency
  • Update
    • normalization
    • validation
    • cascade
  • Destroy
    • complete removal
    • cascade
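
As a self-contained sketch, here is how the Create, Load, and Destroy rows might translate into py.test asserts, using an in-memory stand-in model (everything here is illustrative, not the real Sefaria model API):

import pytest

class FakeModel:
    """In-memory stand-in for a Sefaria model; illustrative only."""
    _store = {}
    def __init__(self, name=None):
        self.name = name
    def save(self):
        self.name = " ".join(self.name.split())   # normalization
        if not self.name:
            raise ValueError("name required")     # validation
        FakeModel._store[self.name] = self
        return self
    @classmethod
    def load(cls, name):
        return cls._store.get(name)
    def delete(self):
        FakeModel._store.pop(self.name, None)

def test_create_load_destroy():
    m = FakeModel("  Some   Term ").save()         # Create: no error
    assert m.name == "Some Term"                   # Create: normalization
    assert FakeModel.load("Some Term") is m        # Load: no error, consistency
    m.delete()                                     # Destroy
    assert FakeModel.load("Some Term") is None     # Destroy: complete removal

def test_create_validation():
    with pytest.raises(ValueError):                # Create: validation
        FakeModel("   ").save()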

API GET endpoints

(UnitTest)

As with pure functions, tests should cover all categories of potential request and response.
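
For instance, the same endpoint could be hit once per request category (the paths and expected status codes below are assumptions, not documented behavior):

from django.test import TestCase

class TextsGetCategoriesTest(TestCase):
    def test_request_categories(self):
        # one request per category; paths and expected codes are illustrative
        cases = [
            ("/api/texts/Genesis.1", 200),      # well-formed ref
            ("/api/texts/Genesis.1-5", 200),    # ranged ref
            ("/api/texts/NoSuchBook.1", 404),   # unknown book (assumed code)
        ]
        for path, expected in cases:
            self.assertEqual(self.client.get(path).status_code, expected)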

API POST endpoints

Model consistency tests (UnitTest)

Similar to model tests, API POST tests need to validate all intended consequences. Additionally, APIs need to be tested against purposefully bad data.

For each API endpoint, test the following (a sketch follows the list):

  • Create
    • rejection of bad data
    • no error
    • normalization
    • validation
    • consistency
    • cascade
  • Update
    • rejection of bad data
    • normalization
    • validation
    • cascade
  • Destroy
    • rejection of bad data
    • complete removal
    • cascade
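
A sketch of the 'rejection of bad data' row (the endpoint, payload shape, and error convention are all assumptions):

import json
from django.test import TestCase

class PostRejectionTest(TestCase):
    def test_create_rejects_bad_data(self):
        # endpoint, payload shape, and error convention are illustrative
        response = self.client.post(
            "/api/texts/Genesis.1",
            {"json": json.dumps({"text": None})},
        )
        data = json.loads(response.content)
        # assumption: failures are reported in an 'error' key of the JSON body
        self.assertIn("error", data)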

Frontend