Tests for spaCy modules and classes live in their own directories of the same name. For example, tests for the `Tokenizer` can be found in `/tests/tokenizer`. All test modules (i.e. directories) also need to be listed in spaCy's `setup.py`. To be interpreted and run, all test files and test functions need to be prefixed with `test_`.
> ⚠️ **Important note:** As part of our new model training infrastructure, we've moved all model tests to the `spacy-models` repository. This allows us to test the models separately from the core library functionality.
## Table of contents
- Running the tests
- Dos and don'ts
- Helpers and utilities
- Contributing to the tests
## Running the tests
To show print statements, run the tests with `py.test -s`. To abort after the first failure, run them with `py.test -x`.

```bash
py.test spacy             # run basic tests
py.test spacy --slow      # run basic and slow tests
```
You can also run tests in a specific file or directory, or even only one specific test:

```bash
py.test spacy/tests/tokenizer                               # run all tests in directory
py.test spacy/tests/tokenizer/test_exceptions.py            # run all tests in file
py.test spacy/tests/tokenizer/test_exceptions.py::test_tokenizer_handles_emoji  # run specific test
```
## Dos and don'ts
To keep the behaviour of the tests consistent and predictable, we try to follow a few basic conventions:
- Test names should follow a pattern of `test_[module]_[tested behaviour]`. For example: `test_tokenizer_handles_emoji`.
- If you're testing for a bug reported in a specific issue, always create a regression test. Regression tests should be named `test_issue[ISSUE NUMBER]` and live in the `regression` directory.
- Only use `@pytest.mark.xfail` for tests that should pass, but currently fail. To test for desired negative behaviour, use `assert not` in your test (see the sketch after this list).
- Very extensive tests that take a long time to run should be marked with `@pytest.mark.slow`. If your slow test is testing important behaviour, consider adding an additional simpler version.
- If tests require loading the models, they should be added to the `spacy-models` repository instead.
- Before requiring the models, always make sure there is no other way to test the particular behaviour. In a lot of cases, it's sufficient to simply create a `Doc` object manually. See the section on helpers and utility functions for more info on this.
- Avoid unnecessary imports. There should never be a need to explicitly import spaCy at the top of a file, and many components are available as fixtures. You should also avoid wildcard imports (`from module import *`).
- If you're importing from spaCy, always use absolute imports. For example: `from spacy.language import Language`.
- Don't forget the unicode declarations at the top of each file. This way, unicode strings won't have to be prefixed with `u`.
- Try to keep the tests readable and concise. Use clear and descriptive variable names (`doc`, `tokens` and `text` are great), keep it short and only test for one behaviour at a time.
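To illustrate a few of these conventions, here's a minimal sketch. The test names, example strings and assertions are illustrative assumptions, not tests from the suite:

```python
import pytest


# Hypothetical example following the test_[module]_[tested behaviour] naming
# pattern; desired negative behaviour is expressed with `assert not` rather
# than @pytest.mark.xfail.
def test_tokenizer_doesnt_produce_empty_tokens(tokenizer):
    tokens = tokenizer("Some text here")
    assert not any(len(token.text) == 0 for token in tokens)


# A long-running test marked as slow, so it's only run with --slow.
@pytest.mark.slow
def test_tokenizer_handles_long_text(tokenizer):
    text = "Lorem ipsum dolor sit amet. " * 1000
    assert len(tokenizer(text)) > 0
```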
If the test cases can be extracted from the test, always `parametrize` them instead of hard-coding them into the test:
@pytest.mark.parametrize('text', ["google.com", "spacy.io"]) def test_tokenizer_keep_urls(tokenizer, text): tokens = tokenizer(text) assert len(tokens) == 1
This will run the test once for each `text` value. Even if you're only testing one example, it's usually best to specify it as a parameter. This will later make it easier for others to quickly add additional test cases without having to modify the test.
You can also specify parameters as tuples to test with multiple values per test:

```python
@pytest.mark.parametrize('text,length', [("U.S.", 1), ("us.", 2), ("(U.S.", 2)])
```
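A matching test function would then receive both values as arguments. For example (the function body is a hypothetical sketch that assumes the English tokenizer produces these token counts, not the original test):

```python
import pytest


# Sketch of a test consuming the (text, length) tuples from the example above.
@pytest.mark.parametrize('text,length', [("U.S.", 1), ("us.", 2), ("(U.S.", 2)])
def test_tokenizer_handles_punct(en_tokenizer, text, length):
    tokens = en_tokenizer(text)
    assert len(tokens) == length
```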
To test for combinations of parameters, you can add several `parametrize` markers:

```python
@pytest.mark.parametrize('text', ["A test sentence", "Another sentence"])
@pytest.mark.parametrize('punct', ['.', '!', '?'])
```
This will run the test with all combinations of the two parameters `text` and `punct` (here, 2 × 3 = 6 runs). Use this feature sparingly, though, as it can easily cause unnecessary or undesired test bloat.
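Put together, such a combined test might look like the following sketch. The function body is a hypothetical illustration of consuming both parameters, and assumes the basic tokenizer splits off trailing punctuation:

```python
import pytest


# Hypothetical sketch: runs once per (text, punct) combination.
@pytest.mark.parametrize('text', ["A test sentence", "Another sentence"])
@pytest.mark.parametrize('punct', ['.', '!', '?'])
def test_tokenizer_splits_trailing_punct(tokenizer, text, punct):
    tokens = tokenizer(text + punct)
    assert tokens[-1].text == punct
```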
Fixtures to create instances of spaCy objects and other components should only be defined once in the global `conftest.py`. We avoid having per-directory conftest files, as this can easily lead to confusion.
These are the main fixtures that are currently available:
| Fixture | Description |
| --- | --- |
| `tokenizer` | Basic, language-independent tokenizer. Identical to the multi-language tokenizer. |
| `en_tokenizer`, `de_tokenizer`, ... | Creates an English, German etc. tokenizer. |
| `en_vocab` | Creates an instance of the English `Vocab`. |
The fixtures can be used in all tests by simply setting them as an argument, like this:

```python
def test_module_do_something(en_tokenizer):
    tokens = en_tokenizer("Some text here")
```
If all tests in a file require a specific configuration, or use the same complex example, it can be helpful to create a separate fixture. This fixture should be added at the top of each file. Make sure to use descriptive names for these fixtures and don't override any of the global fixtures listed above. From looking at a test, it should immediately be clear which fixtures are used, and where they are coming from.
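For example, a file-specific fixture might look like this (the fixture name and example text are hypothetical):

```python
import pytest


# Hypothetical file-local fixture with a descriptive name that doesn't
# shadow any of the global fixtures defined in conftest.py.
@pytest.fixture
def shared_text():
    return "This is a sentence that several tests in this file share."


def test_tokenizer_handles_shared_text(en_tokenizer, shared_text):
    tokens = en_tokenizer(shared_text)
    assert len(tokens) > 0
```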
## Helpers and utilities
Our new test setup comes with a few handy utility functions that can be imported from `util.py`.

### Constructing a `Doc` object manually with `get_doc()`
Loading the models is expensive and not necessary if you're not actually testing the model performance. If all you need is a `Doc` object with annotations like heads, POS tags or the dependency parse, you can use `get_doc()` to construct it manually.
```python
def test_doc_token_api_strings(en_tokenizer):
    text = "Give it back! He pleaded."
    pos = ['VERB', 'PRON', 'PART', 'PUNCT', 'PRON', 'VERB', 'PUNCT']
    heads = [0, -1, -2, -3, 1, 0, -1]
    deps = ['ROOT', 'dobj', 'prt', 'punct', 'nsubj', 'ROOT', 'punct']

    tokens = en_tokenizer(text)
    doc = get_doc(tokens.vocab, [t.text for t in tokens], pos=pos, heads=heads, deps=deps)
    assert doc[0].text == 'Give'
    assert doc[0].lower_ == 'give'
    assert doc[0].pos_ == 'VERB'
    assert doc[0].dep_ == 'ROOT'
```
You can construct a `Doc` with the following arguments:

| Argument | Description |
| --- | --- |
| `words` | List of words, for example `[t.text for t in tokens]`. |
| `heads` | List of heads as integers. |
| `pos` | List of POS tags as text values. |
| `tags` | List of tag names as text values. |
| `deps` | List of dependencies as text values. |
| `ents` | List of entity tuples with `start`, `end` and `label`, for example `(0, 2, 'PERSON')`. |
Here's how to quickly get these values from within spaCy:

```python
doc = nlp(u'Some text here')
print([token.head.i - token.i for token in doc])
print([token.tag_ for token in doc])
print([token.pos_ for token in doc])
print([token.dep_ for token in doc])
print([(ent.start, ent.end, ent.label_) for ent in doc.ents])
```
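For the `ents` argument, here's a sketch of how the tuple format fits together with `get_doc()`. The words and the entity span are made up for illustration, and the relative import path depends on where the test file lives:

```python
from ..util import get_doc  # hypothetical import path, adjust per directory


# Illustrative sketch: construct a Doc with a named entity annotation
# using the (start, end, label) tuple format described above.
def test_doc_has_person_entity(en_tokenizer):
    tokens = en_tokenizer("Stewart Lee is a comedian")
    doc = get_doc(tokens.vocab, [t.text for t in tokens], ents=[(0, 2, 'PERSON')])
    assert [(ent.start, ent.end, ent.label_) for ent in doc.ents] == [(0, 2, 'PERSON')]
```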
**Note:** There's currently no way of setting the serializer data for the parser without loading the models. If this is relevant to your test, constructing the `Doc` via `get_doc()` won't work.
### Other utilities

| Name | Description |
| --- | --- |
| `apply_transition_sequence(parser, doc, sequence)` | Perform a series of pre-specified transitions, to put the parser in a desired state. |
| `add_vecs_to_vocab(vocab, vectors)` | Add a list of vector tuples (e.g. `[("text", [1, 2, 3])]`) to a given vocab. |
| `get_cosine(vec1, vec2)` | Get cosine for two given vectors. |
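For instance, a vector-related test could combine these helpers roughly like this. This is a sketch under the assumption that the signatures match the table above; the fixture usage and toy vectors are made up:

```python
import numpy

from ..util import add_vecs_to_vocab, get_cosine  # hypothetical import path


# Illustrative sketch: add two toy vectors to the vocab, then compare them.
def test_vocab_toy_vectors_similarity(en_vocab):
    apple, orange = [1.0, 0.0, 0.0], [0.5, 0.5, 0.0]
    add_vecs_to_vocab(en_vocab, [("apple", apple), ("orange", orange)])
    cosine = get_cosine(numpy.asarray(apple), numpy.asarray(orange))
    assert 0.0 < cosine < 1.0
```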
## Contributing to the tests
There's still a long way to go to finally reach 100% test coverage – and we'd appreciate your help! You can open an issue on our issue tracker and label it `tests`, or make a pull request to this repository.