# spaCy lookups data

This repository contains additional data files to be used with spaCy v2.2+. When it's installed in the same environment as spaCy, this package makes the resources for each language available as an entry point, which spaCy checks when setting up the Vocab and Lookups.
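To illustrate the entry-point mechanism, here is a minimal sketch of how such data packages can be discovered at runtime. It uses the standard library's `importlib.metadata`; the group name `"spacy_lookups"` is taken from this package's `setup.cfg` and should be treated as an assumption if your version differs.

```python
# Sketch: list lookup-table entry points visible in the current environment.
# An empty result simply means no data package is installed.
try:
    from importlib.metadata import entry_points  # Python 3.8+
except ImportError:
    from importlib_metadata import entry_points  # backport for older Pythons

def available_lookups(group="spacy_lookups"):
    """Return the sorted names of entry points registered under `group`."""
    try:
        eps = entry_points(group=group)          # Python 3.10+ keyword form
    except TypeError:
        eps = entry_points().get(group, [])      # 3.8/3.9 mapping form
    return sorted(ep.name for ep in eps)

print(available_lookups())  # e.g. ['es', 'tr', ...] once the package is installed
```

spaCy performs an equivalent lookup internally when it builds the `Vocab` and `Lookups`, so no explicit import of this package is required.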

Feel free to submit pull requests to update the data. For issues related to the data, lookups and integration, please use the spaCy issue tracker.


## FAQ

### Why does this exist?

The main purpose of this package is to keep the default spaCy installation small and avoid forcing every user to download large data files for all languages. Lookups data is now provided either via the pretrained models (which serialize out their vocabulary and lookup tables) or by explicitly installing this package or `spacy[lookups]`.

### When should I install this?

You should install this package if you want to use lemmatization for languages that don't yet have a pretrained model available for download and that don't rely on third-party libraries for lemmatization – for example, Turkish, Swedish or Croatian (see the data files). You should also install it if you're creating a blank model and want it to include lemmatization data. Once you've saved out the model (e.g. via `nlp.to_disk`), it will include the lookup tables as part of its `Vocab`.
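The blank-model workflow described above can be sketched as follows. This assumes spaCy v2.2+ and this package are installed in the same environment; the snippet is guarded so it degrades gracefully when spaCy is absent, and the output directory is illustrative.

```python
# Sketch: create a blank pipeline and save it to disk, so the lookup
# tables are serialized as part of its Vocab.
import tempfile

try:
    import spacy  # assumption: spaCy v2.2+ with spacy-lookups-data installed
    nlp = spacy.blank("tr")  # blank Turkish pipeline picks up the lookup data
    with tempfile.TemporaryDirectory() as model_dir:
        nlp.to_disk(model_dir)  # the saved model now bundles the tables
    saved = True
except ImportError:
    saved = False  # spaCy not available in this environment

print("saved:", saved)
```

Because the tables are serialized with the model, downstream users of the saved model do not need this package installed.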

### Is this package only for lemmatization?

At the moment, yes. However, we are considering including other lookup lists and tables as well, e.g. large tokenizer exception files.

## Running tests

This package now also includes all data-specific tests. The test suite depends on spaCy.

```bash
pip install -r requirements.txt
python -m pytest spacy_lookups_data
```

If you've installed the package in your spaCy environment, you can also run the tests like this:

```bash
python -m pytest --pyargs spacy_lookups_data
```