We're very excited to finally introduce spaCy v2.0. The new version gets spaCy up to date with the latest deep learning technologies and makes it much easier to run spaCy in scalable cloud computing workflows. We've fixed over 60 bugs (every open bug!), including several long-standing issues, trained 13 neural network models for 7+ languages and added alpha tokenization support for 8 new languages. We also re-wrote almost all of the usage guides, API docs and code examples.
```bash
pip install -U spacy
conda install -c conda-forge spacy
```
✨ Major features and improvements
- NEW: Convolutional neural network models for English, German, Spanish, Portuguese, French, Italian, Dutch and multi-language NER. Substantial improvements in accuracy over the v1.x models.
- NEW: `Vectors` class for managing word vectors, plus trainable document vectors and contextual similarity via convolutional neural networks.
- NEW: Custom processing pipeline components and extension attributes on the `Doc`, `Token` and `Span`.
- NEW: Built-in, trainable text classification pipeline component.
- NEW: Built-in displaCy visualizers for dependencies and entities, with Jupyter notebook support.
- NEW: Alpha tokenization for Danish, Polish, Indonesian, Thai, Hindi, Irish, Turkish, Croatian and Romanian.
- Improved language data, support for lazy loading and simple, lookup-based lemmatization for English, German, French, Spanish, Italian, Hungarian, Portuguese and Swedish.
- Support for multi-language models and a new `MultiLanguage` class.
- Strings are now resolved to hash values, instead of mapped to integer IDs. This means that the string-to-int mapping no longer depends on the vocabulary state.
- Improved and consistent saving, loading and serialization across objects, plus Pickle support.
- `PhraseMatcher` for matching large terminology lists as `Doc` objects, plus a revised `Matcher` API.
- New CLI commands, including `evaluate`, plus an entry point for a `spacy` command to use instead of `python -m spacy`.
- Experimental GPU support via Chainer's CuPy module.
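The switch from integer IDs to hash values mentioned above can be illustrated with a small pure-Python sketch (this is a conceptual illustration, not spaCy's implementation; the choice of MD5 and the 64-bit truncation are assumptions made here purely for the demo): a stable hash of a string is the same in every process, regardless of what else is in the vocabulary, whereas incremental IDs depend on insertion order.

```python
import hashlib

def stable_hash(s):
    # Deterministic 64-bit hash of a string -- same value in every
    # process, regardless of vocabulary state (unlike Python's
    # randomized built-in hash()).
    return int.from_bytes(hashlib.md5(s.encode("utf8")).digest()[:8], "little")

# Incremental integer IDs depend on the order words were added...
ids_a = {w: i for i, w in enumerate(["coffee", "tea"])}
ids_b = {w: i for i, w in enumerate(["tea", "coffee"])}
assert ids_a["coffee"] != ids_b["coffee"]  # 0 in one vocab, 1 in the other

# ...while a stable hash of the string itself does not.
assert stable_hash("coffee") == stable_hash("coffee")
```

This is why two processes can now exchange annotations without first synchronising their string-to-int mappings.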
spaCy v2.0 comes with 13 new convolutional neural network models for 7+ languages. The models have been designed and implemented from scratch specifically for spaCy. A novel bloom embedding strategy with subword features is used to support huge vocabularies in tiny tables.
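The bloom embedding idea can be sketched in plain Python (a conceptual toy, not spaCy's implementation; the table size, number of hash seeds and summing of rows are all assumptions for illustration): each word is hashed several times into a small fixed table, and its vector is the sum of the rows it hits, so an unbounded vocabulary shares a fixed number of parameters.

```python
import hashlib
import random

ROWS, DIM, SEEDS = 1000, 4, (0, 1, 2)  # tiny table, purely for illustration

def bucket(word, seed):
    # Map (word, seed) deterministically to one of ROWS table rows.
    h = hashlib.md5(f"{seed}:{word}".encode("utf8")).digest()
    return int.from_bytes(h[:8], "little") % ROWS

# A fixed ROWS x DIM table of "learned" parameters (random here).
random.seed(0)
table = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(ROWS)]

def embed(word):
    # Word vector = sum of the rows selected by each hash seed, so any
    # string -- even one never seen before -- gets a vector from the
    # same small table.
    rows = [table[bucket(word, s)] for s in SEEDS]
    return [sum(vals) for vals in zip(*rows)]

assert embed("coffee") == embed("coffee")   # deterministic
assert len(embed("anyword-at-all")) == DIM  # every string gets a vector
```

Collisions are possible, but with several seeds per word they rarely coincide on all rows, which is what lets huge vocabularies fit in tiny tables.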
The core models include part-of-speech tags, dependency labels and named entities. Small models include only context-specific token vectors, while medium-sized and large models ship with word vectors. For more details, see the models directory or try our new model comparison tool.
| Language | Components | Size |
| --- | --- | --- |
| English | Tagger, parser, entities | 35 MB |
| English | Tagger, parser, entities, vectors | 115 MB |
| English | Tagger, parser, entities, vectors | 812 MB |
| German | Tagger, parser, entities | 36 MB |
| Spanish | Tagger, parser, entities | 35 MB |
| Spanish | Tagger, parser, entities, vectors | 93 MB |
| Portuguese | Tagger, parser, entities | 36 MB |
| French | Tagger, parser, entities | 37 MB |
| French | Tagger, parser, entities, vectors | 106 MB |
| Italian | Tagger, parser, entities | 34 MB |
| Dutch | Tagger, parser, entities | 34 MB |
You can download a model by using its name or shortcut. To load a model, use `spacy.load()`, or import it as a module and call its `load()` method.
```bash
spacy download en_core_web_sm
```

```python
import spacy
nlp = spacy.load('en_core_web_sm')

import en_core_web_sm
nlp = en_core_web_sm.load()
```
spaCy v2.0's new neural network models bring significant improvements in accuracy, especially for English Named Entity Recognition. The new `en_core_web_lg` model makes about 25% fewer mistakes than the corresponding v1.x model and is within 1% of the current state of the art (Strubell et al., 2017). The v2.0 models are also cheaper to run at scale, as they require under 1 GB of memory per process.
🔴 Bug fixes
- Fix issue #125, #228, #299, #377, #460, #606, #930: Add full Pickle support.
- Fix issue #152, #264, #322, #343, #437, #514, #636, #785, #927, #985, #992, #1011: Fix and improve serialization and deserialization of `Doc` objects.
- Fix issue #285, #1225: Fix memory growth problem when streaming data.
- Fix issue #512: Improve parser to prevent it from returning two `ROOT` objects.
- Fix issue #519, #611, #725: Retrain German model with better tokenized input.
- Fix issue #524: Improve parser and handling of noun chunks.
- Fix issue #621: Prevent double spaces from changing the parser result.
- Fix issue #664, #999, #1026: Fix bugs that would prevent loading trained NER models.
- Fix issue #671, #809, #856: Fix importing and loading of word vectors.
- Fix issue #683, #1052, #1442: Don't require tag maps to provide a `SP` tag.
- Fix issue #753: Resolve bug that would tag OOV items as personal pronouns.
- Fix issue #860, #956, #1085, #1381: Allow custom attribute extensions on `Doc`, `Token` and `Span`.
- Fix issue #905, #954, #1021, #1040, #1042: Improve parsing model and allow faster accuracy updates.
- Fix issue #933, #977, #1406: Update online demos.
- Fix issue #995: Improve punctuation rules for Hebrew and other non-Latin languages.
- Fix issue #1008: `train` command finally works correctly if used without dev data.
- Fix issue #1012: Improve word vectors documentation.
- Fix issue #1043: Improve NER models and allow faster accuracy updates.
- Fix issue #1044: Fix bugs in French model and improve performance.
- Fix issue #1051: Improve error messages if functionality needs a model to be installed.
- Fix issue #1071: Correct typo of "whereve" in English tokenizer exceptions.
- Fix issue #1088: Emoji are now split into separate tokens wherever possible.
- Fix issue #1240: Allow merging `Span`s without keyword arguments.
- Fix issue #1243: Resolve undefined names in deprecated functions.
- Fix issue #1250: Fix caching bug that would cause tokenizer to ignore special case rules after first parse.
- Fix issue #1257: Ensure the comparison operator `==` works as expected on tokens.
- Fix issue #1291: Improve documentation of training format.
- Fix issue #1336: Fix bug that caused inconsistencies in NER results.
- Fix issue #1375: Make sure `Token.nbor()` raises an `IndexError` if the neighbouring token doesn't exist.
- Fix issue #1450: Fix error when the OP quantifier `"*"` ends the match pattern.
- Fix issue #1452: Fix bug that would mutate the original text.
📖 Documentation and examples
- NEW: Completely rewritten, reorganised and redesigned usage and API docs, plus models directory and model comparison tool.
- NEW: spacy 101 guide with simple explanations and illustrations of the most important concepts and an overview of spaCy's features and capabilities.
- Documentation on custom processing pipelines, visualizers, detailed training tutorials and improved guides on word vectors and rule-based matching.
- Updated code examples for training, information extraction and pipeline management.
⚠️ Backwards incompatibilities
For the complete table and more details, see the guide on what's new in v2.0.
Note that the old v1.x models are not compatible with spaCy v2.0.0. If you've trained your own models, you'll have to re-train them to be able to use them with the new version. For a full overview of changes in v2.0, see the documentation and guide on migrating from spaCy 1.x.
The `Language.pipe` method allows spaCy to batch documents, which brings a significant performance advantage in v2.0. The new neural network models introduce some overhead per batch, so if you're processing a number of documents in a row, you should use `nlp.pipe` and process the texts as a stream.
```python
docs = nlp.pipe(texts)

# BAD:
# docs = (nlp(text) for text in texts)
```
To make usage easier, there's now a boolean `as_tuples` keyword argument that lets you pass in an iterator of `(text, context)` pairs, so you can get back an iterator of `(doc, context)` tuples.
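To show the shape of that flow without needing a model installed, here's a mock in plain Python (`fake_pipe` is a stand-in invented for this sketch, not spaCy's `nlp.pipe`; only the tuple-passing behaviour mirrors the real keyword argument):

```python
def fake_pipe(items, as_tuples=False):
    # Stand-in for nlp.pipe: "processing" a text just upper-cases it here.
    if as_tuples:
        for text, context in items:
            # Context is passed through untouched alongside each result.
            yield (text.upper(), context)
    else:
        for text in items:
            yield text.upper()

pairs = [("hello world", {"id": 1}), ("good morning", {"id": 2})]
for doc, context in fake_pipe(pairs, as_tuples=True):
    print(context["id"], doc)
```

The useful property is that you never have to zip results back up with their metadata yourself -- each output arrives with the context it came in with.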
`spacy.load()` is now only intended for loading models – if you need an empty language class, import it directly instead, e.g. `from spacy.lang.en import English`. If the model you're loading is a shortcut link or package name, spaCy will expect it to be a model package, import it and call its `load()` method. If you supply a path, spaCy will expect it to be a model data directory and use the `meta.json` to initialise a language class and call `nlp.from_disk()` with the data path.
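That resolution logic can be sketched roughly as follows (a simplification written for this note, not spaCy's actual code; the real `spacy.load()` also resolves shortcut links and reads more fields from `meta.json`):

```python
import importlib
import json
import os
import tempfile

def load_model_sketch(name_or_path):
    # If the argument is an existing directory, treat it as model data:
    # read meta.json to find out which language the model is for.
    if os.path.isdir(name_or_path):
        with open(os.path.join(name_or_path, "meta.json")) as f:
            meta = json.load(f)
        return ("from_disk", meta["lang"])
    # Otherwise, assume it's an installed package: import it and
    # call its load() method (not exercised in this sketch).
    module = importlib.import_module(name_or_path)
    return ("package", module.load())

# Directory case: a dummy model directory containing only a meta.json.
model_dir = tempfile.mkdtemp()
with open(os.path.join(model_dir, "meta.json"), "w") as f:
    json.dump({"lang": "en"}, f)
assert load_model_sketch(model_dir) == ("from_disk", "en")
```

The point of the two branches is that a name means "importable package" and a path means "data directory" -- there's no more ambiguous `path=` keyword.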
```python
nlp = spacy.load('en')
nlp = spacy.load('en_core_web_sm')
nlp = spacy.load('/model-data')
nlp = English().from_disk('/model-data')

# OLD:
# nlp = spacy.load('en', path='/model-data')
```
All built-in pipeline components are now subclasses of `Pipe`, fully trainable and serializable, and follow the same API. Instead of updating the model and telling spaCy when to stop, you can now explicitly call `begin_training`, which returns an optimizer you can pass into the `update` function. While `update` still accepts sequences of `GoldParse` objects, you can now also pass in a list of strings and dictionaries describing the annotations. This is the recommended usage, as it removes one layer of abstraction from the training.
```python
optimizer = nlp.begin_training()
for itn in range(1000):
    for texts, annotations in train_data:
        nlp.update(texts, annotations, sgd=optimizer)
nlp.to_disk('/model')
```
spaCy's serialization API is now consistent across objects. All containers and pipeline components have `to_disk()`, `from_disk()`, `to_bytes()` and `from_bytes()` methods.

```python
nlp.to_disk('/model')
nlp.vocab.to_disk('/vocab')

# OLD:
# nlp.save_to_directory('/model')
```
Processing pipelines and attribute extensions
Models can now define their own processing pipelines as a list of strings mapping to component names. Components receive a `Doc`, modify it and return it to be processed by the next component in the pipeline. You can add custom components to `nlp.pipeline` and create extensions to add custom attributes, properties and methods to the `Doc`, `Token` and `Span`.
```python
nlp = spacy.load('en')
my_component = MyComponent()
nlp.add_pipe(my_component, before='tagger')

Doc.set_extension('my_attr', default=True)
doc = nlp(u"This is a text.")
assert doc._.my_attr
```
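The `MyComponent` above is left undefined; a minimal sketch of the Doc-in/Doc-out contract a component must follow might look like this (plain Python with a stand-in `FakeDoc` class invented for the sketch -- not spaCy's `Doc` or `Pipe`):

```python
class FakeDoc:
    # Minimal stand-in for spacy.tokens.Doc, just to show the contract.
    def __init__(self, text):
        self.text = text
        self.user_data = {}

class MyComponent:
    name = "my_component"  # components identify themselves by name

    def __call__(self, doc):
        # A component receives a Doc, may modify it in place...
        doc.user_data["seen_by"] = self.name
        # ...and must return it for the next component in the pipeline.
        return doc

# A pipeline is then just a sequence of Doc -> Doc callables.
pipeline = [MyComponent()]
doc = FakeDoc("This is a text.")
for component in pipeline:
    doc = component(doc)
assert doc.user_data["seen_by"] == "my_component"
```

Because each component both accepts and returns the `Doc`, components compose freely and can be inserted at any position in the pipeline.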
This release is brought to you by @honnibal and @ines. Thanks to @Gregory-Howard, @luvogels, @Ferdous-Al-Imran, @uetchy, @akYoung, @kengz, @raphael0202, @ardeego, @yuvalpinter, @dvsrepo, @frascuchon, @oroszgy, @v3t3a, @Tpt, @thinline72, @jarle, @jimregan, @nkruglikov, @delirious-lettuce, @geovedi, @wannaphongcom, @h4iku, @IamJeffG, @binishkaspar, @ramananbalakrishnan, @jerbob92, @mayukh18, @abhi18av and @uwol for the pull requests and contributions. Also thanks to everyone who submitted bug reports and took the spaCy user survey – your feedback made a big difference!