Is there a reason why the suggested way to load neuralcoref is through the spacy load method (`nlp = spacy.load('en_coref_md')`) instead of through what appears to be the recommended API for adding extensions as pipeline components? https://spacy.io/usage/processing-pipelines. I see that neuralcoref is loaded this way in the CLI. Doesn't the spacy load method place restrictions on how the neuralcoref library can be used with other vocab/vectors?
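For context, here is a minimal sketch of the suggested usage the question refers to, assuming the `en_coref_md` package is installed (the `doc._` attribute names follow the NeuralCoref README):

```python
import spacy

# The packaged model bundles vocab, vectors, and the coref component,
# so a single load call gives a ready-to-use pipeline.
nlp = spacy.load('en_coref_md')

doc = nlp(u'My sister has a dog. She loves him.')
print(doc._.has_coref)        # True if any coreference cluster was found
print(doc._.coref_clusters)   # e.g. [My sister: [My sister, She], a dog: [a dog, him]]
```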
Well, you can also do it that way, but here is the story:
spaCy's instructions work nicely for pipeline extensions that don't carry trained weights. In our case, however, it means you first have to download the weights and the extension (that was the process in previous versions of NeuralCoref, but it was a bit cumbersome for the user), then load the coref extension and populate its weights before adding it to the pipe.
The instructions I give are the simplest to use if you don't need to re-train the model.
If you do need to re-train the model, you should install neuralcoref from source and, indeed, use spaCy's instructions once it has been trained.
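For anyone going the re-training route, the spaCy pattern being referred to looks roughly like this. Treat it as a hedged sketch: the `NeuralCoref` component class, its constructor signature, and the `from_disk` weight path are illustrative assumptions rather than a documented public API at this point.

```python
import spacy
from neuralcoref import NeuralCoref  # assumes a source install exposes this class

# Start from any model, i.e. with whatever vocab/vectors you need.
nlp = spacy.load('en_core_web_sm')

# Illustrative: build the component on the shared vocab, load the
# weights you trained, then register it as a pipeline component per
# https://spacy.io/usage/processing-pipelines.
coref = NeuralCoref(nlp.vocab)
coref.from_disk('/path/to/your/trained/weights')  # hypothetical path
nlp.add_pipe(coref, name='neuralcoref', last=True)

doc = nlp(u'My sister has a dog. She loves him.')
print(doc._.coref_clusters)
```

This is the trade-off described above: the packaged-model route hides the weight-loading step from the user, while the component route gives you control over which vocab/vectors the component is attached to.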