Closed
Labels
bug (Bugs and behaviour differing from documentation), compat (Cross-platform and cross-Python compatibility), feat / tokenizer (Feature: Tokenizer), help wanted (Contributions welcome!), upgrade (Issues related to upgrading spaCy)
Description
How to reproduce the behaviour
I found a bug where tokenization is completely broken with version 2.1.0a10 on Python 2.7: every token is split into sub-strings. I have reproduced this on three of my machines.
$ conda create -n py27_spacy2 python=2.7
$ source activate py27_spacy2
$ pip install -U spacy-nightly
$ python -m spacy download en_core_web_sm
$ python -c "import spacy; nlp=spacy.load('en_core_web_sm'); doc=nlp(u'hello world'); print ','.join([t.text for t in doc])"
h,e,ll,o,w,o,r,l,d
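For anyone pinning environments until this is fixed, here is a minimal stdlib-only sketch of a guard for the affected combination. The `is_affected` helper is hypothetical (not part of spaCy), and it only encodes the version pair observed in this report:

```python
# Hypothetical guard (not part of spaCy): flags the environment combination
# reported here -- spaCy 2.1.0a10 on Python 2.7 -- where tokenization
# splits every token into sub-strings.
AFFECTED_SPACY = "2.1.0a10"

def is_affected(spacy_version, py_version_info):
    """Return True for the buggy nightly running on Python 2.7."""
    return spacy_version == AFFECTED_SPACY and tuple(py_version_info[:2]) == (2, 7)

print(is_affected("2.1.0a10", (2, 7)))  # the environment from this report
print(is_affected("2.1.0a10", (3, 6)))  # Python 3 was not observed to be affected
```

In practice you would pass `spacy.__version__` and `sys.version_info`; whether other Python 2 minor versions are affected is not established by this report.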
Your Environment
- Operating System: Ubuntu
- Python Version Used: 2.7
- spaCy Version Used: 2.1.0a10