horrible performance of textwrap.wrap() with a long word #66877
Wrapping a paragraph containing a long word takes a lot of time:

$ time python3 -c 'import textwrap; textwrap.wrap("a" * 2 ** 16)'
real    3m14.923s

A straightforward replacement is 5000 times faster:

$ time python3 -c '("".join(x) for x in zip(*[iter("a" * 2 ** 16)] * 70))'
real    0m0.053s

Tested on Debian with python3.4 3.4.2-1 and python2.7 2.7.8-10.
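A rough way to see the growth rate (a sketch; absolute times are machine-dependent, the point is the ratio between successive runs):

import textwrap
import time

# Doubling the input roughly quadruples the runtime -- the signature of
# quadratic complexity in the word-splitting step.
for n in (2 ** 12, 2 ** 13, 2 ** 14):
    start = time.perf_counter()
    textwrap.wrap("a" * n)
    print(n, round(time.perf_counter() - start, 3))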
This particular case is related to the worst-case behavior of the wordsep_re regular expression. When the text contains a long sequence of word characters not ended by a hyphen, or a long sequence of non-word, non-space characters (and in some other cases), the computational complexity of matching this regular expression is quadratic. This is a peculiarity of the current implementation of the regular expression engine. It may be possible to rewrite the regular expression so that the quadratic complexity goes away, but this is not easy. The workaround: use break_on_hyphens=False.
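For reference, the workaround looks like this (a sketch; with break_on_hyphens=False, TextWrapper splits with its simple whitespace-based pattern, wordsep_simple_re, instead of wordsep_re):

import textwrap

text = "a" * 2 ** 16
# The simple whitespace-only splitter has no pathological backtracking,
# so this returns quickly even for a very long single word.
lines = textwrap.wrap(text, break_on_hyphens=False)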
Maybe atomic grouping or possessive quantifiers (bpo-433030) would help with this issue.
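For context, atomic groups (?>...) and possessive quantifiers did eventually land in the re module in Python 3.11. A small illustration of the semantics (not tied to any patch here):

import re

# a*+ is possessive: it consumes all the a's and never gives one back,
# so the trailing 'a' in the pattern can never match.  (Python 3.11+)
print(re.fullmatch(r'a*+a', 'aaaa'))  # None
# The ordinary greedy a* backtracks one character and the match succeeds.
print(re.fullmatch(r'a*a', 'aaaa'))   # <re.Match object; span=(0, 4), ...>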
Here is a patch which solves the algorithmic complexity issue by using a different scheme: instead of splitting, match words incrementally.
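A minimal sketch of that idea (the chunk pattern below is a deliberately simplified stand-in, not the pattern from the actual patch):

import re

# Describe the chunks themselves and walk the text once with finditer(),
# instead of handing a backtracking-heavy delimiter pattern to re.split().
# Each character is examined a bounded number of times, so the total work
# stays linear.
CHUNK = re.compile(r'\s+|\w+-|\w+|\W+')

def split_chunks(text):
    return [m.group() for m in CHUNK.finditer(text)]

print(split_chunks('this-is-a-useful-feature, indeed'))
# ['this-', 'is-', 'a-', 'useful-', 'feature', ', ', 'indeed']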
Actually, it is enough to change the regexp while still using re.split(). Updated patch attached.
Unfortunately there are two disadvantages:

$ ./python -m timeit -s 'import textwrap; s = "abcde " * 10**4' -- 'textwrap.wrap(s)'
Unpatched: 178 msec per loop

The first disadvantage is what stopped me from writing a patch. If we change the way words are split, I suggest using the undocumented re scanner.
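The undocumented scanner interface works roughly like this (a sketch; since the API is undocumented, re.finditer is the supported way to get the same left-to-right consumption):

import re

pat = re.compile(r'\s+|\S+')
scanner = pat.scanner('wrap this text')
tokens = []
while True:
    # scanner.match() anchors at the current position and advances past
    # each match, so the text is consumed strictly left to right.
    m = scanner.match()
    if m is None:
        break
    tokens.append(m.group())
print(tokens)  # ['wrap', ' ', 'this', ' ', 'text']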
Are you sure? I get the reverse results here (second patch):

Unpatched:
Patched:
With my second patch, that shouldn't be a problem.
Oh, sorry, I tested your first patch. Your second patch is faster than the current code. However:

>>> textwrap.wrap('"1a-2b', width=5)
['"1a-', '2b']

With the patch the result is ['"1a-2', 'b'].
Yes... but in both cases the result is nonsensical, and untested.
Possessive quantifiers (bpo-433030) are not a panacea. They can speed up regular expressions, but the complexity stays quadratic: even without internal backtracking, the engine still retries the pattern at each of the O(n) possible starting positions, doing up to O(n) work per attempt. Antoine's patch makes the complexity linear.
The current regex produces an insane result:

$ ./python -c "import textwrap; print(textwrap.wrap('this-is-a-useful-feature', width=1, break_long_words=False))"
['this-', 'is-a', '-useful-', 'feature']

Antoine's regex produces a more correct result for this case: ['this-', 'is-', 'a-', 'useful-', 'feature']. But it is still not totally correct: a one-letter word should not be separated. This can be easily fixed.
Why not? I guess it depends on English's rules for word splitting, which I don't know. |
I suppose this is a common rule in many languages. And the current code supports it (there is special code in the regex to ensure this rule). But the patch shouldn't add a regression:

$ ./python -c "import textwrap; print(textwrap.wrap('this-is-a-useful', width=1, break_long_words=False))"
Current code: ['this-', 'is-a-useful']

Just use a lookahead assertion to ensure that the hyphen is followed by at least two letters. My previous message was about the fact that the current code is not always correct, so it is acceptable to replace it with code that is not absolutely equivalent.
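A toy illustration of that rule (not the real textwrap pattern): with fixed-width lookbehind and lookahead, a break is allowed only after a hyphen that has at least two word characters on each side, so one-letter fragments are never separated. (re.split on a zero-width pattern requires Python 3.7+.)

import re

# Split only between '\w\w-' and '\w\w': both sides of the hyphen must
# have at least two word characters.
split_at = re.compile(r'(?<=\w\w-)(?=\w\w)')
print(split_at.split('this-is-a-useful-feature'))
# ['this-', 'is-a-useful-', 'feature']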
I frankly don't know about this rule. And the tests don't check for it, so for me it's not broken.
Tests are not perfect. But this is intentional design. The relevant part of the initial regex enforced this rule; now it is more complicated. Note the '(?=\w{2,})'.
Here is a patch which is closer to the current code but solves the complexity issue and also fixes some bugs in the current code, for example these current results:

$ ./python -c "import textwrap; print(textwrap.wrap('this-is-a-useful-feature', width=1, break_long_words=False))"
['this-', 'is-a', '-useful-', 'feature']

$ ./python -c "import textwrap; print(textwrap.wrap('what-d\x27you-call-it.', width=1, break_long_words=False))"
['what-d', "'you-", 'call-', 'it.']
LGTM.
I don't understand:

+ expect = ("this-|is-a-useful-|feature-|for-|"

Why would "is-a-useful" remain unsplit? It looks like you're making up new rules.
This is an old rule: \w{2,}-(?=\w{2,}) -- a single letter shouldn't be separated. But there was a bug in that simple regex: it splits a word after a non-word character (in particular an apostrophe or hyphen) if it is followed by word characters and a hyphen. There were attempts to fix this bug in bpo-596434 and bpo-965425, but they missed cases where the non-word character occurs inside a word. Originally I had assigned this issue only to 3.5 because I supposed that the solution needed either new features in re or backward-incompatible changes to the word-splitting algorithm. But the solution found doesn't require 3.5-only features, doesn't change the interface, and fixes performance and behavior bugs. So I think it should be applied to maintained releases too.
I don't agree. This was an implementation detail. There was no test, and it wasn't specified anywhere.
Those don't seem related to single letters between hyphens.
It does change behaviour in ways that could break existing code. The textwrap behaviour is underspecified, so it's not ok to assume that the previous behaviour was obviously buggy.
https://owl.english.purdue.edu/owl/resource/576/01/ Rule 8. So, no, single letters in the middle of a word aren't a problem, only at the beginning or the end of the word.
Thank you, David. If splitting a single letter surrounded by hyphens is desirable, here is a more complicated patch which does this. It deviates more from the original code, but it doesn't look like it breaks any reasonable example.
Aren't ['this-', 'is-a', '-useful-', 'feature'] and ['what-d', "'you-", 'call-', 'it.'] obvious bugs?
To clarify, I would be fine with the previous patch if it didn't add the tests.
Obvious according to which rules? If we want to improve the behaviour of textwrap, IMHO it should be in a separate issue. And someone would have to study the word-wrapping rules of the English language :-)
What I usually do in cases like this is add the tests but mark them with comments saying that they test current behavior, not parts of the (currently defined) API. That way you know when a change changes behavior, and can then decide whether that is a problem, as opposed to inadvertently changing behavior and only finding out when the bug reports roll in :) But yeah, defining the rules textwrap should follow is a different issue from the performance issue.
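A sketch of that convention (the expected list here is illustrative, not a statement of what textwrap guarantees):

import textwrap
import unittest

class WordSplitTests(unittest.TestCase):
    def test_hyphen_splitting(self):
        # Note: this asserts *current* behavior; hyphen splitting is not
        # part of the documented textwrap API.  The test exists so that a
        # behavior change is noticed and can be evaluated deliberately.
        self.assertEqual(
            textwrap.wrap('this-is-a-useful-feature', width=1,
                          break_long_words=False),
            ['this-', 'is-a-useful-', 'feature'])

if __name__ == '__main__':
    unittest.main()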
The absence of tests could let new bugs go undetected and old bugs reappear. If you think a word should be split before a hyphen or apostrophe, there should be some grammatical or typographical reference on the Internet to prove it. I would be fine with moving the fix of textwrap behavior to a separate issue, but what would happen to this issue then? We don't have a patch which only fixes the performance complexity and doesn't change the behavior.
That's a good idea!
So which patch (with the tests amended as suggested) is preferable?
Ping. What can I do to move this issue forward?
wordsplit_3.patch is wordsplit_2.patch with a few comments added in the tests. Is it enough?
New changeset 7bd87a219813 by Serhiy Storchaka in branch 'default': |