Email Parser use 100% CPU #65647
Using email.parser to extract a MIME message's headers: when the message has attachments, the process consumes 100% of the CPU, and it can take up to four minutes to finish parsing the headers.
Can you provide more details on how to reproduce the problem, please? For example, a sample message and the sequence of Python calls you use to parse it.
I am opening a file and passing the file descriptor to this function. The file's size is 12 MB. Thanks. 2014-05-06 16:31 GMT-03:00 R. David Murray <report@bugs.python.org>:
Sorry! 2014-05-06 16:51 GMT-03:00 jader fabiano <report@bugs.python.org>:
We'll need the data file as well. This is going to be a data-dependent issue. With a 12 MB body, I'm guessing there's some decoding pathology involved, which may or may not have already been fixed in Python 3. To confirm this you could use HeaderParser instead of Parser, which won't try to decode the body.
No, the file is 12 MB because it has attachments. I am going to show an example. You can use a file like this:

Date: Tue, 6 May 10:27:17 -0300 (BRT)
Content-Type: multipart/mixed; boundary=24f59adc-d522-11e3-a531-00265a0f1361

--24f59adc-d522-11e3-a531-00265a0f1361
--24f59a28-d522-11e3-a531-00265a0f1361^M
<br/><font color="#00000" face="verdana" size="3">Test example</b>
--24f59a28-d522-11e3-a531-00265a0f1361--
--24f59adc-d522-11e3-a531-00265a0f1361
attachment content in base64......
--24f59adc-d522-11e3-a531-00265a0f1361--

2014-05-06 17:03 GMT-03:00 R. David Murray <report@bugs.python.org>:
Sorry, I was using RFC-speak. A message is divided into 'headers' and 'body', and all of the attachments are part of the body in RFC terms. But think of it as 'initial headers' and 'everything else'. Please either attach the full file, and/or try your test using HeaderParser and report the results. However, it occurs to me that the attachments aren't decoded until you retrieve them, so whatever is going on it must be something other than a decoding issue. Nevertheless, Parser actually parses the whole message, attachments included, so we'll need the actual message in order to reproduce this (unless you can reproduce it with a smaller message).
Also to clarify: HeaderParser will *also* read the entire message, it just won't look for MIME attachments in the 'everything else', it will just treat the 'everything else' as arbitrary data and record it as the payload of the top level Message object.
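For example, a comparison along these lines would show whether parsing the body is the expensive part. This is only a sketch: 'message.eml' is a placeholder for the actual 12 MB file, and the timings are just for a rough comparison.

import time
from email.parser import Parser, HeaderParser

# 'message.eml' is a placeholder for the problematic 12 MB file.
for cls in (HeaderParser, Parser):
    with open('message.eml') as f:
        start = time.time()
        msg = cls().parse(f)
    print(cls.__name__, time.time() - start, len(msg.keys()))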
Hi. Thanks 2014-05-06 17:25 GMT-03:00 R. David Murray <report@bugs.python.org>:
Therefore the bug is that the email parser is dramatically slow for abnormally long lines. It has quadratic complexity in the line size. Minimal example:
import email.parser
import time
data = 'From: example@example.com\n\n' + 'x' * 10000000
start = time.time()
email.parser.Parser().parsestr(data)
print(time.time() - start)
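For comparison, a quick sketch: roughly the same amount of data broken into ordinary 76-character lines should parse in a small fraction of the time, which points at line length rather than total message size as the trigger.

import email.parser
import time

# Roughly the same ~10 MB payload, but broken into 76-character lines.
data = 'From: example@example.com\n\n' + ('x' * 76 + '\n') * 130000
start = time.time()
email.parser.Parser().parsestr(data)
print(time.time() - start)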
Parser reads small chunks (8192 characters) from the input file and feeds them to FeedParser, which pushes the data into BufferedSubFile. In BufferedSubFile.push(), chunks of incomplete data are accumulated in a buffer and repeatedly scanned for newlines. Every push() therefore has linear complexity in the size of the accumulated buffer, and the total complexity is quadratic. Here is a patch which fixes the problem with parsing long lines. Feel free to add comments if they are needed (there is an abundance of comments in the module).
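To illustrate the shape of the problem, here is a rough sketch with invented class names (not the real BufferedSubFile and not the patch): accumulating unterminated data into one string and rescanning it on every push is quadratic, while collecting pending chunks in a list and joining them only when a line break finally arrives keeps each push cheap.

# Illustrative sketch only -- not the code from email.feedparser.

class QuadraticBuffer:
    # Old pattern: concatenate and rescan the whole partial buffer on each push.
    def __init__(self):
        self._partial = ''
        self.lines = []
    def push(self, chunk):
        self._partial += chunk                  # copies the whole buffer
        parts = self._partial.splitlines(True)  # and rescans it as well
        if parts and not parts[-1].endswith('\n'):
            self._partial = parts.pop()
        else:
            self._partial = ''
        self.lines.extend(parts)

class LinearBuffer:
    # Fixed pattern: remember unterminated chunks, join them only once.
    # (Deliberately ignores bare '\r' and Unicode line breaks at chunk
    # boundaries -- exactly the corner cases discussed further below.)
    def __init__(self):
        self._partial = []
        self.lines = []
    def push(self, chunk):
        if '\n' not in chunk:
            self._partial.append(chunk)         # no copying of old data
            return
        chunk = ''.join(self._partial) + chunk
        self._partial = []
        parts = chunk.splitlines(True)
        if not parts[-1].endswith('\n'):
            self._partial = [parts.pop()]
        self.lines.extend(parts)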
I think the push() code can be a little cleaner. Attaching a revised patch that simplifies push() a bit.
fix_email_parse.diff does not work when one chunk ends with '\r' and the next chunk doesn't start with '\n'.
Attaching a revised patch. I forgot to reapply splitlines.
Attaching a more extensive test.
fix_email_parse2.diff slightly changes behavior. See my comments on Rietveld. As for fix_prepending2.diff, could you please provide any benchmark results? And there is yet another bug in the current code: str.splitlines() splits a string not only at '\r', '\n' or '\r\n', but at any Unicode line break character (e.g. '\x85', '\u2028', etc.), and when a chunk ends with such a line break character, the line will not be broken. Something should definitely be fixed: either lines should be broken only at '\r', '\n' or '\r\n', or other line break characters should be handled correctly when they happen at the end of a chunk. What would you say about this, David?
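For reference, the splitlines() behaviour in question is plain Python 3 str behaviour, nothing patch-specific:

# str.splitlines() breaks on more than just '\n', '\r' and '\r\n'.
print('a\nb'.splitlines())      # ['a', 'b']
print('a\x85b'.splitlines())    # ['a', 'b']  (NEL, U+0085)
print('a\u2028b'.splitlines())  # ['a', 'b']  (LINE SEPARATOR)

# Code that looks for '\n' explicitly treats such a chunk as unterminated.
print('a\x85'.find('\n'))       # -1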
No. Inserting at the beginning of a list is always O(n) and inserting at the beginning of a deque is always O(1). |
Yes, but if n is limited, O(n) becomes O(1). In our case n is the number of fed but not yet read lines. I suppose the worst case is a run of empty lines, in which case n = 8192. I tried the following microbenchmark and did not notice a significant difference.
$ ./python -m timeit -s "from email.parser import Parser; d = 'From: example@example.com\n\n' + '\n' * 100000" -- "Parser().parsestr(d)"
A deque is typically the right data structure when you need to append, pop, and extend on both the left and right side. It is designed specifically for that task. Also, it nicely cleans up the code by removing the backwards line list and the list reversal prior to insertion on the left (that's what we had to do to achieve decent performance before the introduction of deques in Python 2.4; now you hardly ever see code like "self._lines[:0] = lines[::-1]"). I think fix_prepending2 would be a nice improvement for Py3.5. For the main patches that directly address the OP's performance issue, feel free to apply either mine or yours. They both work. Either way, please add test_parser.diff since the original test didn't cover all the cases and because it didn't make clear the relationship between push() and splitlines().
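A standalone sketch of the idiom change (the function names below are illustrative, not the real BufferedSubFile methods): with a plain list the pending lines are kept newest-first so reads can pop() from the end, and every prepend costs O(len(pending)); with a deque they stay in natural order and every operation is O(1) per line.

from collections import deque

# Pre-deque idiom: store pending lines newest-first and pop() to read FIFO.
pending_list = []
def push_list(lines):
    pending_list[:0] = lines[::-1]   # O(len(pending_list)) on every push
def readline_list():
    return pending_list.pop()        # O(1)

# Deque version: natural order, cheap at both ends (appendleft covers unreadline).
pending_deque = deque()
def push_deque(lines):
    pending_deque.extend(lines)
def readline_deque():
    return pending_deque.popleft()

push_list(['a\n', 'b\n']); push_deque(['a\n', 'b\n'])
push_list(['c\n']);        push_deque(['c\n'])
print(readline_list() == readline_deque() == 'a\n')   # True: same FIFO order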
Serhiy: there was an issue with '\r\n' going across a chunk boundary that was fixed a while back, so there should be a test for that (I hope). As for how to handle line breaks, backward compatibility applies: we have to continue to do what we did before, and it doesn't look like this patch changes that. That is, it sounds like you are saying there is a pre-existing bug that we may want to address? In which case it should presumably be a separate issue.
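A minimal check of that chunk-boundary case (assuming a version where that earlier fix is present), feeding the '\r\n' split across two calls directly to FeedParser:

from email.feedparser import FeedParser

# '\r\n' deliberately split across two feed() calls.
fp = FeedParser()
fp.feed('From: a@example.com\r')
fp.feed('\nSubject: hi\r\n\r\nbody\r\n')
msg = fp.close()
print(msg.keys())   # expected: ['From', 'Subject']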
Should this be categorized as a security issue? You could easily DoS a server with that (email.parser is used by http.client to parse HTTP headers, it seems).
I think it makes sense to treat this as a security issue. I don't have a preference about whether to use Serhiy's email_parser_long_lines.patch or my fix_email_parse2.diff.
I found a bug in my patch. The following code:
from email.parser import Parser
BLOCKSIZE = 8192
s = 'From: <e@example.com>\nFoo: '
s += 'x' * ((-len(s) - 1) % BLOCKSIZE) + '\rBar: '
s += 'y' * ((-len(s) - 1) % BLOCKSIZE) + '\x85Baz: '
s += 'z' * ((-len(s) - 1) % BLOCKSIZE) + '\n\n'
print(Parser().parsestr(s).keys())
outputs ['From', 'Foo', 'Bar', 'Baz'] with the current code and ['From', 'Foo', 'Bar'] with my patch. Neither the current code nor Raymond's patch is affected by similar bugs. It is possible to fix my patch, but then it would become too complicated and slower. I have a doubt about one special case in Raymond's patch, but looking at the current code at a higher level, it doesn't matter. The current code in FeedParser is in any case not very efficient and smooths out any implementation details of BufferedSubFile. That is why fix_prepending2.diff has no visible effect on email parsing. I'll provide additional tests which cover the current issue and the bug in my patch.
I can't create an example. Maybe higher-level code is tolerant to it. I'll create a separate issue if I find an example.
Yes, but it is not very important. You need to send tens or hundreds of megabytes to hang a server for more than a second.
Here is a patch which combines a fixed version of Raymond's patch with FeedParser tests. These tests cover this issue, the bug in my patch, and (surprisingly) a bug in Raymond's patch. I didn't include Raymond's test because it looks as if it doesn't catch any bug. If there are no objections, I'll commit this patch.
The test_parser.diff file catches the bug in fix_email_parse.diff and it provides some assurance that push() functions as an incremental version of str.splitlines(). I would like to have this test included. It does some good and does no harm.
New changeset ba90bd01c5f1 by Serhiy Storchaka in branch '2.7':
New changeset 1b1f92e39462 by Serhiy Storchaka in branch '3.4':
New changeset f296d7d82675 by Serhiy Storchaka in branch 'default':
I don't see this. But well, it does no harm. Please commit fix_prepending2.diff yourself.
New changeset 71cb8f605f77 by Serhiy Storchaka in branch '2.7':
New changeset c19d3465965f by Serhiy Storchaka in branch '3.4':
New changeset f07b17de3b0d by Serhiy Storchaka in branch 'default':
Raymond, are you going to apply the deque patch (maybe after doing performance measurements), or should we close this?
New changeset 830bcf4fb29b by Raymond Hettinger in branch 'default':