Are you really saying that 8 seconds is too long for 160,000 links? And as a note, if you really have a document that big, I'm not sure I would read it 😁
Well, yes, it is slow. Maybe I should elaborate more.
Taking seconds to parse a few MB is slow on a modern computer.
Much worse, the time grows far worse than linearly (I guess quadratically). On my quite fast machine I get these results, using a slightly more generalized example for different N:
And, for comparison, I added the times taken by the MD4C parser.
| N  | input size | hoedown | md4c   |
|----|------------|---------|--------|
| 8  | 1.2 MB     | 5.859s  | 0.125s |
| 16 | 2.5 MB     | 23.531s | 0.203s |
| 24 | 3.8 MB     | 55.000s | 0.281s |
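The "I guess quadratic" claim can be sanity-checked against the table above: if the hoedown time grows as `t(N) ≈ c·N²`, then `t(N)/t(8)` should track `(N/8)²`. A quick check (the timings are the hoedown column from the table; everything else is just arithmetic):

```python
# Measured hoedown times from the table above, in seconds, keyed by N.
times = {8: 5.859, 16: 23.531, 24: 55.000}
base = times[8]

for n, t in times.items():
    measured = t / base          # observed growth relative to N=8
    predicted = (n / 8) ** 2     # what pure quadratic growth would predict
    print(f"N={n:2d}: measured ratio {measured:.2f}, N^2 predicts {predicted:.2f}")
```

The measured ratios (1.00, 4.02, 9.39) land close to the quadratic predictions (1, 4, 9), which supports the worse-than-linear diagnosis.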
And no, it's not about whether anyone actually reads the document.
If an application accepts documents from any (read: untrusted) source, such a document can be used for a DoS attack: even if the application limits document size to something like 10 MB, a single request can consume CPU cycles on your server for long minutes.
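The exact benchmark input is not reproduced in this thread, so the following is only a sketch of the shape such a pathological document likely takes: many reference definitions followed by many reference-style links, with the count scaled to vary the input size. All names here (`make_document`, the `ref{i}` labels, the example URL) are illustrative assumptions, not the actual test case.

```python
# Sketch of a pathological Markdown document: n_refs reference
# definitions, then n_refs reference-style links that use them.
def make_document(n_refs: int) -> str:
    defs = "".join(f"[ref{i}]: http://example.com/{i}\n" for i in range(n_refs))
    links = "".join(f"[link {i}][ref{i}]\n" for i in range(n_refs))
    return defs + "\n" + links

# Scale n_refs up to grow the input; feed the result to the parser under test.
doc = make_document(20000)
```

Piping documents like this into the parser for increasing `n_refs` is enough to observe whether the runtime grows linearly or quadratically.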
Processing many reference definitions/references takes too much time. On my machine: