Very slow run time #36
Comments
Hi @Trey314159, by the way, is there somewhere I can download the articles you were using for testing?
Hi @duydo, glad to help. Sorry the outcome is abandoning the current tokenizer, but I'm very glad you are going to pursue a new implementation! I can send you my corpus: it's 5000 Vietnamese Wikipedia articles, with all extra markup removed and individual lines deduped. It's 3.7MB (1.1MB compressed), so I can email it to you if that's feasible.
Actually, I can upload the compressed corpus here: vi.5K.txt.zip For anyone else who wants to download and use this corpus: since it is a derivative work of Vietnamese Wikipedia articles, it is licensed CC BY-SA 3.0 by me and the contributors to Vietnamese Wikipedia.
One last comment: I've also removed extra whitespace (extra blank spaces and blank lines), both of which caused problems for the old plugin/tokenizer.
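The cleanup described above (collapsing extra spaces, dropping blank lines, and deduplicating lines) can be sketched roughly like this. This is a hypothetical reconstruction, not the script actually used; the exact normalization rules are assumptions.

```python
import re

def clean_corpus(lines):
    """Collapse runs of whitespace, drop blank lines, and keep only
    the first occurrence of each line, preserving order."""
    seen = set()
    out = []
    for line in lines:
        line = re.sub(r"\s+", " ", line).strip()  # collapse extra spaces
        if not line:           # drop blank lines
            continue
        if line in seen:       # dedupe, keeping the first occurrence
            continue
        seen.add(line)
        out.append(line)
    return out
```

For example, `clean_corpus(["a  b ", "", "a b", "c"])` yields `["a b", "c"]`.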
@Trey314159 I've just downloaded the corpus, thanks for sharing. It would be great if you could upload the original corpus, without the extra spaces or blank lines removed, for testing the new tokenizer.
@duydo I don't have the original corpus anymore. I just have a script that downloaded the text of 5000 Wikipedia articles. |
@Trey314159 OK, I think the corpus you provided is enough for now; I will crawl more if needed. Thanks. By the way, I'm closing this issue; it will be fixed in the new tokenizer, #37.
I don't know if you can do anything about it, but in my tests, processing Vietnamese Wikipedia articles in 100-line batches, the analyzer is 30x to 40x slower than the default analyzer.
Processing 5,000 articles (just running the analysis, not full indexing) took ~0:17 (mm:ss) for the default analyzer. For vi_analyzer on the same text, it took ~8:05. For comparison, I ran the English analyzer on the same text, and it also took ~0:17. These are on my laptop running in a virtual machine, so the measurements aren't super precise, but I did run several batches of different sizes (100 articles, 1,000 articles, and 5,000 articles), and the differences are in the same 30x-40x range, with smaller batches being comparatively slower. Somewhere in the 3x-5x range for complex analysis might be bearable, but 30x may be too much.
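The batch-timing methodology above can be sketched as a small harness. Here `analyze_fn` is a placeholder for whatever actually runs the analyzer (e.g. a call to Elasticsearch's `_analyze` API with `"standard"` vs. `"vi_analyzer"`); the function names are illustrative, not from the original benchmark.

```python
import time

def time_batches(analyze_fn, docs, batch_size=100):
    """Return total wall-clock seconds to analyze docs in batches."""
    start = time.perf_counter()
    for i in range(0, len(docs), batch_size):
        analyze_fn(docs[i:i + batch_size])
    return time.perf_counter() - start

def slowdown(t_custom, t_default):
    """How many times slower the custom analyzer is than the default."""
    return t_custom / t_default
```

With the times reported above (~485s vs. ~17s), `slowdown` comes out around 28x, consistent with the 30x-40x range observed across batch sizes.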
Do you have any way to do profiling to see if there are any obvious speed ups?
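Since the plugin itself runs on the JVM, a Java profiler (e.g. VisualVM or Java Flight Recorder) would be the natural tool for finding hot spots in the tokenizer. Purely as an illustration of the workflow, here is the equivalent in Python with `cProfile`; `hot_path` is a made-up stand-in for the expensive tokenization step, not part of the plugin.

```python
import cProfile
import io
import pstats

def hot_path(n):
    # Stand-in for the expensive analysis work being profiled.
    return sum(i * i for i in range(n))

pr = cProfile.Profile()
pr.enable()
hot_path(200_000)
pr.disable()

buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()  # top functions by cumulative time
```

Sorting by cumulative time usually surfaces the few functions responsible for most of the runtime, which is the first thing to check before micro-optimizing.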
[Last issue for today. Sorry for the onslaught of issues. I wanted to share all the stuff I found, because I'd like to try this plugin and see how our users like it! Thanks for all your work on the plugin!]