Very slow run time #36

Closed
Trey314159 opened this issue Aug 4, 2017 · 7 comments

Comments

@Trey314159

I don't know if you can do anything about it, but in my tests, processing Vietnamese Wikipedia articles in 100-line batches, the analyzer is 30x to 40x slower than the default analyzer.

Processing 5,000 articles (just running the analysis, not full indexing) took ~0:17 for the default analyzer. For vi_analyzer on the same text, it took ~8:05. For comparison, I ran the English analyzer on the same text, and it also took ~0:17. These are on my laptop running on a virtual machine, so the measurements aren't super precise, but I did run several batches of different sizes (100 articles, 1,000 articles, and 5,000 articles) and the differences are in the same 30x-40x range, with smaller batches being comparatively slower. Somewhere in the 3x-5x range for complex analysis might be bearable, but 30x may be too much.
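(For anyone who wants to reproduce the comparison: it can be run against Elasticsearch's _analyze API with something like the rough sketch below. This is only an illustration, not the exact harness I used; the host, corpus file name, and batch size are placeholders.)

```python
# Rough timing sketch: run each analyzer over the corpus in 100-line batches
# via Elasticsearch's _analyze API and compare wall-clock times.
# Assumes a local node on :9200, a corpus file named vi.5K.txt, and that
# vi_analyzer is registered globally by the plugin.
import time
import requests

ES = "http://localhost:9200"
BATCH = 100  # placeholder batch size

def analyze_corpus(analyzer, lines):
    start = time.time()
    for i in range(0, len(lines), BATCH):
        resp = requests.post(
            f"{ES}/_analyze",
            json={"analyzer": analyzer, "text": lines[i:i + BATCH]},
        )
        resp.raise_for_status()
    return time.time() - start

with open("vi.5K.txt", encoding="utf-8") as f:
    lines = [line.strip() for line in f if line.strip()]

for analyzer in ("standard", "english", "vi_analyzer"):
    print(analyzer, round(analyze_corpus(analyzer, lines), 1), "seconds")
```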

Do you have any way to do profiling to see if there are any obvious speed ups?

[Last issue for today. Sorry for the onslaught of issues. I wanted to share all the stuff I found, because I'd like to try this plugin and see how our users like it! Thanks for all your work on the plugin!]

@duydo
Owner

duydo commented Aug 11, 2017

Hi @Trey314159,
Thank you for spending time doing this analysis with the plugin; I really appreciate what you've contributed to it.
After looking at all the issues you reported and reviewing the whole source code of the original tokenizer, I've decided to stop hacking on it. I will implement a new Vietnamese word segmenter from scratch, based on this paper and your suggestions.

By the way, is there somewhere I can download the articles you were using for testing?

@Trey314159
Author

Hi @duydo,

Glad to help. Sorry the result is abandoning the current tokenizer, but I'm very glad you are going to pursue a new implementation!

I can send you my corpus—it's 5000 Vietnamese Wikipedia articles, with all extra markup removed, and individual lines deduped. It's 3.7MB (1.1MB compressed), so I can email it to you if that's feasible.

@Trey314159
Author

Or—I can upload the compressed corpus here! vi.5K.txt.zip

For anyone else who wants to download and use this corpus, since it is a derivative work from Vietnamese Wikipedia articles, it is licensed CC BY-SA 3.0 by me and contributors to Vietnamese Wikipedia.

@Trey314159
Author

Hmm. One last comment: I've also removed extra whitespace—extra blank spaces and blank lines, both of which caused problems for the old plugin/tokenizer.
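(For the record, the whitespace cleanup and line dedup amount to roughly the sketch below; the file names are placeholders, and the markup removal isn't shown here.)

```python
# Cleanup sketch: collapse runs of whitespace, drop blank lines,
# and dedupe individual lines, keeping each line's first occurrence.
import re

seen = set()
with open("vi.raw.txt", encoding="utf-8") as src, \
     open("vi.5K.txt", "w", encoding="utf-8") as dst:
    for line in src:
        line = re.sub(r"\s+", " ", line).strip()
        if line and line not in seen:
            seen.add(line)
            dst.write(line + "\n")
```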

@duydo
Owner

duydo commented Aug 11, 2017

@Trey314159 I've just downloaded the corpus, thanks for sharing. It would be great if you could upload the original corpus, without the extra spaces or blank lines removed, for testing the new tokenizer.

@Trey314159
Author

@duydo I don't have the original corpus anymore. I just have a script that downloaded the text of 5000 Wikipedia articles.
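(If it helps, the general approach can be reconstructed from the MediaWiki API, roughly as in the sketch below. This is an illustration, not my actual script; the article count, output file, and one-page-per-request pacing are assumptions.)

```python
# Rough reconstruction of a corpus downloader: fetch plain-text extracts of
# random Vietnamese Wikipedia articles via the MediaWiki API. Whole-page
# extracts are limited to one page per request, so we loop.
import requests

API = "https://vi.wikipedia.org/w/api.php"
N_ARTICLES = 5000  # placeholder

with open("vi.raw.txt", "w", encoding="utf-8") as out:
    fetched = 0
    while fetched < N_ARTICLES:
        resp = requests.get(API, params={
            "action": "query",
            "format": "json",
            "generator": "random",
            "grnnamespace": 0,   # main/article namespace only
            "grnlimit": 1,
            "prop": "extracts",
            "explaintext": 1,    # plain text, no wiki markup or HTML
        })
        resp.raise_for_status()
        pages = resp.json().get("query", {}).get("pages", {})
        for page in pages.values():
            text = page.get("extract", "")
            if text:
                out.write(text + "\n")
                fetched += 1
```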

@duydo
Owner

duydo commented Aug 16, 2017

@Trey314159 OK, I think the corpus you provided is enough for now; I will crawl more if needed. Thanks.

Btw, I'm closing this issue; it will be fixed by the new tokenizer (#37).

@duydo duydo closed this as completed Aug 16, 2017