Option to pass Dictionary.docFreq to TfidfModel.__init__() #8

Closed
emsrc opened this issue Mar 2, 2011 · 4 comments
@emsrc

emsrc commented Mar 2, 2011

TfidfModel.initialize() calculates document frequencies for tokens. However, these are also calculated when creating a Dictionary object. TfidfModel.__init__() could therefore take a keyword argument that accepts Dictionary.docFreq and avoids recalculating the document frequencies.
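For context, a minimal sketch of the duplication being described, assuming current gensim names (the per-token document frequencies now live in Dictionary.dfs; at the time of this issue the attribute was docFreq) and the usual log2 IDF weighting:

```python
import math
from gensim.corpora import Dictionary

texts = [["human", "interface", "computer"],
         ["survey", "user", "computer", "system"],
         ["graph", "trees"]]

dictionary = Dictionary(texts)                     # already counts document frequencies
corpus = [dictionary.doc2bow(t) for t in texts]

# The counts the issue refers to: token id -> number of documents containing that token.
# TfidfModel.initialize(corpus) recomputes exactly these by iterating over the corpus.
dfs = dictionary.dfs
num_docs = dictionary.num_docs

# IDF weights can be derived straight from the dictionary, with no corpus pass.
idfs = {tokenid: math.log(num_docs / df, 2) for tokenid, df in dfs.items()}
```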

@piskvorky
Owner

OK, I added the option to initialize TfidfModel via an existing Dictionary object in commit c65b3ff.

This helps when a) we have the dictionary in the first place (the corpus was constructed through dictionary.doc2bow), and b) corpus iteration is slow, so the one extra pass we save this way actually matters.

I also PEP8-fied the tfidf code while I was at it.
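A usage sketch of the two initialization paths; the keyword is spelled `dictionary` in current gensim, which may differ from the spelling in the commit above:

```python
from gensim.corpora import Dictionary
from gensim.models import TfidfModel

texts = [["human", "interface", "computer"],
         ["survey", "user", "computer", "system"],
         ["graph", "trees"]]

dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]    # condition a): corpus built via doc2bow

# Original path: one full pass over the corpus just to count document frequencies.
tfidf_a = TfidfModel(corpus)

# New path: reuse the counts the dictionary already holds; no corpus pass at all.
tfidf_b = TfidfModel(dictionary=dictionary)

print(tfidf_b[corpus[0]])   # same tf-idf weights either way
```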

@Dieterbe
Contributor

Dieterbe commented Mar 4, 2011

I find it interesting that your commit message implies this change is only beneficial "if your corpus is super slow".
I would think building your IDFs from the dictionary's document frequencies will almost always be faster than iterating over the corpus. Even when corpus iteration is not I/O-bound (reading from disk or over the network, as you suggest), the corpus approach has extra overhead (it retrieves full documents, which may need converting to the bag-of-words representation, it does more arithmetic, etc.), whereas the dictionary approach has the numbers it needs right away.

Either way, I just did one test run and I'm not seeing a noticeable speedup yet, which surprises me...
Edit: after a few more test runs, I actually think I do see a speedup :) about 20-50% faster (hard to tell precisely because there is a big spread in my timings).
Edit 2: oops, my previous test runs were not executed correctly. I now see roughly a 100x speedup in tf-idf model building.
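For anyone who wants to try this themselves, a hypothetical timing sketch (synthetic corpus, not the data used above; absolute numbers will vary):

```python
import timeit

setup = """
from gensim.corpora import Dictionary
from gensim.models import TfidfModel
texts = [["tok%d" % i, "tok%d" % (i % 50), "common"] for i in range(20000)]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]
"""

# Model built by iterating the corpus vs. built from the dictionary's ready-made counts.
print(timeit.timeit("TfidfModel(corpus)", setup=setup, number=5))
print(timeit.timeit("TfidfModel(dictionary=dictionary)", setup=setup, number=5))
```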

@piskvorky
Owner

Yeah, it will always be faster, no question about it.

That comment meant that, in the broader perspective (computing similarities, LSI, LDA, ...), one extra pass that merely increments a few counters is negligible. Unless the pass (corpus iteration) itself is very expensive -- then it matters. In some cases it can matter a lot, which in my opinion outweighs the added complexity of the code; that's why I accepted this feature.

@Dieterbe
Contributor

Dieterbe commented Mar 4, 2011

Aha. I see. Thanks.

This issue was closed.