Model load time #7

Closed

Prashant118 opened this issue Oct 19, 2020 · 1 comment

@Prashant118

transform() and fit_transform() take the same amount of time to produce results. If I train the model, save it, and load it again, it still takes the same time to give predictions. How can I quickly get predictions once I have trained and saved a model?

@MaartenGr
Owner

The most time-consuming part of both the transform() and fit_transform() methods is the extraction of BERT-based embeddings. Even after you have trained the model, you still need to extract embeddings for any unseen documents. Unfortunately, this means transform() is difficult to speed up, as computationally it consists mostly of extracting those embeddings.

Fortunately, in topic modeling it is unlikely that you will re-train the model frequently, as doing so would create new topics that need to be interpreted again. Typically, you run fit_transform() once on a large dataset and then use transform() for unseen documents, which is faster since batches of unseen documents are usually much smaller than the training set.
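
As a rough illustration, that workflow could look like the sketch below. It is a minimal sketch, not a full recipe: the file name is arbitrary, the corpora are placeholders (a real corpus needs to be much larger for the clustering to work), and I am assuming a BERTopic version that provides save()/load().

```python
from bertopic import BERTopic

train_docs = ["first document ...", "second document ..."]  # in practice, a large corpus
new_docs = ["an unseen document ..."]                        # usually a small batch

# Fit once on the large corpus; this is the expensive step,
# dominated by embedding extraction.
topic_model = BERTopic()
topics, probs = topic_model.fit_transform(train_docs)

# Persist the fitted model so the topics do not have to be re-learned.
topic_model.save("my_topic_model")

# Later: reload and predict for unseen documents. This still has to
# embed the new documents, but only the (small) new batch.
loaded_model = BERTopic.load("my_topic_model")
new_topics, new_probs = loaded_model.transform(new_docs)
```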

Having said that, it would be nice to be able to swap out the BERT embeddings for another feature-extraction method. That could make the application much faster, although it might hurt the quality of the resulting clusters. Perhaps flair would be an interesting alternative.
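
For example, you could precompute embeddings with a lighter encoder and hand them to BERTopic yourself, so the embedding step is under your control. This is a sketch under two assumptions: that your BERTopic version accepts precomputed embeddings in fit_transform(), and that the sentence-transformers model named below is just one illustrative choice of a smaller encoder.

```python
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

docs = ["first document ...", "second document ..."]  # your corpus

# A smaller/faster encoder than the default; cluster quality may drop.
encoder = SentenceTransformer("distilbert-base-nli-stsb-mean-tokens")
embeddings = encoder.encode(docs, show_progress_bar=False)

# Pass the precomputed embeddings so BERTopic skips its own extraction.
topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs, embeddings=embeddings)
```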

Does this answer your question?
