transform() and fit_transform() take the same amount of time to produce results. If I train the model, save it, and load it again, it still takes the same time to give predictions. How can I quickly get predictions once I have trained and saved a model?
The most time-consuming part of both transform() and fit_transform() is the extraction of BERT-based embeddings. Even after the model is trained, embeddings still have to be extracted for any unseen documents. Unfortunately, this makes transform() difficult to speed up, since its cost is dominated by embedding extraction.
Fortunately, in topic modeling you are unlikely to re-train the model frequently, as that would create new topics that need to be interpreted all over again. Typically, you run fit_transform() once on a large dataset and then use transform() for unseen documents; since batches of unseen documents tend to be small, that step is comparatively fast.
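One general way to shave time off repeated transform() calls is to cache embeddings for documents you have already seen, so the expensive extraction step only runs once per unique document. This is not a BERTopic feature but a generic pattern; the EmbeddingCache class and the toy_embedding stand-in below are hypothetical illustrations, with the toy function standing in for a slow BERT forward pass:

```python
import hashlib

class EmbeddingCache:
    """Cache embeddings so repeated calls on the same documents
    skip the expensive extraction step."""

    def __init__(self, embed_fn):
        self.embed_fn = embed_fn   # the expensive embedding function
        self.cache = {}            # document hash -> embedding
        self.misses = 0            # how often we actually computed

    def embed(self, doc):
        key = hashlib.sha1(doc.encode("utf-8")).hexdigest()
        if key not in self.cache:
            self.misses += 1
            self.cache[key] = self.embed_fn(doc)
        return self.cache[key]

def toy_embedding(doc):
    # Stand-in for a slow BERT forward pass: a trivial length-based vector.
    return [len(doc), doc.count(" ") + 1]

cache = EmbeddingCache(toy_embedding)
docs = ["topic modeling is fun", "topic modeling is fun", "another document"]
vectors = [cache.embed(d) for d in docs]
print(cache.misses)  # 2: the duplicate document reused the cached embedding
```

This only helps when the same documents recur, of course; genuinely new documents still pay the full extraction cost, which is the point made above.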
Having said that, it would be nice to be able to swap out the BERT embeddings for another feature-extraction method. This could make the application much faster, although it might hurt the quality of the resulting clusters. Perhaps Flair would be an interesting alternative.
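To make the trade-off concrete, here is a minimal sketch of what a much cheaper, CPU-only feature extractor could look like: hashing each token into a fixed-size bag-of-words vector. The function below is purely illustrative (it is not part of BERTopic or Flair) and deliberately simple; it is orders of magnitude faster than a transformer forward pass but discards all contextual semantics, which is exactly why cluster quality would likely suffer:

```python
import hashlib

def hashed_bow_embedding(doc, dim=64):
    """Cheap alternative to transformer embeddings: hash each token
    into a bucket of a fixed-size bag-of-words vector."""
    vec = [0.0] * dim
    for token in doc.lower().split():
        # Hash the token to pick a bucket deterministically.
        h = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16)
        vec[h % dim] += 1.0
    return vec

emb = hashed_bow_embedding("Topic modeling with cheap features")
print(len(emb))  # 64
```

Any such replacement would need to produce one fixed-length vector per document, since the downstream dimensionality-reduction and clustering steps operate on those vectors.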