About Coherence of topic models #90

Open
nadiafelix opened this issue Apr 8, 2021 · 79 comments

@nadiafelix

nadiafelix commented Apr 8, 2021

Currently, I am calculating the coherence of a BERTopic model using gensim. For this I need the n-grams from each text of the corpus. Is that possible? The gensim function expects the corpus and the topics, and the topics must be tokens that exist in the corpus.

cm = CoherenceModel(topics=topics, corpus=corpus, dictionary=dictionary, coherence='u_mass')

Thanks in advance.

@MaartenGr
Owner

I believe you should be using the CountVectorizer for creating the corresponding corpus and dictionary when creating the CoherenceModel.
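
For illustration, a minimal sketch of that setup (not from the original reply; it assumes docs is the list of documents passed to BERTopic):

import gensim.corpora as corpora
from sklearn.feature_extraction.text import CountVectorizer

# Fit the same CountVectorizer you pass to BERTopic and reuse its tokenizer
cv = CountVectorizer()
cv.fit(docs)
tokenizer = cv.build_tokenizer()

# Build the gensim dictionary and bag-of-words corpus from those tokens
tokens = [tokenizer(doc.lower()) for doc in docs]
dictionary = corpora.Dictionary(tokens)
corpus = [dictionary.doc2bow(doc_tokens) for doc_tokens in tokens]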

@nadiafelix
Author

@MaartenGr thanks a lot for your attention. I am trying this, but I found a phrase in the topics set that doesn't exist in the dictionary. Is that ok? Do all the topics exist as n-grams?

The code I used is this:

import numpy as np
import nltk
from gensim import corpora
from gensim.models.coherencemodel import CoherenceModel
from sklearn.feature_extraction.text import CountVectorizer

nltk.download('punkt')

cv = CountVectorizer(ngram_range=(2, 20))  # 2,20 is the same range as the topics
cv_fit = cv.fit_transform(comentariosList)

texts = []
for i in range(len(comentariosList)):
    temp = np.array(cv.inverse_transform(cv_fit.getrow(i))).tolist()
    texts = texts + temp

topics = topics_df['Keywords'].values.tolist()

cm = CoherenceModel(topics=topics, corpus=corpus, dictionary=dictionary, coherence='u_mass')
cm.get_coherence_per_topic()

Thanks for your help.

@MaartenGr
Owner

You should focus on what you put into the corpus and dictionary variables as the topics are checked against those two. At the moment, I cannot see how you have constructed them but I would advise you to look into those.

@nadiafelix
Author

Do you have any recommendations for working with this n_gram_range parameter?

topic_model = BERTopic(verbose=True, embedding_model=embedder, n_gram_range=(1, 3), calculate_probabilities=True)

@MaartenGr
Owner

I believe it is best to make sure that the CountVectorizer in BERTopic is the same as the one you used to create the dictionary, corpus, and tokens.

You could also try accessing the CountVectorizer directly in BERTopic by using model.vectorizer_model. That way, you do not have to create different instances that might not match exactly.

If this still does not work let me know!

@Amine-OMRI

I would suggest that instead of creating n-grams of the corpus, you can simply split the n-grams of the topics and flatten them to get a list of single words (unigrams), so that you can compute gensim NPMI coherence scores without having to create the n-grams of the text.
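
For example, a rough sketch of that flattening step (assuming topic_words is a list of top-word lists per topic, possibly containing n-grams):

# Split every n-gram into unigrams and de-duplicate while keeping order,
# so the topic words can be matched against a unigram dictionary
flattened_topics = []
for topic in topic_words:
    unigrams = [word for ngram in topic for word in ngram.split(" ")]
    flattened_topics.append(list(dict.fromkeys(unigrams)))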

@nadiafelix
Author

nadiafelix commented Apr 14, 2021

I believe it is best to make sure that the CountVectorizer in BERTopic is the same as the one you used to create the dictionary, corpus, and tokens.

You could also try accessing the CountVectorizer directly in BERTopic by using model.vectorizer_model. That way, you do not have to create different instances that might not match exactly.

If this still does not work let me know!

First of all, thank you for your attention.
When I try to use the vectorizer_model from BERTopic, I get this error:

1 corpus = ['This is the first document.','This document is the second document.','And this is the third one.','Is this the first document?',]
----> 2 cv = topic_model.vectorizer_model()
3
4 X = cv.fit_transform(corpus)

TypeError: 'CountVectorizer' object is not callable

@nadiafelix
Author

I would suggest that instead of creating n-grams of the corpus, you can simply split the n-grams of the topics and flatten them to get a list of single words (unigrams), so that you can compute gensim NPMI coherence scores without having to create the n-grams of the text.

Hi @Amine-OMRI, thank you for your tips. Do you have an example of gensim NPMI coherence?

Thanks a lot for your attention.

@MaartenGr
Owner

You should access the vectorizer model like this: cv = topic_model.vectorizer_model. Since it is already fitted you can use something like cv.get_feature_names() and tokenizer = cv.build_tokenizer() to get the words and tokenizer used for constructing the dictionary and corpus.

@Viole-Grace

Viole-Grace commented Apr 15, 2021

I believe it is best to make sure that the CountVectorizer in BERTopic is the same as the one you used to create the dictionary, corpus, and tokens.
You could also try accessing the CountVectorizer directly in BERTopic by using model.vectorizer_model. That way, you do not have to create different instances that might not match exactly.
If this still does not work let me know!

First of all, thank you for your attention.
When I try to use the vectorizer_model from BERTopic, I get this error:

1 corpus = ['This is the first document.','This document is the second document.','And this is the third one.','Is this the first document?',]
----> 2 cv = topic_model.vectorizer_model()
3
4 X = cv.fit_transform(corpus)

TypeError: 'CountVectorizer' object is not callable

Hey! Use it as such:

cv = topic_model.vectorizer_model
X = cv.fit_transform(docs)
doc_tokens = [text.split(" ") for text in docs]

import gensim.corpora as corpora
id2word = corpora.Dictionary(doc_tokens)
texts = doc_tokens
corpus = [id2word.doc2bow(text) for text in texts]

topic_words = []
for i in range(len(topic_model.get_topic_freq())-1):
  interim = []
  interim = [t[0] for t in topic_model.get_topic(i)]
  topic_words.append(interim)

from gensim.models.coherencemodel import CoherenceModel

coherence_model = CoherenceModel(topics=topic_words, texts=texts, corpus=corpus, dictionary=id2word, coherence='c_v')
coherence_model.get_coherence()

@Amine-OMRI

Amine-OMRI commented Apr 15, 2021

I would suggest that instead of creating n-grams of the corpus, you can simply split the n-grams of the topics and flatten them to get a list of single words (unigrams), so that you can compute gensim NPMI coherence scores without having to create the n-grams of the text.

Hi @Amine-OMRI, thank you for your tips. Do you have an example of gensim NPMI coherence?

Thanks a lot for your attention.

Hey, sorry for the late reply, here's the process if you're still working on it:

Once you have extracted the topics from the corpus, you may have bigrams in the list of top words of each topic, so you need to split them and flatten the list to get a list of unigrams at the end.

After that you can use gensim topic coherence as described in this link

And you can use one of the following coherence measures: {'u_mass', 'c_v', 'c_uci', 'c_npmi'}.

from gensim.models.coherencemodel import CoherenceModel
from gensim.corpora.dictionary import Dictionary
# Create the dictionary of the input corpus
id2word = Dictionary(corpus)
npmi = CoherenceModel(texts=corpus, dictionary=id2word,
                       topics=flatten_unigrams, coherence='c_v')
print(npmi.get_coherence())

I hope this helps you

@MaartenGr
Owner

The following steps should be the correct ones in calculating the coherence scores. Some additional preprocessing is necessary since there is a very small part of that in BERTopic. Also, make sure to build the tokens with the exact same tokenizer as used in BERTopic.

I do want to stress that metrics such as c_v and c_npmi are merely proxies for a topic model's performance. They are by no means a ground truth and can have significant issues (e.g., sensitive to the number of words in a topic). So whether you find a low or high score, I would advise you to look at the topics yourself and see if they make sense to you.

import gensim.corpora as corpora
from gensim.models.coherencemodel import CoherenceModel

# Preprocess documents
cleaned_docs = topic_model._preprocess_text(docs)

# Extract vectorizer and tokenizer from BERTopic
vectorizer = topic_model.vectorizer_model
tokenizer = vectorizer.build_tokenizer()

# Extract features for Topic Coherence evaluation
words = vectorizer.get_feature_names()
tokens = [tokenizer(doc) for doc in cleaned_docs]
dictionary = corpora.Dictionary(tokens)
corpus = [dictionary.doc2bow(token) for token in tokens]
topic_words = [[words for words, _ in topic_model.get_topic(topic)] 
               for topic in range(len(set(topics))-1)]

# Evaluate
coherence_model = CoherenceModel(topics=topic_words, 
                                 texts=tokens, 
                                 corpus=corpus,
                                 dictionary=dictionary, 
                                 coherence='c_v')
coherence = coherence_model.get_coherence()

@nadiafelix
Author

nadiafelix commented Apr 15, 2021

The following steps should be the correct ones in calculating the coherence scores. Some additional preprocessing is necessary since there is a very small part of that in BERTopic. Also, make sure to build the tokens with the exact same tokenizer as used in BERTopic.

I do want to stress that metrics such as c_v and c_npmi are merely proxies for a topic model's performance. They are by no means a ground truth and can have significant issues (e.g., sensitive to the number of words in a topic). So whether you find a low or high score, I would advise you to look at the topics yourself and see if they make sense to you.

import gensim.corpora as corpora
from gensim.models.coherencemodel import CoherenceModel

# Preprocess documents
cleaned_docs = topic_model._preprocess_text(docs)

# Extract vectorizer and tokenizer from BERTopic
vectorizer = topic_model.vectorizer_model
tokenizer = vectorizer.build_tokenizer()

# Extract features for Topic Coherence evaluation
words = vectorizer.get_feature_names()
tokens = [tokenizer(doc) for doc in cleaned_docs]
dictionary = corpora.Dictionary(tokens)
corpus = [dictionary.doc2bow(token) for token in tokens]
topic_words = [[words for words, _ in topic_model.get_topic(topic)] 
               for topic in range(len(set(topics))-1)]

# Evaluate
coherence_model = CoherenceModel(topics=topic_words, 
                                 texts=tokens, 
                                 corpus=corpus,
                                 dictionary=dictionary, 
                                 coherence='c_v')
coherence = coherence_model.get_coherence()

Hello MaartenGr, I tried to execute this, but the problem is the tokenizer. My BERTopic model has topics with n-grams from 1 to 10, while the tokenizer here produces tokens with only one term (1-grams). When I use n_gram_range=(1,1), like this
topic_model = BERTopic(verbose=True, embedding_model=embedder, n_gram_range=(1,1), calculate_probabilities=True), I get the coherence value, which in this case was 0.1725 for c_v, -0.2662 for c_npmi, and -8.5744 for u_mass.

@MaartenGr
Owner

Good catch, I did not test for higher n-grams in the example. I made two changes:

  • Used the build_analyzer() instead of build_tokenizer() which allows for n-gram tokenization
  • Preprocessing is now based on a collection of documents per topic, since the CountVectorizer was trained on that data

Tested it with several ranges of n-grams and it seems to work now.

from bertopic import BERTopic
import gensim.corpora as corpora
from gensim.models.coherencemodel import CoherenceModel
import pandas as pd

topic_model = BERTopic(verbose=True, n_gram_range=(1, 3))
topics, _ = topic_model.fit_transform(docs)

# Preprocess Documents
documents = pd.DataFrame({"Document": docs,
                          "ID": range(len(docs)),
                          "Topic": topics})
documents_per_topic = documents.groupby(['Topic'], as_index=False).agg({'Document': ' '.join})
cleaned_docs = topic_model._preprocess_text(documents_per_topic.Document.values)

# Extract vectorizer and analyzer from BERTopic
vectorizer = topic_model.vectorizer_model
analyzer = vectorizer.build_analyzer()

# Extract features for Topic Coherence evaluation
words = vectorizer.get_feature_names()
tokens = [analyzer(doc) for doc in cleaned_docs]
dictionary = corpora.Dictionary(tokens)
corpus = [dictionary.doc2bow(token) for token in tokens]
topic_words = [[words for words, _ in topic_model.get_topic(topic)] 
               for topic in range(len(set(topics))-1)]

# Evaluate
coherence_model = CoherenceModel(topics=topic_words, 
                                 texts=tokens, 
                                 corpus=corpus,
                                 dictionary=dictionary, 
                                 coherence='c_v')
coherence = coherence_model.get_coherence()

@nadiafelix
Author

Great! Thanks a lot!

@YuanyuanLi96

Hi Maarten, thanks for the code for calculating the coherence score. I am wondering which parameters I can tune using the coherence score. I tried min_topic_size = 10, 7, and 5, and it seems the coherence score increases as min_topic_size decreases. But it doesn't make sense to me to reduce min_topic_size further.

Does the coherence score always increase as min_topic_size is reduced (the number of topics seems to increase)? And what other parameters would you recommend tuning for a small dataset (about 1000 sentences)?

@MaartenGr
Owner

@YuanyuanLi96 In general, I would not advise you to use this coherence score to fine-tune BERTopic. These metrics are merely proxies for a topic model's performance. They are by no means a ground truth and can have significant issues (e.g., sensitive to the number of words in a topic). So whether you find a low or high score, I would advise you to look at the topics yourself and see if they make sense to you.

Having said that, by reducing min_topic_size the total number of topics increases, which simply leads to more information depending on the coherence metric used.

When it comes to tuning a small dataset, I would focus on keeping a logical min_topic_size of at least 20 since topics should contain sufficient documents. Moreover, with 1000 sentences, you can question whether a topic modeling technique is actually necessary.

@YuanyuanLi96

@MaartenGr Thanks for your explanation and suggestion! I tried min_topic_size = 20, and I can get 16 mostly interpretable topics for my data. So I will go with this, since it performs better than other models and reduces our labor in the long term. Thanks for this amazing package!

@TomNachman

Hi @MaartenGr, regarding the conversation here and your reply to YuanyuanLi96: currently the only measurements I have found to evaluate a topic model are coherence (UMass, NPMI, etc.) and perplexity scores, which both have their downsides, besides human judgement, which, like you said, means "look at the topics yourself and see if they make sense to you". Is there any other measurement you would suggest?

In short: if I have an LDA model and a BERTopic model trained on the same data with the same number of topics for both, how would I know which is more accurate?

@MaartenGr
Owner

@TomNachman There are a few things that are important here.

What is the definition of "accurate"? Is that topic coherence? Quality (density or separation) of clusters? Predictive power? Distribution of topics? Etc. Defining accuracy or quality first is important in knowing whether one topic model is better than another. Which metric is best highly depends on your use case, but it seems that in the literature NPMI is mostly used together with topic diversity. These metrics are typically used to evaluate the coherence and diversity of topic modeling techniques.

Moreover, I am often very hesitant when it comes to recommending a coherence metric to use. You can quickly overfit on such a metric when tuning the parameters of BERTopic (or any other topic modeling technique) which in practice might result in poor performance. In other words, I want to prevent users from solely focusing on grid-searching parameters and motivate users to look at the results.

Having said that, that does not mean that these metrics cannot be used! They are extremely useful in the right circumstances. So when you want to compare topic models, definitely use these kinds of metrics (e.g., npmi) but make sure the circumstances make sense. For example, they need to have the same number of topics and the same number of words need to be in those topics. If you were to change how the data were to be preprocessed, are you then objectively evaluating the difference in performance between topic modeling techniques?

I want to end with a great package for evaluating your topic model, namely OCTIS. It has many evaluation measures implemented aside from the standard coherence metrics, such as topic diversity, similarity, and classification metrics. I would advise choosing an evaluation metric there that best suits your use case.
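
For reference, a hedged sketch of how the OCTIS metrics could be used with BERTopic output (assuming OCTIS is installed and that its metric classes accept a model-output dictionary with a "topics" key; topic_model, topics, and tokens are as in the gensim example above):

from octis.evaluation_metrics.diversity_metrics import TopicDiversity
from octis.evaluation_metrics.coherence_metrics import Coherence

# Top 10 words per topic, skipping the outlier topic -1
octis_topics = [[word for word, _ in topic_model.get_topic(topic)][:10]
                for topic in range(len(set(topics)) - 1)]
model_output = {"topics": octis_topics}

topic_diversity = TopicDiversity(topk=10).score(model_output)
npmi = Coherence(texts=tokens, topk=10, measure="c_npmi").score(model_output)
print(topic_diversity, npmi)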

@PoonooP

PoonooP commented Dec 31, 2021

The following steps should be the correct ones in calculating the coherence scores. Some additional preprocessing is necessary since there is a very small part of that in BERTopic. Also, make sure to build the tokens with the exact same tokenizer as used in BERTopic.

I do want to stress that metrics such as c_v and c_npmi are merely proxies for a topic model's performance. They are by no means a ground truth and can have significant issues (e.g., sensitive to the number of words in a topic). So whether you find a low or high score, I would advise you to look at the topics yourself and see if they make sense to you.

import gensim.corpora as corpora
from gensim.models.coherencemodel import CoherenceModel

# Preprocess documents
cleaned_docs = topic_model._preprocess_text(docs)

# Extract vectorizer and tokenizer from BERTopic
vectorizer = topic_model.vectorizer_model
tokenizer = vectorizer.build_tokenizer()

# Extract features for Topic Coherence evaluation
words = vectorizer.get_feature_names()
tokens = [tokenizer(doc) for doc in cleaned_docs]
dictionary = corpora.Dictionary(tokens)
corpus = [dictionary.doc2bow(token) for token in tokens]
topic_words = [[words for words, _ in topic_model.get_topic(topic)] 
               for topic in range(len(set(topics))-1)]

# Evaluate
coherence_model = CoherenceModel(topics=topic_words, 
                                 texts=tokens, 
                                 corpus=corpus,
                                 dictionary=dictionary, 
                                 coherence='c_v')
coherence = coherence_model.get_coherence()

Hello Maarten,
I tried to execute this code, but it just gave me the
"raise ValueError('unable to interpret topics either a list of tokens or a list of ids')
ValueError: unable to interpret topic as either list of tokens or a list of ids"

I was tuning the hyperparameters top_n_words and min_topic_size. I basically use the above code as a function to evaluate my topic model quality. It seems that the code does not work for a certain set of values of the two parameters (in my case, top_n_words = 5 and min_topic_size = 28), while it managed to provide the coherence score for the rest of the pairs.

It's even more peculiar because I had executed the same thing the other day and there was no issue. The only difference here is that I used a different set of data, although they were preprocessed similarly and had an identical structure.

@MaartenGr
Owner

It might be worthwhile to check the differences in output between the output variables for your two sets of data (e.g., topic_words, corpus, etc.). If all parameters are the same but the only thing you changed is the data, then there might be something happening with the results that you get from training on that data. So checking things like the topics and their representation might help you understand what is happening there. For example, it might be the case that you have too few topics generated for it to calculate the coherence.

@hwrightson

Good afternoon Maarten,

Thank you very much for pulling this together. I recognise that the coherence score isn't necessarily the best option to determine accuracy, but it's a useful proxy to consider. Having taken a brief look at the code, I've noticed that:

words = vectorizer.get_feature_names()

Isn't referred to elsewhere in the code, can this line be omitted or does it serve a further purpose?

Thanks in advance, H

@MaartenGr
Owner

@hwrightson You are completely right! It is definitely a useful proxy to consider when validating your model. NPMI, for example, has shown promise in emulating human performance (1). A topic coherence score in conjunction with visual checks definitely prevents issues later on.

Isn't referred to elsewhere in the code, can this line be omitted or does it serve a further purpose?

Good catch, I might have used it for something else whilst testing out calculating coherence scores. So yes, you can omit that line!

@drob-xx

drob-xx commented Feb 16, 2022

@MaartenGr I've been delving into model evaluation and, at your suggestion, am using OCTIS. In my first set of experiments I compared the OCTIS metrics for topic diversity, inverted rbo, and npmi coherence. The results I got for inverted rbo seem promising, the others noisy. As you've clearly explained the choice of metric is highly dependent on the use case. I've begun looking for resources for more information on topic model evaluation metrics and am wondering if you have any suggestions? Two papers I found helpful were A review of topic modeling methods and Measuring LDA topic stability from clusters of replicated runs. As you know OCTIS contains over twenty different metrics. Some I'm familiar with, but most not. As far as I can tell they don't provide references for their implementations. Thanks as always in advance!

P.S. Of course right after writing this I remembered that I hadn't gone back to the paper the OCTIS people wrote OCTIS: Comparing and Optimizing Topic models is Simple!!. So anything you suggest that is not referenced there would be super.

@MaartenGr
Owner

@drob-xx Great to hear that you have been working with OCTIS! You might have already seen it, but aside from in the paper itself, some of the references to the evaluation metrics can be found here.

The field of evaluation metrics is a tricky one: there are many different use cases for topic modeling techniques, and topic modeling is, by nature, a subjective method, which is often reflected in the evaluation metrics. Over the last years, there have been several papers describing the pros and cons of these metrics:

@inproceedings{lau2014machine,
  title={Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality},
  author={Lau, Jey Han and Newman, David and Baldwin, Timothy},
  booktitle={Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics},
  pages={530--539},
  year={2014}
}

@inproceedings{mimno2011optimizing,
  title={Optimizing semantic coherence in topic models},
  author={Mimno, David and Wallach, Hanna and Talley, Edmund and Leenders, Miriam and McCallum, Andrew},
  booktitle={Proceedings of the 2011 conference on empirical methods in natural language processing},
  pages={262--272},
  year={2011}
}

@inproceedings{roder2015exploring,
  title={Exploring the space of topic coherence measures},
  author={R{\"o}der, Michael and Both, Andreas and Hinneburg, Alexander},
  booktitle={Proceedings of the eighth ACM international conference on Web search and data mining},
  pages={399--408},
  year={2015}
}

@article{o2015analysis,
  title={An analysis of the coherence of descriptors in topic modeling},
  author={O’callaghan, Derek and Greene, Derek and Carthy, Joe and Cunningham, P{\'a}draig},
  journal={Expert Systems with Applications},
  volume={42},
  number={13},
  pages={5645--5657},
  year={2015},
  publisher={Elsevier}
}

P.S. Of course right after writing this I remembered that I hadn't gone back to the paper the OCTIS people wrote OCTIS: Comparing and Optimizing Topic models is Simple!!. So anything you suggest that is not referenced there would be super.

That has happened to me more times than I would like to admit! The metrics that you find in the paper and in OCTIS are, at least in my experience, the most common metrics that you see in academia. Especially NPMI and Topic Diversity are frequently used metrics as a proxy of the "quality" of these topic modeling techniques.

One thing that might be interesting to look at is clustering metrics. Essentially, BERTopic is a clustering algorithm with a topic representation on top. The assumption here is that good clusters lead to good topic representations. Thus, in order to have a good model, you will need good clusters. You can find some of these metrics here but be aware that some of these might need labels to judge the quality of the generated clusters.
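
For illustration, a hedged sketch of one such label-free clustering metric (the silhouette score), assuming the fitted sub-models are reachable as topic_model.umap_model and topic_model.hdbscan_model:

import numpy as np
from sklearn.metrics import silhouette_score

# Reduced embeddings and cluster labels from the fitted sub-models
reduced_embeddings = topic_model.umap_model.embedding_
labels = np.array(topic_model.hdbscan_model.labels_)

# Exclude outlier documents (label -1) before scoring
mask = labels != -1
score = silhouette_score(reduced_embeddings[mask], labels[mask])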

@juli-sch

juli-sch commented Mar 1, 2022

Hello Maarten, I would also like to include OCTIS in my evaluation of BERTopic's findings. If I understand you correctly in issues #144 and #331, the following lines should give me the topic-word matrix I need for OCTIS:

topic_word_matrix = topic_model.c_tf_idf.toarray()
topic_word_matrix = np.delete(topic_word_matrix, obj=0, axis=0)

Is that correct?

When I initialise BERTopic with topic_diversity=None MMR is not used and the c-TF-IDF then is fully representative of the topic representation. Is this assumption correct?

Many thanks in advance for the help

@meh369

meh369 commented Mar 5, 2023

@MaartenGr , Thank you so much for testing and pointing out the mistake in time. - I'm still learning, so I really appreciate your help!

@zhimin-z

zhimin-z commented Mar 5, 2023

Ah yes, it should either be .get_feature_names() or .get_feature_names_out() depending on your scikit-learn version.

Thanks so much! @MaartenGr This piece of code indeed solves the empty-topics issue that has been torturing me for quite a while.

@RamziRahli

Hello @MaartenGr,
I want to use BERTopic on my data but I'm hesitating between 3 embedding models. I'm trying to use the evaluation provided here and OCTIS to calculate the diversity and coherence of each model, but I failed. Could you provide me with an example of how I could do this, possibly using cuML, please?
Thank you!

@MaartenGr
Owner

@RamziRahli That repo was merely for the evaluation of experiments in the paper and was not meant to be generally used. Instead, I would advise performing the evaluations yourself using the guidelines in OCTIS or using Gensim with the provided example here.

@RamziRahli

RamziRahli commented Jul 12, 2023

@RamziRahli That repo was merely for the evaluation of experiments in the paper and was not meant to be generally used. Instead, I would advise performing the evaluations yourself using the guidelines in OCTIS or using Gensim with the provided example here.

@MaartenGr
I tried to calculate the coherence on 500K relatively short documents (150 characters maximum) as in the example, but it takes more than 24 hours. Is this normal?

@MaartenGr
Owner

@RamziRahli That is difficult to say without seeing the actual code (and feel free to create an issue for this), but it would not be surprising depending on your setup. Calculating coherence measures is notoriously slow.

@rizkiamandaputri

rizkiamandaputri commented Sep 13, 2023

Hello Everyone!
I just want to ask a question. I tried to print out BERTopic's coherence score in my interface but I got this error:
'numpy.float64' object has no attribute 'get_coherence'. And here is my code:

documents = pd.DataFrame({"Document": texts,
                      "ID": range(len(texts)),
                      "Topic": topics})
documents_per_topic = documents.groupby(['Topic'], as_index=False).agg({'Document': ' '.join})
cleaned_docs = topic_model_n._preprocess_text(documents_per_topic.Document.values)

# Extract vectorizer and analyzer from BERTopic
vectorizer = topic_model_n.vectorizer_model
analyzer = vectorizer.build_analyzer()

# Extract features for Topic Coherence evaluation
words = vectorizer.get_feature_names_out()
tokens = [analyzer(doc) for doc in cleaned_docs]
dictionary = corpora.Dictionary(tokens)
corpus = [dictionary.doc2bow(token) for token in tokens]
topic_words = [[words for words, _ in topic_model.get_topic(topic)] 
               for topic in range(len(set(topics))-1)]
# topic_words = [[dictionary.token2id[w] for w in words if w in dictionary.token2id]
# for _ in range(topic_model_n.nr_topics)]

# Evaluate
coherence_cv = CoherenceModel(topics=topic_words, 
                                 texts=tokens, 
                                 corpus=corpus,
                                 dictionary=dictionary, 
                                 coherence='c_v')
coherence = coherence_cv.get_coherence()

# Print Data Evaluation
topic_eval = coherence.get_coherence()

res = topic_eval.to_json(orient="records")
parsed = json.loads(res)
json_topic_evaluation = parsed

How do I solve this error? Thank you.

@MaartenGr
Owner

You should not run coherence.get_coherence() since coherence is already the result. In other words, remove the following:

# Print Data Evaluation
topic_eval = coherence.get_coherence()

@rizkiamandaputri

rizkiamandaputri commented Sep 13, 2023

You should not run coherence.get_coherence() since coherence is already the result. In other words, remove the following:

# Print Data Evaluation
topic_eval = coherence.get_coherence()

I got the same kind of error as before: 'numpy.float64' object has no attribute 'to_json'. This is the code:

coherence_cv = CoherenceModel(topics=topic_words, 
                                 texts=tokens, 
                                 corpus=corpus,
                                 dictionary=dictionary, 
                                 coherence='c_v')
coherence = coherence_cv.get_coherence()

# Print Data Evaluation
res = coherence.to_json(orient="records")
parsed = json.loads(res)
json_topic_evaluation = parsed

@MaartenGr
Owner

The type of coherence is a numpy.float64 which means it is just a single value. If you want to save that single value as json, you would have to check yourself how to save a numpy float to json. Also, since it is a numpy.float64 it does not have a to_json function. I would advise checking a few tutorials on using json in python.
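
For example, casting the value to a plain Python float is enough to serialize it (a minimal sketch):

import json

# numpy.float64 -> float makes the value JSON-serializable
json_topic_evaluation = json.dumps({"coherence": float(coherence)})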

@mike-bmnn

mike-bmnn commented Oct 12, 2023

@MaartenGr Is it generally a good or bad idea to use a representation model while evaluating the coherence score of a model? I noticed that using KeyBERTInspired while evaluating the coherence score yields different results than using no representation model at all. Although I have to say that the scores are still very similar.

@MaartenGr
Owner

@mike-asw It depends. If the representation model that you use is important for your use case, then you should definitely include it in the evaluation. The multiple scores also give you an idea of the effect of representation models on the resulting coherence evaluation metric.

I do think that when you include representation models and you run evaluation metrics, you should definitely include these representation models in the evaluation procedure. It always surprised me that when evaluating BERTopic, many users/researchers tend to focus on only the base representation when there are so many more to choose from.

@ninavdPipple

Hi Maarten,

I was looking at the discussion above and figured that at some point you switched from the tokenizer to the analyzer in order to be able to perform n-gram tokenization. In my code both implementations seem to work; however, they give very different coherence values. I do specify an n_gram range in my CountVectorizer. Which of the two (tokenizer or analyzer) will give the ‘correct’ coherence value in my case, if such a notion even exists? Or what should be considered in picking one of the two?

Thanks in advance!

@MaartenGr
Owner

@ninavdPipple As you mentioned, there is no "correct" coherence value. It all depends on the reasons why you would choose the tokenizer over the analyzer or vice versa. Having said that, since you are using ngram_range it makes sense to choose the one that actually supports n-grams. If the differences are large, then it might be worthwhile to research why that may be the case and mention that in your research.
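
As a small illustration of the difference (hypothetical sentence; the analyzer also lowercases and applies the n-gram range, while the tokenizer does not):

from sklearn.feature_extraction.text import CountVectorizer

cv = CountVectorizer(ngram_range=(1, 2))
tokenize = cv.build_tokenizer()
analyze = cv.build_analyzer()

print(tokenize("Topic models are fun"))
# ['Topic', 'models', 'are', 'fun']
print(analyze("Topic models are fun"))
# ['topic', 'models', 'are', 'fun', 'topic models', 'models are', 'are fun']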

@benearnthof

topic_words = [[dictionary.token2id[w] for w in words if w in dictionary.token2id] for _ in range(topic_model.nr_topics)]

@meh369 This does not create topic words per topic but multiple identical lists of tokens, so I do not think the model is correctly evaluated here.

In the code I mentioned here, there is the following line that you can adjust to skip topics that only contain empty values:

topic_words = [[words for words, _ in topic_model.get_topic(topic)]
               for topic in range(len(set(topics))-1)]

What you want here is to make sure that two things are prevented:

* Passing words that are not found in the dictionary
  
  * These are typically empty words

* Topics are completely empty

First, let's create a reproducible topic model that has some topics that contain empty words

from umap import UMAP
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import CountVectorizer

# Prepare embeddings
docs = fetch_20newsgroups(subset='all',  remove=('headers', 'footers', 'quotes'))['data']
docs = [doc for doc in docs if len(doc) >= 10]
docs += ["the"] * 100
sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = sentence_model.encode(docs, show_progress_bar=True)

# Train topic model
vectorizer_model = CountVectorizer(stop_words="english", ngram_range=(1, 2))
umap_model = UMAP(n_neighbors=15, n_components=5, min_dist=0.0, metric='cosine', random_state=42)

topic_model = BERTopic(umap_model=umap_model, vectorizer_model=vectorizer_model, verbose=True, min_topic_size=50)
topics, probs = topic_model.fit_transform(docs, embeddings)

Now, we can start calculating the coherence score and making sure that empty words are not passed to the CoherenceModel as well as topics that do not contain any words:

from bertopic import BERTopic
import gensim.corpora as corpora
from gensim.models.coherencemodel import CoherenceModel
import pandas as pd

# Preprocess Documents
documents = pd.DataFrame({"Document": docs,
                          "ID": range(len(docs)),
                          "Topic": topics})
documents_per_topic = documents.groupby(['Topic'], as_index=False).agg({'Document': ' '.join})
cleaned_docs = topic_model._preprocess_text(documents_per_topic.Document.values)

# Extract vectorizer and analyzer from BERTopic
vectorizer = topic_model.vectorizer_model
analyzer = vectorizer.build_analyzer()

# Use .get_feature_names_out() if you get an error with .get_feature_names()
words = vectorizer.get_feature_names()

# Extract features for Topic Coherence evaluation
tokens = [analyzer(doc) for doc in cleaned_docs]
dictionary = corpora.Dictionary(tokens)
corpus = [dictionary.doc2bow(token) for token in tokens]

# Extract words in each topic if they are non-empty and exist in the dictionary
topic_words = []
for topic in range(len(set(topics))-topic_model._outliers):
    words = list(zip(*topic_model.get_topic(topic)))[0]
    words = [word for word in words if word in dictionary.token2id]
    topic_words.append(words)
topic_words = [words for words in topic_words if len(words) > 0]

# Evaluate Coherence
coherence_model = CoherenceModel(topics=topic_words, 
                                 texts=tokens, 
                                 corpus=corpus,
                                 dictionary=dictionary, 
                                 coherence='c_v')
coherence = coherence_model.get_coherence()

Hi, I'm currently using this code to calculate coherence measures for topic models based on arxiv preprints and the line coherence = coherence_model.get_coherence() keeps running out of memory and my python session crashes with the console output "Killed". Did anyone else run into this problem? The problem persists for corpora larger than 12000 documents.

@MaartenGr
Owner

@benearnthof Calculating coherence scores takes a lot of memory and I am not familiar with any more efficient techniques. Making sure you have enough RAM is definitely important here. Also, make sure that your vocab is not unnecessarily large when you are using n-grams. The min_df parameter definitely helps here.
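
For instance, a minimal sketch of trimming the vocabulary before training (min_df and, if needed, max_features are standard CountVectorizer parameters):

from sklearn.feature_extraction.text import CountVectorizer
from bertopic import BERTopic

# Drop n-grams that appear in fewer than 10 documents to keep the vocab small
vectorizer_model = CountVectorizer(stop_words="english", ngram_range=(1, 2), min_df=10)
topic_model = BERTopic(vectorizer_model=vectorizer_model)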

@benearnthof

@MaartenGr I have experimented with mmcorpus but will give min_df a shot, thanks for the swift reply!

@abis330

abis330 commented Dec 14, 2023

I tried plotting the coherence score (c_v) against the number of topics while changing the hyperparameters n_neighbors and n_components for the UMAP model and cluster_selection_epsilon and min_cluster_size for the HDBSCAN model passed to BERTopic.

The graph is monotonically decreasing. Shouldn't we expect it to be otherwise, or to have a maximum somewhere, with the score increasing up to that point and decreasing after it?

It is weird that the coherence score always seems to decrease as the number of topics increases.

I could use some feedback ASAP. @MaartenGr

@MaartenGr
Owner

@abinashsinha330 It is difficult to say without knowing the specifics of your data, use case, type of coherence (e.g., c_v vs. npmi), etc. For example, it could simply be that you have little data available for each topic that you add and, therefore, the topic representations are not as good as the first few. Of course, this could also depend on the representation model that you choose.

However, after a quick Google search, you can find several papers that not only report this phenomenon but also observe that the coherence score might increase again after a certain point. You can do some research on your chosen coherence score and get an intuition about how it works. Then, you can experiment and research why your specific graphs appear the way they do.

Do note that this issue thread is mostly focused on evaluation in general and, as you might have read here, I am generally against such a large focus on only coherence. So my main advice would be not to focus that much on coherence scores only and create a broad evaluation of your topic model. The thought that a topic model should only be evaluated by a coherence score (whatever that exactly means with different metrics) can get you into trouble when using the model in practice.

@nickgiann

nickgiann commented Apr 14, 2024

Hi @MaartenGr ,

I noticed that in your provided example for calculating coherence scores, the entire corpus is used for both fitting and evaluation. I'm interested in your perspective on incorporating a train-test split for model assessment. Would this improve the evaluation's robustness by measuring generalizability to unseen data, or might it lead to non-representative coherence scores?

Thanks in advance!

@MaartenGr
Owner

@nickgiann Hmmm, I seldom see train/test splits for that since you would still need to have the same vocabulary used across splits, which in turn requires the entire corpus to be passed.

The thing is that unseen data does not influence the training of BERTopic and whenever you run .transform it only updates the topic assignment and not the topic representation. So unseen data, at least from that perspective, should not influence the coherence score unless you are looking at incremental topic modeling settings.
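
As a small illustration (unseen_docs is hypothetical): assigning unseen documents does not change the fitted topic representations.

# .transform only predicts topics for the new documents; the topic
# representations (and thus the coherence of the model) stay the same
new_topics, new_probs = topic_model.transform(unseen_docs)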

@romybeaute

Dear @MaartenGr, thank you so much for all your useful advice above. Having had compatibility issues with OCTIS, I am trying to find an alternative way to do hyperparameter tuning (with respect to the coherence measure). I tried creating a BERTopic grid-search wrapper, in which I manually define the coherence function:

class BERTopicGridSearchWrapper(BaseEstimator):
    def __init__(self, vectorizer_model, embedding_model, n_neighbors=10, n_components=5, min_dist=0.01, min_cluster_size=10, min_samples=None, top_n_words=5):
        self.vectorizer_model = vectorizer_model
        self.embedding_model = embedding_model
        self.n_neighbors = n_neighbors
        self.n_components = n_components
        self.min_dist = min_dist
        self.min_cluster_size = min_cluster_size
        self.min_samples = min_samples
        self.top_n_words = top_n_words
        self.model = None


    def fit(self, X):
        
        umap_model = UMAP(n_neighbors=self.n_neighbors, n_components=self.n_components, min_dist=self.min_dist, random_state=77)
        hdbscan_model = HDBSCAN(min_cluster_size=self.min_cluster_size, min_samples=self.min_samples, prediction_data=True)

        self.model = BERTopic(umap_model=umap_model, 
                              hdbscan_model=hdbscan_model,
                              embedding_model=self.embedding_model,
                              vectorizer_model=self.vectorizer_model,
                              top_n_words=self.top_n_words,
                              language='english',
                              calculate_probabilities=True,
                              verbose=True)
        self.model.fit_transform(X)
        return self

    def score(self, X):
        coherence_score = calculate_coherence(self.model, X)
        return coherence_score

def calculate_coherence(topic_model, data):

    topics, _ = topic_model.fit_transform(data)
    # Preprocess Documents
    documents = pd.DataFrame({"Document": data,
                          "ID": range(len(data)),
                          "Topic": topics})
    documents_per_topic = documents.groupby(['Topic'], as_index=False).agg({'Document': ' '.join})
    
    #Extracting the vectorizer and embedding model from BERTopic model
    vectorizer = topic_model.vectorizer_model #CountVectorizer of BERTopic model 
    tokenizer = vectorizer.build_tokenizer()
    analyzer = vectorizer.build_analyzer() #allows for n-gram tokenization
    
    # Extract features for Topic Coherence evaluation
    words = vectorizer.get_feature_names_out()
    tokens = [tokenizer(doc) for doc in data]
    # tokens = [analyzer(doc) for doc in data]

    dictionary = corpora.Dictionary(tokens)
    corpus = [dictionary.doc2bow(token) for token in tokens]

    topic_words = [[word for word, _ in topic_model.get_topic(topic_id)] for topic_id in range(len(set(topics))-1)]

    print("Topics:", topic_words)
    coherence_model = CoherenceModel(topics=topic_words, 
                                     texts=tokens, 
                                     corpus=corpus,
                                     dictionary=dictionary, 
                                     coherence='c_v')
    coherence_score = coherence_model.get_coherence()
    return coherence_score

However, when I run my grid search:

grid_search = GridSearchCV(BERTopicGridSearchWrapper(vectorizer_model, embedding_model),
                           param_grid=params_grid,
                           cv=None,
                           scoring=make_scorer(calculate_coherence),
                           verbose=10)

# Fit grid search
grid_search.fit(reports_filtered)

print("Best parameters:", grid_search.best_params_)
print("Best coherence score:", grid_search.best_score_)

I keep getting "nan" as my coherence scores ([CV 1/5; 1/8] END min_cluster_size=5, min_dist=0.01, min_samples=None, n_components=3, n_neighbors=3;, **score=nan** total time= 3.4s)

I have been trying to find the source of this issue for a while, and among my debugging attempts, I found that when I use the wrapper alone:

wrapper = BERTopicGridSearchWrapper(vectorizer_model=vectorizer_model, embedding_model=SentenceTransformer('all-MiniLM-L6-v2'))
wrapper.fit(reports_filtered.tolist())  
coherence = wrapper.score(reports_filtered.tolist())
print(coherence)

I obtain a coherence score.

Do you have any idea of what is going on here, and what I might have done wrong?
Thank you so much for your attention!

All the best,
Romy

@MaartenGr
Owner

@romybeaute Unfortunately, I'm not that familiar with how a customized GridSearchWrapper should be implemented within scikit-learn. You potentially could do it manually since there is no cross-validation involved in your example. It would be looping over parameters and nothing more if I'm not mistaken.
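
For example, a rough sketch of such a manual loop (reusing vectorizer_model, embedding_model, calculate_coherence, and reports_filtered from the code above; the parameter values are just placeholders):

from itertools import product
from umap import UMAP
from hdbscan import HDBSCAN
from bertopic import BERTopic

results = []
for n_neighbors, min_cluster_size in product([5, 10, 15], [10, 20, 30]):
    umap_model = UMAP(n_neighbors=n_neighbors, n_components=5, min_dist=0.01, random_state=77)
    hdbscan_model = HDBSCAN(min_cluster_size=min_cluster_size, prediction_data=True)
    topic_model = BERTopic(umap_model=umap_model,
                           hdbscan_model=hdbscan_model,
                           vectorizer_model=vectorizer_model,
                           embedding_model=embedding_model,
                           calculate_probabilities=True)
    # calculate_coherence fits the model on the data and returns the c_v score
    score = calculate_coherence(topic_model, reports_filtered)
    results.append({"n_neighbors": n_neighbors,
                    "min_cluster_size": min_cluster_size,
                    "coherence": score})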

@romybeaute

romybeaute commented Jun 13, 2024

Dear @MaartenGr, thanks a lot for your previous answer to my question. I have been applying your advice, and I now have a CSV file containing the different combinations that have been tested and their respective coherence scores and numbers of topics (grid_search_results_HandWritten_seed22.csv). But the best coherence results lead to very few topics being created. So I am in a situation where I need to find a balance between the coherence score and a number of extracted topics that is reasonable for my research. But this choice seems quite subjective... Is that acceptable? Would you recommend any other, more objective, method to select the number of extracted topics (and therefore the hyperparameter combinations that lead to this number of extracted topics)?
Moreover, would you also recommend doing cross-validation with BERTopic? It was not mentioned in the (amazing) tutorials that you uploaded online, so I was wondering how robust our results are without CV.
Many thanks for your precious help,
Romy

@MaartenGr
Owner

@romybeaute

But the best coherence results lead to very few topics created.

That's indeed the problem I have with using topic coherence and grid search together: you are not likely to end up with the quality of topics that you are looking for. As such, and as you can see throughout this issue, I would definitely not recommend grid-searching on topic coherence only. It is important to first take note of what "performance" or "quality" means in your specific use case and derive metrics based on that. Topic coherence by itself tells you very little about a topic model, especially when you take into account the other perspectives of what a topic model can be good at, such as assignment of topics, diversity of topics, accuracy of topics rather than coherence, etc.

But this choice seems quite subjective... Is it something acceptable to do ? Would you recommend any other - more objective - method to select the number of extracted of topics (and therefore the hyperparameters combinations that lead to this number of extracted topics)?

It is indeed subjective but that is not necessarily a bad thing because your use case is subjective. You have certain requirements for your specific use case and one of which is the number of extracted topics. It would be more than reasonable to say that having 2 topics in your 1 million documents makes no sense and based on your familiarity with the data, there are at least n topics.

If you want a purely objective measure for something that is inherently subjective, that will prove to be quite difficult. Instead, I generally advise a mix. You can use proxy measures such as topic coherence and diversity as the "objective" measures (note they are not ground-truth metrics) and "subjective" information such as limiting the number of topics to a certain minimum.

All in all, I would advise starting from the metric itself. Why is optimizing for only topic coherence so important for your use case?

Moreover, would you recommend also doing cross validation with BERTopic ? It was not mentioned in the (amazing) tutorials that you uploaded online, so was wondering how robust are our results if no CV.

What would be the splits and evaluation here? Normally, you would train on 80% of the data here and perform inference on 20%. In the context of topic coherence, there is no inference involved, only training.
