Sorry, I just read your paper, but I have several questions. For example, from Eqs. (2) and (3) we can compute the log-likelihood of a document, but how do we obtain the topic-word distribution and the topic-document distribution?
Hi, I was confused by this issue as well. Here is my understanding:
The model's optimization objective is to maximize the log-likelihood of the whole document, P(v). Once training is finished, you have the weight matrix W, an H × K matrix (where H is the number of topics and K is the vocabulary size).
You can then plug this W matrix into equation (1) of the iDocNADE paper to compute the hidden state h. More specifically, h is an H-dimensional vector that can be interpreted as the document's distribution over the topics.
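As a rough illustration of this step, here is a minimal sketch of computing a hidden state from a trained weight matrix, assuming equation (1) has the usual DocNADE form h = g(c + Σ W[:, v_k]) with a sigmoid activation g; the shapes, bias c, and random W values below are stand-ins, not the paper's actual parameters:

```python
import numpy as np

# Hypothetical sizes: H topics, K vocabulary words.
H, K = 4, 10
rng = np.random.default_rng(0)
W = rng.normal(size=(H, K))   # stand-in for the trained H x K weight matrix
c = np.zeros(H)               # hidden bias term

def hidden_state(word_ids, W, c):
    """h = sigmoid(c + sum of W's columns for the observed words):
    the assumed form of equation (1)."""
    pre_activation = c + W[:, word_ids].sum(axis=1)
    return 1.0 / (1.0 + np.exp(-pre_activation))  # elementwise sigmoid

# A toy "document" given as a list of word indices into the vocabulary.
h = hidden_state([2, 5, 7], W, c)
print(h.shape)  # an H-dimensional vector
```

Each entry of h then scores how strongly one of the H topics is activated by the document's words.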
To determine the topic of a new document, feed the document's words into the model and use the final hidden state as the representation of the whole document. Note that this representation is exactly the H-dimensional topic distribution mentioned above, so the document's topic can be read off from it.
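To make "read off the topic" concrete, one simple option (my assumption, not something the paper specifies) is to normalize the final hidden state into a proper distribution and take the argmax; the hidden-state values below are made up for illustration:

```python
import numpy as np

# Stand-in final hidden state for a new document (H = 4 topics).
h = np.array([0.2, 0.9, 0.4, 0.1])

# Normalize so the entries sum to 1, giving a topic distribution,
# then pick the most probable topic index.
topic_dist = h / h.sum()
topic = int(np.argmax(topic_dist))
print(topic)  # -> 1, the highest-scoring topic
```

Keeping the full `topic_dist` instead of just the argmax is useful when a document mixes several topics.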
I hope this helps. If anything here is wrong, please point it out.