How to spread the local evidence score using Markov chain #34
Hi @sareaghaei, Sure!
If you have a probability distribution `p` over the nodes of a Markov chain, you can get the distribution after following one edge in the graph by computing `T p` (where `T` is the transition matrix and `p` is represented as a vector). Similarly, if you want the distribution after k steps, you can compute `T^k p`. Let me know if anything is still unclear!
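A minimal sketch of this explanation with NumPy, using a made-up 3-node chain (the matrix values are illustrative, not from the project): multiplying a column-stochastic transition matrix into a probability vector propagates the distribution one step, and a matrix power propagates it k steps.

```python
import numpy as np

# Hypothetical column-stochastic transition matrix T
# (each column sums to 1, so T @ p stays a probability distribution).
T = np.array([
    [0.5, 0.2, 0.0],
    [0.5, 0.5, 0.5],
    [0.0, 0.3, 0.5],
])

# Initial distribution: all probability mass on node 0.
p = np.array([1.0, 0.0, 0.0])

p1 = T @ p                              # distribution after one step
p3 = np.linalg.matrix_power(T, 3) @ p   # distribution after three steps

print(p1)  # [0.5 0.5 0. ]
print(p3)
```

Because `T` is column-stochastic, the result remains a valid probability distribution at every step.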
Thanks for your reply.
It is done here:
Maybe! It's hard to say without trying :)
Hi Antonin
Thanks for your excellent work. I would appreciate it if you could explain some parts of the paper for me.
1- As far as I understand, a vector of features F is computed for each entity e as its local compatibility.
The third feature of the vector is log p(e). What is it exactly? The paper says it is a log-linear combination of the number of statements, site links, and PageRank, but based on the code it seems to use only the PageRank.
2- The output of the semantic similarity step is a column-stochastic matrix Md. I cannot see how the local compatibility vector and the similarity matrix Md are combined to define the final feature vectors for classification. Could you further clarify equation (1) of the paper?