In the official document, the weight field is explained as "the strength with which this edge expresses this assertion. A typical weight is 1, but weights can be higher or lower. All weights are positive."
However, it's still unclear to me how the weights are actually determined in the graph. What metric is used to evaluate the weight of each edge? Is it occurrence frequency, or a machine-learning technique such as a random walk?
Thank you very much!
The weights are determined fairly ad hoc by the "reader" processes that ingest the various data sources.
For example, the weight of an assertion by one person on Open Mind Common Sense is 1.0; the weight of a WordNet edge is 2.0; the weight of a clue that a small number of people gave on Verbosity might be 0.25.
I would like there to be some machine-learning-powered self-checking involved in setting these weights, but there isn't yet.
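To make the scheme above concrete, here is a minimal sketch of how a reader process might assign a per-source base weight to an edge. The table of weights comes directly from the examples given in the answer; the function name, the source keys, and the idea of scaling by the number of independent contributions are illustrative assumptions, not ConceptNet's actual reader code.

```python
# Hypothetical per-source base weights, taken from the examples above.
# (Keys and structure are illustrative, not ConceptNet's real identifiers.)
SOURCE_WEIGHTS = {
    "open_mind_common_sense": 1.0,   # one person's assertion on OMCS
    "wordnet": 2.0,                  # curated lexical resource
    "verbosity_few_players": 0.25,   # clue given by a few Verbosity players
}

def edge_weight(source: str, contributions: int = 1) -> float:
    """Return a weight for an edge: the source's base weight, optionally
    scaled by how many independent contributions support the assertion.
    The scaling step is an assumption for illustration only."""
    return SOURCE_WEIGHTS[source] * contributions

print(edge_weight("wordnet"))                            # 2.0
print(edge_weight("open_mind_common_sense", 3))          # 3.0
print(edge_weight("verbosity_few_players"))              # 0.25
```

The point of the sketch is just that each reader hard-codes a base weight reflecting the trustworthiness of its source, rather than computing weights from any graph-level metric.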