| tags | comments | dg-publish |
| --- | --- | --- |
|  | true | true |
> [!PREREQUISITE]
> - Probability
A random variable represents an event whose outcome is unknown.
A probability distribution is an assignment of weights to outcomes, which must satisfy the following conditions:

1. 0 ≤ P(ω) ≤ 1 for every outcome ω.
2. ∑ω P(ω) = 1, i.e. the weights over all outcomes sum to 1.
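As a quick sanity check, here is a tiny Python sketch of both conditions; the weather outcomes and weights are made up for the example:

```python
# Hypothetical distribution over weather outcomes (made-up weights).
P = {"sun": 0.6, "rain": 0.1, "fog": 0.3}

# Condition (1): every weight lies in [0, 1].
assert all(0.0 <= p <= 1.0 for p in P.values())

# Condition (2): the weights sum to 1 (up to floating-point error).
assert abs(sum(P.values()) - 1.0) < 1e-9
```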
- **Conditional Probability**: P(A|B) = P(A,B) / P(B), the probability of A given that B is observed.
- **Independence**
    - When A and B are mutually independent, P(A,B) = P(A)P(B); we write A⫫B. This is equivalent to B⫫A.
    - If A and B are conditionally independent given C, then P(A,B|C) = P(A|C)P(B|C); we write A⫫B|C. This is also equivalent to B⫫A|C.
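As a concrete check of the first definition, the Python sketch below tests whether P(A,B) factors into P(A)P(B) on a small joint table; the table's numbers are made up for illustration, and conditional independence can be checked the same way within each slice C=c:

```python
from itertools import product

# Hypothetical joint distribution P(A, B); the numbers are made up so that
# A and B happen to be independent.
P_AB = {(0, 0): 0.20, (0, 1): 0.20,
        (1, 0): 0.30, (1, 1): 0.30}

# Marginalize: P(A=a) = sum over b of P(A=a, B=b), and symmetrically for B.
P_A = {a: sum(p for (x, _), p in P_AB.items() if x == a) for a in (0, 1)}
P_B = {b: sum(p for (_, y), p in P_AB.items() if y == b) for b in (0, 1)}

# A ⫫ B holds iff P(A,B) = P(A)P(B) for every pair of values.
independent = all(abs(P_AB[a, b] - P_A[a] * P_B[b]) < 1e-9
                  for a, b in product((0, 1), repeat=2))
print(independent)  # True for this table
```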
Given a joint PDF[^1], we can trivially compute any desired probability distribution P(Q1 … Qm | e1 … en) using a simple and intuitive procedure known as inference by enumeration, for which we define three types of variables:
- **Query variables**, which are unknown and appear on the left side of the conditional bar (|) in the desired probability distribution.
- **Evidence variables**, which are observed variables whose values are known and appear on the right side of the conditional bar (|) in the desired probability distribution.
- **Hidden variables**, which are values present in the overall joint distribution but not in the desired distribution.
In inference by enumeration, we follow this algorithm:
- Collect all the rows consistent with the observed evidence variables.
- Sum out (marginalize) all the hidden variables.
- Normalize the table so that it is a probability distribution (i.e. values sum to 1).
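A minimal Python sketch of this three-step procedure follows; representing the joint table as a dict from full assignments to probabilities is my assumption, not something fixed by the notes:

```python
def infer_by_enumeration(joint, query_var, evidence):
    """Compute P(query_var | evidence) from a full joint distribution.

    `joint` maps each full assignment, given as a tuple of sorted
    (variable, value) pairs, to its probability.
    """
    totals = {}
    for assignment, p in joint.items():
        row = dict(assignment)
        # Step 1: keep only rows consistent with the observed evidence.
        if any(row[var] != val for var, val in evidence.items()):
            continue
        # Step 2: sum out the hidden variables by accumulating probability
        # per value of the query variable alone.
        q = row[query_var]
        totals[q] = totals.get(q, 0.0) + p
    # Step 3: normalize so the resulting values sum to 1.
    z = sum(totals.values())
    return {value: p / z for value, p in totals.items()}
```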
> [!EXAMPLE]
> If we wanted to compute P(W | S=winter) using the above joint distribution, we'd select the four rows where S is winter, then sum out over T and normalize.
> Hence P(W=sun | S=winter) = 0.5 and P(W=rain | S=winter) = 0.5, and we learn that in winter there's a 50% chance of sun and a 50% chance of rain.
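The joint table the example refers to is not reproduced in these notes, so the numbers below are stand-ins chosen only so that winter sun and winter rain tie; with them, the sketch above reproduces the 50/50 result:

```python
# Hypothetical joint distribution over (S)eason, (T)emperature, (W)eather;
# the probabilities are made up for this illustration.
rows = [
    ({"S": "summer", "T": "hot",  "W": "sun"},  0.30),
    ({"S": "summer", "T": "hot",  "W": "rain"}, 0.05),
    ({"S": "summer", "T": "cold", "W": "sun"},  0.10),
    ({"S": "summer", "T": "cold", "W": "rain"}, 0.05),
    ({"S": "winter", "T": "hot",  "W": "sun"},  0.10),
    ({"S": "winter", "T": "hot",  "W": "rain"}, 0.05),
    ({"S": "winter", "T": "cold", "W": "sun"},  0.15),
    ({"S": "winter", "T": "cold", "W": "rain"}, 0.20),
]
joint = {tuple(sorted(r.items())): p for r, p in rows}

print(infer_by_enumeration(joint, "W", {"S": "winter"}))
# -> {'sun': 0.5, 'rain': 0.5}
```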
[^1]: PDF refers to the Probability Density Function, which describes the relative likelihood of a continuous random variable taking values near a particular point.