
Is H(rule) really necessary? #11

Open
lihuiliullh opened this issue Apr 20, 2022 · 1 comment

Comments

@lihuiliullh

Hi, this is a really nice paper, but I have a question.

May I know why you need to use Eq. (7) and (8) to approximate the posterior?
My idea is that in the E-step, you need to identify the k rules with the best quality.
Since [equation image not shown], you can simply calculate prior × likelihood for each rule in z_hat and choose the top-k rules.

But what you do is calculate H(rule) for each rule and choose the top-k rules.
Since H(rule) is an approximation of the posterior distribution, which is proportional to prior × likelihood, it should have the same effect as using prior × likelihood directly.
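To make the proposal concrete, here is a minimal sketch of the scheme described above: score each candidate rule independently by prior × likelihood and keep the top-k. The function name, rule identifiers, and scores are all hypothetical, for illustration only.

```python
# Hypothetical sketch of the questioner's proposal: score each candidate
# rule by prior(rule) * likelihood(rule), then keep the top-k rules.
def topk_by_prior_likelihood(rules, prior, likelihood, k):
    # Assumes the posterior over a *single* rule is proportional to
    # prior(rule) * likelihood(rule), so rules can be ranked independently.
    scored = sorted(rules, key=lambda r: prior[r] * likelihood[r], reverse=True)
    return scored[:k]

# Made-up numbers for illustration.
rules = ["r1", "r2", "r3"]
prior = {"r1": 0.5, "r2": 0.3, "r3": 0.2}
likelihood = {"r1": 0.1, "r2": 0.6, "r3": 0.4}
print(topk_by_prior_likelihood(rules, prior, likelihood, 2))  # → ['r2', 'r3']
```

This per-rule ranking is exactly what the reply below argues is not sufficient, because the posterior in the paper is defined over sets of rules.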

It seems to me that all the proofs and propositions in the E-step of Section 3.3 are unnecessary.

@mnqu

@mnqu
Collaborator

mnqu commented May 2, 2022

Thanks for your interest, and this is a good question.

The reason is that z_I here is a set of logic rules, so the prior and posterior are defined over a set of logic rules rather than over a single rule. Therefore, we propose to use approximate inference to infer the posterior, which is where H(rule) is calculated.
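A minimal sketch of why this matters: if the posterior is over sets of rules rather than individual rules, its support grows combinatorially with the candidate pool, so it cannot simply be enumerated and ranked. The pool size and set size below are hypothetical numbers, not values from the paper.

```python
# Illustration (with made-up sizes) of the combinatorial support of a
# posterior defined over size-k *sets* of rules, rather than single rules.
from math import comb

n_candidate_rules = 1000  # hypothetical size of the candidate rule pool
k = 10                    # hypothetical number of rules per set

# Number of distinct size-k rule sets the posterior would assign mass to.
print(comb(n_candidate_rules, k))
```

Even for these modest sizes the count exceeds 10^20, which is why some form of approximate inference over the set-valued latent variable is needed.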

Does it make sense to you?
