
Example request #97

Open · romanklis opened this issue Mar 5, 2018 · 3 comments

@romanklis

Dear Creators,

I sincerely think this is one of the most powerful frameworks for creating graphical models and learning their parameters that I've seen so far. Outstanding job.

I kindly wanted to request a few more simple, easy-to-follow, BN-focused examples, such as the classical sprinkler example (http://web.eecs.utk.edu/~leparker/Courses/CS594-fall09/Lectures/12-Chapter14b-Oct22.pdf), potentially with a hidden-state twist, which would be great in practice.
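For concreteness, here is a minimal plain-NumPy sketch of that sprinkler network (the CPT values follow the linked slides; this is just an illustration, not this framework's API):

```python
import numpy as np

# Classical sprinkler network: Cloudy -> {Sprinkler, Rain} -> WetGrass.
# Index 0 = False, 1 = True; CPT values follow the linked slides.
p_c = np.array([0.5, 0.5])                   # P(Cloudy)
p_s_c = np.array([[0.5, 0.5],                # P(Sprinkler | Cloudy=F)
                  [0.9, 0.1]])               # P(Sprinkler | Cloudy=T)
p_r_c = np.array([[0.8, 0.2],                # P(Rain | Cloudy=F)
                  [0.2, 0.8]])               # P(Rain | Cloudy=T)
p_w_sr = np.array([[[1.0, 0.0],              # P(WetGrass | S=F, R=F)
                    [0.1, 0.9]],             # P(WetGrass | S=F, R=T)
                   [[0.1, 0.9],              # P(WetGrass | S=T, R=F)
                    [0.01, 0.99]]])          # P(WetGrass | S=T, R=T)

# Full joint P(C, S, R, W) as a 2x2x2x2 tensor.
joint = np.einsum('c,cs,cr,srw->csrw', p_c, p_s_c, p_r_c, p_w_sr)
assert np.isclose(joint.sum(), 1.0)
```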

Best regards

@thjashin
Collaborator

thjashin commented Mar 6, 2018

Thanks for the kind words! Could you be more specific about the example you're requesting? From my understanding, it's a 4-node BN with all discrete variables, and exact inference is available after some tensor computation. What would you like the example to illustrate?
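For instance, the textbook query P(Rain | WetGrass = true) reduces to building the joint tensor as in the sketch above and contracting it. A plain-NumPy sketch of the idea (not our library API):

```python
import numpy as np

# Sprinkler joint P(C, S, R, W), axes (c, s, r, w), 0 = False, 1 = True.
p_c = np.array([0.5, 0.5])
p_s_c = np.array([[0.5, 0.5], [0.9, 0.1]])
p_r_c = np.array([[0.8, 0.2], [0.2, 0.8]])
p_w_sr = np.array([[[1.0, 0.0], [0.1, 0.9]],
                   [[0.1, 0.9], [0.01, 0.99]]])
joint = np.einsum('c,cs,cr,srw->csrw', p_c, p_s_c, p_r_c, p_w_sr)

# Exact inference by tensor contraction: slice in the evidence W=T,
# sum out the non-query nodes C and S, then renormalize.
unnorm = joint[:, :, :, 1].sum(axis=(0, 1))
posterior = unnorm / unnorm.sum()
print(posterior)  # [P(R=F | W=T), P(R=T | W=T)] ~= [0.292, 0.708]
```

The same slice-and-contract pattern works for any discrete BN small enough to materialize its joint table.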

@romanklis
Author

I thought about the following use cases, which would be extremely useful in practice:

(1) A simple 6-node BN with discrete variables, illustrating how to perform parameter learning in such a setup when we are given observations from all nodes except one; as far as I understand, that node is a latent variable.

Sample 1: [0, 1, 2, NaN, 0, 2]
Sample 2: [0, 0, 1, NaN, 0, 2]
Sample 3: [0, 0, 1, NaN, 0, 1]
and so on...

Once we have the network trained, i.e., we have the complete joint probability distribution, we would like to run a range of inference queries given, again, some incomplete observations (a sketch of what I mean follows the list below):

  • Predict ? given a trained model and some observations: [2, 1, 2, 3, 0, ?]
  • Predict ? given a trained model and some missing observations: [NaN, 1, 2, 3, 0, ?]
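What I mean by such queries, as a plain-NumPy sketch (not a specific API): observed entries are sliced in, NaN entries are summed out, and the ? axis comes back as a normalized posterior. The 6-node joint table here is just a random stand-in for a trained model:

```python
import numpy as np

# Stand-in for a trained 6-node discrete BN: a normalized joint table
# with one axis per node and 4 states per node (values are random here).
rng = np.random.default_rng(0)
joint = rng.random((4,) * 6)
joint /= joint.sum()

def predict(joint, obs):
    """Posterior over the '?' nodes given a partially observed sample.

    obs holds, per node: an int (observed state), np.nan (missing,
    summed out), or '?' (query node, kept as an axis of the result).
    """
    t = joint
    for axis in reversed(range(len(obs))):  # reverse keeps indices valid
        v = obs[axis]
        if v == '?':
            continue                             # keep the query axis
        if isinstance(v, float) and np.isnan(v):
            t = t.sum(axis=axis)                 # marginalize missing value
        else:
            t = np.take(t, int(v), axis=axis)    # condition on evidence
    return t / t.sum()

print(predict(joint, [2, 1, 2, 3, 0, '?']))        # full evidence
print(predict(joint, [np.nan, 1, 2, 3, 0, '?']))   # one entry missing
```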

(2) Again, a simple 6-node BN with discrete variables, illustrating how to perform parameter learning in such a setup when the observations themselves are potentially incomplete.

Sample 1: [NaN, 1, 2, 3, 0, 2]
Sample 2: [0, 0, 1, NaN, 0, NaN]
Sample 3: [0, 0, 1, NaN, NaN, 1]
and so on...

Once the network is trained, we would like to run the same range of inference queries as in (1), again given incomplete observations.
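I imagine the learning part of (2) would be an expectation-maximization loop. A toy sketch of one such loop on a 2-node slice X -> Y of the network, with X sometimes missing (plain NumPy; all data and values assumed for illustration):

```python
import numpy as np

# Toy EM for a 2-node slice X -> Y: X in {0,1} sometimes NaN, Y observed.
data = np.array([[0, 0], [0, 0], [1, 1], [np.nan, 1],
                 [np.nan, 0], [1, 1], [0, 1], [np.nan, 1]], dtype=float)

p_x = np.array([0.5, 0.5])                  # P(X), initial guess
p_y_x = np.array([[0.6, 0.4], [0.4, 0.6]])  # P(Y | X), rows indexed by x

for _ in range(50):
    # E-step: responsibilities r[n, x] = P(x | observed part of sample n).
    r = np.zeros((len(data), 2))
    for n, (x, y) in enumerate(data):
        if np.isnan(x):
            post = p_x * p_y_x[:, int(y)]   # P(x) * P(y | x)
            r[n] = post / post.sum()
        else:
            r[n, int(x)] = 1.0
    # M-step: re-estimate the CPTs from expected counts.
    p_x = r.sum(axis=0) / len(data)
    counts = np.zeros((2, 2))
    for n, (_, y) in enumerate(data):
        counts[:, int(y)] += r[n]           # expected (x, y) co-occurrences
    p_y_x = counts / counts.sum(axis=1, keepdims=True)

print(p_x)
print(p_y_x)
```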

(3) A simple 6-node BN with discrete variables, illustrating how to combine parameter learning with expert knowledge (a prior adjustment for some particular node), when we are given observations from all nodes except the expert-knowledge one, which, as far as I understand, is then a latent variable.

  • Predict ? given a trained model, some observations, and the expert's judgment: [2, 1, 2, 3, 0, ?]
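One common way to encode such expert judgment for a discrete node is a Dirichlet prior on its CPT, which in the learning update simply acts as pseudo-counts. A tiny illustration (all values assumed):

```python
import numpy as np

# Expert knowledge as a Dirichlet prior: pseudo-counts added to the
# expected counts of the adjusted node. An expert who considers state 2
# of this 3-state node a priori likely might set alpha = [1, 1, 5].
alpha = np.array([1.0, 1.0, 5.0])
counts = np.array([4.0, 2.0, 0.0])   # expected counts from the E-step

# Posterior-mean update: the prior keeps state 2 from collapsing to zero
# even though it was never observed.
p_node = (counts + alpha) / (counts + alpha).sum()
print(p_node)  # ~[0.385, 0.231, 0.385]
```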

I believe this would be highly informative for an audience that is not purely technical/academic.

Best regards
Roman

@thjashin
Collaborator

thjashin commented Mar 6, 2018

Thanks for the details. I think an interactive Jupyter notebook tutorial would be suitable for this, and indeed we need one. We are currently refactoring the modeling primitives for the 0.4 release, so the plan for this tutorial will come after that.
