
Randomvars #48

Closed
wants to merge 7 commits into from

Conversation

@jpn-- (Contributor) commented Feb 15, 2019

This is a start on what I am thinking about to address #46, allowing arbitrary distributions. I have not written any unit tests for it yet, nor solved anything about the unbounded distributions discussed in #46; however, it passes all existing unit tests and does not break anything as far as I can see.

@coveralls commented Feb 16, 2019

Coverage decreased (-0.1%) to 73.735% when pulling 69ea09f on jpn--:randomvars into e83946d on quaquel:master.

(commit message) This makes sure they stay in sync with the rv_gen if that is changed after Parameter construction.
@quaquel (Owner) commented Apr 23, 2019

I have started on integrating this idea in a new 2.1 branch. However, after thinking about it at length, I have decided to use a factory method, from_stats, to support alternative distributions. The reason for this is twofold: (1) in exploratory modelling, a uniform distribution is the default option, and I want to keep it that way; (2) alternative distributions can be supported, but at the API level I want them to look clearly different; from_stats achieves this.
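To make the intent concrete, here is a minimal, hypothetical sketch of the factory-method idea (this is not the actual ema_workbench implementation; the class name and attributes are illustrative). The default constructor stays uniform, while from_stats accepts an arbitrary frozen-distribution-like object, e.g. one from scipy.stats, and derives the bounds from its support:

```python
class RealParameter:
    """Hypothetical sketch: the plain constructor keeps the uniform
    default, while from_stats opts in to an arbitrary distribution."""

    def __init__(self, name, lower_bound, upper_bound):
        self.name = name
        self.lower_bound = lower_bound
        self.upper_bound = upper_bound
        self.dist = None  # None signals the default uniform distribution

    @classmethod
    def from_stats(cls, name, dist):
        # dist must behave like a scipy.stats frozen distribution,
        # i.e. expose support() returning (lower, upper)
        lower, upper = dist.support()
        obj = cls(name, lower, upper)
        obj.dist = dist
        return obj
```

With SciPy installed, RealParameter.from_stats("x", scipy.stats.triang(0.5)) would work as-is; note that an unbounded distribution such as scipy.stats.norm would yield infinite bounds here, which is exactly the open question from #46.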

At present, the parameters.py file is finished. I have implemented the functionality in a slightly different way than you did (using various Python tricks to keep the code clean).

Next up is editing the samplers. Here I want to do a few things:

  • add an LHS sampler that uses upper and lower bounds and defaults to uniform (as you have)
  • add a sampler that generates an LHS over the deep uncertainties, as well as an LHS over the well-characterised uncertainties, and then creates a full factorial over both sets of samples. In my personal view, this is the most defensible approach for handling deep and well-characterised uncertainties simultaneously.
  • update various utility functions (like loading parameters from a csv, saving to a csv, the repr magic method)
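The second bullet can be sketched as follows. This is a stdlib-only illustration of the proposed scheme, not the ema_workbench sampler code: draw one Latin hypercube over the deep uncertainties, a separate one over the well-characterised uncertainties, and cross the two designs so every deep sample is evaluated against every well-characterised sample:

```python
import itertools
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Basic LHS on the unit hypercube: per dimension, one uniform
    draw from each of n_samples equal-width strata, shuffled
    independently so rows pair strata at random."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_dims):
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    return [tuple(row) for row in zip(*columns)]

def crossed_design(deep_design, wc_design):
    """Full factorial over the two designs: every deep sample is
    combined with every well-characterised sample."""
    return [d + w for d, w in itertools.product(deep_design, wc_design)]

deep = latin_hypercube(4, 2, seed=1)  # 4 samples over 2 deep uncertainties
wc = latin_hypercube(3, 1, seed=2)    # 3 samples over 1 well-characterised one
design = crossed_design(deep, wc)     # 4 * 3 = 12 joint sample points
```

Each joint point concatenates the deep and well-characterised coordinates, so the deep design is fully replicated across the well-characterised samples (and vice versa), which is what makes the two sources of uncertainty separable in the analysis.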

Do you have any further thoughts or requests in this space?

@jpn-- (Contributor, Author) commented Apr 23, 2019

This sounds reasonable. You may want to add a flag on uncertainties to mark if they are deep or well-characterized, as that won't be identifiable from the distribution alone (well characterized uncertainties can have uniform distributions).
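The suggested flag could look something like this (a hypothetical sketch, not the actual ema_workbench API): the deep/well-characterised distinction is stored explicitly, because it cannot be inferred from the distribution itself.

```python
class Uncertainty:
    """Hypothetical sketch: an explicit flag records whether an
    uncertainty is deep. The distribution alone cannot tell, since a
    well-characterised uncertainty may also be uniform."""

    def __init__(self, name, dist=None, deep=True):
        self.name = name
        self.dist = dist  # None, or a frozen-distribution-like object
        self.deep = deep  # True = deep, False = well-characterised

u_deep = Uncertainty("discount_rate")            # deep by default
u_wc = Uncertainty("demand_growth", deep=False)  # well-characterised, uniform
```

A sampler implementing the factorial scheme above would then partition the uncertainties on this flag before building the two LHS designs.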

@quaquel (Owner) commented May 3, 2019

I have implemented all of this in the 2.1 branch, drawing on your code.

@quaquel quaquel closed this May 3, 2019