
Dealing with correlations #29

Closed · mverzett opened this issue Apr 8, 2014 · 3 comments

mverzett commented Apr 8, 2014

Dear lebigot,

I am facing the problem of properly correlating the error sources of different observables (variables). I know that the package is supposed to handle this, but only when the error variable is literally the same.
I am in the unfortunate situation in which this is not the case: the same error source has a different impact on different observables. I was hoping that the tag attribute might do the trick, but this example shows that it does not:

>>> sys = ufloat(1, 0.02, 'sys')
>>> x = 5*sys # a 2% effect on the x observable 
>>> x
5.0+/-0.10000000000000001
>>> y = 10*sys # same error variable, same 2%
>>> y/x #this is handled properly
2.0+/-0
>>> x+y #this too
15.0+/-0.29999999999999999
>>> sys2 = ufloat(1, 0.04, 'sys') 
>>> z = 10*sys2 # but now the error source has a larger impact on z
>>> y+z # and unfortunately this is not handled properly (should be 20+/-0.6)
20.0+/-0.44721359549995798

So, how do I fully correlate (positively and negatively) two variables? Of course the real problem is a lot trickier than this mock-up example: some error sources correlate between variables and others don't.

Let me know if I did not explain myself properly, and of course if you have a solution :).

Thank you

apuignav commented Apr 8, 2014

Ciao Mauro,

I think this is not working because you're using relative errors, so saying that one error has a larger impact breaks the 100% correlation. I think you may need to use a summed error.

In other words, you probably need to build the observables from absolute errors (this is a simple example, but I think it may be the proper way to attack the problem):

>>> sys = ufloat(0.0, 0.02)  # shared error source
>>> val_y = 10
>>> y = ufloat(val_y, 0.0) + val_y * sys
>>> val_z = 10
>>> z = ufloat(val_z, 0.0) + val_z * sys * 2 # Two times larger
>>> y + z
20.0+/-0.6

Or something like this...

Albert

apuignav commented Apr 8, 2014

Sorry, I forgot the x part:

>>> val_x = 5
>>> x = ufloat(val_x, 0.0) + val_x * sys
>>> x + y
15.0+/-0.3
>>> x/y
0.5+/-0
>>> y/x
2.0+/-0

Albert

lebigot commented Apr 9, 2014

Mauro, the behavior that you observe is perfectly correct, since each number created with ufloat() is an independent random variable (I updated https://pythonhosted.org/uncertainties/user_guide.html#creating-numbers-with-uncertainties so as to make this clearer).
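
For instance, here is a minimal sketch (covariance_matrix() is also part of the package) showing that two ufloat() values that share a tag still have zero covariance, i.e. they are uncorrelated:

>>> from uncertainties import ufloat, covariance_matrix
>>> a = ufloat(1, 0.02, 'sys')
>>> b = ufloat(1, 0.02, 'sys')  # same tag, yet a separate random variable
>>> covariance_matrix([a, b])[0][1]  # off-diagonal covariance between a and b
0.0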

Albert's approach is a good idea: since you need a single independent variable, there should be a single ufloat() with a non-zero uncertainty. Albert's solution can be made simpler, though: there is no need to use ufloat(val_x, 0.0), as it can be written directly as val_x.

If I were to code what you want, I would do:

>>> from uncertainties import ufloat
>>> percent_error = ufloat(0, 0.01)  # Error of 1 percentage point
>>> x = 5*(1+2*percent_error)  # 2 % error
>>> x
5.0+/-0.1
>>> y = 10*(1+2*percent_error)  # 2 % correlated error
>>> y/x
2.0+/-0
>>> x+y
15.0+/-0.3
>>> z = 10*(1+4*percent_error)  # 4 % correlated error
>>> y+z
20.0+/-0.6

In general, you can create correlated random variables with the correlated_values() function (http://pythonhosted.org/uncertainties/user_guide.html#index-9).
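
For example, here is a minimal sketch of one way to rebuild the fully correlated x and y above from their covariance matrix (correlated_values() needs NumPy, and the trailing digits of the printed uncertainty may differ slightly, since the covariance matrix is decomposed numerically):

>>> from uncertainties import correlated_values
>>> # x: sigma = 0.1, y: sigma = 0.2, correlation coefficient +1
>>> cov = [[0.1**2, 0.1*0.2],
...        [0.1*0.2, 0.2**2]]
>>> x, y = correlated_values([5, 10], cov)
>>> x + y  # fully correlated, so the uncertainties add linearly
15.0+/-0.3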

lebigot closed this as completed Apr 9, 2014