High order gPC estimators derived from non-standard distributions are unstable #36
I have just confirmed that the problem also occurs for
It works for me if I use a rv with a mean of 0. I haven't used chaospy much, but generally high-ish order polynomials behave better with mean-0, standard-deviation-1 abscissae. If you can afford to, work in such a space, and then translate any results back to the original space as needed by an appropriate scaling and shifting. Unless this capability is supposed to be built into chaospy, which would make sense.
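A minimal sketch of that shift-and-scale idea (the variable names and the numbers `mu`, `sigma` here are my own illustration, not from chaospy): build everything on mean-0, std-1 abscissae, then map back with an affine transform.

```python
import numpy as np

# Illustration only: work with the standardized variable z ~ N(0, 1)
# and map abscissae (or results) back to x ~ N(mu, sigma) afterwards.
mu, sigma = 10.0, 0.1

def from_standard(z):
    # Affine map from standardized space back to the original space.
    return mu + sigma * z

z = np.array([-1.0, 0.0, 1.0])  # mean-0, std-1 abscissae
x = from_standard(z)            # abscissae in the original space: 9.9, 10.0, 10.1
print(x)
```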
As @tsbertalan said: try to parameterize the problem in terms of standard distributions when going to higher order. Polynomial chaos expansion allows you to address the problem through a proxy variable:
This approximation works well as long as the mapping between
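To make the proxy idea concrete, here is a tiny standard-library sketch (my own, not chaospy's API) of the composed CDF map `x = F_target⁻¹(F_proxy(q))` from proxy space to the original space:

```python
from statistics import NormalDist

# Illustration: map a point q in the standard-normal proxy space to the
# target variable N(10, 0.1) by composing the proxy's CDF with the
# target's inverse CDF. For two normals the composition is an affine map.
target = NormalDist(mu=10.0, sigma=0.1)
proxy = NormalDist(mu=0.0, sigma=1.0)

q = 1.5
x = target.inv_cdf(proxy.cdf(q))  # equals 10 + 0.1 * 1.5 = 10.15
print(round(x, 6))
```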
I just want to say, for now, thanks for your feedback and ideas. Both suggestions look like exactly what I was looking for. I will leave this open at least until I get the time to properly try it all out. Sorry for the late response.
Right, I finally sat down and tested it out. Briefly, the proxy method rocks. I focused on the variance and used the code below. So clearly this is a property of the method. Is there a place in the literature where this has been described?

```python
from numpy import arange, array
from chaospy import Normal, Var, fit_quadrature, generate_quadrature, orth_ttr

def f(x):
    return x**3

def direct(rv, order=4):
    # Build the expansion directly in the original (non-standard) space.
    nodes, weights = generate_quadrature(order, rv, rule='G')
    evals = f(nodes[0])
    Ps, norms = orth_ttr(order, rv, normed=False, retall=True)
    u_hat = fit_quadrature(Ps, nodes, weights, evals, norms=norms)
    return Var(u_hat, rv)

def proxy(rv, order=4):
    # Build the expansion in a standard-normal proxy space, mapping the
    # quadrature nodes back to the original space through the CDFs.
    rv_proxy = Normal(0, 1)
    nodes, weights = generate_quadrature(order, rv_proxy, rule='G')
    evals = f(rv.inv(rv_proxy.fwd(nodes[0])))
    Ps, norms = orth_ttr(order, rv_proxy, normed=False, retall=True)
    u_hat = fit_quadrature(Ps, nodes, weights, evals, norms=norms)
    return Var(u_hat, rv_proxy)

rv = Normal(10, 0.1)
orders = arange(1, 11)
proxy_variance = array([proxy(rv, o) for o in orders])
direct_variance = array([direct(rv, o) for o in orders])
```
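For reference, since `f(x) = x**3` and `rv = Normal(10, 0.1)`, the exact variance both estimators should converge to can be computed in closed form from the standard normal moments. This check is my own addition, not from the thread:

```python
# Exact Var(X**3) for X ~ Normal(mu, sigma), using E[Z^2]=1, E[Z^4]=3, E[Z^6]=15:
#   E[X^3] = mu^3 + 3*mu*sigma^2
#   E[X^6] = mu^6 + 15*mu^4*sigma^2 + 45*mu^2*sigma^4 + 15*sigma^6
mu, sigma = 10.0, 0.1
ex3 = mu**3 + 3 * mu * sigma**2
ex6 = mu**6 + 15 * mu**4 * sigma**2 + 45 * mu**2 * sigma**4 + 15 * sigma**6
exact_variance = ex6 - ex3**2
print(exact_variance)  # approximately 900.36
```

Comparing `proxy_variance` and `direct_variance` against this value across the orders makes the instability of the direct construction easy to see.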
That is correct; those are Rosenblatt transformations. Rosenblatt is usually discussed in the context of transforming to and from stochastically independent variables, but the rules are exactly the same when applied to achieve stability at higher orders. There should be something more specific to what you are asking out there, but unfortunately I don't remember what it is. If you want to read up on the transformation itself, the people who made the DAKOTA software library discuss the topic here:
I consider the issue resolved. If you consider that not to be the case, or if there are other problems related to the same issue, feel free to re-open it.
Yes, Jonathan. Thanks a lot for your help here; the discussion was very useful to me. I should have closed it earlier, actually.
Hello again,
I just want to share a piece of code and discuss the problem. I believe this is not actually a bug but rather a property of the method, though I am not completely sure.
Basically, the code produces very high values for the coefficients in the polynomial. This, I believe, results in `nan`s in the estimate of the standard deviation. The problem is a function of the order and of the parameters of the original distribution. Can you make a suggestion on what may be causing the issue and how to verify it?
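As a rough illustration of why the coefficients can explode (my own sketch; chaospy's internal orthogonalization routine may work differently), the raw moments of `Normal(10, 0.1)` grow roughly like `10**k`, so any moment-based construction of orthogonal polynomials has to work with a nearly rank-deficient matrix:

```python
import numpy as np

# Hypothetical illustration: raw moments m_k = E[X**k] of X ~ N(10, 0.1)
# are close to 10**k, so the Hankel moment matrix [m_{i+j}] that shows up
# in moment-based orthogonalization is nearly rank-one and ill-conditioned.
mu, sigma = 10.0, 0.1
rng = np.random.default_rng(0)
samples = mu + sigma * rng.standard_normal(200_000)
moments = [np.mean(samples**k) for k in range(11)]
hankel = np.array([[moments[i + j] for j in range(6)] for i in range(6)])
print(f"condition number: {np.linalg.cond(hankel):.3e}")  # astronomically large
```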