
Any idea about this error? #9
Open
Math9999 opened this issue Nov 24, 2019 · 8 comments
@Math9999

Hello.

It would be great if you could help with this error.

```
C:\XYZ\XYZ\RobustSTL.py:54: RuntimeWarning: invalid value encountered in double_scalars
  season_value = np.sum(weight_sample * weights)/np.sum(weights)
[!] 2 iteration will strat

Intel MKL ERROR: Parameter 7 was incorrect on entry to DGELS.
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "<stdin>", line 16, in main
  File "C:\XYZ\XYZ\RobustSTL.py", line 121, in RobustSTL
    return _RobustSTL(input, season_len, reg1, reg2, K, H, dn1, dn2, ds1, ds2)
  File "C:\XYZ\XYZ\RobustSTL.py", line 97, in _RobustSTL
    trend_extraction(denoise_sample, season_len, reg1, reg2)
  File "C:\XYZ\XYZ\RobustSTL.py", line 36, in trend_extraction
    delta_trends = l1(P,q)
  File "C:\XYZ\XYZ\l1.py", line 41, in l1
    lapack.gels(+P, uls)
ValueError: -7
```

All the best

A.B.

@LeeDoYup (Owner) commented Feb 6, 2020

Did you check whether the weights contain np.inf or NaN?
Another possibility is that the error occurs when np.sum(weights) == 0.
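
For anyone debugging this, a minimal check along these lines (my own sketch, not part of the repo; check_weights is a hypothetical helper) can be dropped in right before the season_value line:

```python
import numpy as np

def check_weights(weights):
    # Flag the two failure modes suspected above: non-finite entries
    # (np.inf / NaN) and a zero sum, which makes season_value divide by zero.
    weights = np.asarray(weights, dtype=float)
    if not np.all(np.isfinite(weights)):
        print("[!] weights contain inf or NaN")
    if np.sum(weights) == 0:
        print("[!] np.sum(weights) == 0 -> season_value will be NaN")

check_weights([0.0, 0.0])      # zero sum -> division by zero downstream
check_weights([1.0, np.nan])   # non-finite entry
```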

@anirudhasundaresan commented Mar 3, 2020

@LeeDoYup I also get the same error. Yes, you are right: the weights are close to 0.

If there is a huge level shift from one time period to the next, i.e. |y_j - y_t| is large, then denoising through the bilateral filter involves computing a product of terms that are close to 0 because of the exponential factors. This causes the weights to go to zero. The problem is also amplified when the seasonal period is large. Do you have a workaround for dealing with this issue?
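
To make the underflow concrete (my own sketch; delta1/delta2 stand in for the spatial and intensity bandwidths of the bilateral filter, i.e. the dn1/dn2-style hyper-parameters):

```python
import numpy as np

def bilateral_weight(j, t, y_j, y_t, delta1=1.0, delta2=1.0):
    # Bilateral filter weight: a spatial Gaussian times an intensity Gaussian.
    spatial = np.exp(-((j - t) ** 2) / (2 * delta1 ** 2))
    intensity = np.exp(-((y_j - y_t) ** 2) / (2 * delta2 ** 2))
    return spatial * intensity

print(bilateral_weight(0, 1, 0.0, 0.1))    # ~0.60: small level shift, fine
print(bilateral_weight(0, 1, 0.0, 100.0))  # 0.0: huge shift underflows
```

Once every weight in a window underflows like this, np.sum(weights) is 0 and the season_value division produces NaN.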

@LeeDoYup (Owner) commented Mar 4, 2020

@anirudhasundaresan I think the problem comes from the algorithm itself... (I am not the author, but I implemented it.)
I am currently busy with another submission (until next week).

I will look into the issue after I finish my work.
If you solve the problem before I get to it, please make a pull request!

@chuckcoleman
I've encountered a similar error, probably for the same reason:

```
__main__:54: RuntimeWarning: invalid value encountered in double_scalars
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    RobustSTL(y,12)
  File "/Users/Common 1/SA/RobustSTL/RobustSTL/RobustSTL.py", line 121, in RobustSTL
    return _RobustSTL(input, season_len, reg1, reg2, K, H, dn1, dn2, ds1, ds2)
  File "/Users/Common 1/SA/RobustSTL/RobustSTL/RobustSTL.py", line 97, in _RobustSTL
    trend_extraction(denoise_sample, season_len, reg1, reg2)
  File "/Users/Common 1/SA/RobustSTL/RobustSTL/RobustSTL.py", line 36, in trend_extraction
    delta_trends = l1(P,q)
  File "/Users/Common 1/SA/RobustSTL/RobustSTL/l1.py", line 56, in l1
    primalstart={'x': x0, 's': s0}, dualstart={'z': z0})
  File "/Users/Shared/anaconda3/lib/python3.7/site-packages/cvxopt/coneprog.py", line 1033, in conelp
    W = misc.compute_scaling(s, z, lmbda, dims, mnl = 0)
  File "/Users/Shared/anaconda3/lib/python3.7/site-packages/cvxopt/misc.py", line 285, in compute_scaling
    W['d'] = base.sqrt( base.div( s[mnl:mnl+m], z[mnl:mnl+m] ))
ValueError: domain error
```

I printed the value of s in coneprog.py just before line 1033 and found that it was all NaNs.

@salman087

I am getting the same error for most of my time series. Has anyone solved this, or does anyone have an idea?
[screenshot of the error]

@LeeDoYup (Owner)

I think the error comes from the l1 optimizer, which is part of the cvxopt library.
I didn't implement l1.py myself; it uses part of cvxopt.
Can you debug which value causes the -7?

@SeungHyunAhn commented Nov 17, 2020

I'm not sure, but I think it happens when sol['x'] is NoneType at return sol['y'][:n] in l1.py, line 57.
[screenshot of l1.py]
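
If so, a defensive guard around the cvxopt solve (my own sketch, not the repo's code; the tiny LP below is just a stand-in for the problem l1.py actually builds) would surface the solver failure instead of the later indexing error:

```python
from cvxopt import matrix, solvers

# Stand-in LP: minimize x subject to -x <= -1 (i.e. x >= 1); optimum is x = 1.
c = matrix([1.0])
G = matrix([-1.0])
h = matrix([-1.0])

sol = solvers.lp(c, G, h)
# When the solver fails (e.g. NaN inputs from all-zero weights), sol['x']
# comes back as None; fail loudly here instead of letting the slicing raise.
if sol['x'] is None:
    raise RuntimeError(f"cvxopt LP failed with status {sol['status']!r}")
print(sol['x'][0])  # 1.0
```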

@david-waterworth commented Jun 28, 2022

I'm seeing this as well, and (for me at least) the root cause is that the weights returned by bilateral_filter all go to zero, causing NaNs on the following line due to a divide by zero:

```python
season_value = np.sum(weight_sample * weights)/np.sum(weights)
```

Edit: the issue in my case was fixed by tuning the bilateral_filter hyper-parameters, in particular ds2. Since the differences |y_j - y_t| are squared and divided by ds2² in the exponent, large level shifts cause NaNs, so ds2 needs to be increased to scale them back.
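
For reference, something like the following is what worked for me (a sketch: the series is synthetic, ds2=50.0 is illustrative and data-dependent, and I'm assuming ds2 is exposed as a keyword argument of RobustSTL, as the call frame in the tracebacks above suggests):

```python
import numpy as np
from RobustSTL import RobustSTL  # LeeDoYup/RobustSTL

# Synthetic stand-in for half-hourly energy data (48 samples/day) with a
# weekday/weekend level shift, per the description below.
t = np.arange(48 * 28)
y = (10 * np.sin(2 * np.pi * t / 48)     # daily cycle
     + 20 * (t % (48 * 7) >= 48 * 5)     # weekend level shift
     + np.random.randn(t.size))          # noise

# A larger ds2 keeps exp(-|y_j - y_t|**2 / (2 * ds2**2)) from underflowing
# when |y_j - y_t| is large, so the seasonal weights no longer sum to zero.
result = RobustSTL(y, 48, ds2=50.0)
```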

I have energy consumption data with daily and weekly seasonality - if I set T to daily (48 samples), this issue occurs because there's a large difference between the weekday and weekend levels at the same time of day. I think there are other issues as well - if I run on a largish dataset in Jupyter, the kernel crashes.
