Error in t(c(1, -thresh[i] + thresh[1])) %*% gpdu$vcov :
requires numeric/complex matrix/vector arguments
In addition: Warning message:
In gp.fit(xdat = na.omit(as.vector(xdat)), threshold = threshold, :
Cannot calculate standard error based on observed information
This error is caused by unscaled data (observations on the order of 10e9): the numerical tolerance of the optimizer is too small relative to the scale of the data, so the optimization fails to converge and the routine throws a warning or an error.
Perhaps it would make sense to scale the data before fitting, then use the location-scale properties of the model to map the estimates back to the original scale.
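For illustration, here is a minimal sketch of what such a rescaling could look like for threshold exceedances. It does not use the mev internals; the negative log-likelihood `gpd_nll()`, the simulated data, and the scaling constant `s` are assumptions made for the example.

```r
## Minimal sketch: fit a generalized Pareto to rescaled exceedances,
## then use the scale invariance of the model to recover the estimates
## on the original scale. gpd_nll() and s <- sd(exc) are illustrative,
## not part of the package.
set.seed(1)
xdat <- 1e10 * rexp(5000)                  # data on the order of 10e9
threshold <- quantile(xdat, 0.95)
exc <- xdat[xdat > threshold] - threshold  # threshold exceedances

gpd_nll <- function(par, y) {
  sigma <- par[1]; xi <- par[2]
  if (sigma <= 0) return(1e10)
  if (abs(xi) < 1e-8)                      # exponential limit as xi -> 0
    return(length(y) * log(sigma) + sum(y) / sigma)
  z <- 1 + xi * y / sigma
  if (any(z <= 0)) return(1e10)
  length(y) * log(sigma) + (1 + 1/xi) * sum(log(z))
}

s <- sd(exc)                               # any positive scaling constant
fit <- optim(c(1, 0.1), gpd_nll, y = exc / s, hessian = TRUE)

## If Y / s follows GP(sigma*, xi), then Y follows GP(s * sigma*, xi):
## the shape is unchanged, only the scale parameter needs rescaling.
sigma_hat <- s * fit$par[1]
xi_hat    <- fit$par[2]
```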
It is easy to scale threshold exceedances, or to do a location-scale normalization of the maxima, but one must then back-transform the output before computing standard errors, the log-likelihood, etc. (see the sketch below).
This would require some checks to make sure that it indeed improves the optimization in fit.gpd, etc.
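Continuing the sketch above (again an illustration, not the fit.gpd code), the back-transformation of the covariance matrix, standard errors and log-likelihood could look like this:

```r
## Back-transform the output of the fit on rescaled exceedances.
## The map (sigma*, xi) -> (s * sigma*, xi) has Jacobian diag(c(s, 1)),
## so the covariance matrix transforms as J %*% V %*% t(J).
V_scaled <- solve(fit$hessian)   # inverse observed information, scaled fit
J <- diag(c(s, 1))
V <- J %*% V_scaled %*% t(J)     # covariance matrix on the original scale
std_err <- sqrt(diag(V))

## Each exceedance contributes an extra -log(s) to the log-likelihood
## once mapped back (the density of Y = s * X is f_X(y / s) / s), so
nll_orig <- fit$value + length(exc) * log(s)
```

With something along these lines, the optimization would run on well-scaled data while the reported estimates, standard errors and log-likelihood stay on the original scale.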
The problem is reported in a particular function that uses its own routines (it is now possible to fix parameters with the latest versions), but none of the profiling functions does this.
Reported by John Ery.