I've spent far too much time trying to debug the LogNormalFitter (LNF) model. Trouble is, it's unstable for some inputs. Example:
As the following script shows, the model is only stable for roughly 0.06 < sigma < 3.5. Anything outside this range causes the dreaded "Desired error not necessarily achieved due to precision loss" from `minimize`.
```python
import numpy as np
import matplotlib.pyplot as plt

from lifelines import LogNormalFitter
from lifelines.utils import ConvergenceError

N = 20000
MU = np.linspace(-10, 10, 5)
SIGMA_ = np.linspace(0.0001, 6, 25)
R = np.zeros((5, 25))

for i, mu_ in enumerate(MU):
    for j, sigma_ in enumerate(SIGMA_):
        print(mu_, sigma_)
        # simulate log-normal durations X and log-normal censoring times C
        X = np.exp(sigma_ * np.random.randn(N) + mu_)
        C = np.exp(np.random.randn(N) + mu_)
        E = X <= C                # event observed iff the duration precedes censoring
        T = np.minimum(X, C)      # observed (possibly censored) duration
        try:
            LogNormalFitter().fit(T, E)
            R[i, j] = 1           # converged
        except ConvergenceError:
            R[i, j] = 0           # failed to converge

plt.matshow(R)
plt.xticks(np.arange(25), SIGMA_)
plt.yticks(np.arange(5), MU)
plt.show()
```
AFAIK, the log-likelihood and the gradients are computed correctly, though it would be useful to have a second set of eyes on them. Even scipy's `check_grad` seems to confirm they are correct:
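For what it's worth, here is a minimal, stand-alone sketch of that check (this is my own re-derivation of the right-censored log-normal log-likelihood and its gradient, not lifelines' internal code; `T` and `E` are assumed to come from the script above):

```python
import numpy as np
from scipy.optimize import check_grad
from scipy.stats import norm

def neg_ll(params, T, E):
    # negative log-likelihood of a right-censored log-normal model
    mu, sigma = params
    Z = (np.log(T) - mu) / sigma
    ll = np.where(
        E,
        -np.log(T) - np.log(sigma) + norm.logpdf(Z),  # observed events: log f(t)
        norm.logsf(Z),                                # censored: log S(t)
    )
    return -ll.sum()

def neg_ll_grad(params, T, E):
    # analytic gradient of neg_ll w.r.t. (mu, sigma)
    mu, sigma = params
    Z = (np.log(T) - mu) / sigma
    hazard_like = np.exp(norm.logpdf(Z) - norm.logsf(Z))  # phi(Z) / S(Z)
    d_mu = np.where(E, Z / sigma, hazard_like / sigma)
    d_sigma = np.where(E, (Z ** 2 - 1) / sigma, hazard_like * Z / sigma)
    return -np.array([d_mu.sum(), d_sigma.sum()])

x0 = np.array([np.log(T).mean(), np.log(T).std()])
print(check_grad(neg_ll, neg_ll_grad, x0, T, E))  # small value => gradients agree
```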
When there is no censoring, the model converges for all values...
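A sketch of that case (same simulation as above, but every observation is flagged as an event, so nothing is censored; the specific values are just for illustration):

```python
# per the observation above, this converges even for a sigma outside the
# ~0.06 < sigma < ~3.5 window
N, mu_, sigma_ = 20000, 0.0, 5.0
X = np.exp(sigma_ * np.random.randn(N) + mu_)
LogNormalFitter().fit(X, np.ones_like(X, dtype=bool))
```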
Adding a penalizer doesn't seem to help.
When I power-transform the durations by the inverse standard deviation of log(T), this seems to help convergence; however, I can't get back the original parameters.
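A sketch of that workaround, reusing `T` and `E` from the script above (the back-transform shown is just the naive algebraic inverse, i.e. the one that doesn't seem to recover the originals):

```python
# raising durations to 1 / std(log T) rescales log-durations to unit variance,
# so on the transformed scale mu and sigma are (roughly) divided by s
s = np.std(np.log(T))
lnf = LogNormalFitter().fit(T ** (1.0 / s), E)

# naive inverse: scale the fitted parameters back up by s
mu_hat, sigma_hat = s * lnf.mu_, s * lnf.sigma_
```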
Not specifying the gradient function, `jac`, seems to help! That is, leaving `jac` unspecified converges, but supplying the analytic gradient seems to fail.
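Roughly, here is a sketch of that comparison, reusing `neg_ll` / `neg_ll_grad` from the gradient-check snippet and `T`, `E` from the script above (a stand-in for the internal call, not lifelines' actual code):

```python
from scipy.optimize import minimize

x0 = np.array([np.log(T).mean(), np.log(T).std()])

# letting BFGS approximate the gradient numerically (no jac) ...
without_jac = minimize(neg_ll, x0, args=(T, E), method="BFGS")

# ... versus passing the analytic gradient
with_jac = minimize(neg_ll, x0, args=(T, E), method="BFGS", jac=neg_ll_grad)

# in the problematic sigma range, the first reportedly converges while the
# second hits "Desired error not necessarily achieved due to precision loss"
print(without_jac.message)
print(with_jac.message)
```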