Problems with Survival SVMs [discussion] #287
Comments
Heya, thanks for raising the issue. To be honest, I've never had success tuning {survivalsvm} (even outside of this package). It's been buggy for ages and I'm unconvinced by the underlying implementation. Just looking at your code above, some quick comments: 1) I'd always recommend using […]. Would you mind experimenting with {survivalsvm} directly, not via mlr3proba, to see whether the problem persists?
Hi Raphael,

Great to find another person who has found survival SVMs unstable. I wouldn't recommend this learner to anyone unless hyperparameters are hand-picked and no proper tuning is applied (which is, well, not nice). I did some tests with {survivalsvm} directly:

```r
library(survivalsvm)
#> Loading required package: survival

fit = survivalsvm(Surv(time, status) ~ ., data = veteran, type = 'hybrid',
                  gamma.mu = c(0.76, 0.09), diff.meth = 'makediff3',
                  kernel = 'poly_kernel')
#> Error in tcrossprod(K, Dc): non-conformable arguments
```

Created on 2022-08-21 by the reprex package (v2.0.1)
Yup, buggy! I'm going to close the issue here. I don't think we should add a warning to the learner; in reality it will just perform badly and people will choose other learners. You might want to consider opening an issue in survivalsvm, though?
This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
Hi,

I made some effort to train and tune survival SVMs on a small dataset. Using a simple autotune example, I found that the SVM survival learner can either fail (some fault in the optimization solvers, I think) or get stuck (training never ends, CPU at 100%). I used `lrn('surv.kaplan')` as a fallback learner and added a `learner$timeout` to deal with these issues, but I think this instability is a bad sign for a learner. The issues mostly relate to the choice of `type`: whenever it's not `regression`, there is a high chance you will run into them (C-indexes are close to 0.5 in the example below because the Kaplan-Meier fallback ends up being used). I have seen the SVM learner fail with `type = 'regression'` as well, just more sparsely.

I post the following tuning example here so that others can benefit from this investigation. Commenting out the `learner$fallback` and `learner$timeout` lines will reproduce the issues I mentioned.

Created on 2022-08-15 by the reprex package (v2.0.1)
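A minimal sketch of that setup, in case the reprex is not visible above: an autotuned `surv.svm` with a Kaplan-Meier fallback and a timeout. The specifics are assumptions rather than the exact reprex: the `surv.svm` key comes from {mlr3extralearners}, the `rats` task from {mlr3proba}, the `gamma.mu` search range and the 60-second timeouts are illustrative, and the encapsulation/fallback/timeout fields use the mlr3 API as of 2022 (later releases changed the encapsulation interface).

```r
library(mlr3)
library(mlr3proba)
library(mlr3extralearners)
library(mlr3tuning)
library(paradox)

# Small survival task; keep numeric features only, since factor handling
# varies across survival learners (assumption, not from the original post).
task = tsk("rats")
task$select(c("litter", "rx"))

# Hybrid survival SVM; 'hybrid' expects two gamma.mu values (set by the tuner below).
learner = lrn("surv.svm", type = "hybrid", diff.meth = "makediff3")

# "callr" encapsulation runs train/predict in a separate R session, so a hard
# timeout can kill a solver stuck at 100% CPU; errors are caught and the
# Kaplan-Meier fallback is used instead of aborting the session.
learner$encapsulate = c(train = "callr", predict = "callr")
learner$fallback = lrn("surv.kaplan")
learner$timeout = c(train = 60, predict = 60)  # seconds, illustrative

# Random search over gamma.mu on a log scale (illustrative range); the trafo
# maps a single value onto the two weights that 'hybrid' requires.
search_space = ps(
  gamma.mu = p_dbl(lower = -3, upper = 1, trafo = function(x) rep(10^x, 2))
)

at = AutoTuner$new(
  learner      = learner,
  resampling   = rsmp("cv", folds = 3),
  measure      = msr("surv.cindex"),
  search_space = search_space,
  terminator   = trm("evals", n_evals = 10),
  tuner        = tnr("random_search")
)

at$train(task)
at$tuning_result
```

As noted above, commenting out the `$fallback` and `$timeout` lines exposes the run to the solver errors and hangs; `callr` encapsulation is used because a solver spinning in compiled code generally will not respond to a plain R time limit.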