Replies: 4 comments 1 reply
-
So why is the model training time much less for LightGBM? (I used CPU runs.)
-
@Sandy4321 Thank you for checking out the library. I think you might be reading too much into the tutorial. Let me try to address your points:
I hope I have answered your questions. Let me know if you have more.
-
Thanks for the quick reply. Theoretically it is obvious that GBMs cannot calculate reliable confidence scores. Do you have any evidence that PyTorch Tabular is better in this respect than commonly used ML algorithms? Thanks for taking care of this.
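Whether a model's predicted probabilities are trustworthy "confidence scores" can be checked empirically rather than argued theoretically. Below is a minimal sketch using scikit-learn's `calibration_curve` on a toy dataset and a generic gradient-boosting classifier; the dataset and model are illustrative stand-ins, not the ones discussed in this thread.

```python
# Sketch: empirically checking how well-calibrated a classifier's
# predicted probabilities are. A well-calibrated model's points lie
# close to the diagonal (predicted probability == observed frequency).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.calibration import calibration_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

# Bin the predicted probabilities and compare each bin's mean prediction
# with the fraction of positives actually observed in that bin.
frac_pos, mean_pred = calibration_curve(y_te, proba, n_bins=10)
for fp, mp in zip(frac_pos, mean_pred):
    print(f"predicted {mp:.2f} -> observed {fp:.2f}")
```

The same check can be applied to any model that exposes class probabilities, so it gives a like-for-like way to compare a GBM against a deep tabular model.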
-
Describe the bug
It is conceptually a mistake to use log loss for performance comparison, since:
1. LightGBM, like all GBM models (except maybe NGBoost: Natural Gradient Boosting for Probabilistic Prediction, https://stanfordmlgroup.github.io/projects/ngboost/, by Tony Duan*, Anand Avati*, Daisy Yi Ding, Sanjay Basu, Andrew Ng, and Alejandro Schuler), is not good at predicting probabilities.
2. The same should then be carefully tested for deep neural networks.
3. Could you please change the demonstration that your code performs well, for example https://github.com/manujosephv/pytorch_tabular/blob/main/examples/PyTorch%20Tabular%20with%20Bank%20Marketing%20Dataset.ipynb, to compare actual confusion matrices? I made an attempt:
preds_GATE_Full = tabular_model.predict(test)
but what is the way to get hard class predictions from your models to compare against the ground truth?
4. Where is the description of the algorithms used, especially the preprocessing of categorical data values?
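Regarding point 3: the ground truth is simply the label column of the held-out `test` frame, and the hard predictions come from the DataFrame that `predict` returns. The sketch below assumes PyTorch Tabular's convention of naming the prediction column `{target}_prediction`; the exact column names may differ across library versions, so check the columns of your `predict` output.

```python
# Sketch: turning a predictions DataFrame into a confusion matrix.
# The "{target}_prediction" column name is an assumption based on
# PyTorch Tabular's usual output format; verify it for your version.
import pandas as pd
from sklearn.metrics import confusion_matrix

def confusion_from_predictions(test_df: pd.DataFrame,
                               pred_df: pd.DataFrame,
                               target: str = "target"):
    """Compare held-out labels against the model's hard class labels."""
    y_true = test_df[target]                      # ground truth from the test frame
    y_pred = pred_df[f"{target}_prediction"]      # hard predictions from predict()
    return confusion_matrix(y_true, y_pred)

# Usage with a trained TabularModel (hypothetical target name "y"):
#   pred_df = tabular_model.predict(test)
#   print(confusion_from_predictions(test, pred_df, target="y"))
```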