Model Name | Category | Paper | Notes | Code |
---|---|---|---|---|
Decision Tree (ID3, C4.5, CART) | - | - | Notes | - |
Random Forest | bagging | - | Notes | - |
AdaBoost: Adaptive Boosting | boosting | - | Notes | Code |
GBDT: Gradient Boosting Decision Tree | boosting | - | Notes | Code |
LightGBM: A Highly Efficient Gradient Boosting Decision Tree | boosting | Paper | Notes | - |
XGBoost: A Scalable Tree Boosting System | boosting | Paper | Notes | - |
CatBoost: unbiased boosting with categorical features | boosting | Paper | Notes | - |
Deep Forest: Towards an Alternative to Deep Neural Networks | - | Paper | - | Github |
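The boosting idea behind AdaBoost in the table can be sketched from scratch with decision stumps on 1-D data. A minimal illustration of reweighting misclassified points, not any library's API; all names and the toy dataset are made up:

```python
# Minimal AdaBoost with decision stumps on 1-D data.
import math

def stump_predict(threshold, polarity, x):
    """Predict +1/-1 depending on which side of the threshold x falls."""
    return polarity if x >= threshold else -polarity

def fit_stump(X, y, w):
    """Pick the threshold/polarity minimizing the weighted error."""
    best = (float("inf"), None, None)
    for t in sorted(set(X)):
        for pol in (+1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if stump_predict(t, pol, xi) != yi)
            if err < best[0]:
                best = (err, t, pol)
    return best  # (weighted error, threshold, polarity)

def adaboost(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []  # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        err, t, pol = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # Reweight: misclassified points get more weight next round.
        w = [wi * math.exp(-alpha * yi * stump_predict(t, pol, xi))
             for xi, yi, wi in zip(X, y, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(t, p, x) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

X = [1, 2, 3, 7, 8, 9]
y = [-1, -1, -1, 1, 1, 1]
model = adaboost(X, y)
print([predict(model, x) for x in X])  # -> [-1, -1, -1, 1, 1, 1]
```

GBDT follows the same additive-ensemble shape but fits each new tree to the gradient of the loss instead of reweighting examples.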
Name | Paper | Notes | Code | Desc |
---|---|---|---|---|
NeuMF | Paper | - | - | a combination of GMF and MLP |
GMF | - | - | - | generalized matrix factorization with embedding |
MLP | - | - | - | multilayer perceptron with embedding |
Wide & Deep | Paper | - | - | embedding categorical and continuous features |
Deep Neural Networks for YouTube Recommendations | Paper | - | - | - |
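The GMF idea in the table (user/item embeddings whose dot product approximates the rating) can be sketched with plain SGD. The toy ratings matrix, latent dimension, and hyperparameters below are made up for illustration:

```python
# Toy matrix factorization with SGD: learn user/item embeddings P, Q
# so that dot(P[u], Q[i]) approximates the observed rating.
import random

random.seed(0)

# ratings[u][i] = observed rating, None = missing
ratings = [
    [5, 4, None, 1],
    [4, None, 1, 1],
    [1, 1, None, 5],
]
n_users, n_items, k = len(ratings), len(ratings[0]), 2

P = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
Q = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

lr, reg = 0.05, 0.01
for epoch in range(500):
    for u in range(n_users):
        for i in range(n_items):
            r = ratings[u][i]
            if r is None:
                continue
            err = r - dot(P[u], Q[i])
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)

# Mean squared error over the 9 observed entries should now be small.
mse = sum((ratings[u][i] - dot(P[u], Q[i])) ** 2
          for u in range(n_users) for i in range(n_items)
          if ratings[u][i] is not None) / 9
print(round(mse, 3))
```

NeuMF replaces the plain dot product with a learned interaction (GMF branch plus an MLP over the concatenated embeddings).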
- H2O AutoML [Code]
- SVM [Notes]
- PCA, PCR, PLS [Notes]
- MDS
- t-SNE
- Auto Encoder
- Polynomial Regression [Notes]
- Step functions [Notes]
- Regression splines [Notes]
- Local Regression [Notes]
- Generalized additive models [Notes]
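Polynomial regression from the list above is just least squares on the expanded features [1, x, x², ...]. A from-scratch sketch via the normal equations (not any library's interface); the sample polynomial is arbitrary:

```python
# Polynomial regression = linear least squares on a Vandermonde design.

def design(xs, degree):
    """Rows [1, x, x^2, ..., x^degree] for each sample."""
    return [[x ** d for d in range(degree + 1)] for x in xs]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                factor = M[r][col] / M[col][col]
                M[r] = [a - factor * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def polyfit(xs, ys, degree):
    """Solve the normal equations (X^T X) w = X^T y."""
    X = design(xs, degree)
    n = degree + 1
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(n)]
           for i in range(n)]
    Xty = [sum(X[r][i] * ys[r] for r in range(len(X))) for i in range(n)]
    return solve(XtX, Xty)

# y = 2x^2 - 3x + 1, recovered exactly from noiseless samples.
xs = [0, 1, 2, 3, 4]
ys = [2 * x * x - 3 * x + 1 for x in xs]
coefs = polyfit(xs, ys, degree=2)
print([round(c, 6) for c in coefs])  # -> [1.0, -3.0, 2.0]
```

Step functions and regression splines reuse the same machinery with different basis expansions in place of the monomials.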
- RoBERTa: A Robustly Optimized BERT Pretraining Approach [Paper] [Code]
- ULMFiT: Universal Language Model Fine-tuning for Text Classification [Paper]
- GPT-2: Language Models are Unsupervised Multitask Learners [Paper] [Code]
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding [Paper]
Name | Paper | Notes | Code | Desc |
---|---|---|---|---|
LSTM | Paper | - | - | Long Short-Term Memory |
BiLSTM | - | - | - | Bidirectional Long Short-Term Memory |
GRU | - | - | - | Gated Recurrent Unit |
RNN | - | - | - | Recurrent Neural Network |
DeepAR | Paper | - | - | - |
N-BEATS | Paper | - | - | - |
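A single step of the GRU listed above, written directly from its gate equations (update gate z, reset gate r, candidate state h~). Scalar weights with zero biases for readability; the weight values and input sequence are arbitrary:

```python
# One step of a GRU cell in scalar form.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h_prev, W):
    """W holds scalar weights (w_z, u_z, w_r, u_r, w_h, u_h); biases = 0."""
    w_z, u_z, w_r, u_r, w_h, u_h = W
    z = sigmoid(w_z * x + u_z * h_prev)               # update gate
    r = sigmoid(w_r * x + u_r * h_prev)               # reset gate
    h_cand = math.tanh(w_h * x + u_h * (r * h_prev))  # candidate state
    return (1 - z) * h_prev + z * h_cand              # blend old and new

W = (0.5, 0.5, 0.5, 0.5, 1.0, 1.0)
h = 0.0
for x in [1.0, -1.0, 0.5]:  # run a short input sequence
    h = gru_step(x, h, W)
print(round(h, 4))
```

The LSTM adds a separate cell state and an output gate on top of this same gating pattern.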
Paper Title | Notes | Desc |
---|---|---|
FFORMA: Feature-based Forecast Model Averaging | - | 2nd-place solution in M4, built on 42 time-series features |
A hybrid method of exponential smoothing and recurrent neural networks for time series forecasting | - | 1st-place solution in M4, combining an RNN with exponential smoothing (ETS) |
M5 accuracy competition: Results, findings, and conclusions | - | M5 Summary |
Monash Time Series Forecasting Repository | - | baseline |
- Time Series Ebook [Link]
- Naïve
- Seasonal Naïve
- Simple Exponential Smoothing
- ARIMA
- Moving Averages
- Prophet
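The baselines above fit in a few lines each: naive repeats the last observation, seasonal naive repeats the value from one season ago, and simple exponential smoothing (SES) recursively averages the history. The horizon, season, and alpha below are arbitrary:

```python
# Three classic forecasting baselines.

def naive(history, horizon):
    """Repeat the last observed value."""
    return [history[-1]] * horizon

def seasonal_naive(history, horizon, season):
    """Repeat the value from one full season earlier."""
    return [history[-season + (h % season)] for h in range(horizon)]

def ses(history, horizon, alpha=0.3):
    """Simple exponential smoothing: forecast the final smoothed level."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return [level] * horizon

y = [10, 20, 30, 12, 22, 32]           # seasonal pattern with period 3
print(naive(y, 3))                     # -> [32, 32, 32]
print(seasonal_naive(y, 3, season=3))  # -> [12, 22, 32]
```

These are the standard yardsticks in M-competition papers: a model has to beat seasonal naive before fancier machinery is worth the cost.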
- Online book: Causal Inference for The Brave and True
- CausalML: Python package for causal inference from Uber
- KDD 2021 causal ML tutorial
- Elo [wiki]
- Glicko [Paper]
- Glicko2 [Code]
- Trueskill: A Bayesian Skill Rating System [Web] [Paper]
- BBT https://github.com/DataWraith/bbt
- ELO-MMR https://github.com/EbTech/Elo-MMR
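The core Elo update behind several of the systems above takes only a few lines: an expected score from a logistic curve on the rating gap, then a K-factor step toward the actual result. K = 32 is a common but arbitrary choice:

```python
# Basic Elo rating update.

def elo_expected(r_a, r_b):
    """Expected score for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32):
    """score_a is 1 for an A win, 0.5 for a draw, 0 for a loss."""
    e_a = elo_expected(r_a, r_b)
    return (r_a + k * (score_a - e_a),
            r_b + k * ((1 - score_a) - (1 - e_a)))

# Equal players, A wins: A gains exactly what B loses.
ra, rb = elo_update(1500, 1500, 1.0)
print(round(ra, 1), round(rb, 1))  # -> 1516.0 1484.0
```

Glicko, TrueSkill, and Elo-MMR extend this by also tracking uncertainty about each rating instead of a single point estimate.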
- SHAP: A Unified Approach to Interpreting Model Predictions [Paper]
- LIME: “Why Should I Trust You?” Explaining the Predictions of Any Classifier [Paper]
- We need to talk about standard splits [Paper]
- Adversarial Validation: detecting train/test distribution mismatch [Part1] [Part2] [Video] [Code]
- P Value, effect size and power analysis [Notes]
- MLU-Explain: visual, interactive explanations of core ML concepts https://mlu-explain.github.io/
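Adversarial validation, listed above, works in miniature like this: label training rows 0 and test rows 1, fit any classifier to tell them apart, and check the AUC. AUC near 0.5 means train and test look alike; AUC near 1.0 flags a shift. The "classifier" below is just the raw feature (monotone scores suffice for AUC); real uses fit a full model such as a GBM over all features, and the toy data is made up:

```python
# Adversarial validation sketch: can a classifier separate train from test?

def auc(scores, labels):
    """Rank-based AUC: probability a positive outranks a negative."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

train_x = [1.0, 1.2, 0.9, 1.1]  # feature drawn from one distribution
test_x = [2.0, 2.2, 1.9, 2.1]   # clearly shifted distribution
features = train_x + test_x
labels = [0] * len(train_x) + [1] * len(test_x)

shift_auc = auc(features, labels)
print(shift_auc)  # -> 1.0 : strong train/test mismatch
```

When the AUC is high, the most important features of the adversarial classifier point at exactly which columns drift between train and test.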