Metrics
- Recall (Sensitivity, True Positive Rate): measure of the probability that the estimate is 1 given all samples whose true class label is 1, i.e., how many of the +ve samples were identified as being +ve (predicted & true +ve / all true +ve).
- Specificity (True Negative Rate): measure of the probability that the estimate is 0 given all samples whose true class label is 0, i.e., how many of the -ve samples were identified as being -ve (predicted & true -ve / all true -ve).
- False Positive Rate = 1 - Specificity = (predicted +ve & true -ve / all true -ve).
- Precision: measure of the probability that a sample is truly positive given that the classifier has said it is positive, i.e., how many of the samples predicted as +ve are actually +ve (predicted & true +ve / all predicted +ve).
- ROC curve: X-axis = False Positive Rate (= 1 - Specificity), Y-axis = True Positive Rate.
- Precision-Recall curve: X-axis = Precision, Y-axis = Recall.
  - Use when +ve samples are very few compared to -ve samples (highly imbalanced classes); see the sketch below.
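A minimal sketch of these metrics, assuming scikit-learn and NumPy are available; the labels and scores below are made-up illustrative data.

```python
import numpy as np
from sklearn.metrics import (confusion_matrix, precision_score, recall_score,
                             roc_curve, precision_recall_curve)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # true class labels
y_score = np.array([0.9, 0.4, 0.7, 0.2, 0.1, 0.6, 0.8, 0.3])   # classifier scores
y_pred = (y_score >= 0.5).astype(int)                          # threshold at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

recall      = tp / (tp + fn)   # predicted & true +ve / all true +ve
specificity = tn / (tn + fp)   # predicted & true -ve / all true -ve
fpr_point   = fp / (tn + fp)   # = 1 - specificity
precision   = tp / (tp + fp)   # predicted & true +ve / all predicted +ve

# Cross-check the hand-computed values against scikit-learn.
assert np.isclose(recall, recall_score(y_true, y_pred))
assert np.isclose(precision, precision_score(y_true, y_pred))

# Full curves over all score thresholds: ROC (FPR vs. TPR) and Precision-Recall.
fpr, tpr, _ = roc_curve(y_true, y_score)
prec, rec, _ = precision_recall_curve(y_true, y_score)
```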
Linear Methods
- Linear Regression
- Logistic Regression
- Stochastic Gradient Descent
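A minimal scikit-learn sketch of these linear methods (assumed library; the toy data is illustrative only):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression, SGDClassifier

rng = np.random.RandomState(0)
X = rng.randn(100, 3)
y_cont = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.randn(100)  # continuous target
y_cls = (y_cont > 0).astype(int)                                # binary target

LinearRegression().fit(X, y_cont)    # ordinary least squares
LogisticRegression().fit(X, y_cls)   # linear classifier fit by maximum likelihood
SGDClassifier().fit(X, y_cls)        # linear classifier fit by stochastic gradient descent
```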
Shrinkage methods
- Ridge Regression
- LASSO
- LARS
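A minimal scikit-learn sketch of these shrinkage methods (assumed library; illustrative data):

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso, Lars

rng = np.random.RandomState(0)
X = rng.randn(50, 10)
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.randn(50)

ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty shrinks coefficients toward zero
lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty sets some coefficients exactly to zero
lars  = Lars().fit(X, y)             # least-angle regression path

print(int(np.sum(lasso.coef_ == 0)), "coefficients zeroed by the LASSO")
```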
Derived Input Methods
- Principal Components Regression
- Partial Least Squares Regression
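A minimal scikit-learn sketch of these derived-input methods (assumed library; illustrative data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.cross_decomposition import PLSRegression

rng = np.random.RandomState(0)
X = rng.randn(60, 8)
y = X[:, 0] + X[:, 1] + 0.1 * rng.randn(60)

# Principal components regression: regress y on the leading principal components of X.
pcr = make_pipeline(PCA(n_components=3), LinearRegression()).fit(X, y)

# Partial least squares: components chosen to maximize covariance with y.
pls = PLSRegression(n_components=3).fit(X, y)
```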
Non-linear methods
- MARS
- Polynomial Regression
- LOESS
- Splines
- Generalized Additive Models (GAMs)
- Isotonic Regression
- Bayes error rate
- k-Nearest-Neighbors
- Linear Discriminant Analysis
- QDA
- Support Vector Machines
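A minimal scikit-learn sketch of a few of the methods above (assumed library; illustrative data). MARS, LOESS, splines, and GAMs are not in core scikit-learn and are omitted here.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.isotonic import IsotonicRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.svm import SVC

rng = np.random.RandomState(0)
x = np.sort(rng.uniform(-3, 3, 100))
y_reg = np.sin(x) + 0.1 * rng.randn(100)

# Polynomial regression: a linear model on a polynomial basis expansion of x.
poly = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
poly.fit(x.reshape(-1, 1), y_reg)

# Isotonic regression: best monotone (non-decreasing) fit of y on x.
iso = IsotonicRegression().fit(x, x + 0.2 * rng.randn(100))

# k-NN, LDA, QDA and an RBF-kernel SVM on 2-D toy classification data.
X = rng.randn(100, 2)
y_cls = (X[:, 0] + X[:, 1] > 0).astype(int)
for clf in (KNeighborsClassifier(n_neighbors=5), LinearDiscriminantAnalysis(),
            QuadraticDiscriminantAnalysis(), SVC(kernel="rbf")):
    clf.fit(X, y_cls)
```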
Tree-based Methods
Model Selection / Assessment / Resampling
- Bias-Variance trade-off
- BIC
- MDL
- Vapnik-Chervonenkis Dimension
- Cross-validation
- Bootstrap
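A minimal scikit-learn sketch of cross-validation and the bootstrap (assumed library; illustrative data):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.utils import resample

rng = np.random.RandomState(0)
X = rng.randn(80, 5)
y = X[:, 0] - X[:, 1] + 0.1 * rng.randn(80)

# 5-fold cross-validation: out-of-sample R^2 estimates for a ridge model.
cv_scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=5)

# Bootstrap: refit on samples drawn with replacement to gauge coefficient variability.
boot_coefs = []
for _ in range(200):
    Xb, yb = resample(X, y)
    boot_coefs.append(Ridge(alpha=1.0).fit(Xb, yb).coef_)
coef_se = np.std(boot_coefs, axis=0)   # bootstrap standard errors of the coefficients
```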
Clustering
- K-Means
- Gaussian Mixture Models
- Hierarchical Clustering
- Affinity Propagation
- Mean Shift
- Spectral Clustering
- DBSCAN
- Birch
- Power Iteration Clustering
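A minimal scikit-learn sketch of a few of the clustering algorithms above (assumed library; illustrative data). Power Iteration Clustering is implemented in Spark MLlib rather than scikit-learn.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN, Birch
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + [5, 5]])   # two well-separated blobs

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
gmm_labels    = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
hier_labels   = AgglomerativeClustering(n_clusters=2).fit_predict(X)
dbscan_labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(X)
birch_labels  = Birch(n_clusters=2).fit_predict(X)
```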
Dimensionality Reduction
- PCA
- SVD
- ICA
- Factor Analysis
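A minimal scikit-learn sketch of these dimensionality-reduction methods (assumed library; illustrative data):

```python
import numpy as np
from sklearn.decomposition import PCA, TruncatedSVD, FastICA, FactorAnalysis

rng = np.random.RandomState(0)
X = rng.randn(100, 10)

X_pca = PCA(n_components=3).fit_transform(X)                      # directions of maximum variance
X_svd = TruncatedSVD(n_components=3).fit_transform(X)             # truncated SVD without centering
X_ica = FastICA(n_components=3, random_state=0).fit_transform(X)  # statistically independent components
X_fa  = FactorAnalysis(n_components=3).fit_transform(X)           # latent linear factors plus noise
```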
Density Estimation
Frequent Pattern Mining
Recommender Systems
NLP