In this section, we will examine classification algorithms in machine learning.
- Logistic Regression
📌 The aim is to establish a linear model for the classification problem that describes the relationship between the dependent and independent variables; the linear combination is passed through the sigmoid function to yield class probabilities.
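As a minimal sketch (assuming scikit-learn is available), logistic regression can be fitted to a synthetic binary dataset like this:

```python
# Minimal logistic regression example with scikit-learn on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem with 10 features.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit the linear model; predictions go through the sigmoid internally.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))  # fraction correct on held-out data
```

`model.coef_` holds the learned linear coefficients, one per feature.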
- Naive Bayes
📌 It is a probability-based modeling technique. The aim is to calculate the probability that a particular sample belongs to each class using conditional probability (Bayes' theorem), under the "naive" assumption that the features are independent of each other.
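A short sketch with scikit-learn's Gaussian Naive Bayes (assumed available) shows how the per-class probabilities are obtained:

```python
# Gaussian Naive Bayes: per-class probabilities via conditional probability.
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
model = GaussianNB().fit(X, y)

# Probability that the first sample belongs to each of the three classes.
proba = model.predict_proba(X[:1])
```

The probabilities for a sample sum to 1 across the classes.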
- k-Nearest Neighbors (KNN)
📌 Predictions are made based on the similarity between observations: a new sample is assigned the most common class among its k nearest neighbors.
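A minimal KNN sketch with scikit-learn (assumed available), classifying by majority vote among the 5 closest training samples:

```python
# k-Nearest Neighbors: classify by majority vote of the k closest observations.
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# Predict classes for the first five samples.
pred = knn.predict(X[:5])
```

Because distances drive the vote, features on very different scales should usually be standardized first.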
- Support Vector Classification (SVC) ===> Linear & RBF
📌 The goal is to find the hyperplane that separates the two classes with the maximum margin; the RBF kernel additionally allows non-linear decision boundaries.
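The linear and RBF variants can be compared side by side in a brief sketch (scikit-learn assumed available):

```python
# Compare linear-kernel and RBF-kernel SVC with 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=1)

linear_scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
rbf_scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
```

Each scores array holds one accuracy per fold; which kernel wins depends on how linearly separable the data is.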
- Artificial Neural Network (ANN)
📌 A powerful machine learning algorithm, inspired by the way the human brain processes information, that can be used for both classification and regression problems.
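A small multi-layer perceptron sketch with scikit-learn (assumed available); the hidden-layer sizes here are arbitrary illustration values:

```python
# Small feed-forward neural network (MLP) for classification.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=400, random_state=0)
X = StandardScaler().fit_transform(X)  # ANNs are sensitive to feature scale

# Two hidden layers (16 and 8 units) chosen purely for illustration.
mlp = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500,
                    random_state=0).fit(X, y)
acc = mlp.score(X, y)  # training accuracy
```

In practice the architecture and learning rate are tuned, and performance is measured on held-out data rather than the training set.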
- Classification and Regression Trees (CART)
📌 The aim is to transform the complex structures in the data set into simple decision structures.
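The "simple decision structures" can be seen directly by printing the learned rules, as in this scikit-learn sketch (assumed available):

```python
# A shallow decision tree, printed as human-readable if/else rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(
    iris.data, iris.target
)

# export_text renders the tree's splits as plain-text rules.
rules = export_text(tree, feature_names=list(iris.feature_names))
```

Limiting `max_depth` keeps the tree simple and interpretable at the cost of some accuracy.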
- Random Forests(RF)
📌 It is based on the evaluation of the predictions produced by multiple decision trees.
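A brief random forest sketch (scikit-learn assumed available), showing how the ensemble also yields aggregated feature importances:

```python
# Random forest: aggregate the votes of many decision trees.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Importance of each feature, averaged over all trees in the forest.
importances = rf.feature_importances_
```

Each tree is trained on a bootstrap sample with a random subset of features per split, which decorrelates the trees and reduces variance.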
- Gradient Boosting Machines(GBM)
📌 It is a generalized version of AdaBoost that can be easily adapted to classification and regression problems. A series of weak models is fitted sequentially, each one to the residuals (errors) of the models before it, and their sum forms a single predictive model.
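A minimal GBM sketch with scikit-learn (assumed available); `n_estimators` is the number of sequential trees fitted to the residuals:

```python
# Gradient boosting: shallow trees fitted sequentially to residuals.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, random_state=5)

gbm = GradientBoostingClassifier(
    n_estimators=100,   # number of boosting stages (trees)
    learning_rate=0.1,  # shrinks each tree's contribution
    max_depth=3,        # keep individual trees weak
).fit(X, y)
acc = gbm.score(X, y)
```

`learning_rate` and `n_estimators` trade off against each other: a smaller rate typically needs more trees.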
- Extreme Gradient Boosting(XGBoost)
📌 XGBoost is optimized to increase the speed and prediction performance of GBM. It is scalable and can be integrated into different platforms.
- LightGBM
📌 LightGBM is another type of GBM, developed to improve training-time performance over XGBoost by using histogram-based, leaf-wise tree growth.
- Category Boosting(CatBoost)
📌 It is another fast and successful type of GBM that can handle categorical variables automatically, without manual one-hot or label encoding.