Machine Learning Ensemble Methods

This repository contains Python implementations of a Decision Tree classifier and two ensemble methods built on top of it: Random Forest and AdaBoost. All three models are applied to the Titanic dataset for a binary classification task (predicting passenger survival).

Decision Tree Classifier

  • Implementation of a Decision Tree Classifier with configurable maximum depth and splitting criterion (Gini impurity); see the sketch after this list.
  • Accuracy achieved on the Titanic dataset: 86.49%
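
Whether the repository implements the tree from scratch or wraps a library, an equivalent configuration in scikit-learn looks roughly like the following (a minimal sketch; max_depth=5 is an illustrative value, not necessarily the setting that produced the accuracy above). Training and evaluation are shown in the Usage section below.

    from sklearn.tree import DecisionTreeClassifier

    # Depth-limited tree that splits on Gini impurity, mirroring the
    # configurable options described above. max_depth=5 is illustrative.
    tree = DecisionTreeClassifier(criterion="gini", max_depth=5, random_state=0)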

Random Forest Classifier

  • Implementation of a Random Forest Classifier using Decision Trees as base classifiers.
  • Configurable maximum depth, splitting criterion, and number of trees in the forest; see the sketch after this list.
  • Accuracy achieved on the Titanic dataset: 89.49%
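
A comparable scikit-learn configuration (again a sketch with illustrative hyperparameter values, not the repository's own class):

    from sklearn.ensemble import RandomForestClassifier

    # Bagged ensemble of depth-limited Gini trees; n_estimators controls the
    # number of trees in the forest. All values here are illustrative.
    forest = RandomForestClassifier(n_estimators=100, criterion="gini",
                                    max_depth=5, random_state=0)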

AdaBoost Classifier

  • Implementation of an AdaBoost Classifier with Decision Trees as weak learners.
  • The number of weak learners, learning rate, maximum depth, and splitting criterion are configurable; see the sketch after this list.
  • Accuracy achieved on the Titanic dataset: 75.68%
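
The corresponding scikit-learn sketch boosts shallow decision trees; the depth, learner count, and learning rate below are illustrative assumptions:

    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier

    # AdaBoost over depth-1 Gini trees (decision stumps); n_estimators and
    # learning_rate mirror the configurable options listed above.
    # Note: scikit-learn versions before 1.2 use the keyword base_estimator.
    boost = AdaBoostClassifier(
        estimator=DecisionTreeClassifier(criterion="gini", max_depth=1),
        n_estimators=50,
        learning_rate=1.0,
        random_state=0,
    )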

Dataset Preparation

The Titanic dataset is used for these experiments. It undergoes preprocessing steps, including one-hot encoding of categorical features.
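
As a minimal sketch of this kind of preprocessing (the feature subset and missing-value handling below are assumptions, not necessarily what the repository's scripts do), the data can be loaded through Seaborn and one-hot encoded with Pandas:

    import pandas as pd
    import seaborn as sns

    # Load the Titanic dataset bundled with Seaborn; the repository's scripts
    # may read a CSV instead. Keep a small, illustrative feature subset.
    titanic = sns.load_dataset("titanic")
    features = titanic[["pclass", "sex", "age", "sibsp", "parch", "fare", "embarked"]]
    labels = titanic["survived"]

    # Fill missing values, then one-hot encode the categorical columns.
    features = features.fillna({"age": features["age"].median(),
                                "embarked": features["embarked"].mode()[0]})
    features = pd.get_dummies(features, columns=["sex", "embarked"])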

Usage

Use the provided Python scripts to train and evaluate the models. Make sure the required libraries (NumPy, Pandas, Seaborn, scikit-learn) are installed:

pip install numpy pandas seaborn scikit-learn
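
An end-to-end run could then look like the sketch below, which reuses the preprocessed features/labels and the three model objects from the earlier sketches and holds out a test split with scikit-learn (the repository's scripts may organize this differently):

    from sklearn.model_selection import train_test_split

    # Hold out 20% of the data for testing, then fit and score each model.
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, random_state=0)

    for name, model in [("Decision Tree", tree),
                        ("Random Forest", forest),
                        ("AdaBoost", boost)]:
        model.fit(X_train, y_train)
        print(f"{name} test accuracy: {model.score(X_test, y_test):.4f}")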
