Telecom Churn Classification Model

Introduction

The aim of this project is to use customer history and personal details to build a classification model that accurately predicts which customers are likely to leave the company.

Libraries Used

  1. Pandas: for data manipulation and analysis
  2. NumPy: for numerical computing and array manipulation
  3. Plotly: for interactive data visualization
  4. Matplotlib: for static data visualization
  5. Seaborn: for statistical data visualization
  6. Scikit-learn (sklearn): for machine learning modeling and evaluation
  7. opendatasets: for downloading publicly available datasets for training and testing the model
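
Taken together, a typical import block for this stack looks like the minimal sketch below; the repository's notebook may import additional modules:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
import opendatasets as od
```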

Data Collection and Exploration

I imported the telecom churn dataset using the opendatasets module. After loading the data, I performed an initial exploration: checking the number of rows and columns, the data types of each column, summary statistics, and the presence of missing values. This gave me a clearer picture of the data and informed the next steps of the project: data cleaning, feature engineering, and modeling.
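
A minimal sketch of this step is shown below. The Kaggle URL and CSV path are placeholders, not the exact ones used in the repository:

```python
import opendatasets as od
import pandas as pd

# Download the dataset from a public source; the URL is a placeholder.
od.download("https://www.kaggle.com/datasets/<owner>/<telecom-churn-dataset>")

# Load the CSV; the path is hypothetical and depends on the downloaded folder.
df = pd.read_csv("telecom-churn-dataset/churn.csv")

# Initial exploration: dimensions, dtypes, summary statistics, missing values.
print(df.shape)
df.info()
print(df.describe())
print(df.isnull().sum())
```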

Exploratory Data Analysis

Data visualization was used to explore the relationships between various factors and customer churn. Comparing the behavior of churned and retained customers revealed the key factors affecting the churn rate. These insights guided the choice of features for the predictive model and made the results easier to communicate to stakeholders.
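
As an illustration, this kind of churn-versus-non-churn comparison can be produced with Plotly or seaborn. The column names "Churn" and "Customer service calls" below are assumptions about the dataset, not confirmed by this README:

```python
import plotly.express as px
import seaborn as sns
import matplotlib.pyplot as plt

# Interactive view: distribution of a feature split by churn status.
fig = px.histogram(df, x="Customer service calls", color="Churn", barmode="group")
fig.show()

# Static alternative with seaborn.
sns.countplot(data=df, x="Customer service calls", hue="Churn")
plt.title("Customer service calls by churn status")
plt.show()
```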

Feature Selection

Feature selection was performed based on the behavioral analysis and on each feature's correlation with the target variable (churn). Irrelevant columns were dropped and the most relevant ones were kept for building the predictive model. Selecting features by their relationship with the target made the model more effective and improved its results.
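
A sketch of correlation-based selection, assuming a 0/1-encodable "Churn" column; the columns dropped below are hypothetical examples, since the README does not list the ones actually removed:

```python
# Encode the target as 0/1 if it is stored as a boolean (an assumption
# about the raw data).
df["Churn"] = df["Churn"].astype(int)

# Rank features by the strength of their correlation with churn.
corr_with_target = df.corr(numeric_only=True)["Churn"].drop("Churn")
print(corr_with_target.abs().sort_values(ascending=False))

# Drop columns judged irrelevant (hypothetical names).
df = df.drop(columns=["State", "Area code", "Phone number"], errors="ignore")
```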

Model Training and Evaluation

The dataset was split into three parts: training, validation, and test data. Several classification algorithms, including logistic regression, random forest, decision tree, and gradient boosting, were trained and then compared on the validation set using their accuracy scores and classification reports. The best-performing model was evaluated against the actual churn values, and a confusion matrix was plotted to visualize its predictions. This process made it possible to compare the algorithms and determine which one fit the data best.
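
A minimal sketch of this workflow, assuming a fully numeric feature matrix and a binary "Churn" column; the split ratios and hyperparameters are assumptions, not taken from the notebook:

```python
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix, ConfusionMatrixDisplay)

X = df.drop(columns=["Churn"])
y = df["Churn"]

# Split into train / validation / test (roughly 60/20/20 here).
X_train, X_temp, y_train, y_temp = train_test_split(
    X, y, test_size=0.4, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_temp, y_temp, test_size=0.5, random_state=42)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(random_state=42),
    "Gradient Boosting": GradientBoostingClassifier(random_state=42),
}

# Train each model and compare accuracy on the validation set.
for name, model in models.items():
    model.fit(X_train, y_train)
    preds = model.predict(X_val)
    print(f"{name}: accuracy = {accuracy_score(y_val, preds):.4f}")
    print(classification_report(y_val, preds))

# Evaluate the chosen model on the held-out test set
# and plot a confusion matrix.
best = models["Decision Tree"]
test_preds = best.predict(X_test)
ConfusionMatrixDisplay(confusion_matrix(y_test, test_preds)).plot()
plt.show()
```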

The decision tree was the best-performing model, with an accuracy of 0.9999047928848549.

Connect with me

Gmail · LinkedIn · Instagram · HackerRank · GitHub
