GOAL
Develop a model to classify resumes into predefined categories. Note: Text classification is an example of supervised machine learning since we train the model with labeled data.
DATASET
The dataset used for this project is available in CSV format with 963 rows and 2 columns. You can access the dataset here.
INTEL oneDAL LIBRARY
To enhance the performance and efficiency of our model, we utilized Intel oneDAL (the oneAPI Data Analytics Library). oneDAL is a library developed by Intel for high-performance data analytics and machine learning. It provides a range of optimized algorithms and tools that significantly accelerate data processing.
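From Python, oneDAL is most commonly reached through the scikit-learn-intelex extension, whose `patch_sklearn()` call redirects supported scikit-learn estimators to oneDAL's optimized kernels. A minimal sketch, guarded so it also runs where the extension is not installed:

```python
# Enable oneDAL acceleration for scikit-learn if available.
# patch_sklearn() must run before scikit-learn estimators are imported.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()
except ImportError:
    pass  # scikit-learn-intelex not installed; fall back to stock scikit-learn

from sklearn.svm import LinearSVC  # now backed by oneDAL where supported

clf = LinearSVC()
print(type(clf).__name__)
```

The estimator API is unchanged after patching, so the rest of the pipeline code stays the same.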
STEPS TAKEN
All the required libraries and packages, including Intel oneDAL, were imported, and then the required dataset for the project was loaded.
EDA was carried out to visualize various parameters and the most correlated unigrams and bigrams.
The data was then cleaned, a step known as text preprocessing. This was done using Python's re module and the NLTK library, which is widely used for NLP tasks.
Model building was then implemented using different algorithms. We employed nine different models to train and evaluate the results, leveraging the power of the Intel oneDAL library for efficient computation.
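The cleaning step above can be sketched as follows. The stopword set here is a small inline stand-in for NLTK's English stopword list, so the example stays self-contained; the project's exact cleaning rules are assumed, not shown.

```python
import re

# Small stand-in for NLTK's English stopword list.
STOPWORDS = {"a", "an", "the", "and", "or", "in", "of", "to", "is", "with"}

def clean_resume(text):
    """Lowercase, strip punctuation and digits, and drop stopwords."""
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)  # keep letters and whitespace only
    tokens = [t for t in text.split() if t not in STOPWORDS]
    return " ".join(tokens)

print(clean_resume("Skilled in Python, SQL & the cloud (5 yrs)."))
```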
TEXT PREPROCESSING
The text needed to be transformed into vectors so that the algorithms could make predictions. Here, the Term Frequency-Inverse Document Frequency (TF-IDF) weighting was used to evaluate how important a word is to a document within a collection of documents.
After removing punctuation and lowercasing the words, the importance of a word was determined in terms of its frequency, with the assistance of Intel oneDAL.
TF-IDF measures how distinctive a word is to a document.
TF (term frequency) is the number of times a term appears in a particular document.
IDF (inverse document frequency) measures how common or rare a term is across the entire corpus of documents.
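A minimal vectorization sketch using scikit-learn's TfidfVectorizer; the project's exact vectorizer settings are not shown, so defaults are assumed, and the three toy documents stand in for resume texts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Three toy "resumes"; in the project, each row of the dataset is a document.
docs = [
    "python developer with machine learning experience",
    "java developer with spring experience",
    "hr manager with recruiting experience",
]

vectorizer = TfidfVectorizer()      # TF x IDF weighting, lowercased tokens
X = vectorizer.fit_transform(docs)  # sparse (n_documents, n_terms) matrix

# A term shared by every document ("experience") gets a lower IDF, and thus
# a lower weight, than a term unique to one document ("python").
print(X.shape)
```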
MODELS USED
The classification models used are:
- K Nearest Neighbor
- Dummy Classifier
- Linear Support Vector Classifier
- Stochastic Gradient Descent
- Random Forest
- Decision Tree
- Multinomial Naive Bayes Classifier
- Gradient Boost
- AdaBoost
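The nine models can be trained and compared in a single loop. A sketch using a synthetic non-negative feature matrix as a stand-in for the TF-IDF features (Multinomial Naive Bayes requires non-negative inputs, which TF-IDF satisfies); hyperparameters are defaults, not the project's tuned values:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.dummy import DummyClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier, AdaBoostClassifier)
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import MultinomialNB

# Synthetic stand-in for the TF-IDF feature matrix and category labels.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X = np.abs(X)  # keep features non-negative, as TF-IDF weights are
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "K Nearest Neighbor": KNeighborsClassifier(),
    "Dummy Classifier": DummyClassifier(strategy="most_frequent"),
    "Linear SVC": LinearSVC(),
    "SGD": SGDClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Multinomial NB": MultinomialNB(),
    "Gradient Boost": GradientBoostingClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}

# Fit each model and record its accuracy on the held-out test split.
scores = {name: model.fit(X_train, y_train).score(X_test, y_test)
          for name, model in models.items()}
for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {acc:.4f}")
```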
LIBRARIES REQUIRED
- Pandas: for data analysis
- Numpy: for data analysis
- Matplotlib: for data visualization
- Seaborn: for data visualization
- Scikit-learn: for data analysis
- Intel oneDAL: for enhanced performance and efficiency
VISUALIZATION
By examining the confusion matrices, it can be deduced that the SGD model is the best-performing model for this project.
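A sketch of how such a confusion matrix is produced; the category names and predictions here are hypothetical, not the dataset's actual labels:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical true vs. predicted resume categories.
y_true = ["Data Science", "HR", "Data Science", "Design", "HR", "Data Science"]
y_pred = ["Data Science", "HR", "Design",       "Design", "HR", "Data Science"]

labels = ["Data Science", "Design", "HR"]
cm = confusion_matrix(y_true, y_pred, labels=labels)

# Rows are true labels, columns are predictions; off-diagonal entries
# are misclassifications.
print(cm)
```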
ACCURACIES
Model | Architecture | Accuracy in % (on testing data) |
---|---|---|
Model 1 | K Nearest Neighbor Model | 97.92 |
Model 2 | Dummy Classifier Model | 9.84 |
Model 3 | Linear Support Vector Model | 100.00 |
Model 4 | Stochastic Gradient Descent Model | 100.00 |
Model 5 | Random Forest Classifier Model | 100.00 |
Model 6 | Decision Tree Classifier Model | 100.00 |
Model 7 | Multinomial Naive Bayes Model | 96.37 |
Model 8 | Gradient Boost Classifier Model | 100.00 |
Model 9 | AdaBoost Model | 30.05 |
CONCLUSION
The Stochastic Gradient Descent Classifier proved to be the most successful model for classifying roles from resumes, with the Intel oneDAL library providing efficient computation and optimization throughout.