RST-ARM-GLM_-Research

Work on combining a Logit model with an information granulation method for better interpretability


Machine learning interpretability is an important concept in data analytics: how can we trust the decisions that models make on our data? This research uses Rough Set theory, an information granulation and feature selection method, to sift through the attributes of a dataset, and then models the selected attributes with the apriori association rule mining method to derive objective rules from them. The main intention is to let the user see which attributes the model relies on for its predictions. The research starts by loading the datasets and preprocessing them to ensure the data is clean and free from errors or omissions. Preprocessing was the main hurdle in this research, especially for the Kirogo dataset with its 30-minute interval weather readings. The datasets are structured as follows:

- Kirogo attributes: Date/Time, RH (%), Temp (degrees Celsius), Rainfall
- Kariki attributes: Date, Min Temp, Avg Temp, Max Temp, Min Humidity, Max Humidity, Avg Humidity, Wind Speed, Precipitation Amount, Rain Factor
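
A minimal preprocessing sketch for the 30-minute Kirogo readings is shown below. It assumes a hypothetical file name (`kirogo.csv`) and the column names listed above; the actual files and layout in this repository may differ.

```python
import pandas as pd

# Load the 30-minute interval readings and parse the timestamp column
# (file name and column names are assumptions based on the description above).
kirogo = pd.read_csv("kirogo.csv", parse_dates=["Date/Time"])

# Drop incomplete readings and aggregate to daily values so the weather
# attributes are on a comparable time scale to the Kariki dataset.
kirogo = kirogo.dropna()
daily = (
    kirogo.set_index("Date/Time")
          .resample("D")
          .agg({"RH(%)": "mean", "Temp": "mean", "Rainfall": "sum"})
)
print(daily.head())
```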

The project starts with preprocessing, which prepares the data for modelling with the chosen models. This stage also includes an exploratory analysis of the data, so that the statistical distribution of each attribute can be examined before any predictions are made.
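
A short exploratory-analysis sketch follows; it assumes the `daily` DataFrame from the preprocessing sketch above, and the plotting calls are optional.

```python
import matplotlib.pyplot as plt

print(daily.describe())               # summary statistics per attribute
print(daily.corr(numeric_only=True))  # pairwise correlations between attributes

daily.hist(figsize=(8, 6))            # distribution of each attribute
plt.tight_layout()
plt.show()
```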

This is followed by selecting the features needed to model the data via the Rough Set feature selection method, using a greedy heuristic. Feature selection is key to machine learning interpretability because only the important features are retained, which makes the resulting model easier to interpret.
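
The sketch below shows one way a greedy Rough Set reduct search can work, using the dependency degree (size of the positive region) as the selection criterion. It assumes a discretised DataFrame with categorical condition columns and a decision column; the repository's own implementation may differ.

```python
import pandas as pd

def dependency(df: pd.DataFrame, attrs: list, decision: str) -> float:
    """Fraction of rows in the positive region of `decision` w.r.t. `attrs`."""
    if not attrs:
        return 0.0
    # A block of indiscernible rows is consistent if it maps to a single decision value.
    consistent = df.groupby(attrs, observed=True)[decision].nunique() == 1
    block_sizes = df.groupby(attrs, observed=True).size()
    return block_sizes[consistent].sum() / len(df)

def greedy_reduct(df: pd.DataFrame, decision: str) -> list:
    """Greedily add the attribute with the largest dependency gain until no gain remains."""
    candidates = [c for c in df.columns if c != decision]
    reduct, best = [], 0.0
    while candidates:
        gains = {a: dependency(df, reduct + [a], decision) for a in candidates}
        attr, score = max(gains.items(), key=lambda kv: kv[1])
        if score <= best:           # no further improvement in dependency
            break
        reduct.append(attr)
        candidates.remove(attr)
        best = score
    return reduct

# Example use (assumes a discretised DataFrame and a decision column named "rain"):
# selected = greedy_reduct(discretised, "rain")
```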

Next, I run standard machine learning models on the datasets to get insights into the data. This forms the baseline against which the proposed model's performance is compared. The main purpose of the research, however, is to show the link between interpretability via decision rules and the weights used in the Logit model, a generalized linear model.
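
A baseline-comparison sketch is given below. It assumes a feature matrix `X` and a binary target `y` (e.g. whether it rained) derived from the preprocessed data; the specific baseline models are illustrative, not necessarily the ones used in the study.

```python
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

baselines = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
}
# 5-fold cross-validated accuracy for each baseline model.
for name, model in baselines.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} ± {scores.std():.3f}")
```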

The feature subset obtained by the Rough Set method is then passed through the association rule mining method to derive decision rules. Rules are selected based on their support and confidence, the metrics used to evaluate how well a rule describes the data it operates on.
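
A rule-mining sketch using mlxtend's apriori implementation is shown below. It assumes `rst_data` is a one-hot encoded boolean DataFrame built from the Rough-Set-selected features, and the thresholds are illustrative rather than the tuned values used in the published work.

```python
from mlxtend.frequent_patterns import apriori, association_rules

# Mine frequent itemsets, then derive rules that meet a confidence threshold.
frequent = apriori(rst_data, min_support=0.2, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)

# Keep the strongest rules by confidence and support.
rules = rules.sort_values(["confidence", "support"], ascending=False)
print(rules[["antecedents", "consequents", "support", "confidence"]].head())
```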

After the decision rules have been derived, a data frame containing the rules is built, and the Logit classification model is applied to this rule data frame. The rule data frame contains binary values indicating whether each observation satisfies the decision rules chosen in the previous step.
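
The sketch below illustrates the idea of turning rules into a binary rule data frame and fitting a Logit model on it. The rule conditions shown are hypothetical examples; the real rules come from the association rule mining step, and the column names assume the `daily` DataFrame from the preprocessing sketch.

```python
import pandas as pd
import statsmodels.api as sm

# Each rule becomes a 0/1 column: 1 when the observation satisfies the rule.
rule_frame = pd.DataFrame({
    "rule_high_humidity": (daily["RH(%)"] > 80).astype(int),
    "rule_warm_and_humid": ((daily["Temp"] > 20) & (daily["RH(%)"] > 70)).astype(int),
})
y = (daily["Rainfall"] > 0).astype(int)   # binary decision: did it rain?

# Fit the Logit model on the rule indicators; the fitted weights are the interpretation.
logit = sm.Logit(y, sm.add_constant(rule_frame)).fit()
print(logit.summary())
```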

The main work in this research is finding the optimal rules, generating binary values from these rules, coercing them into a data frame and running these rules through the Logit model. A multinomial model will also be considered to test the viability of the proposed framework. The framework seeks to show how Rough Set theory can be used as a feature selection method for association rule mining. The research also seeks to show how interaction terms can be detected at the local level with the Rough Set model, without prior knowledge, providing a good interpretability mechanism for the GLM. GLM interpretability comes from interpreting the weights of the features it is working on. However, a GLM assumes that the effect of each feature is the same regardless of the values of the other features (C. Molnar, 2019), which is not true, as there are countless feature interactions in real data.
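
A minimal illustration of the interaction-term point, with hypothetical thresholds and the `daily` DataFrame from the earlier sketches: a decision rule that joins two conditions is equivalent to an interaction term between their indicators, so giving the rule its own weight in the Logit model captures a local interaction that a plain main-effects GLM would miss.

```python
warm = (daily["Temp"] > 20).astype(int)
humid = (daily["RH(%)"] > 70).astype(int)

# The rule "Temp > 20 AND RH > 70" is the product (interaction) of the two indicators.
rule = warm & humid
assert (rule == warm * humid).all()
```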

The Logit model uses the decision rule data frame to model the decision variable of the dataset in question. The decision rules are generated with the association rule mining method, and the main work here is tuning the miner to obtain rules with high support and confidence that can be used for weather prediction.
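
One simple way to tune the rule miner is a small sweep over support and confidence thresholds, as sketched below. The grid values are illustrative, and `rst_data` again denotes the boolean DataFrame built from the Rough-Set-selected features.

```python
from mlxtend.frequent_patterns import apriori, association_rules

for min_sup in (0.1, 0.2, 0.3):
    frequent = apriori(rst_data, min_support=min_sup, use_colnames=True)
    if frequent.empty:
        continue  # nothing frequent at this support threshold
    for min_conf in (0.7, 0.8, 0.9):
        rules = association_rules(frequent, metric="confidence", min_threshold=min_conf)
        print(f"support>={min_sup}, confidence>={min_conf}: {len(rules)} rules")
```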

The published paper for this work is found here: https://www.sciencedirect.com/science/article/abs/pii/S0957417423015944
