discover_feature_relationships
Exploratory code to see if we can learn about feature relationships in a DataFrame using machine learning


Attempt to discover 1D relationships between all columns in a DataFrame using scikit-learn (RandomForests) and standard correlation tests (Pearson, Spearman and Kendall via Pandas).
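
The standard correlation tests referred to above are available directly in pandas. A minimal sketch (the file name is borrowed from the Titanic example below):

import pandas as pd

df = pd.read_csv("titanic_train.csv")  # example data, as in the Titanic example below
numeric = df.select_dtypes("number")   # corr compares numeric columns only
for method in ("pearson", "spearman", "kendall"):
    print(method)
    print(numeric.corr(method=method))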

The goal is to see if we can better understand the data in a DataFrame by learning which features (1 column at a time) predict each other column. This code attempts to learn a predictive relationship between the Cartesian product (all pairs) of all columns.
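A minimal sketch of the all-pairs idea (an illustration of the approach, not the library's exact implementation): fit a RandomForest on each single column and cross-validate it against every other column. This assumes the columns are already numeric.

import itertools
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def score_all_pairs(df):
    rows = []
    # ordered pairs of distinct columns - every feature vs every target
    for feature, target in itertools.permutations(df.columns, 2):
        est = RandomForestRegressor(n_estimators=50)
        score = cross_val_score(est, df[[feature]], df[target], cv=3).mean()
        rows.append({"feature": feature, "target": target, "score": score})
    return pd.DataFrame(rows)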

Rather than just learning which column(s) predict a target column, we might want to know what other relationships exist (e.g. during Exploratory Data Analysis) and whether some predictive features are driven by other, less-predictive features (to help us find new and better features or data sources). We might also sense-check our data by confirming that certain expected relationships exist.

By default every target column is treated as a regression challenge; you can provide a list of columns to treat as classification challenges instead. For regression we cap negative scores at 0 (r^2 can be arbitrarily negative, so capping at 0 makes the results a little easier to interpret).
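Sketched as code (assumed behaviour matching the description above, not the library's internals):

from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def choose_estimator(target, classifier_overrides):
    # every target is a regression challenge unless explicitly overridden
    if target in classifier_overrides:
        return RandomForestClassifier(n_estimators=50)
    return RandomForestRegressor(n_estimators=50)

def cap_regression_score(r2):
    # r^2 can be arbitrarily negative; clamping at 0 means "worse than
    # predicting the mean" simply reads as "no relationship"
    return max(0.0, r2)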

Text-encoded columns are automatically LabelEncoded (a sensible default, but it may not reveal information in your case - you might need to provide your own smarter encoding). This complements the correlation plots in YellowBrick and Pandas Profiling, which do not auto-encode text columns.
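As a sketch, that default encoding amounts to something like the following (again an illustration of the described behaviour, not necessarily the library's code):

from sklearn.preprocessing import LabelEncoder

def encode_text_columns(df):
    df = df.copy()
    for col in df.select_dtypes(include="object").columns:
        # map each distinct string to an arbitrary integer code
        df[col] = LabelEncoder().fit_transform(df[col].astype(str))
    return df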

We might want to use this tool alongside other EDA tools such as YellowBrick and Pandas Profiling, mentioned above.

The project (and the examples) live on GitHub.

Titanic example

Titanic Notebook

  • Embarked (classification) is predicted well by Fare, also by Age
  • Pclass (regression) is predicted by Fare but Fare (regression) is poorly predicted by Pclass
  • Sex (classification) is predicted well by Survived
  • Survived (classification) is predicted well by Sex, Fare, Pclass, SibSpParch
    • Predicting this feature at circa 0.62 is equivalent to "no information", as 0.62 is the rate of the majority class (non-survivors) in Survived - see the baseline sketch after this list
  • SibSpParch is predicted by both SibSp and Parch (SibSpParch is the sum of both - it is an engineered additional feature) - it is also predicted by Fare
  • SibSp and Parch are also predicted by Fare (but less well so than by SibSpParch)
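
The engineered SibSpParch feature and the "no information" baseline from the list above can be reproduced with a short, hypothetical snippet (DummyClassifier is standard scikit-learn; the circa-0.62 figure is the majority-class rate of Survived):

import pandas as pd
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("titanic_train.csv")
df["SibSpParch"] = df["SibSp"] + df["Parch"]  # engineered: sum of the two raw columns

# always predicting the majority class ("did not survive") scores circa 0.62;
# a model only carries information if it beats this baseline
baseline = DummyClassifier(strategy="most_frequent")
print(cross_val_score(baseline, df[["Fare"]], df["Survived"], cv=3).mean())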

[Heatmap of feature-to-target prediction scores for the Titanic example]

This is generated using:

import pandas as pd
import discover

df = pd.read_csv("titanic_train.csv")

# cols: the columns to analyse; classifier_overrides: the targets marked
# (classification) in the list above (names assumed to match the Notebook)
cols = df.columns
classifier_overrides = {"Embarked", "Sex", "Survived"}
df_results = discover.discover(cols, classifier_overrides, df)

df_results.pivot(index='target', columns='feature', values='score').fillna(1) \
    .style.background_gradient(cmap="viridis", low=0.3, high=0.0, axis=1)
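
If a static plot is preferred over the styled DataFrame, the same pivot can be drawn with seaborn (a sketch, assuming the df_results columns shown above):

import matplotlib.pyplot as plt
import seaborn as sns

pivoted = df_results.pivot(index="target", columns="feature", values="score").fillna(1)
sns.heatmap(pivoted, cmap="viridis", annot=True, fmt=".2f")
plt.show()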

Boston example

Boston Notebook

  • NOX predicts AGE and DIS (but not the other way around)
  • target predicts LSTAT, LSTAT weakly predicts target, LSTAT weakly predicts RM
  • DIS predicts AGE, AGE weakly predicts DIS
  • INDUS predicts CRIM and somewhat AGE, B
  • target weakly predicts RM, RM weakly predicts target


Requirements

  • python 3.6+
  • scikit-learn (0.19+)
  • pandas
  • jupyter notebook
  • matplotlib
  • seaborn
conda install scikit-learn pandas jupyter pytest seaborn
conda install -c conda-forge watermark


Install from PyPI

pip install discover_feature_relationships

Install from source

First check out the code from GitHub, then install with python setup.py install, then cd into the examples folder and run the Notebooks.


Tests

  • Run discover.py for a simple test that the code is working
  • Run pytest to run the single unit test (use pytest -s to see print output)

Note to Ian for Development

Environment: . ~/anaconda3/bin/activate discover_feature_relationships


To push to PyPI I need to follow the standard packaging steps - specifically python setup.py sdist bdist_wheel and twine upload dist/*.
