The Network Guided Naive Bayes (NGNB) classifier is a multi-label classifier that uses an undirected graph, built from the training-set labels, to help with prediction. I developed NGNB as a class project for Stanford's CS229 Machine Learning class.
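The label graph at the heart of NGNB can be pictured as a weighted co-occurrence graph over tags: two tags are connected if they appear together on a training example. A minimal sketch of that construction (the exact graph NGNB builds may differ):

```python
from collections import Counter
from itertools import combinations

def build_label_graph(tag_sets):
    """Build an undirected label co-occurrence graph as an edge-weight map.

    Each training example contributes one count to every unordered pair
    of tags that appear together on it.
    """
    edges = Counter()
    for tags in tag_sets:
        for a, b in combinations(sorted(set(tags)), 2):
            edges[(a, b)] += 1
    return edges

# Example: three StackExchange-style posts with overlapping tag sets
posts = [{"python", "numpy"}, {"python", "pandas"}, {"python", "numpy"}]
graph = build_label_graph(posts)
# ("numpy", "python") co-occurs twice, ("pandas", "python") once
```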

Also included in this repository are implementations of a Binary Relevance Multinomial Naive Bayes classifier and a Parametric Mixture Model (PMM1) Naive Bayes classifier. These models served as baselines for evaluating the effectiveness and efficiency of NGNB.
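The binary relevance baseline reduces multi-label classification to one independent binary Multinomial Naive Bayes model per tag. A self-contained sketch of that idea (not the repository's implementation) with Laplace smoothing:

```python
import math
from collections import defaultdict

def fit_binary_mnb(docs, labels, alpha=1.0):
    """Fit one binary Multinomial NB with Laplace smoothing.

    docs: list of word lists; labels: list of 0/1 for a single tag.
    Returns per-class (log prior, log word probabilities).
    """
    vocab = sorted({w for d in docs for w in d})
    params = {}
    for c in (0, 1):
        class_docs = [d for d, y in zip(docs, labels) if y == c]
        prior = math.log((len(class_docs) + alpha) / (len(docs) + 2 * alpha))
        counts = defaultdict(float)
        for d in class_docs:
            for w in d:
                counts[w] += 1
        total = sum(counts.values()) + alpha * len(vocab)
        logp = {w: math.log((counts[w] + alpha) / total) for w in vocab}
        params[c] = (prior, logp)
    return params

def predict_binary_mnb(params, doc):
    # Unknown words are ignored (log-probability contribution of 0).
    scores = {c: prior + sum(logp.get(w, 0.0) for w in doc)
              for c, (prior, logp) in params.items()}
    return int(scores[1] > scores[0])

def binary_relevance_fit(docs, tag_sets, tags, alpha=1.0):
    """Binary relevance: one independent binary MNB per tag."""
    return {t: fit_binary_mnb(docs, [int(t in s) for s in tag_sets], alpha)
            for t in tags}

def binary_relevance_predict(models, doc):
    """Predict the set of tags whose binary model fires."""
    return {t for t, m in models.items() if predict_binary_mnb(m, doc)}
```

Because each per-tag model is trained and queried independently, this baseline scales linearly in the number of tags but cannot exploit correlations between them, which is exactly the gap NGNB's label graph targets.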

Overall, NGNB performed well when evaluated on the data set described in the next section: it achieved F1 scores similar to PMM1 with efficiency similar to the binary Multinomial models. See the associated paper for more details.


The following example illustrates how to use the code in this repository. The code and supporting scripts were written for use with the StackExchange data set provided in Kaggle's Facebook Recruiting Challenge III competition.

Since the original data set was very large, a smaller training set was created using the following steps.

  1. Create an edge list for the first 200K training examples.

    python src/ data/Train.csv data/edgelist200K -n 200000

  2. Extract a subset of labels and create a network diagram of their relationships.

    Rscript src/analyzeTagNetwork.R data/edgelist200K.csv data/selectedtags.txt tags_netdiag.pdf

  3. Create a reduced training set containing only the selected tags.

    python src/ data/Train.csv data/TrainReduced.csv data/selectedtags.txt -n 100000

  4. Process the training examples and save as pickled Python dictionaries.

    python src/ data/TrainReduced.csv data/selectedtags.txt data/

  5. Remove uninformative words using within-class-popularity and GINI criteria.

    python src/ gini data/ data/ -k 0.25
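Step 5's GINI criterion can be pictured as ranking each word by the impurity of its tag distribution: a word concentrated in a few tags is informative, a word spread evenly across tags is not. A sketch under the assumption that `-k 0.25` means keeping the quarter of words with the lowest impurity (the repository's exact scoring may differ):

```python
from collections import Counter

def gini_impurity(tag_counts):
    """Gini impurity of a word's tag distribution; low values mean the
    word concentrates in few tags and is therefore informative."""
    total = sum(tag_counts.values())
    return 1.0 - sum((c / total) ** 2 for c in tag_counts.values())

def select_informative_words(word_tag_counts, keep_fraction=0.25):
    """Keep the keep_fraction of words with the lowest Gini impurity."""
    ranked = sorted(word_tag_counts,
                    key=lambda w: gini_impurity(word_tag_counts[w]))
    n_keep = max(1, int(len(ranked) * keep_fraction))
    return set(ranked[:n_keep])

# Hypothetical word-to-tag occurrence counts
counts = {
    "python": Counter({"python": 90, "r": 5}),   # concentrated: informative
    "the":    Counter({"python": 50, "r": 50}),  # uniform: uninformative
    "vector": Counter({"python": 30, "r": 60}),
    "code":   Counter({"python": 45, "r": 55}),
}
kept = select_informative_words(counts, keep_fraction=0.25)
# "python" has the lowest impurity, so it is the single word kept
```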

A test data set was also created as follows:

  1. Run createFilteredDataset to make a smaller test set

    python src/ data/Train.csv data/TestReduced.csv data/selectedtags.txt -n 15000 -s 110000

  2. Run createDictionaries to get the words associated with each tag

    python src/ data/TestReduced.csv data/selectedtags.txt data/
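The dictionary-building step can be pictured as mapping each tag to the word frequencies of the posts that carry it. A minimal sketch (the repository's pickled dictionaries may store more than this):

```python
from collections import Counter, defaultdict

def build_tag_dictionaries(examples):
    """Map each tag to a Counter of word frequencies over the posts
    carrying that tag."""
    tag_words = defaultdict(Counter)
    for text, tags in examples:
        words = text.lower().split()
        for tag in tags:
            tag_words[tag].update(words)
    return dict(tag_words)

# Hypothetical (post text, tag set) pairs
examples = [
    ("list comprehension syntax", {"python"}),
    ("vector recycling rules", {"r"}),
    ("plot a vector", {"r", "matplotlib"}),
]
dicts = build_tag_dictionaries(examples)
# dicts["r"]["vector"] == 2; dicts["python"]["syntax"] == 1
```

The resulting dictionaries could then be serialized with `pickle.dump`, matching the "pickled Python dictionaries" mentioned in step 4 above.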

The three models were evaluated using 10-fold cross validation by running the following commands:

python src/ nbMulti data/ -k 10 -notraintest 
python src/ pmm1    data/ -k 10 -notraintest 
python src/ ngnb    data/ -k 10 -notraintest 
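The `-k 10` cross-validation can be sketched as splitting the example indices into ten disjoint test folds, training on the remaining nine each time. A minimal index-splitting sketch (the repository's evaluation driver may shuffle or stratify differently):

```python
def kfold_indices(n, k=10):
    """Yield (train_idx, test_idx) index lists for k-fold cross validation.

    Earlier folds absorb the remainder when n is not divisible by k.
    """
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

folds = list(kfold_indices(20, k=10))
# 10 folds; each test fold has 2 examples, each train split has 18
```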

Sensitivity to the number of tags was evaluated by running 10-fold validation with randomly selected subsets of tags using the following simple shell script:


for model in nbMulti pmm1 ngnb; do
  for tsize in 5 10 20 50; do
    python src/ ${model} data/ -k 10 -notraintest -ntag ${tsize} -l 50000
  done
done

The following commands were used to evaluate the models using the test data set:

python src/ nbMulti  data/ data/
python src/ pmm1     data/ data/
python src/ ngnb     data/ data/
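The F1 scores reported for the three models can be computed in several ways for multi-label output; one common choice is micro-averaging, which pools true positives, false positives, and false negatives across all examples. A sketch of that metric (whether the paper micro- or macro-averages is not stated here):

```python
def micro_f1(true_sets, pred_sets):
    """Micro-averaged F1 over predicted tag sets: pool true positives,
    false positives, and false negatives across all examples."""
    tp = fp = fn = 0
    for t, p in zip(true_sets, pred_sets):
        tp += len(t & p)   # tags predicted and correct
        fp += len(p - t)   # tags predicted but wrong
        fn += len(t - p)   # tags missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical true and predicted tag sets for two posts
true = [{"python", "numpy"}, {"r"}]
pred = [{"python"}, {"r", "ggplot"}]
# tp=2, fp=1, fn=1 -> precision=2/3, recall=2/3 -> F1 = 2/3
```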