
Graph-based Clustering and Semi-Supervised Learning


This Python package provides efficient implementations of modern graph-based learning algorithms for both semi-supervised learning and clustering. The package includes loaders for several popular datasets (currently MNIST, FashionMNIST, CIFAR-10, and WEBKB) that make it simple to test new algorithms and rapidly compare them against existing methods.

This package reproduces experiments from the paper

Calder, Cook, Thorpe, Slepcev. Poisson Learning: Graph Based Semi-Supervised Learning at Very Low Label Rates. Proceedings of the 37th International Conference on Machine Learning, PMLR 119:1306-1316, 2020.

Newer version

This is the old version of graphlearning (v0.0.3). The new version is here.

Installation

Install with

pip install graphlearning==0.0.3

Required packages include numpy, scipy, scikit-learn, matplotlib, and torch. The packages annoy and kymatio are required for nearest neighbor searches and the scattering transform, respectively; the rest of the code runs fine without them. These dependencies should install automatically.

To install from the GitHub source, which is updated more frequently, run

git clone https://github.com/jwcalder/GraphLearningOld
cd GraphLearningOld
python setup.py install --user

If you prefer to use SSH, swap the first line with

git clone git@github.com:jwcalder/GraphLearningOld.git

Getting started with basic experiments

Below we outline some basic ways the package can be used. The examples page from our GitHub repository contains several detailed example scripts that are useful for getting started.

A basic experiment comparing Laplace learning (label propagation) to Poisson learning on MNIST can be run with

import graphlearning as gl
gl.ssl_trials(dataset='mnist',metric='vae',algorithm='laplace',k=10,t=10)
gl.ssl_trials(dataset='mnist',metric='vae',algorithm='poisson',k=10,t=10)

Supported datasets include MNIST, FashionMNIST, WEBKB, and CIFAR-10. The metric argument controls how the graph is constructed: 'raw' (all datasets) uses Euclidean distance between raw data; 'vae' (MNIST and FashionMNIST) uses the variational autoencoder weights described in our paper; 'scatter' uses the scattering transform; and 'aet' (CIFAR-10) uses the AutoEncoding Transformations weights, also described in our paper. The argument k=10 specifies how many nearest neighbors to use when constructing the graph, and t=10 specifies how many trials to run, each with a fresh random split of training and testing data. There are many other optional arguments, and full documentation is coming soon.
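For intuition, a Gaussian-weighted k-nearest-neighbor graph of the kind described above can be sketched directly with NumPy and SciPy. This is a simplified sketch with a single global bandwidth, not the package's own construction, which may differ:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse import csr_matrix

def knn_gaussian_graph(X, k):
    """Build a symmetric k-nearest-neighbor graph with Gaussian edge weights."""
    D = cdist(X, X)                        # pairwise Euclidean distances
    idx = np.argsort(D, axis=1)[:, 1:k+1]  # k nearest neighbors, skipping self
    n = X.shape[0]
    rows = np.repeat(np.arange(n), k)
    cols = idx.ravel()
    dists = D[rows, cols]
    sigma = dists.max() + 1e-10            # simple global bandwidth choice
    weights = np.exp(-dists**2 / sigma**2)
    W = csr_matrix((weights, (rows, cols)), shape=(n, n))
    return W.maximum(W.T)                  # symmetrize

X = np.random.rand(100, 2)
W = knn_gaussian_graph(X, k=10)
```

The symmetrization step (taking the elementwise maximum with the transpose) keeps an edge whenever either endpoint lists the other among its k nearest neighbors.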

Below is a list of currently supported algorithms with links to the corresponding papers.

Semi-supervised learning: Laplace, RandomWalk, Poisson, PoissonMBO, pLaplace, WNLL, ProperlyWeighted, NearestNeighbor, MBO, ModularityMBO, VolumeMBO, DynamicLabelPropagation, SparseLabelPropagation, CenteredKernel

Clustering: INCRES, Spectral, SpectralShiMalik, SpectralNgJordanWeiss

The algorithm names are case-insensitive in all scripts. NearestNeighbor assigns each node the label of its closest labeled node in geodesic graph distance.
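The geodesic nearest-neighbor rule can be illustrated with SciPy's shortest-path routines. The toy graph below is an assumption for illustration only, not the package's implementation:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

# Toy weighted graph: two chains joined by one long edge (weights = edge lengths)
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 5.0), (3, 4, 1.0), (4, 5, 1.0)]
rows, cols, lens = zip(*edges)
n = 6
G = csr_matrix((lens, (rows, cols)), shape=(n, n))
G = G.maximum(G.T)                    # make the graph undirected

train_ind = np.array([0, 5])          # one labeled node per cluster
train_labels = np.array([0, 1])

# Graph distances from every labeled node to all nodes, shape (num_labeled, n)
dist = dijkstra(G, indices=train_ind)

# Each node takes the label of its geodesically nearest labeled node
pred = train_labels[np.argmin(dist, axis=0)]
print(pred)  # → [0 0 0 1 1 1]
```

Nodes 0-2 are closer in graph distance to the labeled node 0, and nodes 3-5 to the labeled node 5, so the long middle edge acts as the decision boundary.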

The accuracy scores are saved in the subdirectory Results/ using a separate .csv file for each experiment. These can be loaded to generate plots and tables (see the example scripts). The directory ResultsFromPaper/ contains all results from our ICML paper.
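Loading one of these .csv files might look as follows. Since the exact file name and column layout depend on the experiment, a toy CSV in the same spirit is used here:

```python
import csv
import io
import numpy as np

# Toy results in the spirit of the Results/ files: one row per trial
csv_text = "trial,accuracy\n1,95.2\n2,94.8\n3,95.5\n"

rows = list(csv.DictReader(io.StringIO(csv_text)))
acc = np.array([float(r["accuracy"]) for r in rows])
print("mean %.2f%%, std %.2f" % (acc.mean(), acc.std()))
```

For a real experiment, replace the in-memory string with `open('Results/<experiment>.csv')`, using the file name produced by your run.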

The commands shown above are rather high level, and can be split into several important subroutines when needed. The code below shows how to generate a weight matrix on the MNIST dataset, choose training data randomly, run Laplace and Poisson learning, and compute accuracy scores.

import graphlearning as gl

#Load labels and kNN data, and build a 10-nearest-neighbor weight matrix
labels = gl.load_labels('mnist')
W = gl.knn_weight_matrix(10,dataset='mnist',metric='vae')

#Randomly choose training datapoints
num_train_per_class = 1 
train_ind = gl.randomize_labels(labels, num_train_per_class)
train_labels = labels[train_ind]

#Run Laplace and Poisson learning
labels_laplace = gl.graph_ssl(W,train_ind,train_labels,algorithm='laplace')
labels_poisson = gl.graph_ssl(W,train_ind,train_labels,algorithm='poisson')

#Compute and print accuracy
print('Laplace learning: %.2f%%'%gl.accuracy(labels,labels_laplace,len(train_ind)))
print('Poisson learning: %.2f%%'%gl.accuracy(labels,labels_poisson,len(train_ind)))
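For intuition, the core of Laplace learning can be sketched in a few lines of NumPy: hold the labeled nodes fixed and solve the graph Laplace equation at the unlabeled nodes. This is a toy sketch on a path graph, not the package's solver:

```python
import numpy as np

# Toy path graph on 5 nodes; nodes 0 and 4 are labeled with values 0 and 1
W = np.zeros((5, 5))
for i in range(4):
    W[i, i+1] = W[i+1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W          # unnormalized graph Laplacian

train_ind = np.array([0, 4])
train_vals = np.array([0.0, 1.0])
free = np.setdiff1d(np.arange(5), train_ind)

# Solve L[free, free] u_free = -L[free, train] u_train
A = L[np.ix_(free, free)]
b = -L[np.ix_(free, train_ind)] @ train_vals
u = np.zeros(5)
u[train_ind] = train_vals
u[free] = np.linalg.solve(A, b)
print(u)  # → [0.   0.25 0.5  0.75 1.  ]
```

The solution is harmonic at the unlabeled nodes (each value is the average of its neighbors), which on a path graph gives linear interpolation between the two labels. For multi-class problems, one such system is solved per class and each node takes the argmax.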

Contact and questions

Email jwcalder@umn.edu with any questions or comments.

Acknowledgments

Several people have contributed to the development of this software:

  1. Mauricio Rios Flores (Machine Learning Researcher, Amazon)
  2. Brendan Cook (PhD Candidate in Mathematics, University of Minnesota)
  3. Matt Jacobs (Postdoc, UCLA)
  4. Mahmood Ettehad (Postdoc, IMA)
