I am trying to use the Wasserstein Discriminant Analysis (WDA) implementation in POT, shown here: https://pythonot.github.io/auto_examples/others/plot_WDA.html
I can reproduce the example in the link above with no problem. However, when I try to apply the WDA implementation to MNIST, it does not complete a single iteration and the process is killed. In the original paper the authors apply the method to MNIST and report a low training time, so I was wondering whether this implementation is known not to work for larger-scale data, or whether I am missing something.
Code to reproduce below:
import torch
import torchvision
# Download and load the MNIST training set
trainset = torchvision.datasets.MNIST(root='./data', train=True, download=True)
# Scale data and subtract global mean
def scale_and_center(x_train):
    std = x_train.std()
    x_train = x_train / (std * n_row)
    global_mean = x_train.mean(axis=0, keepdims=True)
    x_train = x_train - global_mean
    return x_train
n_samples, n_row, n_col = trainset.data.shape
n_dim = n_row * n_col
x_train = trainset.data.reshape(-1, n_dim).float()
x_train = scale_and_center(x_train)
y_train = trainset.targets
from sklearn.decomposition import PCA
pca = PCA(n_components=6)
pca.fit(x_train)
pca_filters = pca.components_
from ot.dr import wda
Pwda, projwda = wda(x_train.numpy(), y_train.numpy(), p=6, reg=0.01,
                    P0=pca_filters.T)
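For completeness, here is a minimal sketch of the sanity check I would run next (the per-class sample count of 100 and the variable names ending in _small are arbitrary choices for illustration, not from the original example or paper): call wda on a small stratified subsample of the training set to see whether the failure depends only on the number of samples.

import numpy as np

# Hypothetical check: run wda on a small stratified subsample to see
# whether the crash depends on the number of training samples.
rng = np.random.default_rng(0)
X, y = x_train.numpy(), y_train.numpy()
keep = np.concatenate([
    rng.choice(np.flatnonzero(y == c), size=100, replace=False)  # 100 per digit, arbitrary
    for c in np.unique(y)
])
Pwda_small, projwda_small = wda(X[keep], y[keep], p=6, reg=0.01, P0=pca_filters.T)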