
Commit: Update docs
clwarrior committed Dec 20, 2021
1 parent e48fa2e commit ba163f2
Showing 46 changed files with 2,474 additions and 2,694 deletions.
20 changes: 10 additions & 10 deletions _downloads/079cff6d3f8081d2ce2930e705ee261b/plot_grid.ipynb

Large diffs are not rendered by default.

Binary file not shown.
@@ -15,7 +15,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# Example: Use of MRC with different settings\n\nExample of using MRC with some of the common classification datasets with different\nlosses and feature mappings settings. We load the different datasets and use 10-Fold \nCross-Validation to generate the partitions for train and test. We separate 1 partition\neach time for testing and use the others for training. On each iteration we calculate\nthe classification error as well as the upper and lower bounds for the error. We also\ncalculate the mean training time.\n\nYou can check a more elaborated example in `ex_comp`.\n"
"\n\n# Example: Use of MRC with different settings\n\nExample of using MRC with some of the common classification datasets with\ndifferent losses and feature mappings settings. We load the different datasets\nand use 10-Fold Cross-Validation to generate the partitions for train and test.\nWe separate 1 partition each time for testing and use the others for training.\nOn each iteration we calculate the classification error as well as the upper\nand lower bounds for the error. We also\ncalculate the mean training time.\n\nYou can check a more elaborated example in `ex_comp`.\n"
]
},
{
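The notebook diff above is not rendered, but its docstring describes the full MRC evaluation loop: stratified 10-fold partitions, the per-fold test error, and upper and lower bounds on that error. A minimal sketch of one fold of that procedure is given below. It assumes MRCpy's MRC estimator and its get_upper_bound() and get_lower_bound() methods (per MRCpy's documented API); the loader and preprocessing mirror the scripts in this commit.

import numpy as np
from sklearn import preprocessing
from sklearn.model_selection import StratifiedKFold

from MRCpy import MRC
from MRCpy.datasets import load_mammographic

# Load one of the example datasets
X, Y = load_mammographic(return_X_y=True)

# Take one stratified split out of the 10 folds used in the examples
cv = StratifiedKFold(n_splits=10, random_state=0, shuffle=True)
train_index, test_index = next(cv.split(X, Y))
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = Y[train_index], Y[test_index]

# Standardize using statistics from the training fold only
std_scale = preprocessing.StandardScaler().fit(X_train)
X_train = std_scale.transform(X_train)
X_test = std_scale.transform(X_test)

# Fit an MRC and compare its error bounds with the observed test error
clf = MRC(phi='relu', loss='0-1')
clf.fit(X_train, y_train)
error = np.average(clf.predict(X_test) != y_test)
print('upper bound:', clf.get_upper_bound())  # assumed MRCpy API
print('lower bound:', clf.get_lower_bound())
print('test error :', error)

The bounds come from the fitted model alone, which is why the example can report them alongside the held-out error on every fold.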
Binary file not shown.
46 changes: 23 additions & 23 deletions _downloads/750531c8bf45b6620f1fa8bc4ab27d0f/plot_comparison.ipynb

Large diffs are not rendered by default.

402 changes: 220 additions & 182 deletions _downloads/785ea22289c8ebe65b117e10a044ee7d/plot_comparison.py

Large diffs are not rendered by default.

111 changes: 0 additions & 111 deletions _downloads/79dda51f6f77b894ece477ed9c8a01d4/example2.py

This file was deleted.

16 changes: 9 additions & 7 deletions _downloads/a8b175cd29751dc0229ceb76947711bf/plot_example2.py
@@ -5,10 +5,11 @@
Example: Use of CMRC with different settings
============================================
-Example of using CMRC with some of the common classification datasets with different
-losses and feature mappings settings. We load the different datasets and use 10-Fold
-Cross-Validation to generate the partitions for train and test. We separate 1 partition
-each time for testing and use the others for training. On each iteration we calculate
+Example of using CMRC with some of the common classification datasets with
+different losses and feature mappings settings. We load the different datasets
+and use 10-Fold Cross-Validation to generate the partitions for train and test.
+We separate 1 partition each time for testing and use the others for training.
+On each iteration we calculate
the classification error. We also calculate the mean training time.
You can check a more elaborated example in :ref:`ex_comp`.
@@ -27,7 +28,7 @@
# Data sets
loaders = [load_mammographic, load_haberman, load_indian_liver,
load_diabetes, load_credit]
dataName = ["mammographic", "haberman", "indian_liver",
dataName = ["mammographic", "haberman", "indian_liver",
"diabetes", "credit"]


@@ -82,7 +83,7 @@ def runCMRC(phi, loss):

# Save the training time
auxTime += time.time() - startTime

# Predict the class for test instances
y_pred = clf.predict(X_test)

@@ -102,7 +103,8 @@ def runCMRC(phi, loss):

if __name__ == '__main__':

-print('*** Example (CMRC with the additional marginal constraints) *** \n\n')
+print('*** Example (CMRC with the additional\
+marginal constraints) *** \n\n')

print('1. Using 0-1 loss and relu feature mapping \n\n')
runCMRC(phi='relu', loss='0-1')
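For reference, the cross-validation loop that this docstring describes, and that the notebook cell below carries in full, condenses to the following sketch. The CMRC settings are the ones visible in this diff; note that solver='MOSEK' assumes a working MOSEK installation and license.

import time

import numpy as np
from sklearn import preprocessing
from sklearn.model_selection import StratifiedKFold

from MRCpy import CMRC
from MRCpy.datasets import load_haberman

X, Y = load_haberman(return_X_y=True)

# Stratified 10-fold partitions with a fixed seed, as in the example
cv = StratifiedKFold(n_splits=10, random_state=0, shuffle=True)

cvError = list()
auxTime = 0
for train_index, test_index in cv.split(X, Y):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = Y[train_index], Y[test_index]

    # Standardize each fold using training statistics only
    std_scale = preprocessing.StandardScaler().fit(X_train)
    X_train = std_scale.transform(X_train)
    X_test = std_scale.transform(X_test)

    # Fit CMRC with the settings used in this example and time the fit
    clf = CMRC(phi='relu', loss='0-1', use_cvx=True,
               solver='MOSEK', max_iters=10000, s=0.3)  # MOSEK required
    startTime = time.time()
    clf.fit(X_train, y_train)
    auxTime += time.time() - startTime

    # Classification error on the held-out fold
    cvError.append(np.average(clf.predict(X_test) != y_test))

print('error:', np.average(cvError), '+/-', np.std(cvError))
print('avg_train_time:', auxTime / 10, 'secs')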
@@ -15,7 +15,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# Example: Use of CMRC with different settings\n\nExample of using CMRC with some of the common classification datasets with different\nlosses and feature mappings settings. We load the different datasets and use 10-Fold \nCross-Validation to generate the partitions for train and test. We separate 1 partition\neach time for testing and use the others for training. On each iteration we calculate \nthe classification error. We also calculate the mean training time.\n\nYou can check a more elaborated example in `ex_comp`.\n"
"\n\n# Example: Use of CMRC with different settings\n\nExample of using CMRC with some of the common classification datasets with\ndifferent losses and feature mappings settings. We load the different datasets\nand use 10-Fold Cross-Validation to generate the partitions for train and test.\nWe separate 1 partition each time for testing and use the others for training.\nOn each iteration we calculate\nthe classification error. We also calculate the mean training time.\n\nYou can check a more elaborated example in `ex_comp`.\n"
]
},
{
@@ -26,7 +26,7 @@
},
"outputs": [],
"source": [
"import time\n\nimport numpy as np\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import StratifiedKFold\n\nfrom MRCpy import CMRC\n# Import the datasets\nfrom MRCpy.datasets import *\n\n# Data sets\nloaders = [load_mammographic, load_haberman, load_indian_liver,\n load_diabetes, load_credit]\ndataName = [\"mammographic\", \"haberman\", \"indian_liver\", \n \"diabetes\", \"credit\"]\n\n\ndef runCMRC(phi, loss):\n\n res_mean = np.zeros(len(dataName))\n res_std = np.zeros(len(dataName))\n\n # We fix the random seed to that the stratified kfold performed\n # is the same through the different executions\n random_seed = 0\n\n # Iterate through each of the dataset and fit the CMRC classfier.\n for j, load in enumerate(loaders):\n\n # Loading the dataset\n X, Y = load(return_X_y=True)\n r = len(np.unique(Y))\n n, d = X.shape\n\n # Print the dataset name\n print(\" ############## \\n \" + dataName[j] + \" n= \" + str(n) +\n \" , d= \" + str(d) + \", cardY= \" + str(r))\n\n # Create the CMRC object initilized with the corresponding parameters\n clf = CMRC(phi=phi, loss=loss, use_cvx=True,\n solver='MOSEK', max_iters=10000, s=0.3)\n\n # Generate the partitions of the stratified cross-validation\n cv = StratifiedKFold(n_splits=10, random_state=random_seed,\n shuffle=True)\n\n cvError = list()\n auxTime = 0\n\n # Paired and stratified cross-validation\n for train_index, test_index in cv.split(X, Y):\n\n X_train, X_test = X[train_index], X[test_index]\n y_train, y_test = Y[train_index], Y[test_index]\n\n # Normalizing the data\n std_scale = preprocessing.StandardScaler().fit(X_train, y_train)\n X_train = std_scale.transform(X_train)\n X_test = std_scale.transform(X_test)\n\n # Save start time for computing training time\n startTime = time.time()\n\n # Train the model\n clf.fit(X_train, y_train)\n\n # Save the training time\n auxTime += time.time() - startTime\n \n # Predict the class for test instances\n y_pred = clf.predict(X_test)\n\n # Calculate the error made by CMRC classificator\n cvError.append(np.average(y_pred != y_test))\n\n res_mean[j] = np.average(cvError)\n res_std[j] = np.std(cvError)\n\n # Calculating the mean training time\n auxTime = auxTime / 10\n\n print(\" error= \" + \": \" + str(res_mean[j]) + \" +/- \" +\n str(res_std[j]) + \"\\n avg_train_time= \" + \": \" +\n str(auxTime) + ' secs' + \"\\n ############## \\n\\n\")\n\n\nif __name__ == '__main__':\n\n print('*** Example (CMRC with the additional marginal constraints) *** \\n\\n')\n\n print('1. Using 0-1 loss and relu feature mapping \\n\\n')\n runCMRC(phi='relu', loss='0-1')\n\n print('2. Using log loss and relu feature mapping \\n\\n')\n runCMRC(phi='relu', loss='log')"
"import time\n\nimport numpy as np\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import StratifiedKFold\n\nfrom MRCpy import CMRC\n# Import the datasets\nfrom MRCpy.datasets import *\n\n# Data sets\nloaders = [load_mammographic, load_haberman, load_indian_liver,\n load_diabetes, load_credit]\ndataName = [\"mammographic\", \"haberman\", \"indian_liver\",\n \"diabetes\", \"credit\"]\n\n\ndef runCMRC(phi, loss):\n\n res_mean = np.zeros(len(dataName))\n res_std = np.zeros(len(dataName))\n\n # We fix the random seed to that the stratified kfold performed\n # is the same through the different executions\n random_seed = 0\n\n # Iterate through each of the dataset and fit the CMRC classfier.\n for j, load in enumerate(loaders):\n\n # Loading the dataset\n X, Y = load(return_X_y=True)\n r = len(np.unique(Y))\n n, d = X.shape\n\n # Print the dataset name\n print(\" ############## \\n \" + dataName[j] + \" n= \" + str(n) +\n \" , d= \" + str(d) + \", cardY= \" + str(r))\n\n # Create the CMRC object initilized with the corresponding parameters\n clf = CMRC(phi=phi, loss=loss, use_cvx=True,\n solver='MOSEK', max_iters=10000, s=0.3)\n\n # Generate the partitions of the stratified cross-validation\n cv = StratifiedKFold(n_splits=10, random_state=random_seed,\n shuffle=True)\n\n cvError = list()\n auxTime = 0\n\n # Paired and stratified cross-validation\n for train_index, test_index in cv.split(X, Y):\n\n X_train, X_test = X[train_index], X[test_index]\n y_train, y_test = Y[train_index], Y[test_index]\n\n # Normalizing the data\n std_scale = preprocessing.StandardScaler().fit(X_train, y_train)\n X_train = std_scale.transform(X_train)\n X_test = std_scale.transform(X_test)\n\n # Save start time for computing training time\n startTime = time.time()\n\n # Train the model\n clf.fit(X_train, y_train)\n\n # Save the training time\n auxTime += time.time() - startTime\n\n # Predict the class for test instances\n y_pred = clf.predict(X_test)\n\n # Calculate the error made by CMRC classificator\n cvError.append(np.average(y_pred != y_test))\n\n res_mean[j] = np.average(cvError)\n res_std[j] = np.std(cvError)\n\n # Calculating the mean training time\n auxTime = auxTime / 10\n\n print(\" error= \" + \": \" + str(res_mean[j]) + \" +/- \" +\n str(res_std[j]) + \"\\n avg_train_time= \" + \": \" +\n str(auxTime) + ' secs' + \"\\n ############## \\n\\n\")\n\n\nif __name__ == '__main__':\n\n print('*** Example (CMRC with the additional\\\n marginal constraints) *** \\n\\n')\n\n print('1. Using 0-1 loss and relu feature mapping \\n\\n')\n runCMRC(phi='relu', loss='0-1')\n\n print('2. Using log loss and relu feature mapping \\n\\n')\n runCMRC(phi='relu', loss='log')"
]
}
],
11 changes: 6 additions & 5 deletions _downloads/ce615eeeaf183d728a05b647cb915a04/plot_example1.py
@@ -5,11 +5,12 @@
Example: Use of MRC with different settings
===========
-Example of using MRC with some of the common classification datasets with different
-losses and feature mappings settings. We load the different datasets and use 10-Fold
-Cross-Validation to generate the partitions for train and test. We separate 1 partition
-each time for testing and use the others for training. On each iteration we calculate
-the classification error as well as the upper and lower bounds for the error. We also
+Example of using MRC with some of the common classification datasets with
+different losses and feature mappings settings. We load the different datasets
+and use 10-Fold Cross-Validation to generate the partitions for train and test.
+We separate 1 partition each time for testing and use the others for training.
+On each iteration we calculate the classification error as well as the upper
+and lower bounds for the error. We also
calculate the mean training time.
You can check a more elaborated example in :ref:`ex_comp`.
54 changes: 0 additions & 54 deletions _downloads/cffe955e48c6ac5c42cd9ae87562553d/example2.ipynb

This file was deleted.
