From 313a1393c93921f620fe2837f0aba4889b294170 Mon Sep 17 00:00:00 2001 From: Garg-Doppler Date: Mon, 27 Jul 2020 19:18:11 +0530 Subject: [PATCH 1/3] Formatting and Suggestions --- ANN/index.html | 551 +- Adversarial_Lab/index.html | 148 +- Autoencoder/index.html | 282 +- Logistic Regression/index.html | 146 +- MNIST-CNN/README.md | 4 + MNIST-CNN/index.html | 35 +- MNIST-CNN/index.js | 41 +- MNIST-CNN/tfjs-examples.css | 14201 +++++++++++++++++------------ MNIST-CNN/ui.js | 310 +- PCA/index.html | 3 +- SVM/README.md | 4 + SVM/index.html | 10 +- SVM/index.js | 40 + img/favicon.ico | Bin 0 -> 15406 bytes neural_style_transfer/index.html | 4 +- vanishing_gradients/index.html | 1223 +-- 16 files changed, 9559 insertions(+), 7443 deletions(-) create mode 100644 img/favicon.ico diff --git a/ANN/index.html b/ANN/index.html index 5036b14..2c4c7a8 100644 --- a/ANN/index.html +++ b/ANN/index.html @@ -18,285 +18,298 @@ - - - - - + + + + VisualML | ANN + + + - - - -
-
-

TensorFlow.js Layers: Iris Demo

-

Classify structured (tabular) data with a neural network.

-
- -
-

Description

-

- This example uses a neural network to classify tabular data representing different flowers. The features for each flower are the petal length and width and the sepal length and width. The goal is to predict the species of each data point from those features. The data comes from the famous Iris flower data set. -

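For readers following along outside the demo, a minimal TensorFlow.js sketch of such a classifier might look like the following. This is an illustration, not the demo's exact code; it assumes tf is loaded and that xTrain/yTrain hold the Iris features and one-hot labels.

// Minimal sketch of an Iris classifier (layer sizes are illustrative).
const model = tf.sequential();
model.add(tf.layers.dense({inputShape: [4], units: 10, activation: 'relu'}));
model.add(tf.layers.dense({units: 3, activation: 'softmax'})); // 3 species
model.compile({
  optimizer: tf.train.adam(0.01), // learning rate taken from the UI controls
  loss: 'categoricalCrossentropy',
  metrics: ['accuracy'],
});
// xTrain: [numExamples, 4], yTrain: [numExamples, 3] one-hot (assumed to exist).
await model.fit(xTrain, yTrain, {epochs: 40, batchSize: 32}); // inside an async function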
-
- -
-

Instructions

-

- -

-

- Set the hyperparameters to the values you want. -

-

- Specify the number of neurons for each of the layers you want in the neural network. -

-

- Train the model.

A Model Summary tab will appear; you can maximise or hide it. -

-

- You can visualize the architecture by clicking on the NN Structure button.

- To colour the edges according to their weight sign, tick the checkbox and click NN Structure again; the edges will then vary in width and colour intensity with the weight magnitude. -

-

- You can edit the values in the first row of "Test Examples" to generate a prediction for those data points (see the sketch below). -

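As a rough sketch of what that prediction step does (hypothetical variable names; the trained model is assumed to be available as model):

// Feed one edited test row (petal length/width, sepal length/width) to the model.
const input = tf.tensor2d([[4.8, 3.0, 1.4, 0.1]]);
const probs = model.predict(input);                // [1, 3] class probabilities
const predicted = probs.argMax(-1).dataSync()[0];  // index of the likely species
console.log('probabilities:', probs.dataSync(), 'predicted class:', predicted);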
-
-
-

Data Visualization

- -
-
-

Controls

- -
-

Train Model

-
-
- - -
-
- - +
+
+ + +
+
+ Learning Rate: + +
+
+ Batch Size: + +
+ +
+ + Hidden layer no.: + + Number of neurons: + + +
+ + + +
+
+
+ +
+

Status

+
+ Standing by. +
+
+ +
+

Training Progress

+
+
+

Loss

+
+
+
+

Accuracy

+
+
+
+

Confusion Matrix (on validation set)

+
+
+
+
+
+

Visualization of Neural Network

+ +
+ +
+
+ + +
+
+ + +
+
+
+

Test Examples

+ +
+ + + + + + + + + + + + + + + + + + + + + +
Petal lengthPetal widthSepal lengthSepal widthTrue classPredicted classClass Probabilities
+ + + + + + + + + + + + + + + +
+
+
+ + + +
+
+ + + +
+ +
+ + +
-
- Learning Rate: - -
-
- Batch Size: - -
- -
- - Hidden layer no.: - - Number of neurons: - - -
- - - -
- - - -
-

Status

-
- Standing by.
-
- -
-

Training Progress

-
-
-

Loss

-
-
-
-

Accuracy

-
-
-
-

Confusion Matrix (on validation set)

-
-
-
-
-
-

Visualization of Neural Network

- -
- -
-
- - -
-
- - -
-
-
-

Test Examples

- -
- - - - - - - - - - - - - - - - - - - - - -
Petal lengthPetal widthSepal lengthSepal widthTrue classPredicted classClass Probabilities
- - - - - - - - - - - - - - - -
-
-
- + -
-
- - - -
- -
- - -
-
- - - - - + - + \ No newline at end of file diff --git a/Adversarial_Lab/index.html b/Adversarial_Lab/index.html index 3e0c9f9..79a1aba 100644 --- a/Adversarial_Lab/index.html +++ b/Adversarial_Lab/index.html @@ -1,105 +1,109 @@ - + + - + - - Adversarial Attack - + + VisualML | Advesarial Attack + -
-
-
-
-

Fast Gradient Sign Method (Untargeted)

+
-
Original Image -
- -
-
Perturbation -
-
-
Perturbed Image -
- -
-
-
-
-
-
-
-
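The untargeted FGSM attack shown above adds epsilon * sign(dLoss/dx) to the input. Here is a minimal TensorFlow.js sketch, assuming a trained classifier model and tf are available; the function name and the loss choice are illustrative, not this page's exact code.

// x: input image tensor, yTrue: one-hot label, epsilon: perturbation size.
function fgsm(model, x, yTrue, epsilon = 0.1) {
  const lossFn = (input) =>
    tf.metrics.categoricalCrossentropy(yTrue, model.predict(input)).mean();
  const gradFn = tf.grad(lossFn); // gradient of the loss w.r.t. the input
  return tf.tidy(() => {
    const perturbation = gradFn(x).sign().mul(epsilon);
    return x.add(perturbation).clipByValue(0, 1); // stay in valid pixel range
  });
}

A larger epsilon makes the perturbation more visible but the misclassification more reliable, which is the trade-off the controls below let you explore.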
- - -
-
- - -
-
- - +
+
+ + +
+
+ + -
+ -
- -
-
-
-
-
+
+ +
+
+ +
+ - - - - - - - + + + + + + \ No newline at end of file diff --git a/Autoencoder/index.html b/Autoencoder/index.html index 9ceff5c..b665c9f 100644 --- a/Autoencoder/index.html +++ b/Autoencoder/index.html @@ -1,175 +1,179 @@ - + + - MNIST in TensorFlow.js Layers API - - - - - - + VisualML | Autoencoder + + + + + + + -

TensorFlow.js: MNIST Autoencoder

+

TensorFlow.js: MNIST Autoencoder

TensorFlow.js: MNIST Autoencoder

-
-
-
-

Train a model to autoencode handwritten digits from the MNIST database using the tf.layers - api. +


- This example lets you train an MNIST autoencoder using a fully connected (dense) neural network.

- You can select the structure of the network and see how well the model performs. -
The MNIST dataset is used as training data.

-
-
-
-

Training Parameters

+
+

Train a model to autoencode handwritten digits from the MNIST database using the tf.layers api. +
This example lets you train an MNIST autoencoder using a fully connected (dense) neural network.

You can select the structure of the network and see how well the model performs. +
The MNIST dataset is used as training data.

+
+
+
+
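A sketch of the kind of dense autoencoder this page builds, assuming tf is loaded and xTrain holds flattened 28x28 digits; the layer sizes are illustrative, not the demo's defaults.

const model = tf.sequential();
// Encoder: 784 pixels -> a small latent code (2 units enables the 2D plot below).
model.add(tf.layers.dense({inputShape: [784], units: 128, activation: 'relu'}));
model.add(tf.layers.dense({units: 2, name: 'latent'}));
// Decoder: latent code -> reconstructed 784-pixel image.
model.add(tf.layers.dense({units: 128, activation: 'relu'}));
model.add(tf.layers.dense({units: 784, activation: 'sigmoid'}));
model.compile({optimizer: tf.train.adam(0.001), loss: 'meanSquaredError'});
// The inputs are their own targets: the network learns to reproduce each digit.
await model.fit(xTrain, xTrain, {epochs: 10, batchSize: 256}); // inside an async function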

+

Training Parameters

+

-
-
- +
+
+ - -
+ +
-
- - -
+
+ + +
-
- - -
+
+ + +
-
- - -
+
+ + +
-
- - -
+
+ + +
-
- - -
-
- - -
+
+ + +
+
+ + +
- - -
-
-
-
+ + + +
+
+ -

-
+

+
-

This will show example reconstructions from the autoencoder once it is trained

-
-
+
+

This will show example reconstructions from the autoencoder once it is trained

+
+
-


-
-

This shows a 2D scatter plot of the autoencoder's latent space,
available when the latent space dimension is set to 2.

- -
-
-
+


+
+

This shows a 2D scatter plot of the autoencoder's latent space,
available when the latent space dimension is set to 2.

+ +
+
+
-


-

This runs the trained autoencoder on a digit you draw on the canvas

- - - - - -
-
+


+
+

This runs the trained autoencoder on a digit you draw on the canvas

+ + + + + +
+
- - - - - - - - - - + + + + + + + + + + - + @@ -177,7 +181,7 @@

This is for 2d plot visualization of latent space of autoencoder.
If you - + - + \ No newline at end of file diff --git a/Logistic Regression/index.html b/Logistic Regression/index.html index 5ca569d..16f1a0a 100644 --- a/Logistic Regression/index.html +++ b/Logistic Regression/index.html @@ -1,47 +1,53 @@ + - Logistic Regression Visualizer - - + } + + -

LOGISTIC REGRESSION

-
-
+

LOGISTIC REGRESSION

+
+
-


+


-
+        
 Logistic regression is a classification algorithm used to assign observations to a discrete set of
 
 classes. Unlike linear regression, which outputs continuous number values, logistic regression
@@ -67,24 +73,24 @@ 

Decision Boundary

-

Visualization of the decision boundary for 2 classes

-
-
+

Visualization of the decision boundary for 2 classes

+
+
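Under the hood, logistic regression on two features is a single sigmoid unit, and the decision boundary drawn in the plot is where the predicted probability crosses 0.5. A minimal sketch (assuming tf is loaded and xs/ys hold the points and their 0/1 labels; not this page's exact code):

const model = tf.sequential();
model.add(tf.layers.dense({inputShape: [2], units: 1, activation: 'sigmoid'}));
model.compile({optimizer: tf.train.sgd(0.1), loss: 'binaryCrossentropy'});
await model.fit(xs, ys, {epochs: 100}); // inside an async function
// A point gets class 1 when sigmoid(w.x + b) > 0.5, i.e. when w.x + b > 0,
// so the boundary shown in the plot is the line w.x + b = 0.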
-

- -
+

+ +
-

- -
+

+ +
-

- -
+

+ +
- - @@ -92,31 +98,31 @@

Decision Boundary

-
- - -
- -



-
- -

- - -
-
- -
-
Plot Area
- -
Loss Area
- -

-DSG-IITR -
- - - +
+ + +
+ +



+
+ +

+ + +
+
+ +
+
Plot Area
+ +
Loss Area
+ +

DSG-IITR +
+ + + - + + \ No newline at end of file diff --git a/MNIST-CNN/README.md b/MNIST-CNN/README.md index 39da37d..6305c39 100644 --- a/MNIST-CNN/README.md +++ b/MNIST-CNN/README.md @@ -38,3 +38,7 @@ The package contains two scripts: * [Vipul Bansal](https://github.com/vipul2001) * [Yash Vardhan Sharma](https://github.com/Yash-Vardhan-Sharma) * [Aaryan Garg](https://github.com/Garg-Doppler) + +### References + +* [tfjs-examples](https://github.com/tensorflow/tfjs-examples) diff --git a/MNIST-CNN/index.html b/MNIST-CNN/index.html index 33af258..9e182fd 100644 --- a/MNIST-CNN/index.html +++ b/MNIST-CNN/index.html @@ -1,10 +1,11 @@ - MNIST in TensorFlow.js Layers API + VisualML | CNN + - - - - -
-
-

Vanishing Gradients


-

Comparing the activation functions

-
- -
-

Description

-

- This example explains the problem of vanishing gradients (which you may encounter when training a deep neural network) and shows how some activation functions prevent it. The problem describes the situation where a deep multilayer feed-forward network or a recurrent neural network is unable to propagate useful gradient information from the output end of the model back to the layers near the input end. -

-

- Many fixes and workarounds have been proposed and investigated, such as alternate weight initialization schemes, - unsupervised pre-training, layer-wise training, and variations on gradient descent. Perhaps the most common change - is the use of the rectified linear activation function and its modifications. -


-
-
-

How the choice of activation function avoids vanishing gradients?


-

- Activation functions like the sigmoid squash a large input space into a small output range between 0 and 1. Therefore, a large change in the input of the sigmoid causes only a small change in the output, so the derivative is small. A small gradient means that the weights and biases of the initial layers will not be updated effectively in each training step. Since these initial layers are often crucial to recognizing the core elements of the input data, this can make the whole network inaccurate. -

-

- The simplest solution is to use activation functions such as Leaky ReLU or ReLU, which do not produce small derivatives. The really nice thing about ReLU is that its gradient is either 0 or 1, so it never saturates and gradients cannot vanish; they are transferred unchanged across the network. However, the dying ReLU problem can occur, where gradients become exactly 0, and this is addressed by its modification, Leaky ReLU. -


-

You can explore this by plotting these and many other activation functions with the options given below...

-
-
-
-
-
    -
  • - - -
  • -
  • - - -
  • -
  • - -
  • -
  • - -
  • -
  • - -
  • -
  • - -
  • -
  • - -
  • -
  • - -
  • -
  • - -
  • -
  • - -
  • -
  • - -
  • -
-
-
-
-
-
-
- -
- - - -
+ -
- - -
+ + + + +
+
+

Vanishing Gradients


+

Comparing the activation functions

+
+ +
+

Description

+

+ This example explains the problem of vanishing gradients (which you may encounter when training a deep neural network) and shows how some activation functions prevent it. The problem describes the situation where a deep multilayer feed-forward network or a recurrent neural network is unable to propagate useful gradient information from the output end of the model back to the layers near the input end. +

+

+ Many fixes and workarounds have been proposed and investigated, such as alternate weight initialization schemes, unsupervised pre-training, layer-wise training, and variations on gradient descent. Perhaps the most common change is the use of the rectified + linear activation function and its modifications. +


+
+
+

How the choice of activation function avoids vanishing gradients?


+

+ Activation functions like the sigmoid squash a large input space into a small output range between 0 and 1. Therefore, a large change in the input of the sigmoid causes only a small change in the output, so the derivative is small. A small gradient means that the weights and biases of the initial layers will not be updated effectively in each training step. Since these initial layers are often crucial to recognizing the core elements of the input data, this can make the whole network inaccurate. +

+

+ The simplest solution is to use activation functions such as Leaky ReLU or ReLU, which do not produce small derivatives. The really nice thing about ReLU is that its gradient is either 0 or 1, so it never saturates and gradients cannot vanish; they are transferred unchanged across the network. However, the dying ReLU problem can occur, where gradients become exactly 0, and this is addressed by its modification, Leaky ReLU. +


+

You can explore this by plotting these and many other activation functions with the options given below...

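Before the plotting options, a tiny sketch of the effect described above, assuming tf is loaded: the sigmoid's derivative peaks at 0.25 and decays quickly away from zero, while ReLU's is exactly 1 for any positive input.

const sigmoidGrad = tf.grad(x => tf.sigmoid(x));
const reluGrad = tf.grad(x => tf.relu(x));
const xs = tf.tensor1d([-5, -1, 0, 1, 5]);
sigmoidGrad(xs).print(); // ~[0.0066, 0.1966, 0.25, 0.1966, 0.0066]
reluGrad(xs).print();    // [0, 0, 0, 1, 1]

Multiplying many such sub-0.25 factors together across layers is exactly what drives the early layers' gradients toward zero.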
+
+
+
+ +
    +
  • + + +
  • +
  • + + +
  • +
  • + +
  • +
  • + +
  • +
  • + +
  • +
  • + +
  • +
  • + +
  • +
  • + +
  • +
  • + +
  • +
  • + +
  • +
  • + +
  • +
+ +
+
+
+
+
+ +
+ + + +
+ +
+ + + +
+ +
+
+
+ +
+
+
+
+
+
+

About the dataset and model

+

+ This example uses a fully connected neural network. The features for each flower are the petal length and width and the sepal length and width. The data comes from the famous + Iris flower data set. +

+
+
+

Instructions

+

+

    +
  1. +

    Using the options below, you can set the activation function, num_layers, num_neurons_per_Layer, batch size, learning rate, and num_iterations.

    +
  2. +
  3. +

    You can visualize the neural network of your choice and inspect the gradients with respect to each weight through the intensity of the links connecting the neurons. Positive gradients are shown as blue links and negative gradients as red links.
    Note that the gradient values from the final iteration are used when drawing the network architecture.

    +
  4. +
  5. +

    In each iteration, a batch of the chosen size is randomly sampled from 120 of the 150 examples in the Iris dataset, and the model parameters are optimized using gradient descent; the remaining 30 examples are used for validation. You can also see the loss value at each iteration in the console (see the training-loop sketch after this list).

    +
  6. +
  7. +

    A plot of loss versus iteration will also show up when you click the given button.

    +
  8. +
  9. +

    It is strongly advised to keep the batch size greater than 1; otherwise you may encounter the exploding-gradients problem, which shows up as black links all over the architecture.

    +
  10. +
  11. +

    Wait for some time after clicking the button.

    +
  12. +
  13. +

    Change the custom parameters and press the button again to train the modified neural network.

    +
  14. +
+

+
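The training-loop sketch referenced in the instructions above; this is an illustration, not the page's exact code. It assumes tf is loaded, model is a compiled tf.LayersModel, learningRate/batchSize/numIterations come from the UI controls, and sampleBatch is a hypothetical helper that draws a random batch from the 120 training examples.

const optimizer = tf.train.sgd(learningRate);
for (let i = 0; i < numIterations; i++) {
  const {xs, ys} = sampleBatch(trainXs, trainYs, batchSize); // hypothetical helper
  const loss = optimizer.minimize(
    () => tf.metrics.categoricalCrossentropy(ys, model.predict(xs)).mean(),
    /* returnCost= */ true);
  console.log(`iteration ${i}: loss = ${loss.dataSync()[0]}`); // shown in the console
  loss.dispose();
}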
+ +
+

Controls

+ +
+

Train Model

+
+
+ + +
+ +
+ + +
+ +
+ + +
+
+ + +
+
+ + +
+
+ + +
+ +
+

+
+ +


+
+ +

+
+
+ +

+
+


+ +

+ + + + +
+
+ + + +
+ +
+ + +
+
-
-
- - + + \ No newline at end of file From 908ece010a2e75d460b6a6f98b38e81d21709a2e Mon Sep 17 00:00:00 2001 From: Garg-Doppler Date: Mon, 27 Jul 2020 19:21:48 +0530 Subject: [PATCH 2/3] Formatting and Suggestions --- Adversarial_Lab/index.html | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Adversarial_Lab/index.html b/Adversarial_Lab/index.html index 79a1aba..fbee092 100644 --- a/Adversarial_Lab/index.html +++ b/Adversarial_Lab/index.html @@ -9,7 +9,7 @@ - VisualML | Advesarial Attack + VisualML | Adversarial Attack