Commit

torch 0.4.1

shayneobrien committed Aug 1, 2018
1 parent 91ca2bc commit f949337
Showing 10 changed files with 13 additions and 743 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -1,5 +1,5 @@
# OVERVIEW
-PyTorch version: 0.4.0 | Python 3.6.5
+PyTorch version: 0.4.1 | Python 3.6.5

Commented / annotated implementations and comparative introductions for minimax, non-saturating, wasserstein, wasserstein gradient penalty, least squares, deep regret analytic, bounded equilibrium generative adversarial networks, relativistic (GANs), and variational autoencoder (VAE). Paper links are supplied at the beginning of each file with a short summary of the paper. See src folder for files to run via terminal, or notebooks folder for Jupyter notebooks via your local browser. The Jupyter notebooks contain in-notebook visualizations. All code in this repository operates in a generative, unsupervised manner on binary MNIST.

4 changes: 2 additions & 2 deletions notebooks/01-nonsaturating-gan.ipynb
@@ -507,7 +507,7 @@
" \n",
" def forward(self, x):\n",
" activated = F.relu(self.linear(x))\n",
-"        generation = F.sigmoid(self.generate(activated))\n",
+"        generation = torch.sigmoid(self.generate(activated))\n",
" return generation\n",
"\n",
"\n",
@@ -521,7 +521,7 @@
" \n",
" def forward(self, x):\n",
" activated = F.relu(self.linear(x))\n",
-"        discrimination = F.sigmoid(self.discriminate(activated))\n",
+"        discrimination = torch.sigmoid(self.discriminate(activated))\n",
" return discrimination\n",
"\n",
"\n",
6 changes: 3 additions & 3 deletions notebooks/02-minimax-gan.ipynb
@@ -481,7 +481,7 @@
"the probability that a sample came from the training data rather than G. The training\n",
"procedure for G is to maximize the probability of D making a mistake.'\n",
"\"\"\"\n",
-"\n",
+"F.\n",
"import torch, torchvision\n",
"import torch.nn as nn\n",
"import torch.nn.functional as F\n",
@@ -519,7 +519,7 @@
" \n",
" def forward(self, x):\n",
" activated = F.relu(self.linear(x))\n",
-"        generation = F.sigmoid(self.generate(activated))\n",
+"        generation = torch.sigmoid(self.generate(activated))\n",
" return generation\n",
"\n",
"\n",
@@ -534,7 +534,7 @@
" \n",
" def forward(self, x):\n",
" activated = F.relu(self.linear(x))\n",
-"        discrimination = F.sigmoid(self.discriminate(activated))\n",
+"        discrimination = torch.sigmoid(self.discriminate(activated))\n",
" return discrimination\n",
"\n",
"\n",
2 changes: 1 addition & 1 deletion notebooks/03-wasserstein-gan.ipynb
@@ -1780,7 +1780,7 @@
" \n",
" def forward(self, x):\n",
" activated = F.relu(self.linear(x))\n",
-"        generation = F.sigmoid(self.generate(activated))\n",
+"        generation = torch.sigmoid(self.generate(activated))\n",
" return generation\n",
"\n",
"\n",
2 changes: 1 addition & 1 deletion notebooks/04-wasserstein-gan-gradient-penalty.ipynb
@@ -498,7 +498,7 @@
" \n",
" def forward(self, x):\n",
" activated = F.relu(self.linear(x))\n",
-"        generation = F.sigmoid(self.generate(activated))\n",
+"        generation = torch.sigmoid(self.generate(activated))\n",
" return generation\n",
"\n",
"\n",
4 changes: 2 additions & 2 deletions notebooks/05-least-squares-gan.ipynb
@@ -507,7 +507,7 @@
" \n",
" def forward(self, x):\n",
" activated = F.relu(self.linear(x))\n",
-"        generation = F.sigmoid(self.generate(activated))\n",
+"        generation = torch.sigmoid(self.generate(activated))\n",
" return generation\n",
"\n",
"\n",
@@ -523,7 +523,7 @@
" \n",
" def forward(self, x):\n",
" activated = F.relu(self.linear(x))\n",
-"        discrimination = F.sigmoid(self.discriminate(activated))\n",
+"        discrimination = torch.sigmoid(self.discriminate(activated))\n",
" return discrimination\n",
"\n",
"\n",
4 changes: 2 additions & 2 deletions notebooks/06-deep-regret-analytic-gan.ipynb
@@ -1018,7 +1018,7 @@
" \n",
" def forward(self, x):\n",
" activated = F.relu(self.linear(x))\n",
-"        generation = F.sigmoid(self.generate(activated))\n",
+"        generation = torch.sigmoid(self.generate(activated))\n",
" return generation\n",
"\n",
"\n",
@@ -1034,7 +1034,7 @@
" \n",
" def forward(self, x):\n",
" activated = F.relu(self.linear(x))\n",
-"        discrimination = F.sigmoid(self.discriminate(activated))\n",
+"        discrimination = torch.sigmoid(self.discriminate(activated))\n",
" return discrimination\n",
"\n",
"\n",
484 changes: 0 additions & 484 deletions notebooks/08-variational-autoencoder.ipynb

This file was deleted.

246 changes: 0 additions & 246 deletions notebooks/09-standard-autoencoder.ipynb

This file was deleted.

2 changes: 1 addition & 1 deletion requirements.txt
@@ -2,5 +2,5 @@ ipython
jupyter
matplotlib
tqdm
-torch==0.4.0
+torch==0.4.1
torchvision
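The recurring change in this commit swaps `F.sigmoid` for `torch.sigmoid`, since `torch.nn.functional.sigmoid` is deprecated as of PyTorch 0.4.1. A minimal sketch of the generator pattern the notebooks share, updated accordingly; the layer sizes (100 → 256 → 784, i.e. flattened binary MNIST) are illustrative assumptions, not values taken from the diff:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Sketch of the notebooks' generator: linear -> ReLU -> linear -> sigmoid."""
    def __init__(self, latent_dim=100, hidden_dim=256, image_dim=784):
        # Dimensions here are assumptions for illustration only.
        super().__init__()
        self.linear = nn.Linear(latent_dim, hidden_dim)
        self.generate = nn.Linear(hidden_dim, image_dim)

    def forward(self, x):
        activated = F.relu(self.linear(x))
        # torch.sigmoid replaces the F.sigmoid call deprecated in 0.4.1
        generation = torch.sigmoid(self.generate(activated))
        return generation

z = torch.randn(16, 100)          # batch of 16 latent vectors
out = Generator()(z)
print(out.shape)                  # torch.Size([16, 784])
```

`torch.sigmoid` is numerically identical to the old call; the rename only silences the deprecation warning introduced in 0.4.1.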
