PyTorch now requires float Tensor for gradient computation
Atcold committed Nov 4, 2018
1 parent 659a4ae commit 50a4b46
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions 03-autograd_tutorial.ipynb
@@ -221,7 +221,7 @@
"metadata": {},
"outputs": [],
"source": [
-"x = torch.arange(1, n + 1, requires_grad=True)\n",
+"x = torch.arange(1., n + 1, requires_grad=True)\n",
"w = torch.ones(n, requires_grad=True)\n",
"z = w @ x\n",
"z.backward()\n",
@@ -234,7 +234,7 @@
"metadata": {},
"outputs": [],
"source": [
-"x = torch.arange(1, n + 1)\n",
+"x = torch.arange(1., n + 1)\n",
"w = torch.ones(n, requires_grad=True)\n",
"z = w @ x\n",
"z.backward()\n",
@@ -248,7 +248,7 @@
"outputs": [],
"source": [
"with torch.no_grad():\n",
-"    x = torch.arange(1, n + 1)\n",
+"    x = torch.arange(1., n + 1)\n",
" w = torch.ones(n, requires_grad=True)\n",
" z = w @ x\n",
" z.backward()\n",
@@ -283,7 +283,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-   "version": "3.6.5"
+   "version": "3.6.6"
}
},
"nbformat": 4,
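The substantive change in each hunk is switching `torch.arange(1, n + 1)` to `torch.arange(1., n + 1)`: with an integer start and end, `arange` returns an `int64` tensor, and `requires_grad=True` is only supported for floating-point (and complex) tensors. A float start makes `arange` infer a float dtype, so gradients can flow. A minimal sketch of the failure and the fix (`n = 3` is an arbitrary illustrative size):

```python
import torch

n = 3

# Integer tensors cannot track gradients: arange with int bounds yields int64,
# so asking for requires_grad=True raises a RuntimeError.
try:
    torch.arange(1, n + 1, requires_grad=True)
except RuntimeError as e:
    print("refused:", e)

# A float start (1.) makes arange produce a float tensor, which works.
x = torch.arange(1., n + 1, requires_grad=True)
w = torch.ones(n, requires_grad=True)
z = w @ x      # z = sum_i w_i * x_i
z.backward()
print(x.grad)  # dz/dx_i = w_i = 1  ->  tensor([1., 1., 1.])
```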
