
Commit ad73565

committed
Update the 60-minute introduction to PyTorch
1 parent 931f533 commit ad73565

22 files changed, +1719 −5038 lines changed

8.deep-learning/PyTorch_beginner/1.Tensors.ipynb

Lines changed: 527 additions & 0 deletions
Large diffs are not rendered by default.

8.deep-learning/PyTorch_beginner/2.Autograd自动求导.ipynb

Lines changed: 452 additions & 0 deletions
Large diffs are not rendered by default.
Lines changed: 68 additions & 112 deletions
@@ -4,43 +4,41 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "# Getting Started with Deep Learning Tools in 60 Minutes - PyTorch (Part 3: Neural Networks)\n",
-   "**Author**: Soumith Chintala\n",
+   "## Preface\n",
    "\n",
+   "Translated from: [Deep Learning with PyTorch: A 60 Minute Blitz](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html)\n",
    "\n",
-   "Translated from: https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html\n",
-   " \n",
-   "Chinese translation and annotations: Huang Haiguang\n",
+   "Translated by: [Lin Buqing](https://www.zhihu.com/people/lu-guo-92-42-88)\n",
    "\n",
-   "GitHub: https://github.com/fengdu78\n",
+   "Compiled by: the Machine Learning Beginners WeChat account ![gongzhong](images/gongzhong.jpg)\n",
    "\n",
-   "All of the code has been tested and runs.\n",
+   "## Table of Contents\n",
    "\n",
-   "Environment: PyTorch 1.0, Python 3.6\n",
+   "[Getting Started with PyTorch in 60 Minutes (1) - Tensors](https://zhuanlan.zhihu.com/p/347676809)\n",
    "\n",
-   "Machine: GPU: one 1080 Ti; RAM: 32 GB (note: most of the code does not require a GPU)\n",
-   "![gongzhong](images/gongzhong.jpg)\n",
-   "### Table of Contents\n",
-   "* 1. [What is PyTorch?](60分钟入门PyTorch-1.PyTorch是什么?.ipynb)\n",
-   "* 2. [AUTOGRAD](60分钟入门PyTorch-2.AUTOGRAD.ipynb)\n",
-   "* 3. [Neural Networks](60分钟入门PyTorch-3.神经网络.ipynb)\n",
-   "* 4. [Training a Classifier](60分钟入门PyTorch-4.训练一个分类器.ipynb)\n",
-   "* 5. [Data Parallelism](60分钟入门PyTorch-5.数据并行.ipynb)"
+   "[Getting Started with PyTorch in 60 Minutes (2) - Autograd: Automatic Differentiation](https://zhuanlan.zhihu.com/p/347672836)\n",
+   "\n",
+   "[Getting Started with PyTorch in 60 Minutes (3) - Neural Networks](https://zhuanlan.zhihu.com/p/347678492)\n",
+   "\n",
+   "[Getting Started with PyTorch in 60 Minutes (4) - Training a Classifier](https://zhuanlan.zhihu.com/p/347681137)"
  ]
 },
 {
-  "cell_type": "markdown",
+  "cell_type": "code",
+  "execution_count": 1,
   "metadata": {},
+  "outputs": [],
   "source": [
-   "# Part 3: Neural Networks"
+   "%matplotlib inline"
  ]
 },
 {
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-   "Neural networks can be built with the `torch.nn` package.\n",
+   "# Neural Networks\n",
    "\n",
+   "Neural networks can be built with the `torch.nn` package.\n",
    "Now that you know the `autograd` package: `nn` depends on `autograd` to define models and differentiate them. An `nn.Module` contains layers and a `forward(input)` method that returns the `output`."
  ]
 },
@@ -75,17 +73,17 @@
 },
 {
  "cell_type": "code",
-  "execution_count": 1,
+  "execution_count": 2,
  "metadata": {},
  "outputs": [
   {
    "name": "stdout",
    "output_type": "stream",
    "text": [
     "Net(\n",
-     "  (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))\n",
-     "  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))\n",
-     "  (fc1): Linear(in_features=400, out_features=120, bias=True)\n",
+     "  (conv1): Conv2d(1, 6, kernel_size=(3, 3), stride=(1, 1))\n",
+     "  (conv2): Conv2d(6, 16, kernel_size=(3, 3), stride=(1, 1))\n",
+     "  (fc1): Linear(in_features=576, out_features=120, bias=True)\n",
     "  (fc2): Linear(in_features=120, out_features=84, bias=True)\n",
     "  (fc3): Linear(in_features=84, out_features=10, bias=True)\n",
     ")\n"
@@ -97,16 +95,17 @@
  "import torch.nn as nn\n",
  "import torch.nn.functional as F\n",
  "\n",
+  "\n",
  "class Net(nn.Module):\n",
  "\n",
  "    def __init__(self):\n",
  "        super(Net, self).__init__()\n",
-  "        # 1 input image channel, 6 output channels, 5x5 square convolution\n",
+  "        # 1 input image channel, 6 output channels, 3x3 square convolution\n",
  "        # kernel\n",
-  "        self.conv1 = nn.Conv2d(1, 6, 5)\n",
-  "        self.conv2 = nn.Conv2d(6, 16, 5)\n",
+  "        self.conv1 = nn.Conv2d(1, 6, 3)\n",
+  "        self.conv2 = nn.Conv2d(6, 16, 3)\n",
  "        # an affine operation: y = Wx + b\n",
-  "        self.fc1 = nn.Linear(16 * 5 * 5, 120)\n",
+  "        self.fc1 = nn.Linear(16 * 6 * 6, 120)  # 6*6 from image dimension\n",
  "        self.fc2 = nn.Linear(120, 84)\n",
  "        self.fc3 = nn.Linear(84, 10)\n",
  "\n",
@@ -135,9 +134,7 @@
 },
 {
  "cell_type": "markdown",
-  "metadata": {
-   "collapsed": true
-  },
+  "metadata": {},
  "source": [
   "You only need to define the `forward` function; the `backward` function (which computes the gradients) is created for you automatically by `autograd`. You can use any `Tensor` operation in the `forward` function.\n",
   "\n",
@@ -146,42 +143,42 @@
 },
 {
  "cell_type": "code",
-  "execution_count": 2,
+  "execution_count": 3,
  "metadata": {},
  "outputs": [
   {
    "name": "stdout",
    "output_type": "stream",
    "text": [
     "10\n",
-     "torch.Size([6, 1, 5, 5])\n"
+     "torch.Size([6, 1, 3, 3])\n"
    ]
   }
  ],
  "source": [
   "params = list(net.parameters())\n",
   "print(len(params))\n",
-   "print(params[0].size())"
+   "print(params[0].size())  # conv1's .weight"
  ]
 },
 {
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-   "The input and output of `forward` are both `autograd.Variable`. Note: this network (LeNet) expects a 32*32 input. If you use the MNIST dataset to train this network, resize the images to 32*32 first."
+   "Construct a random 32*32 input. Note: this network (LeNet) expects a 32*32 input. If you use the MNIST dataset to train this network, resize the images to 32*32 first."
  ]
 },
 {
  "cell_type": "code",
-  "execution_count": 3,
+  "execution_count": 4,
  "metadata": {},
  "outputs": [
   {
    "name": "stdout",
    "output_type": "stream",
    "text": [
-     "tensor([[-0.1217,  0.0449, -0.0392, -0.1103, -0.0534, -0.1108, -0.0565,  0.0116,\n",
-     "          0.0867,  0.0102]], grad_fn=<AddmmBackward>)\n"
+     "tensor([[-0.0765,  0.0522,  0.0820,  0.0109,  0.0004,  0.0184,  0.1024,  0.0509,\n",
+     "          0.0917, -0.0164]], grad_fn=<AddmmBackward>)\n"
    ]
   }
  ],
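The source of the cell that produced this output is elided by the diff renderer. For context, in the upstream tutorial it constructs the random input and runs the forward pass:

```python
# From the upstream tutorial (the cell source is elided in this diff):
input = torch.randn(1, 1, 32, 32)  # one sample, one channel, 32x32
out = net(input)
print(out)
```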
@@ -193,19 +190,15 @@
 },
 {
  "cell_type": "markdown",
-  "metadata": {
-   "collapsed": true
-  },
+  "metadata": {},
  "source": [
   "Zero the gradient buffers of all parameters, then backpropagate with random gradients."
  ]
 },
 {
  "cell_type": "code",
-  "execution_count": 4,
-  "metadata": {
-   "collapsed": true
-  },
+  "execution_count": 5,
+  "metadata": {},
  "outputs": [],
  "source": [
   "net.zero_grad()\n",
@@ -216,61 +209,49 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-   "* Note\n",
+   "<div class=\"alert alert-info\"><h4>Note</h4>\n",
+   "``torch.nn`` only supports mini-batches. The entire torch.nn package only supports inputs that are mini-batches of samples, not single samples.\n",
+   "For example, ``nn.Conv2d`` takes a 4D tensor, $nSamples \\times nChannels \\times Height \\times Width$ (samples * channels * height * width).\n",
+   "If you have a single sample, just use `input.unsqueeze(0)` to add the extra batch dimension.\n",
+   "Before proceeding, let's review all the classes we have seen so far.\n",
    "\n",
-   "* `torch.nn` only supports mini-batches. The entire `torch.nn` package only supports inputs that are mini-batches of samples, not single samples.\n",
-   "* For example, `nn.Conv2d` takes a 4D tensor, $nSamples \\times nChannels \\times Height \\times Width$ (samples \\* channels \\* height \\* width).\n",
-   "* If you have a single sample, just use `input.unsqueeze(0)` to add the extra batch dimension.\n",
-   "\n",
-   "Before proceeding, let's review all the classes we have seen so far."
-  ]
- },
- {
-  "cell_type": "markdown",
-  "metadata": {},
-  "source": [
   "### Recap\n",
   "\n",
   "* `torch.Tensor` - a multi-dimensional array with support for autograd operations such as `backward()`; it also holds the gradients of the tensor.\n",
   "* `nn.Module` - neural network module; encapsulates parameters, with helpers for moving them to the GPU, exporting, loading, and more.\n",
   "* `nn.Parameter` - a kind of tensor that is automatically registered as a parameter when assigned to a `Module`.\n",
-   "\n",
   "* `autograd.Function` - implements the forward and backward definitions of an autograd operation; every tensor operation creates at least one `Function` node that connects to the functions that created the tensor and encodes its history.\n",
+   "\n",
   "#### At this point, we have covered:\n",
   "\n",
   "* Defining a neural network\n",
   "* Processing inputs and calling `backward`\n",
   "\n",
+   "\n",
   "#### Still left:\n",
   "\n",
   "* Computing the loss\n",
-   "* Updating the weights of the network"
-  ]
- },
- {
-  "cell_type": "markdown",
-  "metadata": {},
-  "source": [
+   "* Updating the weights of the network\n",
+   "\n",
   "### Loss Function\n",
   "A loss function takes an (output, target) pair as input (output is the network's prediction, target is the true value) and computes a value that estimates how far the output is from the target.\n",
   "\n",
-   "There are several different loss functions in the nn package. A simple one is `nn.MSELoss`, which computes the mean squared error between the input and the target.\n",
+   "There are several different [loss functions](https://pytorch.org/docs/nn.html#loss-functions) in the nn package. A simple one is `nn.MSELoss`, which computes the mean squared error between the input and the target.\n",
    "\n",
-   "For example:"
+   "For example:\n",
+   "</div>"
  ]
 },
 {
  "cell_type": "code",
  "execution_count": 6,
-  "metadata": {
-   "scrolled": false
-  },
+  "metadata": {},
  "outputs": [
   {
    "name": "stdout",
    "output_type": "stream",
    "text": [
-     "tensor(0.5663, grad_fn=<MseLossBackward>)\n"
+     "tensor(1.5801, grad_fn=<MseLossBackward>)\n"
    ]
   }
  ],
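The MSELoss cell's source is likewise elided. From the upstream tutorial, it reads:

```python
# From the upstream tutorial (elided from this diff):
output = net(input)
target = torch.randn(10)     # a dummy target, for example
target = target.view(1, -1)  # reshape it to match output's (1, 10) shape
criterion = nn.MSELoss()

loss = criterion(output, target)
print(loss)
```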
@@ -288,24 +269,13 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-   "Now, if you follow `loss` backward using its `.grad_fn` attribute, you will see a computation graph like this:"
-  ]
- },
- {
-  "cell_type": "markdown",
-  "metadata": {},
-  "source": [
-   "input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d\n",
-   "      -> view -> linear -> relu -> linear -> relu -> linear\n",
-   "      -> MSELoss\n",
-   "      -> loss"
-  ]
- },
- {
-  "cell_type": "markdown",
-  "metadata": {},
-  "source": [
-   "So, when you call `loss.backward()`, the whole graph is differentiated with respect to the loss, and all tensors in the graph with `requires_grad = True` will have their `.grad` tensor accumulate the gradient.\n",
+   "Now, if you follow ``loss`` backward using its ``.grad_fn`` attribute, you will see a computation graph like this:\n",
+   "    input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d\n",
+   "          -> view -> linear -> relu -> linear -> relu -> linear\n",
+   "          -> MSELoss\n",
+   "          -> loss\n",
+   "    \n",
+   "So, when you call ``loss.backward()``, the whole graph is differentiated with respect to the loss, and all tensors in the graph with ``requires_grad = True`` will have their ``.grad`` tensor accumulate the gradient.\n",
    "\n",
   "To illustrate, let's follow a few steps backward:"
  ]
@@ -319,16 +289,16 @@
   "name": "stdout",
   "output_type": "stream",
   "text": [
-    "<MseLossBackward object at 0x0000029E54C509B0>\n",
-    "<AddmmBackward object at 0x0000029E54C50898>\n",
-    "<AccumulateGrad object at 0x0000029E54C509B0>\n"
+    "<MseLossBackward object at 0x0000023193A40E08>\n",
+    "<AddmmBackward object at 0x0000023193A40E48>\n",
+    "<AccumulateGrad object at 0x0000023193A40E08>\n"
   ]
  }
 ],
 "source": [
  "print(loss.grad_fn)  # MSELoss\n",
  "print(loss.grad_fn.next_functions[0][0])  # Linear\n",
-  "print(loss.grad_fn.next_functions[0][0].next_functions[0][0])"
+  "print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # ReLU"
 ]
},
{
@@ -353,7 +323,7 @@
  "conv1.bias.grad before backward\n",
  "tensor([0., 0., 0., 0., 0., 0.])\n",
  "conv1.bias.grad after backward\n",
-  "tensor([ 0.0006, -0.0164,  0.0122, -0.0060, -0.0056, -0.0052])\n"
+  "tensor([ 0.0013,  0.0068,  0.0096,  0.0039, -0.0105, -0.0016])\n"
 ]
}
],
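The cell behind this output is not shown either. In the upstream tutorial it zeroes the buffers, inspects `conv1.bias.grad`, calls `loss.backward()`, and inspects again:

```python
# From the upstream tutorial (elided from this diff):
net.zero_grad()  # zero the gradient buffers of all parameters

print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)

loss.backward()

print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
```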
@@ -373,6 +343,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
+  "Now we know how to use the loss function.\n",
  "#### Read later:\n",
  "\n",
  "The neural network package contains various modules and loss functions that form the building blocks of deep neural networks. A complete list with documentation is [here](https://pytorch.org/docs/nn).\n",
@@ -396,9 +367,7 @@
 {
  "cell_type": "code",
  "execution_count": 9,
-  "metadata": {
-   "collapsed": true
-  },
+  "metadata": {},
  "outputs": [],
  "source": [
   "learning_rate = 0.01\n",
@@ -410,15 +379,13 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-  "However, when you use neural networks you will want to use various different update rules, such as `SGD`, `Nesterov-SGD`, `Adam`, `RMSProp`, etc. To enable this, we built the `torch.optim` package, which implements all of these rules. Using it is very simple:"
+  "However, when you use neural networks you will want to use various different update rules, such as ``SGD``, ``Nesterov-SGD``, ``Adam``, ``RMSProp``, etc. To enable this, we built the ``torch.optim`` package, which implements all of these rules. Using it is very simple."
 ]
},
{
 "cell_type": "code",
 "execution_count": 10,
-  "metadata": {
-   "collapsed": true
-  },
+  "metadata": {},
 "outputs": [],
 "source": [
  "import torch.optim as optim\n",
@@ -442,17 +409,6 @@
  "\n",
  "Observe how the gradient buffers had to be manually set to zero with `optimizer.zero_grad()`. This is because gradients are accumulated, as explained in the backpropagation section."
 ]
- },
- {
-  "cell_type": "markdown",
-  "metadata": {
-   "collapsed": true
-  },
-  "source": [
-   "Official code for this chapter:\n",
-   "* Python: [neural_networks_tutorial.py](download/neural_networks_tutorial.py)\n",
-   "* Jupyter notebook: [neural_networks_tutorial.ipynb](download/neural_networks_tutorial.ipynb)"
-  ]
 }
],
"metadata": {
@@ -471,7 +427,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-  "version": "3.6.2"
+  "version": "3.7.6"
 }
},
"nbformat": 4,
