# fendouai/pytorch1.0-cn

A Chinese translation of the official PyTorch 1.0 documentation. Follow the WeChat official account: 磐创AI

## pytorch1.0-cn

A Chinese translation of the official PyTorch 1.0 documentation.

## PytorchChina:

http://pytorchchina.com

# What is PyTorch?

PyTorch is a Python-based scientific computing package aimed at two audiences:
• A replacement for NumPy that can use the power of GPUs
• A deep learning research platform that provides flexibility and speed

## Getting Started

### Tensors

Tensors are similar to NumPy's ndarrays, with the addition that Tensors can also be used on a GPU to accelerate computing.
```
from __future__ import print_function
import torch
```

Construct an uninitialized 5x3 matrix:

```
x = torch.empty(5, 3)
print(x)
```

Out:

```
tensor(1.00000e-04 *
       [[-0.0000,  0.0000,  1.5135],
        [ 0.0000,  0.0000,  0.0000],
        [ 0.0000,  0.0000,  0.0000],
        [ 0.0000,  0.0000,  0.0000],
        [ 0.0000,  0.0000,  0.0000]])
```

Construct a randomly initialized matrix:

```
x = torch.rand(5, 3)
print(x)
```

Out:

```
tensor([[ 0.6291,  0.2581,  0.6414],
        [ 0.9739,  0.8243,  0.2276],
        [ 0.4184,  0.1815,  0.5131],
        [ 0.5533,  0.5440,  0.0718],
        [ 0.2908,  0.1850,  0.5297]])
```

Construct a matrix filled with zeros, of dtype long:

```
x = torch.zeros(5, 3, dtype=torch.long)
print(x)
```

Out:

```
tensor([[ 0,  0,  0],
        [ 0,  0,  0],
        [ 0,  0,  0],
        [ 0,  0,  0],
        [ 0,  0,  0]])
```

Construct a tensor directly from data:

```
x = torch.tensor([5.5, 3])
print(x)
```

Out:

```
tensor([ 5.5000,  3.0000])
```

Create a tensor based on an existing tensor. These methods reuse properties of the input tensor, e.g. dtype, unless new values are provided:

```
x = x.new_ones(5, 3, dtype=torch.double)    # new_* methods take in sizes
print(x)

x = torch.randn_like(x, dtype=torch.float)  # override dtype!
print(x)                                    # result has the same size
```

Out:

```
tensor([[ 1.,  1.,  1.],
        [ 1.,  1.,  1.],
        [ 1.,  1.,  1.],
        [ 1.,  1.,  1.],
        [ 1.,  1.,  1.]], dtype=torch.float64)
tensor([[-0.2183,  0.4477, -0.4053],
        [ 1.7353, -0.0048,  1.2177],
        [-1.1111,  1.0878,  0.9722],
        [-0.7771, -0.2174,  0.0412],
        [-2.1750,  1.3609, -0.3322]])
```

Get its size:

```
print(x.size())
```

Out:

```
torch.Size([5, 3])
```

`torch.Size` is in fact a tuple, so it supports all tuple operations.
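Since `torch.Size` is a tuple subclass, ordinary tuple operations such as unpacking and concatenation work on it:

```
import torch

x = torch.zeros(5, 3)
size = x.size()          # torch.Size([5, 3]), a subclass of tuple
rows, cols = size        # tuple unpacking works
combined = size + (1,)   # tuple concatenation works too -> (5, 3, 1)
```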

### Operations

Addition, syntax 1:

```
y = torch.rand(5, 3)
print(x + y)
```

Out:

```
tensor([[-0.1859,  1.3970,  0.5236],
        [ 2.3854,  0.0707,  2.1970],
        [-0.3587,  1.2359,  1.8951],
        [-0.1189, -0.1376,  0.4647],
        [-1.8968,  2.0164,  0.1092]])
```

Addition, syntax 2:

```
print(torch.add(x, y))
```

Out:

```
tensor([[-0.1859,  1.3970,  0.5236],
        [ 2.3854,  0.0707,  2.1970],
        [-0.3587,  1.2359,  1.8951],
        [-0.1189, -0.1376,  0.4647],
        [-1.8968,  2.0164,  0.1092]])
```

Addition, providing an output tensor as an argument:

```
result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)
```

Out:

```
tensor([[-0.1859,  1.3970,  0.5236],
        [ 2.3854,  0.0707,  2.1970],
        [-0.3587,  1.2359,  1.8951],
        [-0.1189, -0.1376,  0.4647],
        [-1.8968,  2.0164,  0.1092]])
```

Addition, in place:

```
# adds x to y
y.add_(x)
print(y)
```

Out:

```
tensor([[-0.1859,  1.3970,  0.5236],
        [ 2.3854,  0.0707,  2.1970],
        [-0.3587,  1.2359,  1.8951],
        [-0.1189, -0.1376,  0.4647],
        [-1.8968,  2.0164,  0.1092]])
```

Note: any operation that mutates a tensor in place is post-fixed with an `_`. For example, `x.copy_(y)` and `x.t_()` will change `x`.

You can use standard NumPy-like indexing:

```
print(x[:, 1])
```

Out:

```
tensor([ 0.4477, -0.0048,  1.0878, -0.2174,  1.3609])
```

Resizing: if you want to resize/reshape a tensor, use `torch.view`:

```
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8)  # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
```

Out:

```
torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
```

If you have a one-element tensor, use `.item()` to get its value as a Python number:

```
x = torch.randn(1)
print(x)
print(x.item())
```

Out:

```
tensor([ 0.9422])
0.9422121644020081
```
PyTorch Windows installation guide (install PyTorch in two lines of code): http://pytorchchina.com/2018/12/11/pytorch-windows-install-1/

PyTorch Mac installation guide: http://pytorchchina.com/2018/12/11/pytorch-mac-install/

PyTorch Linux installation guide: http://pytorchchina.com/2018/12/11/pytorch-linux-install/

PyTorch QQ group

## PyTorch Autograd [2]

### Tensor

```
import torch
```

Create a tensor and set `requires_grad=True` to track computation on it:

```
x = torch.ones(2, 2, requires_grad=True)
print(x)
```

Out:

```
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
```

Do a tensor operation:

```
y = x + 2
print(y)
```

Out:

```
tensor([[3., 3.],
        [3., 3.]], grad_fn=<AddBackward0>)
```

`y` was created as the result of an operation, so it has a `grad_fn`:

```
print(y.grad_fn)
```

Out:

```
<AddBackward0 object at 0x7fe1db427470>
```

Do more operations on `y`:

```
z = y * y * 3
out = z.mean()
print(z, out)
```

Out:

```
tensor([[27., 27.],
        [27., 27.]], grad_fn=<MulBackward0>) tensor(27., grad_fn=<MeanBackward0>)
```

`.requires_grad_( ... )` changes an existing tensor's `requires_grad` flag in place. The flag defaults to `False` if not given.

```
a = torch.randn(2, 2)
a = ((a * 3) / (a - 1))
print(a.requires_grad)
a.requires_grad_(True)
print(a.requires_grad)
b = (a * a).sum()
print(b.grad_fn)
```

Out:

```
False
True
<SumBackward0 object at 0x7fe1db427dd8>
```

Let's backprop now. Because `out` contains a single scalar, `out.backward()` is equivalent to `out.backward(torch.tensor(1.))`:

```
out.backward()
```

Print the gradients d(out)/dx:

```
print(x.grad)
```

Out:

```
tensor([[4.5000, 4.5000],
        [4.5000, 4.5000]])
```

```
x = torch.randn(3, requires_grad=True)
y = x * 2
while y.data.norm() < 1000:
    y = y * 2
print(y)
```

Out:

```
tensor([ -444.6791,   762.9810, -1690.0941], grad_fn=<MulBackward0>)
```

```
v = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(v)
print(x.grad)
```

Out:

```
tensor([1.0240e+02, 1.0240e+03, 1.0240e-01])
```

You can stop autograd from tracking history on tensors with `requires_grad=True` by wrapping the code block in `with torch.no_grad():`

```
print(x.requires_grad)
print((x ** 2).requires_grad)

with torch.no_grad():
    print((x ** 2).requires_grad)
```

Out:

```
True
True
False
```

Documentation for `autograd` and `Function` is at: https://pytorch.org/docs/autograd

## PyTorch Neural Networks [3]

A typical training procedure for a neural network is as follows:

1. Define a neural network that has some learnable parameters
2. Iterate over a dataset of inputs
3. Process the input through the network
4. Compute the loss
5. Propagate gradients back into the network's parameters
6. Update the weights of the network, typically using a simple update rule: weight = weight - learning_rate * gradient
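The update rule in step 6 can be sketched with plain tensors before we see the full training loop; the numbers below are made up purely for illustration:

```
import torch

# Hypothetical weight and gradient values, just to show the arithmetic of
# weight = weight - learning_rate * gradient.
weight = torch.tensor([1.0, -2.0])
gradient = torch.tensor([0.5, 0.5])
learning_rate = 0.1

weight = weight - learning_rate * gradient  # now [0.95, -2.05]
```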

```
import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features


net = Net()
print(net)
```

```
Net(
  (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
```

```
params = list(net.parameters())
print(len(params))
print(params[0].size())  # conv1's .weight
```

Out:

```
10
torch.Size([6, 1, 5, 5])
```

```
input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
```

Out:

```
tensor([[-0.0233,  0.0159, -0.0249,  0.1413,  0.0663,  0.0297, -0.0940, -0.0135,
          ...]])
```

Zero the gradient buffers of all parameters and backprop with random gradients:

```
net.zero_grad()
out.backward(torch.randn(1, 10))
```

Recap:

• torch.Tensor - a multi-dimensional array with support for autograd operations like `backward()`; also holds the gradient w.r.t. the tensor.
• nn.Module - neural network module; a convenient way of encapsulating parameters, with helpers for moving them to the GPU, exporting, loading, etc.
• nn.Parameter - a kind of Tensor that is automatically registered as a parameter when assigned as an attribute to a Module.
• autograd.Function - implements forward and backward definitions of an autograd operation. Every Tensor operation creates at least a single Function node that connects to the functions that created a Tensor and encodes its history.
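The nn.Parameter point above is worth a tiny sketch: assigning an `nn.Parameter` as an attribute of a `Module` registers it automatically, while a plain tensor attribute is not registered. The `Tiny` module here is a made-up example, not part of the tutorial:

```
import torch
import torch.nn as nn


class Tiny(nn.Module):
    def __init__(self):
        super(Tiny, self).__init__()
        # nn.Parameter assigned as an attribute -> registered automatically
        self.w = nn.Parameter(torch.zeros(3))
        # plain tensor attribute -> NOT registered as a parameter
        self.not_a_param = torch.zeros(3)


tiny = Tiny()
names = [name for name, _ in tiny.named_parameters()]
print(names)  # ['w']
```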

At this point, we have covered:

1. Defining a neural network
2. Processing inputs and calling backward

Still left:

1. Computing the loss
2. Updating the weights of the network

```
output = net(input)
target = torch.randn(10)  # a dummy target, for example
target = target.view(1, -1)  # make it the same shape as output
criterion = nn.MSELoss()

loss = criterion(output, target)
print(loss)
```

Out:

```
tensor(1.3389, grad_fn=<MseLossBackward>)
```

```
input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
      -> view -> linear -> relu -> linear -> relu -> linear
      -> MSELoss
      -> loss
```

```
print(loss.grad_fn)  # MSELoss
```

Out:

```
<MseLossBackward object at 0x7fab77615278>
```

```
net.zero_grad()     # zeroes the gradient buffers of all parameters

print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)

loss.backward()

print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
```

Out:

```
conv1.bias.grad before backward
tensor([0., 0., 0., 0., 0., 0.])
conv1.bias.grad after backward
tensor([-0.0054,  0.0011,  0.0012,  0.0148, -0.0186,  0.0087])
```

The simplest update rule is Stochastic Gradient Descent (SGD):

`weight = weight - learning_rate * gradient`

We can implement it in plain Python:

```
learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)
```

```
import torch.optim as optim

# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)

# in your training loop:
optimizer.zero_grad()   # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()    # Does the update
```

neural_networks_tutorial.py

neural_networks_tutorial.ipynb

## PyTorch Image Classifier [4]

### Now you might be wondering: how do we handle data?

• For images, packages such as Pillow and OpenCV are useful
• For audio, packages such as scipy and librosa
• For text, raw Python or Cython based loading, or NLTK and SpaCy
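Whichever package decodes the raw data, a common last step is turning a NumPy array into a tensor. A minimal sketch, where the zero array is a stand-in for a decoded image:

```
import numpy as np
import torch

arr = np.zeros((32, 32, 3), dtype=np.float32)  # stand-in for a decoded RGB image
t = torch.from_numpy(arr)                      # shares memory with the array
print(t.shape)  # torch.Size([32, 32, 3])
```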

### Training an image classifier

We will do the following steps, in order:

1. Load and normalize the CIFAR10 training and test datasets using torchvision
2. Define a convolutional neural network
3. Define a loss function
4. Train the network on the training data
5. Test the network on the test data

```
import torch
import torchvision
import torchvision.transforms as transforms
```

The output of torchvision datasets are PILImages in the range [0, 1]. We transform them to Tensors normalized to the range [-1, 1].
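The `Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))` transform used below is plain per-channel arithmetic; a quick sketch of how it maps [0, 1] onto [-1, 1]:

```
import torch

img = torch.rand(3, 4, 4)        # a ToTensor()-style image with values in [0, 1]
normalized = (img - 0.5) / 0.5   # what Normalize((0.5,)*3, (0.5,)*3) computes
# normalized now lies in [-1, 1]
```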
```
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```

Out:

```
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
```

```
import matplotlib.pyplot as plt
import numpy as np

# functions to show an image


def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()


# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
```

Out:

```
cat plane  ship  frog
```

```
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


net = Net()
```

```
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
```

```
for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
```

Out:

```
[1,  2000] loss: 2.187
[1,  4000] loss: 1.852
[1,  6000] loss: 1.672
[1,  8000] loss: 1.566
[1, 10000] loss: 1.490
[1, 12000] loss: 1.461
[2,  2000] loss: 1.389
[2,  4000] loss: 1.364
[2,  6000] loss: 1.343
[2,  8000] loss: 1.318
[2, 10000] loss: 1.282
[2, 12000] loss: 1.286
Finished Training
```

Let's check a few test images against the ground truth:

```
dataiter = iter(testloader)
images, labels = dataiter.next()

# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
```

Out:

```
GroundTruth:    cat  ship  ship plane
```

Now let's see what the network thinks these examples are:

```
outputs = net(images)
```

```
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
                              for j in range(4)))
```

Out:

```
Predicted:    cat  ship   car  ship
```

Let's look at how the network performs on the whole test dataset:

```
correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))
```

Out:

```
Accuracy of the network on the 10000 test images: 54 %
```

Which classes performed well, and which did not:

```
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1

for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))
```

Out:

```
Accuracy of plane : 57 %
Accuracy of   car : 73 %
Accuracy of  bird : 49 %
Accuracy of   cat : 54 %
Accuracy of  deer : 18 %
Accuracy of   dog : 20 %
Accuracy of  frog : 58 %
Accuracy of horse : 74 %
Accuracy of  ship : 70 %
Accuracy of truck : 66 %
```

You can move the network onto the GPU:

```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Assume that we are on a CUDA machine, then this should print a CUDA device:
print(device)
```

Out:

```
cuda:0
```

This method recursively converts all module parameters and buffers to CUDA tensors:

```
net.to(device)
```

Remember to send the inputs and targets to the GPU at every step as well:

```
inputs, labels = inputs.to(device), labels.to(device)
```

• Understood PyTorch's tensors and neural networks in depth
• Trained a small neural network to classify images

http://pytorchchina.com/2018/12/11/optional-data-parallelism/

cifar10_tutorial.py

cifar10_tutorial.ipynb

## PyTorch Data Parallelism [5]

Put the model on a GPU:

```
device = torch.device("cuda:0")
model.to(device)
```

Then copy your tensors to the GPU:

```
mytensor = my_tensor.to(device)
```

DataParallel runs the model in parallel across multiple GPUs:

```
model = nn.DataParallel(model)
```

Imports:

```
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
```

### Parameters

```
input_size = 5
output_size = 2

batch_size = 30
data_size = 100
```

```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
```

Dummy dataset; you only need to implement `__getitem__`:

```
class RandomDataset(Dataset):

    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


rand_loader = DataLoader(dataset=RandomDataset(input_size, data_size),
                         batch_size=batch_size, shuffle=True)
```

```
class Model(nn.Module):
    # Our model

    def __init__(self, input_size, output_size):
        super(Model, self).__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, input):
        output = self.fc(input)
        print("\tIn Model: input size", input.size(),
              "output size", output.size())

        return output
```

```
model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
    model = nn.DataParallel(model)

model.to(device)
```

Out:

```
Let's use 2 GPUs!
```

Run the model:

```
for data in rand_loader:
    input = data.to(device)
    output = model(input)
    print("Outside: input size", input.size(),
          "output_size", output.size())
```

```
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])
In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])
```

```
# on 2 GPUs
Let's use 2 GPUs!
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])
In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])
```

```
Let's use 3 GPUs!
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])
```

```
Let's use 8 GPUs!
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])
```

## Summary

DataParallel splits your data automatically and dispatches the jobs to multiple models on several GPUs. After each model finishes its job, DataParallel collects and merges the results before returning them to you.

data_parallel_tutorial.py

data_parallel_tutorial.ipynb

# PytorchChina:

http://pytorchchina.com
