# MNIST Digit Classification with MLP

## Parameter Choices

For ease of comparison, unless otherwise noted this report uses the following configuration:

```python
config = {
    'learning_rate': 0.1,
    'weight_decay': 0,
    'momentum': 0,
    'batch_size': 100,
    'max_epoch': 100,
    'disp_freq': 300,
    'test_epoch': 1
}
```

The `learning_rate` is relatively large for two reasons. First, the linear layer's backward pass averages gradients over the batch instead of summing them, which introduces a constant factor of `1/batch_size`. Second, testing showed that this `learning_rate` converges quickly, with no noticeable difference in the final fit compared to smaller values.

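A minimal numpy sketch of a batch-averaged linear backward pass, showing where the `1/batch_size` factor enters. The function and variable names here are hypothetical, not the assignment's actual code:

```python
import numpy as np

def linear_backward(x, grad_output, W):
    """Backward pass of a fully connected layer y = x @ W.

    Averaging over the batch (instead of summing) scales the weight
    gradient by 1/batch_size, so a nominally large learning rate such
    as 0.1 behaves like a much smaller one per sample.
    """
    batch_size = x.shape[0]
    grad_W = x.T @ grad_output / batch_size  # averaged, not summed
    grad_x = grad_output @ W.T               # gradient w.r.t. the layer input
    return grad_W, grad_x

# shapes as in this report: batch of 100, 784-pixel images, 10 classes
x = np.random.randn(100, 784)
g = np.random.randn(100, 10)
W = np.random.randn(784, 10)
grad_W, grad_x = linear_backward(x, g, W)
```

With a summed gradient, the same update would be 100 times larger, which is why a summing implementation typically pairs with a learning rate around `0.001`.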
The `margin` for the Hinge Loss is set to `0.5`; the default of `5` initially caused some problems.

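A sketch of the kind of multiclass hinge loss described, with the `margin` parameter exposed. Names are hypothetical, and this assumes the common `max(0, margin - s_y + s_j)` formulation rather than the assignment's exact code:

```python
import numpy as np

def hinge_loss(scores, labels, margin=0.5):
    """Multiclass hinge loss: sum over j != y of max(0, margin - s_y + s_j),
    averaged over the batch. A margin of 5 demands a far larger score gap
    than 0.5, which can stall early training."""
    n = scores.shape[0]
    correct = scores[np.arange(n), labels][:, None]   # score of the true class
    margins = np.maximum(0.0, margin - correct + scores)
    margins[np.arange(n), labels] = 0.0               # exclude the true class
    return margins.sum(axis=1).mean()

scores = np.array([[2.0, 0.5, -1.0],
                   [0.0, 1.0,  0.2]])
labels = np.array([0, 1])
loss_small = hinge_loss(scores, labels, margin=0.5)  # every gap already exceeds 0.5
loss_large = hinge_loss(scores, labels, margin=5.0)  # the same scores are heavily penalized
```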
The hidden layer width is set to `100`; the two-hidden-layer network adds another layer of width `256` in front of it.

`run_mlp.py` was extended to make batched training runs easier:

```bash
> python3 run_mlp.py -h
usage: run_mlp.py [-h] [--layer LAYER] [--batch BATCH] [--epoch EPOCH] [--activation ACTIVATION]
                  [--loss LOSS]

optional arguments:
  -h, --help            show this help message and exit
  --layer LAYER         count of hidden layers
  --batch BATCH         batch size
  --epoch EPOCH         count of epochs
  --activation ACTIVATION
                        (R)Relu/(S)Sigmoid/(G)Gelu
  --loss LOSS           E(EuclideanLoss)/S(SoftmaxCrossEntropyLoss)/H(HingeLoss)
```

## Single Hidden Layer MLP

### Training

```bash
> cat train.sh
python3 run_mlp.py --activation G --loss E
python3 run_mlp.py --activation G --loss S
python3 run_mlp.py --activation G --loss H
python3 run_mlp.py --activation R --loss E
python3 run_mlp.py --activation R --loss S
python3 run_mlp.py --activation R --loss H
python3 run_mlp.py --activation S --loss E
python3 run_mlp.py --activation S --loss S
python3 run_mlp.py --activation S --loss H
> sh train.sh
```

The network architecture is:

```
Image(784)
Linear(784->100)
Activation
Linear(100->10)
Loss
```

### Data

#### Comparison Tables

##### Training accuracy

| Activation \ Loss | EuclideanLoss | **SoftmaxCrossEntropyLoss** | HingeLoss  |
| ----------------- | ------------- | --------------------------- | ---------- |
| Relu              | 0.9836        | **0.9999**                  | 0.9636     |
| Sigmoid           | 0.9565        | 0.9888                      | 0.9936     |
| Gelu              | 0.9801        | **0.9998**                  | **0.9979** |

##### Training loss

| Activation \ Loss | EuclideanLoss | **SoftmaxCrossEntropyLoss** | HingeLoss |
| ----------------- | ------------- | --------------------------- | --------- |
| Relu              | 0.0325        | 0.0037                      | 0.1827    |
| Sigmoid           | 0.0518        | 0.0448                      | 0.0441    |
| Gelu              | 0.0372        | 0.0045                      | 0.1888    |

##### Test accuracy

| Activation \ Loss | EuclideanLoss | **SoftmaxCrossEntropyLoss** | HingeLoss |
| ----------------- | ------------- | --------------------------- | --------- |
| Relu              | 0.9727        | **0.9780**                  | 0.9533    |
| Sigmoid           | 0.9547        | 0.9749                      | 0.9731    |
| Gelu              | 0.9726        | **0.9770**                  | 0.9715    |

##### Test loss

| Activation \ Loss | EuclideanLoss | **SoftmaxCrossEntropyLoss** | HingeLoss |
| ----------------- | ------------- | --------------------------- | --------- |
| Relu              | 0.0443        | 0.0802                      | 0.2231    |
| Sigmoid           | 0.0537        | 0.0787                      | 0.0651    |
| Gelu              | 0.0432        | 0.0923                      | 0.2250    |

#### Training Curves

##### fc_gelu_fc_Euclidean

![fc_gelu_fc_Euclidean](https://i.loli.net/2021/10/04/sopf97UC8hMkEBa.png)

##### fc_gelu_fc_Hinge

![fc_gelu_fc_Hinge](https://i.loli.net/2021/10/04/Vmlsv8tUoq9g1Gc.png)

##### fc_gelu_fc_SoftmaxCrossEntropy

![fc_gelu_fc_SoftmaxCrossEntropy](https://i.loli.net/2021/10/04/R2pCxZhDMKQaiOA.png)

##### fc_relu_fc_Euclidean

![fc_relu_fc_Euclidean](https://i.loli.net/2021/10/04/8MX2yLafoiVrsnk.png)

##### fc_relu_fc_Hinge

![fc_relu_fc_Hinge](https://i.loli.net/2021/10/04/42y5wIGO7DrUKpd.png)

##### fc_relu_fc_SoftmaxCrossEntropy

![fc_relu_fc_SoftmaxCrossEntropy](https://i.loli.net/2021/10/04/CkJKdymoOlEbsYg.png)

##### fc_sigmoid_fc_Euclidean

![fc_sigmoid_fc_Euclidean](https://i.loli.net/2021/10/04/svEicpb9LHDPqSe.png)

##### fc_sigmoid_fc_Hinge

![fc_sigmoid_fc_Hinge](https://i.loli.net/2021/10/04/LrkhlWMXHeIyDg1.png)

##### fc_sigmoid_fc_SoftmaxCrossEntropy

![fc_sigmoid_fc_SoftmaxCrossEntropy](https://i.loli.net/2021/10/04/khnpSutHLKbPB5M.png)

### Analysis

#### Computational Cost

From the mathematical form of each function and the observed running times, the per-step computation time ranks as:

- Activations: $Gelu > Sigmoid > Relu$
- Losses: $Hinge Loss \ge Softmax CrossEntropy > Euclidean Loss$

These speed differences are only constant factors.

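The constant factors come from the per-element work each activation does: Relu is a single comparison, Sigmoid adds an exponential, and Gelu the most arithmetic of the three. A numpy sketch (the Gelu here uses the common tanh approximation, which may differ from the assignment's implementation):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)          # one comparison per element

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))    # one exp per element

def gelu(x):
    # tanh approximation of x * Phi(x): a tanh plus several multiplies,
    # the most expensive of the three
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

x = np.linspace(-3.0, 3.0, 7)
y_relu, y_sigmoid, y_gelu = relu(x), sigmoid(x), gelu(x)
```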
#### Classification Performance

Based on the final accuracy and loss figures and the curves, the classification performance on MNIST Digit Classification ranks as:

- Activations: $Gelu > Relu > Sigmoid$
- Losses: $Softmax CrossEntropy \approx Hinge Loss > Euclidean Loss$

Overall, every combination reaches a respectable score with a single hidden layer.

No obvious overfitting is observed.

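Of the losses compared above, SoftmaxCrossEntropy performs best. A numerically stable sketch of it (hypothetical names, not necessarily the assignment's actual implementation):

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """Numerically stable softmax cross-entropy, averaged over the batch."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # avoid overflow in exp
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    n = logits.shape[0]
    return -log_probs[np.arange(n), labels].mean()

# sanity check: uniform scores over 10 classes give a loss of ln(10)
logits = np.zeros((4, 10))
labels = np.zeros(4, dtype=int)
loss = softmax_cross_entropy(logits, labels)
```

Subtracting the row maximum before exponentiating keeps `exp` from overflowing on large logits without changing the result.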
#### Convergence Speed

That is, the number of epochs needed to reach a stable value. Because each function's implementation differs, the effective `learning_rate` is not the same across runs, so I do not think the functions can be fairly compared on this point.

#### Oscillation

The oscillation some curves show after convergence is probably also caused by an effectively large `learning_rate`.

In addition, `Gelu` and `Hinge` are relatively sensitive to small changes, which is likely why `fc_gelu_fc_Hinge` oscillates the most.

## Two Hidden Layer MLP

The best-performing single-hidden-layer combination, `Relu` with `SoftmaxCrossEntropy`, is used for the two-hidden-layer test.

```bash
python3 run_mlp.py --activation R --loss S --layer 2
```

The network architecture is:

```
Image(784)
Linear(784->256)
Relu
Linear(256->100)
Relu
Linear(100->10)
SoftmaxCrossEntropy
```

### Data

#### Results

```
fc_relu_fc_relu_fc_SoftmaxCrossEntropy
training_acc:1.0
training_loss:0.00038202421591634135
testing_acc:0.9798999999999998
testing_loss:0.09982166265407472
```

#### Training Curves

##### fc_relu_fc_relu_fc_SoftmaxCrossEntropy

![fc_relu_fc_relu_fc_SoftmaxCrossEntropy](https://i.loli.net/2021/10/04/L8fBeIETkanYAqS.png)

### Analysis

The second hidden layer brings no significant improvement (test accuracy rises by 0.19% over the single-hidden-layer network, likely because the single-layer accuracy is already close to saturation), and there is still no overfitting. Training and convergence are noticeably slower, however, so a single hidden layer suits this task better.

## Summary

This assignment required deriving the backpropagation formulas by hand, applying them correctly, and building a neural network directly from them, which deepened my understanding of fully connected layers and of the various activation and loss functions.

Comparing hyperparameters and hidden-layer counts also gave me an initial feel for the problem of choosing an appropriate network architecture.
# Cifar-10 Classification with MLP and CNN

## self.training

`self.training` is a flag used inside `nn.Module`. At test time the Dropout layer should be disabled, so that no learned information is discarded; meanwhile the BN layer should use the statistics accumulated with momentum during training, which both exploits the training information and keeps the test set from influencing training.

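A minimal PyTorch illustration of how `self.training` changes Dropout's behavior; `model.train()` and `model.eval()` flip the flag recursively on every submodule:

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)

drop.train()   # self.training == True: units are zeroed at random, survivors scaled by 2
drop.eval()    # self.training == False: Dropout becomes the identity

x = torch.ones(2, 4)
y = drop(x)    # eval mode: deterministic, y equals x
```

BatchNorm layers key off the same flag, switching between batch statistics (train) and the stored running statistics (eval).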
## Experimental Setup

The BN layers use `weight=1, bias=0, momentum=1e-2, eps=1e-5`; all other parameters keep their defaults (quick checks showed the defaults already work well — raising `drop_posibility`, for example, hurt the results). The model definitions and results follow; every run uses `python3 main.py --num_epochs 50`. The MLP reaches 54.08% validation accuracy, the CNN 67.15%.

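The `momentum=1e-2` setting above follows the PyTorch convention for updating the running statistics that BN uses at test time. A numpy sketch of that update (hypothetical names, not the framework's actual code):

```python
import numpy as np

def update_running_stats(running_mean, running_var, batch, momentum=1e-2):
    # PyTorch convention: new = (1 - momentum) * old + momentum * batch statistic,
    # so momentum=1e-2 moves the running estimate 1% of the way each batch
    batch_mean = batch.mean(axis=0)
    batch_var = batch.var(axis=0)
    running_mean = (1 - momentum) * running_mean + momentum * batch_mean
    running_var = (1 - momentum) * running_var + momentum * batch_var
    return running_mean, running_var

# a batch of constant 2.0 pulls the running mean toward 2 and the variance toward 0
rm, rv = update_running_stats(np.zeros(3), np.ones(3), np.full((10, 3), 2.0), momentum=0.5)
```

A smaller momentum gives a smoother, slower-moving estimate, which is why `1e-2` needs many batches before the running statistics settle.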
### MLP

```
Model(
(linear1): Linear(in_features=3072, out_features=512, bias=True)
(bn): BatchNorm1d()
(relu): ReLU()
(dropout): Dropout()
(linear2): Linear(in_features=512, out_features=10, bias=True)
(loss): CrossEntropyLoss()
)
Epoch 50 of 50 took 1.428589105606079s
learning rate: 0.001
training loss: 1.0290685239434243
training accuracy: 0.639924985691905
validation loss: 2.0886429595947265
validation accuracy: 0.5205999863147736
best epoch: 49
best validation accuracy: 0.5408999845385551
test loss: 1.8274000895023346
test accuracy: 0.5300999864935875
```

![result12](https://i.loli.net/2021/10/19/6Cgzc8hEfw5Vdrs.png)

### CNN

```
Model(
(conv1): Conv2d(3, 8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn1): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu1): ReLU()
(dropout1): Dropout()
(maxpool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv2): Conv2d(8, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu2): ReLU()
(dropout2): Dropout()
(maxpool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(linear): Linear(in_features=1024, out_features=10, bias=True)
(loss): CrossEntropyLoss()
)
Epoch 50 of 50 took 1.792809009552002s
learning rate: 0.0009990002500000002
training loss: 0.9676271015405655
training accuracy: 0.6617249843478202
validation loss: 1.1218822890520095
validation accuracy: 0.6525999861955643
best epoch: 45
best validation accuracy: 0.6714999842643737
test loss: 1.054015880227089
test accuracy: 0.6668999826908112
```

![result11](https://i.loli.net/2021/10/19/HRmXW4KdbUpaOct.png)

## Gap Between Training and Validation Sets

Fundamentally, the validation data differs from the training data. Hyperparameters aside, the model can only learn the features of the training set; where the validation set's features differ, validation accuracy lags behind training accuracy once the training set is fitted, and overfitting can even occur.

When choosing hyperparameters, I try to pick settings where both training and validation accuracy are relatively high and the gap between them is relatively small.

## Results

The MLP reaches 54.08% validation accuracy, the CNN 67.15%.

The CNN performs better, presumably because its convolutional structure is better suited to extracting image features. The gap is not large, though; my guess is that the small number of convolution kernels is to blame (the feature maps must stay small enough to be fed directly into the fully connected layer).

The CNN also overfits somewhat more (its training accuracy exceeds its validation accuracy by 15%), probably because it has more hidden-layer parameters.

Both networks are fairly simple with relatively few parameters, so their training speeds do not differ much.

||
## Without BN | ||
|
||
删除 MLP 的 `BatchNorm1d` 层和 CNN 的 `BatchNorm2d` 层,其他参数不变,和前面的实验结果进行对比。BatchNorm 主要可以防止梯度消失和梯度爆炸的问题,从结果上看在本实验中影响较小。 | ||
|
||
### MLP

Validation accuracy is 53.28%, slightly below the 54.08% obtained with the `BatchNorm1d` layer; in this experiment the BN layer adds little.

![result3](https://i.loli.net/2021/10/19/N2ZlMsj8QDSCFo4.png)

### CNN

Validation accuracy is 64.11%, about 3% below the run with the `BatchNorm2d` layer, so BN does bring some benefit to the CNN.

![result4](https://i.loli.net/2021/10/19/aBIJ2TU1hX3grp7.png)

## Without Dropout

Remove the `Dropout` layers from both the MLP and the CNN, keep every other parameter fixed, and compare with the earlier results. By deliberately discarding activations, Dropout pushes the network to learn more redundant local features.

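A sketch of the inverted-dropout scheme described above (hypothetical names; this assumes the standard formulation rather than the framework's actual code):

```python
import numpy as np

def dropout_forward(x, p=0.5, training=True, seed=0):
    """Inverted dropout: during training, zero each unit with probability p and
    scale survivors by 1/(1-p), so the expected activation is unchanged and
    no correction is needed at test time."""
    if not training or p == 0.0:
        return x
    rng = np.random.default_rng(seed)
    mask = (rng.random(x.shape) >= p) / (1.0 - p)
    return x * mask

x = np.ones((4, 5))
train_out = dropout_forward(x, p=0.5, training=True)    # entries are 0.0 or 2.0
eval_out = dropout_forward(x, p=0.5, training=False)    # identity
```

At `p=1.0` every unit is always dropped, which is why that setting in the table below cannot learn at all.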
### MLP

Validation accuracy drops to 52.84%, about 1.5% below the run with Dropout.

![result13](https://i.loli.net/2021/10/19/oYUuahs1gqS82Ft.png)

### CNN

Validation accuracy drops to 66.16%, about 1% below the run with Dropout.

![result1](https://i.loli.net/2021/10/19/W8tL7EObvNQCKFV.png)

Overall, removing the BN and Dropout layers lowers accuracy slightly.

## Hyperparameter Observations

### dropout rate

To measure the effect of `dropout rate`, the CNN is tested with `dropout rate = 0.0, 0.2, 0.4, 0.6, 0.8, 1.0`, all other settings as in the base experiment.

| dropout rate       | best validation accuracy |
| ------------------ | ------------------------ |
| 0.0                | 64.98%                   |
| 0.2                | 66.63%                   |
| 0.4                | 65.57%                   |
| 0.6                | 60.81%                   |
| 0.8                | 47.21%                   |
| 1.0 (cannot learn) | 10.07%                   |

A low `dropout rate` adds a small amount of accuracy, while a high one hurts the results badly.

### batch size

To measure the effect of `batch size`, the CNN is tested with `batch size = 10, 50, 100, 1000, 10000`, `training epoch` set to 20, all other settings as in the base experiment.

| batch size | best validation accuracy |
| ---------- | ------------------------ |
| 10         | 64.69%                   |
| 50         | 64.47%                   |
| 100        | 64.48%                   |
| 1000       | 61.62%                   |
| 10000      | 47.59%                   |

Per-epoch time shrinks as `batch size` grows (down to a floor of roughly 1 s per epoch), showing that within a reasonable range a larger `batch size` speeds up training by exploiting memory and parallelism. But an overly large `batch size` raises memory usage and, because each epoch contains fewer updates, makes convergence slower.

## References

During the experiments I consulted the PyTorch source code and found that the underlying operators are implemented in C++.