Those functions must be hard to define, right?

Not at all. Let's start with the definition of ReLU, one of the most widely used activation functions:

{$$}
\text{ReLU}(x) = \max(0, x)
{/$$}

Easy peasy: the result is the maximum of zero and the input.

```py
import torch
import matplotlib.pyplot as plt

x = torch.linspace(-2.0, 2.0, steps=100)  # assumed input range for the plot

ax = plt.gca()

plt.plot(
    x.numpy(),
    torch.relu(x).numpy()  # assumed arguments; the original snippet is truncated here
)
```

The sigmoid is useful when you need to make a binary decision/classification (an answer of _yes_ or _no_).

It is defined as:

{$$}
\text{Sigmoid}(x) = \frac{1}{1+e^{-x}}
{/$$}

The sigmoid squishes the input values between 0 and 1. But in a super kind of way:

```py
x = torch.linspace(-10.0, 10.0, steps=100)  # assumed input range for the plot

ax = plt.gca()

plt.plot(
    x.numpy(),
    torch.sigmoid(x).numpy()  # assumed arguments; the original snippet is truncated here
)
ax.set_ylim([-1.5, 1.5]);
```

With the model in place, we need to find parameters that predict whether it will rain tomorrow. First, we need something to tell us how well we're currently doing:

```py
criterion = nn.BCELoss()
```

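To get a feel for how `BCELoss` behaves, here is a tiny, self-contained example; the tensors are made up for illustration:

```py
import torch
import torch.nn as nn

criterion = nn.BCELoss()

# BCELoss expects probabilities in [0, 1] (e.g. sigmoid outputs)
# and float targets of the same shape
y_pred = torch.tensor([0.9, 0.2, 0.7])
y_true = torch.tensor([1.0, 0.0, 1.0])

print(criterion(y_pred, y_true).item())  # mean binary cross-entropy over the batch
```
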
Contrary to what you might believe, optimization in Deep Learning is just satisf…

While there are tons of optimizers you can choose from, [Adam](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam) is a safe first choice. PyTorch has a well-debugged implementation you can use:
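
(A sketch of that setup, assuming the model is stored in `net`; the learning rate is an illustrative default, not a tuned value.)

```py
import torch.optim as optim

# hand Adam the model's parameters; lr=1e-3 is an assumed, common starting point
optimizer = optim.Adam(net.parameters(), lr=1e-3)
```
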
We start by checking whether or not a CUDA device is available. Then, we transfer the model and the data to that device.

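Sketched in code, that setup might look like this; the variable names (`net`, `X_train`, and friends) are assumed from context:

```py
import torch

# use the GPU when one is available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

net = net.to(device)
X_train, y_train = X_train.to(device), y_train.to(device)
X_test, y_test = X_test.to(device), y_test.to(device)
```
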
Having a loss function is great, but tracking the accuracy of our model is easier for us mere mortals to understand. Here's the definition of our accuracy:
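
(A sketch, since the original snippet isn't shown here; the name `calculate_accuracy` is assumed.)

```py
import torch

# threshold the sigmoid outputs at 0.5, then count matches with the labels
def calculate_accuracy(y_true, y_pred):
    predicted = y_pred.ge(0.5).view(-1)
    return (y_true == predicted).sum().float() / len(y_true)
```
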
We convert every value below 0.5 to 0. Otherwise, we set it to 1. Finally, we calculate the ratio of correct predictions.

With all the pieces of the puzzle in place, we can start training our model:

```py
def round_tensor(t, decimal_places=3):
    return round(t.item(), decimal_places)

# ... (the loop itself is truncated in this excerpt)
```

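A minimal sketch of the kind of loop this leads into, reusing `net`, `criterion`, `optimizer`, and `calculate_accuracy` from above; the epoch count is an arbitrary choice:

```py
for epoch in range(1000):
    y_pred = net(X_train).view(-1)
    loss = criterion(y_pred, y_train)

    if epoch % 100 == 0:
        acc = calculate_accuracy(y_train, y_pred)
        print(f'epoch {epoch} loss: {round_tensor(loss)} acc: {round_tensor(acc)}')

    optimizer.zero_grad()  # reset gradients accumulated by backward()
    loss.backward()        # backpropagate the loss
    optimizer.step()       # update the parameters
```
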
What about that accuracy? 83.6% accuracy on the test set sounds reasonable, right?

Training a good model can take a lot of time. And I mean weeks, months, or even years. So let's make sure you know how to save your precious work. Saving is easy:

```py
MODEL_PATH = 'model.pth'

torch.save(net, MODEL_PATH)
```

Restoring your model is easy too:

```py
net = torch.load(MODEL_PATH)
```

Using just accuracy wouldn't be a good way to do it. Recall that our data contains mostly _no rain_ examples.

One way to delve a bit deeper into your model's performance is to assess the precision and recall for each class. In our case, that will be _no rain_ and _rain_:

```py
# the scikit-learn import and everything after `y_pred = net(X_test)` are
# assumed; the original snippet is truncated there
from sklearn.metrics import classification_report

classes = ['No rain', 'Raining']

y_pred = net(X_test)

# threshold the probabilities at 0.5, as before, and report per-class metrics
y_pred = y_pred.ge(0.5).view(-1).cpu()
print(classification_report(y_test.cpu(), y_pred, target_names=classes))
```

You can see that our model is doing well when it comes to the _No rain_ class. We can't say the same for the _Raining_ class, though.

One of the best things about binary classification is that you can have a good look at a simple confusion matrix:
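
(A sketch of one way to build it, assuming scikit-learn, pandas, and seaborn; the plotting details are illustrative.)

```py
import pandas as pd
import seaborn as sns
from sklearn.metrics import confusion_matrix

# tabulate predictions against the ground truth and draw a labeled heatmap
cm = confusion_matrix(y_test.cpu(), y_pred)
df_cm = pd.DataFrame(cm, index=classes, columns=classes)
sns.heatmap(df_cm, annot=True, fmt='d');
```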