# Concise Implementation of Softmax Regression
:label:`sec_softmax_concise`
Just as high-level APIs of deep learning frameworks made it much easier to implement linear regression in :numref:`sec_linear_concise`, we will find it similarly (or possibly more) convenient for implementing classification models. Let us stick with the Fashion-MNIST dataset and keep the batch size at 256 as in :numref:`sec_softmax_scratch`.
```{.python .input}
from d2l import mxnet as d2l
from mxnet import gluon, init, npx
from mxnet.gluon import nn

npx.set_np()
```

```{.python .input}
#@tab pytorch
from d2l import torch as d2l
import torch
from torch import nn
```

```{.python .input}
#@tab tensorflow
from d2l import tensorflow as d2l
import tensorflow as tf
```

```{.python .input}
#@tab all
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
```
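As a quick sanity check (our addition, not part of the original notebook), you can peek at one minibatch. The shapes below assume the PyTorch data iterator, where each example is a $1 \times 28 \times 28$ grayscale image; the other frameworks are analogous.

```python
# Illustrative only: inspect one minibatch from the training iterator.
X, y = next(iter(train_iter))
print(X.shape, y.shape)  # e.g., torch.Size([256, 1, 28, 28]) torch.Size([256])
```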
## Initializing Model Parameters

As mentioned in :numref:`sec_softmax`, the output layer of softmax regression is a fully-connected layer. Therefore, to implement our model, we just need to add one fully-connected layer with 10 outputs to our `Sequential`. Again, the `Sequential` is not really necessary here, but we might as well form the habit since it will be ubiquitous when implementing deep models. Again, we initialize the weights at random with zero mean and standard deviation 0.01.
```{.python .input}
net = nn.Sequential()
net.add(nn.Dense(10))
net.initialize(init.Normal(sigma=0.01))
```

```{.python .input}
#@tab pytorch
# PyTorch does not implicitly reshape the inputs. Thus we define the flatten
# layer to reshape the inputs before the linear layer in our network
net = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))

def init_weights(m):
    if type(m) == nn.Linear:
        nn.init.normal_(m.weight, std=0.01)

net.apply(init_weights);
```

```{.python .input}
#@tab tensorflow
net = tf.keras.models.Sequential()
net.add(tf.keras.layers.Flatten(input_shape=(28, 28)))
weight_initializer = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.01)
net.add(tf.keras.layers.Dense(10, kernel_initializer=weight_initializer))
```
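To confirm that the network maps each image to 10 logits, here is a minimal sketch (our addition, assuming the PyTorch model defined above):

```python
# Illustrative sanity check for the PyTorch model: Flatten turns
# (batch, 1, 28, 28) into (batch, 784), and Linear maps that to 10 logits.
X = torch.randn(2, 1, 28, 28)
print(net(X).shape)  # torch.Size([2, 10])
```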
## Softmax Implementation Revisited
:label:`subsec_softmax-implementation-revisited`
In the previous example of :numref:`sec_softmax_scratch`, we calculated our model's output and then ran this output through the cross-entropy loss. Mathematically, that is a perfectly reasonable thing to do. However, from a computational perspective, exponentiation can be a source of numerical stability issues.
Recall that the softmax function calculates $\hat y_j = \frac{\exp(o_j)}{\sum_k \exp(o_k)}$, where $\hat y_j$ is the $j^\mathrm{th}$ element of the predicted probability distribution $\hat{\mathbf{y}}$ and $o_j$ is the $j^\mathrm{th}$ element of the logits $\mathbf{o}$. If some of the $o_k$ are very large (i.e., very positive), then $\exp(o_k)$ might be larger than the largest number we can represent for certain data types (i.e., *overflow*). This would make the denominator (and/or numerator) `inf` (infinity) and we wind up encountering either 0, `inf`, or `nan` (not a number) for $\hat y_j$. In these situations we do not get a well-defined return value for cross-entropy.

One trick to get around this is to first subtract $\max(o_k)$ from all $o_k$ before proceeding with the softmax calculation. You can verify that shifting each $o_k$ by a constant does not change the return value of softmax:

$$
\begin{aligned}
\hat y_j & = \frac{\exp(o_j - \max(o_k))\exp(\max(o_k))}{\sum_k \exp(o_k - \max(o_k))\exp(\max(o_k))} \\
& = \frac{\exp(o_j - \max(o_k))}{\sum_k \exp(o_k - \max(o_k))}.
\end{aligned}
$$

After the subtraction and normalization step, it might be possible that some $o_j - \max(o_k)$ have large negative values, so that the corresponding $\exp(o_j - \max(o_k))$ are rounded to zero due to finite precision (i.e., *underflow*), making $\hat y_j$ zero and giving us `-inf` for $\log(\hat y_j)$. A few steps down the road in backpropagation, we might find ourselves faced with a screenful of the dreaded `nan` results.

Fortunately, we are saved by the fact that even though we are computing exponential functions, we ultimately intend to take their log (when calculating the cross-entropy loss). By combining these two operators, softmax and cross-entropy, we can escape the numerical stability issues that might otherwise plague us during backpropagation. As shown in the equation below, we avoid calculating $\exp(o_j - \max(o_k))$ and can use $o_j - \max(o_k)$ directly instead, due to the canceling in $\log(\exp(\cdot))$:

$$
\begin{aligned}
\log{(\hat y_j)} & = \log\left( \frac{\exp(o_j - \max(o_k))}{\sum_k \exp(o_k - \max(o_k))}\right) \\
& = o_j - \max(o_k) - \log{\left( \sum_k \exp(o_k - \max(o_k)) \right)}.
\end{aligned}
$$
We will want to keep the conventional softmax function handy in case we ever want to evaluate the probabilities output by our model. But instead of passing softmax probabilities into our new loss function, we will just pass the logits and compute the softmax and its log all at once inside the cross-entropy loss function, which does smart things like the "LogSumExp trick".
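To make the instability concrete, here is a small self-contained sketch (our illustration, using NumPy rather than the notebook's tabbed frameworks): the naive softmax overflows for large logits, while the max-subtraction plus LogSumExp form stays finite.

```python
import numpy as np

def naive_softmax(o):
    """Softmax computed directly; exp overflows for large logits."""
    e = np.exp(o)
    return e / e.sum()

def stable_log_softmax(o):
    """log-softmax via the LogSumExp trick: shift by max(o) first."""
    shifted = o - o.max()
    return shifted - np.log(np.exp(shifted).sum())

o = np.array([1000.0, 0.0, -1000.0])
print(naive_softmax(o))       # [nan  0.  0.] -- exp(1000) overflows to inf
print(stable_log_softmax(o))  # [0. -1000. -2000.] -- finite log-probabilities
```

With that in mind, we simply use each framework's fused loss, which applies this trick internally: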
```{.python .input}
loss = gluon.loss.SoftmaxCrossEntropyLoss()
```

```{.python .input}
#@tab pytorch
loss = nn.CrossEntropyLoss()
```

```{.python .input}
#@tab tensorflow
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
```
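As a quick check (our addition; PyTorch shown, the other frameworks behave analogously), the fused loss applied to raw logits matches picking out the negative log-softmax of the true class, and it stays finite even for extreme logits:

```python
# Illustrative only: CrossEntropyLoss on logits equals the mean negative
# log-softmax of the true classes, computed in a numerically stable way.
logits = torch.tensor([[100.0, 0.0, -100.0], [0.0, 10.0, 0.0]])
labels = torch.tensor([0, 1])
manual = -torch.log_softmax(logits, dim=1)[torch.arange(2), labels].mean()
print(loss(logits, labels), manual)  # both finite and equal
```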
## Optimization Algorithm

Here, we use minibatch stochastic gradient descent with a learning rate of 0.1 as the optimization algorithm. Note that this is the same optimization algorithm that we applied in the linear regression example, which illustrates the general applicability of the optimizers.
```{.python .input}
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
```

```{.python .input}
#@tab pytorch
trainer = torch.optim.SGD(net.parameters(), lr=0.1)
```

```{.python .input}
#@tab tensorflow
trainer = tf.keras.optimizers.SGD(learning_rate=.1)
```
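For intuition about what the trainer does at each step, here is a self-contained toy sketch (our illustration, PyTorch-style) of a single minibatch SGD update $w \leftarrow w - \eta \nabla_w \ell$:

```python
# Toy illustration of one SGD step with learning rate 0.1.
w = torch.tensor([1.0, -2.0], requires_grad=True)
l = (w ** 2).sum()      # stand-in loss; its gradient is 2 * w
l.backward()
with torch.no_grad():
    w -= 0.1 * w.grad   # the update torch.optim.SGD would apply
    w.grad.zero_()
print(w)  # tensor([ 0.8000, -1.6000], requires_grad=True)
```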
## Training

Next we call the training function defined in :numref:`sec_softmax_scratch` to train the model.
```{.python .input}
#@tab all
num_epochs = 10
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
```
As before, this algorithm converges to a solution that achieves a decent accuracy, albeit this time with fewer lines of code than before.
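If you want the final test accuracy as a plain number rather than reading it off the training plot, you can reuse the evaluation helper from the scratch implementation (a sketch, assuming your `d2l` version exports `evaluate_accuracy`):

```python
# Illustrative: recompute accuracy on the test set with the d2l helper.
test_acc = d2l.evaluate_accuracy(net, test_iter)
print(f'test accuracy: {test_acc:.3f}')
```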
## Summary

- Using high-level APIs, we can implement softmax regression much more concisely.
- From a computational perspective, implementing softmax regression has intricacies. Note that in many cases, a deep learning framework takes additional precautions beyond these most well-known tricks to ensure numerical stability, saving us from even more pitfalls that we would encounter if we tried to code all of our models from scratch in practice.
## Exercises

1. Try adjusting the hyperparameters, such as the batch size, number of epochs, and learning rate, to see what the results are.
1. Increase the number of epochs for training. Why might the test accuracy decrease after a while? How could we fix this?
:begin_tab:mxnet
Discussions
:end_tab:
:begin_tab:pytorch
Discussions
:end_tab:
:begin_tab:tensorflow
Discussions
:end_tab: